Are integrated neighborhoods stable?

It’s rare that obscure terminology from sociology becomes part of our everyday vernacular, but “tipping point” is one of those terms. Famously, Thomas Schelling used the tipping point metaphor to explain the dynamics of residential segregation in the United States.  His thesis was that white residents were willing to live in a mixed race neighborhood, but only so long as whites remained a comfortable majority of its population. Above some minority share–the tipping point–white residents would begin to leave, and the neighborhood would rapidly transition to being predominantly non-white.  The notion of a tipping point has a dour implication for neighborhood change: it implies that mixed race neighborhoods, when they occur, are unstable and temporary transitional states between longer and more durable periods of segregation.

A paper published earlier this year by Kwan Ok Lee in the Journal of Urban Affairs–“Temporal Dynamics of Racial Segregation in the United States: An Analysis of Household Residential Mobility”–looks at the processes of neighborhood change by race in the United States over the past four decades, to see whether the instability of integrated neighborhoods implied by the “tipping point” theory is actually borne out in practice.  The results are surprising.

Lee’s paper looks at data on the racial and ethnic composition of census tracts in the United States.  Tracts are neighborhood-sized units defined by the Census Bureau that have an average population of about 4,000 persons.  Lee classified each of these census tracts, according to the race and ethnicity of its population, into one of seven groups (predominantly white, predominantly black, predominantly other, black-white, white-other, black-other, and multiethnic).  The exact definitions are complicated, but in general tracts with more than 80 percent of the population in one group were classified as predominantly in that group; multi-ethnic neighborhoods were those where no one group was a majority of the tract’s population (more details below). Lee’s paper traces neighborhood change in each of these tracts over two 20-year periods, 1970 to 1990 and 1990 to 2010. There’s a lot in this paper, but we think there are three particularly interesting findings.
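
As a rough sketch, the classification rules quoted in this post can be expressed in code. The thresholds below follow the definitions given in the text (80 percent for predominant tracts, the 10–50 percent bounds for black-white tracts, and the multiethnic definition); the paper’s full scheme has more categories and edge cases, and we’ve interpreted “less than 10% Hispanic or non-Hispanic Asian” as each group being under 10 percent, so treat this as illustrative only:

```python
def classify_tract(white, black, hispanic, asian):
    """Classify a census tract by racial/ethnic population shares
    (expressed as fractions). A simplified sketch of the typology
    described in the text, not the paper's full scheme."""
    if white > 0.80:
        return "predominantly white"
    if black > 0.80:
        return "predominantly black"
    if hispanic + asian > 0.80:
        return "predominantly other"
    # Multiethnic: at least 10% black, at least 10% Hispanic or Asian,
    # and at least 40% white.
    if black >= 0.10 and hispanic + asian >= 0.10 and white >= 0.40:
        return "multiethnic"
    # Black-white: 10-50% black, less than 10% Hispanic or Asian.
    if 0.10 <= black <= 0.50 and hispanic < 0.10 and asian < 0.10:
        return "black-white"
    return "other mixed"

classify_tract(0.85, 0.10, 0.03, 0.02)  # → "predominantly white"
classify_tract(0.70, 0.25, 0.03, 0.02)  # → "black-white"
```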

First, the data show the growing diversity and modestly declining segregation of US neighborhoods.  The share of all US neighborhoods that were predominantly white declined from 67 percent in the 1970-1990 period to 57 percent in the 1990-2010 period.  Over this time, the pace of transition to more racially mixed neighborhoods accelerated.  One in four predominantly white neighborhoods in 1970 became racially mixed over the next two decades; of those that were predominantly white in 1990, one in three became racially mixed over the following two decades.   Similarly, the rate of transition in predominantly black neighborhoods also accelerated: about 19 percent of predominantly black neighborhoods in 1970 became racially mixed over the next 20 years; that fraction increased to about 24 percent between 1990 and 2010, as illustrated on the following chart.

(Chart: rates of neighborhood racial transition, 1970-1990 vs. 1990-2010, from Lee’s paper.)

Second, black-white neighborhoods became much more stable.  Black-white neighborhoods were those between 10% and 50% non-Hispanic black, and less than 10% Hispanic or non-Hispanic Asian. Of black-white neighborhoods in 1970, forty percent transitioned away from being racially mixed in the 20 years between 1970 and 1990.  Of the black-white neighborhoods in 1990, only 20 percent transitioned away from being racially mixed between 1990 and 2010; in effect, the rate of “tipping out” of integration declined by half.

Third, the number of truly multi-ethnic neighborhoods nearly doubled, from about 1.6 percent of all neighborhoods in 1970-1990 to about 3 percent of all neighborhoods in 1990-2010.  Multiethnic tracts were those at least 10 percent non-Hispanic black, at least 10 percent Hispanic or non-Hispanic Asian, and at least 40 percent non-Hispanic white. And these neighborhoods proved durable: of the tracts that were multi-ethnic in 1990, about 90 percent remained multi-ethnic over the following twenty years.

In all, it’s now the case that predominantly white neighborhoods are more likely to become racially mixed (one in three) than racially mixed neighborhoods are to become dominated by a single racial/ethnic group (one in five).  And though they constitute a small share of the total, multi-ethnic neighborhoods are growing, and, once established, persistent.

Lee also used data from the Panel Study of Income Dynamics to follow the actual moves of thousands of families over several decades.  She found that once families moved into racially mixed neighborhoods, they tended to stay there, or when they moved, they moved to other racially mixed neighborhoods: about 68 to 86 percent of black and white movers residing in racially mixed neighborhoods moved within their current neighborhoods or moved to other mixed neighborhoods during 1991–2009.

While much of our nation remains substantially segregated by race, Lee’s analysis points to at least a couple of hopeful signs.  The pace of desegregation, as measured by the transition of neighborhoods from predominantly black or predominantly white to a more multi-racial mix, has accelerated. And once established, multi-racial neighborhoods tend to stay that way; few households in such neighborhoods make subsequent moves that lead to re-segregation.

 

The price of autonomous cars: why it matters

If you believe the soothsayers–including the CEO of Lyft–our cities will soon be home to swarms of autonomous vehicles that ferry us quietly, cleanly and safely to all of our urban destinations. The technology is developing–and rolling out–at a breakneck pace. Imagine some combination of Uber, electrically powered cars, and robotic control.  You’ll use your handheld device to summon a robotic vehicle to pick you up, then drop you off at your destination. Vast fleets of these vehicles will flow through city streets, meeting much of our transportation demand and reducing the ownership of private cars. Big players in the automobile and technology industries are making aggressive bets that this will happen.  But the big question behind all this, as we asked in part one of this series yesterday, is “how much will it cost?”

While the news that Uber is now street-testing self-driving cars in Pittsburgh–albeit with full-time human supervisors–has heightened expectations that a massive deployment is just around the corner, some are still expressing doubts.  The Wall Street Journal points out that the initial deployment of autonomous vehicles may be restricted to well-mapped urban areas, slow speeds (under 25 miles per hour), and good weather conditions.  It could be twenty years before we have “go anywhere” autonomous vehicles.

And those looking forward and contemplating the widespread availability of self-driving cars are predicting everything from a new urban nirvana to a hellish exurban dystopia.  The optimists see a world where parking spaces are beaten into plowshares, the carnage from car crashes is eliminated, greenhouse gas emissions fall sharply, and the young, the old, and the infirm–those who can’t drive–have easy access to door-to-door transit. The pessimists visualize an exurban dystopia with mass unemployment for those who now make their living driving vehicles, where cheap and comfortable autonomous vehicles facilitate a new wave of population decentralization and sprawl.

To an economist, all of these projections hinge on a single fact about autonomous vehicles that we don’t yet know:  how much they will cost to operate.  If they’re cheap, they’ll be adopted more quickly and widely and have a much more disruptive effect.  If they’re more expensive than private cars or transit or biking or walking, they’ll be adopted more slowly, and probably have less impact on the transport system. (It’s worth noting that despite their notoriety, today’s Uber and Lyft ridesharing services have been used by less than 15 percent of the population).  Whether autonomous vehicles become commonplace–or dominant–or whether they remain a niche product, for a select segment of the population or some restricted geography, will depend on how much they cost.

As we reported yesterday, the consensus of estimates is that fleets of autonomous vehicles would likely cost between about 30 and 50 cents per mile to operate sometime in the next one to two decades. That’s potentially a good deal cheaper than the 50 to 85 cents average operating cost for a conventional privately owned vehicle.  All of these estimates assume that the hardware and software for navigation and vehicle control–computers, sensors and communications–though expensive today, will decline in cost as the technology quickly matures. Some of those savings come from a combination of electric propulsion and, perhaps, smaller, purpose-built “pod” vehicles. But most of the savings come from greater utilization. Privately owned cars, it is frequently noted, generally sit idle 90 percent of the time. In theory, at least, fleets of autonomous vehicles would be more nearly in constant motion, taking up less space for storage and doing more work.
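
To see why utilization matters so much, consider a back-of-the-envelope calculation. Every number below is our own illustrative assumption (loosely in the spirit of the fleet-depreciation figures cited in the studies), not a figure taken from any of them:

```python
def fixed_cost_per_mile(vehicle_price, life_years, annual_overhead, miles_per_year):
    # Annualized capital cost plus per-year overhead (insurance, parking,
    # registration), spread over the miles actually driven each year.
    annual_capital = vehicle_price / life_years
    return (annual_capital + annual_overhead) / miles_per_year

# Hypothetical private car: $25,000 over 10 years, $2,500/year overhead,
# driven 12,000 miles per year.
private = fixed_cost_per_mile(25_000, 10, 2_500, 12_000)   # ≈ $0.42/mile

# Hypothetical fleet AV: same price, worn out in just 3 years of intensive
# use, same overhead, but driven 40,000 miles per year.
fleet = fixed_cost_per_mile(25_000, 3, 2_500, 40_000)      # ≈ $0.27/mile
```

Even though the fleet vehicle depreciates much faster in calendar time, spreading each year’s costs over more than three times as many miles cuts the fixed cost per mile by roughly a third in this sketch.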

(Chart: estimates of autonomous vehicle operating costs per mile, by source.)

Peak demand and surge pricing

A couple of things to keep in mind as we ponder the meaning of these estimates:  First, cost is not the same as price. While these figures represent what it might cost fleet owners to operate such vehicles, the prices they charge customers will likely be higher, both because they’ll want a profit, and because travel demand at some peak times (and locations) will exceed capacity.  

And that’s the big obstacle to realizing the theoretical higher utilization of autonomous vehicles. Demand for travel isn’t spread evenly throughout the day. Many more of us want to travel at certain times (especially early in the morning and late in the afternoon), and the presence of these peaks, as we all know, is the defining feature of our urban transportation problem. Whiz-bang technology or not, there simply won’t be enough autonomous vehicles to handle the demand at the peak hour, for two reasons.  First, fleet operators won’t want to own enough vehicles to meet the peak, as those vehicles would be idle all the rest of the time.  The second issue is what Jarrett Walker has called the “geometry” problem: there simply isn’t enough room on city streets and highways to accommodate all the potential peak travelers if they are each in a personal vehicle.

Consider a practical example. One prominent study, by Columbia University’s Earth Institute, predicts that it would be possible to run autonomous vehicles in Manhattan for 40 cents per mile.  That’s far cheaper than current modes of travel–including taxi, ridesharing, private cars and even the subway or bus for trips of less than five miles–so it’s likely that many more people will want to take advantage of autonomous vehicles than there will be vehicles to accommodate them. So, at the peak, autonomous vehicles will undoubtedly charge a surge fare, just as Uber and Lyft do now.

The competitive challenge to transit, especially off-peak

Most of the estimates presented here suggest fully autonomous vehicles will be cheaper than privately owned conventional vehicles.  It’s also likely that they will be less expensive than transit for many trips. In many cities the typical bus trip is only 2 or 3 miles long; if the price of an autonomous vehicle is less than 50 cents per mile, the cost of such a trip (door-to-door, in a non-shared vehicle) will be less than the transit fare.  Autonomous vehicles could easily cannibalize much of the transit market, especially in off-peak hours.
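
The arithmetic behind that comparison is simple. Using an assumed 50-cents-per-mile AV price and a hypothetical $2.50 flat transit fare (both numbers are illustrative, not drawn from any particular transit agency):

```python
# Illustrative comparison: a metered door-to-door AV trip vs. a flat bus fare.
AV_PRICE_PER_MILE = 0.50   # assumed AV price, $/mile
TRANSIT_FARE = 2.50        # assumed flat transit fare, $

def av_cheaper_than_bus(trip_miles):
    # True when the metered AV trip undercuts the flat fare.
    return trip_miles * AV_PRICE_PER_MILE < TRANSIT_FARE

av_cheaper_than_bus(3)   # → True: a typical 3-mile bus trip is $1.50 by AV
av_cheaper_than_bus(6)   # → False: longer trips favor the flat fare
```

At these prices the break-even point is five miles, which is why short urban bus trips look most vulnerable to cannibalization.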

And because they can charge fares much higher than costs at the peak, operators will likely discount off-peak fares to below cost.  That means that at non-peak times, autonomous vehicles may be available to travelers at prices lower than the estimates shown here.  Simply put, as long as operators cover their variable costs–which are likely to be electricity and tires–they needn’t worry about covering their fixed costs (which can be paid for from peak period profits).
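
In textbook terms, any off-peak fare above variable cost makes a contribution to fixed costs. A toy two-period example, in which every number is an assumption for illustration rather than an estimate from the studies above:

```python
# Toy two-period pricing for a fleet operator.
VARIABLE_COST = 0.10   # $/mile: electricity and tires (assumed)
FIXED_COST    = 0.30   # $/mile: capital and overhead, averaged over miles (assumed)

peak_fare    = 0.90    # demand exceeds capacity, so the price rises well above cost
offpeak_fare = 0.15    # below average total cost ($0.40), but above variable cost

peak_profit          = peak_fare - (VARIABLE_COST + FIXED_COST)  # $0.50/mile
offpeak_contribution = offpeak_fare - VARIABLE_COST              # $0.05/mile
# The off-peak fare loses money on a full-cost basis, yet running the trip
# still beats parking the vehicle, because the fixed costs are sunk either way.
```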

Behavioral effects of per-mile pricing

The silver lining here–if there is one–is that the kind of per-mile pricing that fleet vendors are likely to employ for autonomous vehicle fleets will send much stronger signals to consumers about the effects of their travel decisions than our current, mostly flat-rate travel pricing.  Today, most households own automobiles, and pay the same fixed costs (car payments, insurance) whether or not they use their vehicle for an additional trip. Because the marginal cost of a trip is often perceived to be just the cost of fuel (perhaps 15-20 cents per mile), households use cars for trips that could easily be taken by other modes. That calculus changes if each trip has a separate additional cost–and consumers are likely to alter their behavior accordingly. Per-mile pricing will make travelers more aware of–and likely more sensitive to–the tradeoffs of different modes and locations. The evidence from evaluations of car-sharing programs like Zipcar shows that per-mile pricing tends to lead many households to reduce the number of cars they own–or give up car ownership altogether.
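
The difference in perceived prices is stark. Taking a hypothetical three-mile trip, with fuel at 17 cents per mile (within the 15-20 cent range above) and an assumed AV price of 50 cents per mile:

```python
# Perceived cost of the same 3-mile trip under the two pricing regimes.
# Both per-mile figures are assumptions drawn from the ranges in the text.
TRIP_MILES = 3

# Car owner: payments and insurance are sunk, so the felt marginal cost
# of one more trip is roughly fuel alone.
perceived_owner_cost = TRIP_MILES * 0.17   # ≈ $0.51

# Fleet AV rider: the full per-mile price is metered on every trip.
av_trip_price = TRIP_MILES * 0.50          # = $1.50
```

Even though the AV may be cheaper on a full-cost basis, each individual trip feels roughly three times as expensive as driving an already-owned car, and that is exactly the kind of price signal that changes behavior.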

The price of disruption

If these cost estimates are correct, and if autonomous cars are actually feasible any time soon, the lower cost of single occupancy vehicle travel and a different pricing scheme will likely trigger significant changes in travel behavior. At the same time, other institutions, like road-building agencies and transit providers, may see a major disruption of their business models.  A move to electric cars threatens the principal revenue source of road-building agencies, the gas tax. And an overall decline in vehicle ownership coupled with more intense peak demand could be a state or city transportation department’s fiscal nightmare.  Whether that happens depends a lot on whether these forecasts of relatively inexpensive autonomous vehicles pan out.


What price for autonomous vehicles?

It’s easy to focus on technology, but pricing will determine autonomous vehicles’ impact.

Everyone’s trying hard to imagine what a future full of autonomous cars might look like. Sure, there are big questions about whether a technology company or a conventional car company will succeed, whether the critical factor will be manufacturing prowess or software sophistication, and all manner of other technical details.


But for economists — and also for urbanists of all stripes — a very big question has to be:  How much will autonomous cars cost?  We’re going to tackle this important question in two parts.  Part one–today–assembles some of the estimates that have been made.  We’ll aim to ballpark the approximate cost per mile of autonomous vehicles.  In part two–tomorrow–we’ll consider what this range of estimates implies for the future of urban transportation, and for cities themselves, because transportation and urban form are so closely interrelated.

So here is a first, preliminary list of some of the estimates of the cost per mile of operating autonomous vehicles.  We’ve reproduced data from a number of sources, including universities, manufacturers, and consulting firms. It’s difficult to make direct comparisons between these estimates, because they not only employ different assumptions, but also forecast costs for different future years (with unstated assumptions about inflation). There’s also significant disagreement about the cost of operating current vehicles, with estimates ranging from 59 cents per mile to 84 cents per mile.  (For this commentary, we’ve assembled these estimates without undertaking our own analysis of their accuracy or reliability; we encourage interested readers to click through and read each of these studies and draw their own conclusions about their utility.)

Ford (2016): $1.00 per mile. Ford thinks it can reduce the cost of highly automated vehicles to about $1.00 per mile, making them highly competitive with taxis, which it estimates cost $6.00 per mile.

Rocky Mountain Institute (2016): 51 cents per mile (2025), 33 cents per mile (2040). RMI estimates that in 2018, autonomous vehicle costs will be roughly competitive with current vehicles (about 84 cents per mile), but will steadily decline, to 51 cents per mile by 2025 and 33 cents per mile by 2040.

Morgan Stanley (2016): 50 cents per mile (2030). Morgan Stanley estimates autonomous vehicles will cost about 50 cents per mile by 2030, compared to about 74 cents per mile for privately owned standard vehicles.

KPMG (2016): 43 cents per mile. KPMG estimates current cars have variable costs of 21 cents per mile and fixed costs of 61 cents per mile, for a total of 84 cents per mile. It estimates new shared AVs would cost 17 cents per mile variable and 26 cents per mile fixed (43 cents per mile total), with a $25,000 car fully depreciated in 3 years while being driven about 40,000 miles per year.

Deloitte (2016): 31 to 46 cents per mile. Deloitte estimates costs of 46 cents to as little as 31 cents per mile for autonomous vehicles; the lower estimate corresponds to low-speed, purpose-built pods.

Barclays (2016): 29 cents per mile (2040). Barclays estimates the cost of autonomous vehicles at 29 cents per mile by 2040, compared to about 66 cents per mile for conventional, privately owned vehicles today.

Columbia University Earth Institute (2013): 15 to 41 cents per mile. The Earth Institute estimates costs of autonomous vehicles would be about 41 cents per mile for full-sized vehicles and could be as little as 15 cents per mile for purpose-built, low-speed vehicles. This compares to costs of 59 to 75 cents per mile for conventional privately owned automobiles.

The estimates for future costs range from as much as a dollar per mile (Ford’s near-term estimate of its cost of operation for what it refers to as “highly automated vehicles”) to an estimate of 15 cents per mile a decade or more from now for the operation of small, purpose-built, low-speed urban “pods”–like Google’s prototype autonomous vehicle.  Overall, the estimates imply that fleets of autonomous vehicles could be operated in US cities in the next decade or two for something between 30 and 50 cents per mile.

And, for a variety of reasons–which we’ll explore in more detail tomorrow–the deployment of autonomous vehicles is much more likely to occur in cities. The critical factor is that market demand will be strongest in cities. According to the Wall Street Journal, autonomous vehicles will initially be restricted to low speeds, avoid bad weather, and stay within carefully circumscribed territories (given the cost and complexity of constructing the detailed maps autonomous vehicles need to navigate streets)–all factors that point to cities.

These estimates hinge on a number of important assumptions about operating costs. The highest estimates generally assume automating something resembling existing vehicles; operating costs are assumed to be lower with electric propulsion and smaller vehicles. A key cost driver is vehicle utilization and lifetime: fleets of autonomous vehicles are assumed to be used much more intensively than today’s privately owned cars, with a big reduction in capital cost per mile traveled.

There are some other big assumptions about whole categories of costs, and about the policy environment looking forward. Todd Litman raises the concern that autonomous vehicles will require relatively high expenditures for cleaning, maintenance and vandalism repair–as much as hundreds of dollars per week.  It’s also not clear that any of the estimates for the costs of operating electric vehicles include any kind of road user fee to replace the gas tax revenues now paid by internal combustion powered vehicles.

Despite these uncertainties, the available estimates suggest that successful autonomous vehicles could be substantially cheaper than today’s cars. And if they’re available on-demand and a la carte–freeing users from the costs of ownership, parking, maintenance and insurance–this may engender large changes in consumer and travel behavior.

 


The Week Observed: October 14, 2016

What City Observatory did this week

1.  More evidence job growth is shifting to city centers.  A recent paper by Nathaniel Baum-Snow and Daniel Hartley has some interesting data on the pattern of job growth in the nation’s largest metropolitan areas. They find that while suburban job growth greatly outpaced that of the central city in the 1990s–basically, the further away from the core you were, the faster you grew–this pattern totally changed in the decade 2000 to 2010. (Chart shows job growth rate by distance to the CBD, in kilometers.)


2.  Vancouver’s foreign-buyer tax didn’t slash home prices. This summer, the Province of British Columbia imposed a 15 percent tax on the sale of residences to foreign buyers. Some press accounts described the tax increase as an instant cure for home price inflation, claiming it resulted in an overnight 16 percent decline in home prices. In fact, Vancouver home prices were basically flat; we explain how journalists made a fundamental error in examining the data, one that is unfortunately all too common in reporting on real estate trends.

3.  The most interesting neighborhood in the world.  Google Maps has added a new feature, “areas of interest,” which show up as peach-colored blotches. It’s not entirely clear how Google selects these areas, and some observers are concerned that there may be hidden biases at work. Using City Observatory’s Storefront Index, we’ve looked to see how the clustering of consumer-facing retail and service businesses corresponds to the places Google characterizes as interesting.  Storefront clusters closely correspond to the areas Google flags as “interesting.”

City Observatory Storefronts compared to Google’s Areas of Interest

4.  Where is ridesharing growing fastest?  A new report from the Brookings Institution uses federal data on so-called non-employer businesses to track the growth of independent contractors in the “rides and rooms” segment of the gig economy. We use the Brookings data as a proxy for the penetration of transportation network companies like Lyft and Uber in the top 50 metropolitan markets. The leaders are New York, San Francisco and Washington.

This week’s must reads

1.  The under-appreciated benefits of public housing. Public housing has long been associated with negative consequences. It’s emblematic of concentrated poverty, and it’s widely believed that large-scale, high density public housing projects breed social pathologies.  But a new paper from John Haltiwanger and colleagues challenges that view.  Part of the problem of assessing the impact of public housing is the problem of “selection effects” – people who are chosen to live in public housing, by definition, have low incomes and face many challenges.  The key question is, once chosen to be in public housing, how do children do? Haltiwanger et al. overcome this selection effect problem by looking at the different outcomes for different children within a single household, based on the amount of time they live in public housing. Comparing siblings allows the researchers to net out the effects of families (parental education, income, employment, etc.), and judge the separate impact of housing itself.

Cabrini Green in 2007: Not so bad, after all? (Flickr:  Soumit Nandi)

While the impacts are different for boys and girls, the study shows that public housing actually benefits children of poor families over their lifetimes, as measured by income and likelihood of incarceration (girls see larger gains in income and lower rates of incarceration than boys). This holds for both traditional public housing and housing vouchers. In large part this seems likely to be an income effect: families getting vouchers or living in public housing spend less on housing than they would if they rented in the private market without subsidies, and therefore have more resources for all their other expenses, which may translate into better outcomes for their children. As we’ve noted at City Observatory, the big problem with our housing programs for the poor is their lack of scale: fewer than one in four households that technically qualify for housing assistance get anything. And this study suggests that there could be substantial benefits from expanding the reach of these programs.  The full report is available from the National Bureau of Economic Research (NBER): Childhood Housing and Adult Earnings: A Between-Siblings Analysis of Housing Vouchers and Public Housing.

2.  Tales of Rent Control.  John McNellis tells us what rent control looks like from the perspective of a small landlord. His family inherited a small five-unit San Francisco apartment building, which fell under the city’s rent control program. Two of the apartments turned over after they inherited the building, but three tenants remained in place. Rent control created a strongly adversarial relationship between the landlord and tenants; one tenant even insisted all communication be in writing. Eventually, McNellis’s family tired of the burden of being landlords and opted to sell. After the sale, McNellis reports finding out that three of his nominal tenants had actually sublet their apartments at market rates: “Our apartments were thus rented at market rates: just not by us.” Tens of thousands of apartments–particularly the smaller complexes that make up the “missing middle” of housing–are owned not by big corporations, but by smaller part-time and amateur landlords. Rent control creates strong incentives for them to take their capital and invest elsewhere, with the paradoxical result that the landlord business becomes further dominated by more ruthless and efficient corporate owners.

New knowledge

1.  Housing, Interest Rates and Inequality:  Writing at VoxEU, economist Gianni LaCava describes the results of his research on the relationship between home prices, interest rates and economic inequality. Digging into state level data, LaCava confirms what Thomas Piketty and Matthew Rognlie have argued: the distribution of income and wealth in the United States has become more unequal, and the growth of housing wealth, in particular, has been the big driver of that inequality.  His work also shows that housing prices have accelerated most in the states with the most constrained (i.e., least elastic) housing supply, and that inequality accelerated as real interest rates declined steadily from the 1990s onward. If we’re concerned about inequality, LaCava argues, we need to pay more attention to the distribution of housing wealth, and to the “imputed” income that homeowners receive.

2.  Home prices and economic mobility in California. California’s Legislative Analyst’s Office has produced some of the most insightful examinations of the connections between housing markets and the economy.  A new LAO paper looks at the connection between high home prices in the coastal counties (around San Francisco and Los Angeles) and the inland counties of the state.  It finds that historically, incomes were converging between the lower income inland counties and the higher income coastal counties. But that pattern has reversed in recent years, a reversal attributable to higher housing prices on the coast. The lack of housing has forced more people to live in the interior counties, where economic opportunities and productivity are lower. And significantly, as measured by residual income—the amount of income left after paying housing and transportation costs—incomes are actually lower in these “cheaper” counties, meaning that these families would be economically better off if they could live in one of the coastal counties.

The Week Observed: October 21, 2016

What City Observatory did this week

1.  Cities for Everyone:  Our Birthday Wish.  October 15 marked City Observatory’s second birthday. We reviewed some of the highlights of the past year, focusing on the growing evidence of the economic resurgence building around the nation’s cities. For the coming year, we’re planning on focusing on what it takes to build and maintain diverse inclusive communities:  cities for everyone.

Many happy returns! (Flickr: Daniel Nelson)

2.  The Price of Parking.  Using data from the website Parkme, we’ve constructed an index of typical monthly parking costs in the nation’s largest metropolitan areas. While the median price in large cities is around $200 per month, there’s huge variation. Prices range from more than $700 per month in New York to less than $30 monthly in Oklahoma City. See how your city’s parking prices compare to others.

3.  Cities and the Price of Parking. Using our city parking price index, we look at the relationship between parking prices and transportation behavior in different metropolitan areas.  Our analysis shows a strong correlation between parking prices and transit use: people are much more likely to take transit in cities with expensive parking. Parking prices also correlate closely with the penetration of ride-hailing businesses like Lyft and Uber. The population-adjusted number of transportation service non-employers (a proxy for ride-hailing businesses) is highest in cities with the most expensive parking. Parking, as it turns out, is a surrogate form of road pricing, with parking charges discouraging peak period car trips to urban centers and shifting travel to other modes. The more widespread deployment of ride-hailing may transform the role that parking prices play.

4.  The Myth of Rich Cities/Poor Suburbs. There’s a new narrative about cities that claims that we’ve already experienced a great inversion, with poverty in the suburbs and wealth in the cities. Despite the resurgence of city populations in the past two decades, however, it’s still the case that poverty, especially concentrated poverty, is disproportionately found in cities, and that suburbs, especially newer and more distant ones, have much lower rates of poverty. At current rates of change it will be many decades before city and suburban poverty rates are equal. Rather than assuming that any movement of better educated and higher income people into cities makes their problems worse, we ought to look to see how we can leverage the re-investment in cities in a way that increases opportunity and maintains diversity.


This week’s must reads

1.  Implementing a carbon tax: politics makes strange enemies. In less than three weeks, Washington State voters will decide the fate of a proposed carbon tax. I-732 would impose a $25 per ton carbon tax and use the proceeds to reduce the state’s sales taxes, pay rebates to low income families, and cut some business taxes. A key aspect of the plan is that it is revenue-neutral: the funds raised by the tax are entirely returned in the form of tax cuts. While that’s a feature for some advocates, it’s a fatal bug for other interests, which is why, surprisingly, some of the strongest opposition comes from a number of environmental and social justice groups. Writing at Vox, David Roberts tells the story of how a progressive economist’s market-based pricing solution for climate change has run into a buzz-saw of political opposition from groups who’d like to see carbon tax revenue used to fund a slew of clean energy and transition support programs.  The practical difficulty of engineering an agreed-upon policy approach to climate change, even among groups that agree something needs to be done, likely foreshadows the conflicts that will play out when this issue finally reaches the national stage, which one hopes will be sooner rather than later.

2.  Achieving economic integration:  Tales from Chappaqua.  Bill and Hillary Clinton live (for the next little while at least) in Chappaqua, New York, an up-scale suburb in Westchester County. In addition to its famous residents, Chappaqua is also ground zero for the battle to build more affordable housing in an area that has traditionally been zoned almost exclusively for relatively expensive single-family homes.  Politico relates the history of the town’s exclusionary policies, and describes the present-day conflicts over siting affordable multi-family housing.  As President Obama’s recent endorsement of “YIMBY” (Yes in my backyard) zoning reforms indicates, this may become more of a national issue in the months ahead.

3.  A road much less traveled. Between Austin and San Antonio, there’s a brand new freeway where you can cruise along—legally—at 85 miles per hour. And you’ll find that it’s nearly free of traffic. And that’s the problem: the SH 130 toll road, built by one of those vaunted “public-private partnerships” we hear so much about, is careening into bankruptcy. The San Antonio Express-News calls the project “a monument to failure” and tells the story of how wildly optimistic traffic estimates and federal loan guarantees led to the construction of a billion-dollar highway that nobody seems to think is worth paying to drive on.


New knowledge

Historical Maps of Redlining.  And this week, we have some “old knowledge” in a new form.  A team of researchers from four universities (the University of Richmond, the University of Maryland, Virginia Tech and Johns Hopkins) has digitized the maps and neighborhood descriptions compiled by the Home Owners Loan Corporation (HOLC) in the 1930s.  These maps show the redlining of many urban neighborhoods that prefigured decades of disinvestment and decline. In addition to the maps, you can also read the individual, typewritten descriptions of particular neighborhoods.


Although the HOLC maps are often themselves blamed for redlining and disinvestment, it’s actually more likely that they mostly codified widespread community attitudes about older, poorer and minority neighborhoods.  The maps and descriptions were compiled by mortgage lenders, developers and real estate appraisers.  And some academic evidence suggests that HOLC and others still made loans in the redlined areas—although at higher interest rates.  The maps and the narratives are well worth a read:  they provide real historical context for thinking about the way “neighborhood stigma” can become the kind of self-fulfilling prophecy that has long term economic consequences.

The Week Observed: October 7, 2016

What City Observatory did this week

1.  Bubble Logic.  A major and persistent change in the housing market from a decade ago has been the decline in the number of “trade-up” home-buyers. While some fret that recent first-time homebuyers have become locked into so-called starter homes, we point out that in many ways, trade-up demand was a product of the unsustainable housing bubble of the last decade. The lingering effects of the bubble’s collapse, and an enduring change in expectations about home price inflation, strongly suggest we won’t see a resurgence of trade-up demand anytime soon.


2.  Why a housing lottery won’t solve our affordability problems.  Inclusionary zoning programs require developers to set aside some units in new developments for low and moderate income households. Because tens of thousands of households are potentially eligible for a few hundred units, cities face the practical problem of choosing who gets to buy or rent these cut-price homes. Sally French tells the story of how she entered, and won, San Francisco’s housing lottery, and was able to buy a new condo for about one-third the going rate. That’s great for her—and a relative handful of others—but is hardly a scalable solution to our housing affordability problems. Her experience shows that you also need a good deal of savvy and persistence to negotiate the lottery process.

3.  Are integrated neighborhoods stable? It’s long been popular to think of the process of neighborhood change as being characterized by “tipping points.” Once the demographics of a neighborhood start changing, from, say, mostly white to more mixed race, it tends to “tip” to being entirely a community of color, because many whites may not feel comfortable if they’re not a majority.  New evidence on neighborhood change shows that once established, mixed race neighborhoods are in fact stable.
4.  A memo for Stockholm.  Next Monday, we’ll learn the name of the latest Nobel laureate in economics. We think a good case could be made that the award should go to Paul Romer, recently named as the new chief economist for the World Bank. Romer’s seminal contributions to New Growth Theory have long been recognized by his academic peers—and have important implications for urban economic policy. And in recent days, Romer has been making a strong case that the profession’s approach to macroeconomics has become profoundly unscientific and needs to be fundamentally re-thought.  This kind of thoughtful, provocative truth-telling is exactly what the Nobel prize ought to recognize.

This week’s must reads

1. How we talk about pedestrian deaths.  More than 4,000 pedestrians are killed in car crashes each year, and the way their deaths are reported in the media obscures the systemic nature of the problem.  In an essay at Streetsblog, Angie Schmitt points out that press stories routinely call deaths “accidents,” tend to blame the victims, describe the car, rather than its driver, as the cause of the crash, and fail to treat the design of streets as a factor in the deaths. We know that multi-lane arterials and streets that encourage high travel speeds are responsible for a disproportionate share of pedestrian deaths.

2. Thinking hard about infrastructure investment. The two major party Presidential candidates may agree in some general way about infrastructure, but urban economist Ed Glaeser does not.  In an interview with Vox, Glaeser points out that we need to re-think the way we invest in infrastructure. Current approaches tend to systematically neglect maintenance in favor of shiny new projects, infrastructure investments are seldom subjected to serious cost-benefit analysis, and actual users rarely pay for the costs of projects they benefit from—which in some cases amplifies inequality. And in the case of roads, building more capacity without implementing some form of road pricing simply stimulates more demand—the fundamental law of road congestion.

3. Making mixed use developments legal.  President Obama got a lot of press in the urbanist world last week with the release of the White House’s housing toolkit—essentially a list of recommended policy changes that states and cities could undertake to allow more density. Writing this week in the Washington Post, Jonathan Coppage points out that the federal government could play a key role here as well, by changing its guidelines on residential mortgages to make it easier to include commercial space in residential buildings – think ground floor shops with apartments above.  Currently, federally purchased or guaranteed loans can only go to projects with no more than 15 to 25 percent non-residential uses, effectively precluding this important form of financing for developments that are less than four stories in height.

New knowledge

1.  Concentrated Poverty in the Wake of the Great Recession.  The Brookings Institution’s Elizabeth Kneebone and Natalie Holmes have sifted through the 2010-2014 five-year American Community Survey to track the growth of concentrated poverty in the US. Concentrated poverty is defined in their work as neighborhoods with a poverty rate of 40 percent or higher.  Their key finding:  since 2009, the number of people living in these extremely poor neighborhoods has increased 34 percent, from 8.7 million to 13.7 million.  That comes on top of a big increase in concentrated poverty since 2000; the number of people living in these high poverty neighborhoods in the US has more than doubled since 2000.  Concentrated poverty disproportionately affects people of color: blacks are nearly five times likelier than whites to live in neighborhoods of concentrated poverty. And despite the much noted increase in the number of people living in poverty in the suburbs, concentrated poverty is much more common in cities: about one in four poor persons in cities lives in a neighborhood of concentrated poverty, compared to about one in fourteen poor persons living in the suburbs. The Brookings report has detailed data on the top 100 metropolitan areas.
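The classification rule behind these figures is simple enough to sketch: a neighborhood counts as an area of concentrated poverty when its poverty rate is 40 percent or higher, and the headline numbers are the total population of such tracts. The tract records below are hypothetical, invented purely to show the computation.

```python
# Hypothetical census tract records (population and poverty rate); the real
# analysis uses American Community Survey tract data.
tracts = [
    {"population": 4000, "poverty_rate": 0.45},
    {"population": 3500, "poverty_rate": 0.12},
    {"population": 5000, "poverty_rate": 0.41},
    {"population": 4200, "poverty_rate": 0.38},  # poor, but below the 40% line
]

def concentrated_poverty_population(tracts, threshold=0.40):
    """Total population living in tracts at or above the poverty threshold."""
    return sum(t["population"] for t in tracts if t["poverty_rate"] >= threshold)

print(concentrated_poverty_population(tracts))  # 9000: only the first and third tracts
```

Note how sensitive the measure is to the cutoff: the fourth tract, at 38 percent, contributes nothing, which is one reason researchers sometimes also report results at lower thresholds.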


2.  US consumers pay some of the highest real estate commissions in the world.  A new survey of real estate broker commissions in 17 countries around the world shows that US consumers pay an average commission of about 5.5 percent, compared with about 1.5 to 3.0 percent in other high income countries.  While commissions in the US have declined slightly from an average of about 6.0 percent in 2002, the declines in other nations have on average been much sharper. Commissions in Canada, for example, have fallen from about 4.5 percent a decade ago to about 3.0 percent today.  Lower real estate commissions would make houses more affordable for everyone, and lower transaction costs for real estate sales would make it easier for households to move to new homes and new neighborhoods.

The Week Observed: September 30, 2016

What City Observatory did this week

1. Where are African-American entrepreneurs?  A new Census Bureau survey, undertaken in cooperation with the Kauffman Foundation provides a detailed demographic profile of the owners of the nation’s businesses. It reports that there are about 108,000 African-American owned businesses with paid employees (i.e., not counting self-employed entrepreneurs). We look at the distribution of African-American owned businesses in the fifty largest metropolitan areas, and find some surprising patterns.

2.  Counting people and cars with Placemeter.  We review our experiences using Placemeter—a web cam that, when pointed at a road or sidewalk, counts the number of cars, bikes and pedestrians. It’s an inexpensive and flexible technology that puts traffic counting within easy reach of businesses, individuals and neighborhood groups. This kind of “little data” can help democratize the planning process.

How much will it cost?

3.  How much will autonomous vehicles cost?  It’s easy to be captivated by the fast pace of technological development in autonomous vehicles. But the big question that economists—and urbanists—should be focusing on is: how much will these vehicles cost? We assemble some of the estimates that have been made so far. They show that initially, fleets of autonomous vehicles might cost about a dollar a mile to operate. But as the technology matures and the business scales up, costs are likely to fall.  Many estimates fall in the range of 30 to 40 cents per mile—well below the cost of today’s conventional, privately owned cars.

4.  The price of autonomous cars: Why it matters. Part 2 of our examination of the likely cost of autonomous vehicles examines the potential impacts of fleets of inexpensive AVs on the nation’s cities. At 30 to 40 cents per mile, AVs would be highly competitive not just with cars, but many transit trips as well. The key limiting factors will continue to be the highly peaked demand for urban transportation and the limited capacity of city streets. A la carte per mile pricing could make travelers more willing to consider alternate modes on a trip-by-trip basis, and may reduce car ownership. Fleets of autonomous vehicles may also disrupt the business models—and fiscal stability—of road-building and transit operating agencies.

This week’s must reads

1.  President Obama on zoning:  OK urban policy wonks, try to get accustomed, however briefly, to the glare of the national media spotlight. On Monday, President Obama proclaimed himself a YIMBY (Yes in my backyard) as his administration weighed in on how local zoning decisions affect housing affordability.  The administration released a “toolkit” of policy ideas that will be very familiar to City Observatory readers, calling for fewer limitations on building more densely as a way of lessening housing costs. Importantly, the toolkit calls for communities to consider eliminating minimum parking requirements.  To paraphrase Vice President Biden, this is a big deal.

YIMBY-in-chief

2.  The high price of affordable housing.  One component of the solution to housing affordability is building more subsidized housing. But in practice, the impact of subsidized units is limited by very high construction costs.  In Portland, alt-weekly Willamette Week reports that the city is contracting with non-profits to rehab existing housing units at a cost more than double the per square foot cost of new construction. Private builders say the city fails to put much weight on cost-effectiveness and that it chooses “cool projects with lots of expensive bells and whistles.” The article estimates that if the city could reduce its costs of construction by even 10 percent, it could have built an additional 1,400 units of affordable housing in the past decade.

3. The mythology of HOT lanes.  The most widespread practical application of road pricing is the growing implementation of high-occupancy toll (HOT) lanes. At Streetsblog, Kevin Posey asks some hard questions about whether HOT lanes are living up to the high policy expectations that have been set for them. He appraises claims that HOT lanes reduce congestion in general purpose lanes, and asks whether they overtax the capacity of high occupancy lanes, and whether they encourage (or discourage) transit and carpooling.

 

New knowledge

1.  The impact of carbon taxes.  Since 2008, British Columbia has had a real, live carbon tax. The tax has been gradually raised and now works out to about $30 per ton (rule of thumb: when it comes to carbon taxes, a dollar a ton works out to about 1 cent per gallon, because a gallon of gas, when burned, produces about 20 pounds of carbon dioxide).  Werner Antweiler and Sumeet Gulati of the University of British Columbia explore how the carbon tax has influenced driving and vehicle purchases in the province. They find that the carbon tax accounts for about half of the 15 percent decline in per capita gasoline consumption in BC since 2008, a decline that is the product of both somewhat more efficient vehicles and less driving.
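The rule of thumb in the parenthetical is just unit conversion, and it checks out with a few lines of arithmetic (the 20 pounds-of-CO2-per-gallon figure is the approximation used above):

```python
# Convert a carbon tax quoted in dollars per metric ton of CO2 into
# cents per gallon of gasoline, using the ~20 lbs CO2/gallon approximation.
LBS_CO2_PER_GALLON = 20.0
LBS_PER_METRIC_TON = 2204.6

def tax_cents_per_gallon(tax_per_ton_usd):
    """Implied gasoline tax, in cents per gallon, from a $/ton carbon tax."""
    tons_per_gallon = LBS_CO2_PER_GALLON / LBS_PER_METRIC_TON
    return tax_per_ton_usd * tons_per_gallon * 100  # dollars -> cents

print(round(tax_cents_per_gallon(1), 2))   # a $1/ton tax: roughly 0.91 cents/gallon
print(round(tax_cents_per_gallon(30), 1))  # BC's ~$30/ton tax: roughly 27 cents/gallon
```

So BC’s $30 per ton tax adds somewhere in the neighborhood of a quarter per gallon at the pump, which is why the measured effect on driving, while real, is modest.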

2.  Oil prices and housing markets.  A new working paper from the Federal Housing Finance Agency looks at how oil prices affect the pattern of home prices within metropolitan areas. Urban economists have long documented the presence of a “rent gradient”: home and land prices tend to decline as distance from the central business district increases. Using zip code level data on home prices, William Larson and Weihua Zhao show that as oil prices increase—raising the cost of transportation—the rent gradient gets steeper (prices for homes closer to the center appreciate relative to more peripheral homes).

3.  Central neighborhood change.  Nathaniel Baum-Snow and Daniel Hartley undertake an elaborate statistical decomposition of the factors driving population change in urban centers in a new working paper from the Federal Reserve Bank of Chicago. Looking at very small downtown core areas (a 2 kilometer/1.2 mile radius around the central business district), they examine who’s moving in and who’s moving out.  While population in these centers declined in the aggregate from 1980 to 2000, it rebounded sharply from 2000 to 2010, driven primarily by the in-migration of white, college-educated residents. Less educated minorities moved out of these neighborhoods throughout the entire 1980-2010 period.

Chart shows change in fraction of adults with a college degree, by distance (kilometers) from the center of the central business district, by decade.

The Week Observed: September 23, 2016

What City Observatory did this week

1. America’s most creative metros, ranked by Kickstarter campaigns. One of the most popular ways to raise funds for a new creative project–music, a video, an artistic endeavor, or even a clever new product–is Kickstarter. Website Polygraph.cool has created an impressive visualization of nearly 100,000 Kickstarter campaigns. We use that data to rank US metros by the number of Kickstarter campaigns per capita. The unsurprising leaders: Austin, Portland and San Francisco. See how your city compares, and use Polygraph’s data visualization to identify the top indie entrepreneurs in your area.


2. Successful cities and the civic commons.  Cities are more than just collections of businesses, buildings and infrastructure:  the social fabric of cities–the way they enable us to easily connect with one another–is important both to their economic function and to the civic realm. As we pointed out in our report Lost in Place, in many ways the social fabric of cities has been stressed and torn by growing segregation and the privatization of many parts of our lives, from travel to entertainment to leisure. But there are growing signs of a revival of investment in the public realm that tries to strengthen the social and civic functions of cities. Knight Foundation and others have launched a new initiative to reimagine the civic commons, with targeted funding for five cities around the country.

3. Caught in the prisoner’s dilemma of local planning. While the principle of local control has a lot of political resonance and popular support, when it comes to meeting our housing challenges, it creates a terrible conundrum. While every neighborhood in a city or metropolitan area would benefit from more affordable housing if greater density were more widely allowed, each individual neighborhood is reluctant to be the first (or only) place that allows more density, for fear that it alone will bear the brunt of change. This prisoner’s dilemma dominates many local zoning fights and is very much in evidence as New York tries to implement its new mandatory inclusionary zoning program.

4. Lessons in supply and demand: Housing Market edition. We recognize that many people bristle at the mention of economic terminology, but in our view it’s hard to make sense of our current housing affordability problems without explicitly thinking about supply and demand. Specifically, the demand for urban living has increased rapidly, and continues to do so; meanwhile the supply of great urban neighborhoods–and housing in those neighborhoods–has grown only slowly. The inevitable result is higher rents. Tackling our “shortage of cities” is a fundamental challenge.

This week’s must reads

1. Another NYC Affordable Housing Project gets shot down. In New York City, a proposal to build 209 units of affordable housing in Queens, in an area currently zoned for manufacturing, has apparently died, due to opposition by the local city councilor. As we’ve noted, the Achilles heel of Mayor de Blasio’s mandatory inclusionary zoning program is the need for project-by-project up-zonings. So far, in both of the cases that have come forward, the up-zonings have provoked neighborhood outcry and led local city councilors to oppose the project, which, given the City Council’s deference to members on issues in their own districts, is the kiss of death. Yet more evidence that hyper-localism in decision-making makes it extraordinarily difficult to tackle housing affordability.

2. The Jane Jacobs Centennial. Writing at the New Yorker, Adam Gopnik uses his review of two recently published biographies of Jane Jacobs to assess her contribution to our understanding of cities. He argues that some of her insights haven’t weathered the test of time well, but in many ways her work continues to be as fresh and provocative as when she wrote it. And in important ways it’s prescient about the situation we now find ourselves in, as Gopnik notes: “The new crisis is the ironic triumph of Jacobs’s essential insight. People want to live in cities, and when cities are safe people do. Those with more money get more city than those with less.”

3. Bonus Must Watch: City Observatory’s Daniel Kay Hertz on Chicago Newsroom. You’ve read his commentaries here on City Observatory, now you can watch him on video as well, discussing a range of issues from urban sprawl, to changing demographics, and even the optimal bus-boarding process (the latter really gets him going). A full hour of urban wonkiness.


New knowledge

1. How zoning has re-shaped American cities.  For decades, the conjecture among many academics was that zoning simply ratified the kinds of land use patterns that were already in place. But a new study of Chicago, comparing land use patterns prior to the adoption of that city’s zoning code in 1923 with current development, shows that zoning strongly influenced subsequent development. In a new NBER working paper, entitled Zoning and the Economic Geography of Cities, Allison Shertzer, Tate Twinam and Randall Walsh examine parcel level data on land uses and market values from the 1920s and look to see how they are related to today’s development patterns. Their key findings: zoning does have an impact, and may be more influential in the location of different activities than either geography or transportation networks.  They also find that exclusive residential zoning tends to drive up home prices. In addition, zoning seems to have greatly reduced mixed use development: in 1922, 82 percent of the developed blocks in Chicago had at least some commercial activity.

2. How city center service-exporting businesses drive the UK economy. A new report, Trading places: Why firms locate where they do, from the UK Centre for Cities looks at the location and growth of different industries. It divides businesses into those that export goods, those that export services and those that serve local demand. This is especially important for the service exporting sector, which has powered the UK economy as goods production has continued to decline. Fully 32 percent of Britain’s high-skilled jobs in service exports are located in city centers, more than double the proportion (14 percent) of all jobs. The report concludes with some pointed advice for new Prime Minister Theresa May: “The geography of Britain’s jobs and firms means that supporting growth in our cities will become increasingly important for improving the performance of the national economy.”

3. The 2016 Census Planning Database.  The Census Bureau has released its annual compendium of geographically detailed data on population demographics and housing, designed for use by planning technicians. This isn’t new information per se (the most recent data are taken from the 5-year American Community Survey results for 2010-2014). What it does do is assemble this information in a convenient form, with census tract and block group estimates, and with baseline comparisons for the same geographies to Census 2010.  Even by Census Bureau standards this is a giant mass of data; the national block group file alone is 160 MB.

Pollution and poor neighborhoods: A blast from the past

It’s been widely noted that poor neighborhoods tend to bear a disproportionate share of the exposure to environmental disamenities of all kinds. In the highway building era of the 1950s and 1960s, states and cities found it cheaper and politically easier to route new roads through poor neighborhoods, not only dislocating the local populace, but exposing the remaining residents to higher levels of air pollution. So, as environmental justice advocates regularly point out, we’ve made policy decisions that shift the burden of pollution onto the poor.

Hazardous to your health–and your neighborhood (Flickr: Otodo)

It’s widely recognized that environmental pollution (like other disamenities, such as high crime rates) depresses property values and rents.  If a neighborhood is highly polluted or crime-ridden, people with the economic wherewithal to move elsewhere typically will. When they abandon dirty or dangerous places, rents fall, and by definition, the residents of these neighborhoods disproportionately become those who lack the resources to afford a better alternative:  the poor.  While it is undoubtedly the case that polluting activities tend to locate near poor neighborhoods, it also turns out to be the case that the poor end up living in more polluted places.

A new study from the University of St Andrews–“East Side Story: Historical Pollution and Persistent Neighborhood Sorting”–by Stephan Heblich, Alex Trew and Yanos Zylberberg provides an interesting historical perspective on this process.  It has long been noted that the “East End” of many industrial cities contains the greatest concentrations of poverty. In these cities, the prevailing wind blows from the west, with the result that smoke and other air pollutants from the city tend to be most severe in the east (and air quality is generally better in the west).  By digitizing data on the locations of Victorian-era smokestacks, and combining those data with modern atmospheric modeling, the authors were able to estimate 19th century pollution levels by neighborhood, and examine the correlation between concentrations of poverty and air pollution.  (They proxied income levels by looking at the occupational composition of different neighborhoods, an approach akin to that used by Richard Florida.)

The study shows that variations in pollution levels are significant factors in explaining the distribution of poverty within cities in the 19th century.  The authors conclude:

The negative correlation is both economically and statistically significant at the peak of pollution in 1881: pollution explains at least 15% of the social composition across neighborhoods of the same city.

This, of course, is an interesting finding in its own right, but there’s more.  Since the peak of unfettered coal burning a century ago, Britain and other countries have done a lot to reduce air pollution.  Many of the mills and power plants that produced all that Victorian pollution are long since gone, and the air in these formerly polluted neighborhoods is much cleaner. What’s interesting is that those 19th-century levels of pollution are still correlated with concentrations of poverty today.  The authors find that 1881 pollution levels are a statistically significant predictor of the distribution of poverty over the past decade.

This suggests that pollution played a critical role in initially establishing the concentration of poverty in these neighborhoods, but that once established, poverty was self-reinforcing. Pollution was the initial disamenity that attracted the poor and discouraged the rich; once the neighborhood was poor, poverty itself became the disamenity that fueled the sorting process.  Another study, using historical data on marshes in New York, finds a similar historical persistence of poverty.  Economist Carlos Villereal has an interesting paper entitled “Where the Other Half Lives: Evidence on the Origin and Persistence of Poor Neighborhoods from New York City 1830-2012.” He finds that in the 19th century, the lower-lying marshy areas of Manhattan were regarded as less desirable, and generally were concentrations of poverty.  Many of these same patterns persist even today.

Ownership and Sorting

The St. Andrews study offers one other surprising insight about neighborhood change. One factor that over time ameliorated the concentration of poverty in UK cities was the construction of “council housing”–what we in the US would call public housing. In general, council housing was constructed in a very wide range of neighborhoods, was in public ownership, and was rented out to its tenants.  Because it was built in both the legacy polluted/poor neighborhoods and in less poor neighborhoods, it had the effect, over time, of reducing concentrated poverty. One of the reforms of the Thatcher era was shifting council housing to an ownership model–transferring title to tenants, and then letting them decide to stay, or to sell the property to others.  The St. Andrews study shows that the shift to the ownership model actually reinforced the concentration of poverty, as owners of former council houses in desirable, low-pollution neighborhoods sold them to higher income households. Meanwhile, council housing in formerly polluted, chronically impoverished neighborhoods wasn’t so attractive to higher income households, and so remained in the hands of lower income families. While the initial owners of the council housing benefited financially from being able to sell their appreciated homes, the formerly affordable housing was no longer available to other families of modest means, and as a result, these neighborhoods became more economically homogeneous.  As the authors conclude:

While the original intent of Thatcher’s policy was to reduce inequality by providing a route for working class households to step on the housing ladder, its consequence appears to have been to lengthen the shadow of the Industrial Revolution and set back the slow decay of neighborhood sorting. Our estimates suggest that about 20% of the remaining gradient between polluted and spared neighborhoods can be attributed to this reform.

The St. Andrews study is an eclectic and clever combination of history and economics. The authors have pioneered some fascinating techniques for digitizing historical data, shed additional light on tipping point dynamics, and even managed to include references to the evolution of moths in response to coal pollution. It’s well worth a read.

Editor’s Note:  Thanks to Daniel Kay Hertz for flagging Carlos Villereal’s New York City study.

Lessons in Supply and Demand: Housing Market Edition

 

It’s apparent to almost everyone that the US has a growing housing affordability problem, and it’s generating more public attention and public policy discussion. Recent proposals to address housing affordability in California by Governor Jerry Brown and in New York by Mayor Bill de Blasio have stumbled in the face of local opposition. It’s a delicate moment in housing policy debates.

So now we’re being told, by our very smart friends at the Sightline Institute, that we ought not to talk about urban housing problems using the terms “supply and demand.”  Excuse us if we politely, if firmly–and wonkily–choose to disagree.  Housing affordability problems, in Seattle, San Francisco, and just about everywhere have everything to do with supply and demand.

No escaping the laws of demand and supply.

 

OK, sure:  for general audiences, saying “supply and demand” may cause some people’s eyes to glaze over, and for others, it may be taken as a sure sign that one has succumbed to a heartless neo-liberal paradigm.  For many people, we know, any mention of economics recalls a painfully unpleasant undergraduate course. And Sightline has prudent advice about how to talk about the problem in the media.  They say:

Avoid supply and demand language; opt instead for messages that describe the housing shortage, such as “building enough homes,” “plenty of housing,” and “a range of housing choices”

But for us at City Observatory, this is a teachable moment. The demand for cities and for great urban neighborhoods is exploding.  Americans of all ages, but especially well-educated young adults, are increasingly choosing to live in cities.  And in the face of that demand, our ability to build more such neighborhoods and to expand housing in the ones we already have is profoundly limited, both by the relative slowness of housing construction (relative to demand changes), and by misguided public policies that constrain our ability to build housing in the places where people most want to live–to the point that, in many communities, we’ve simply made it illegal to build the dense, mixed-use, walkable neighborhoods that are widely regarded as the most desirable.

Our key urban problems–housing affordability, concentrated poverty, gentrification, long commutes–are all either directly caused or significantly worsened by this imbalance between housing supply and demand.  

But there’s a studied disbelief in many media outlets that market forces have anything to do with housing. NIMBYs believe that blocking new construction will keep prices down, when the opposite is true.  As a result, we paradoxically pursue strategies that make housing affordability problems worse.

Two recent bits of evidence remind us that supply and demand are very much at work.  A terrific analysis, written by Financial Times reporter Robin Harding and echoed by Vox’s Matthew Yglesias shows how even in a big dense city, increasing supply to meet demand keeps prices in check. In Tokyo, one of the world’s largest and densest metropolises, housing prices have barely budged in the past two decades, because Japan makes it relatively easy to build new housing. Local jurisdictions and neighbors don’t have effective veto power over new development, so when demand increases, supply responds relatively rapidly, and as a result house prices remain much more affordable.

Closer to home, Yardi Matrix, a real estate data research firm, tracks and regularly reports on changes in rents and housing occupancy in major markets around the nation.  They’ve noted that nearly 300,000 new apartments will be completed this year. As a result, in many markets supply is finally catching up to demand, and rental price inflation is slowing in places like San Francisco, Denver, Austin and Houston.  For example, after several years of double-digit growth, rents in San Francisco are up just 3.5 percent in the last 12 months, according to Yardi. In some places, such as the oil patch, where demand has declined due to layoffs in the energy sector, rental prices have actually fallen.

While these trends are hopeful signs, and while they clearly illustrate that the market forces of supply and demand are very much at work, there’s still much to be done to re-work our public policies to address affordability, urban livability and equity.  We don’t expect the demand for urban living to abate any time soon–in fact, there’s good reason to believe that it will continue to increase.  And it’s still the case that we have a raft of public policies – from restrictions on apartment construction and density, to limits on mixed use development, to onerous parking requirements, and discretionary, hyper-local approval processes – that make it hugely difficult to build new housing in the places where it’s most needed.

Many of the problems we encounter in the housing market are a product of self-inflicted wounds that are based on naive and contradictory ideas about how the world works.  We believe that housing should both be affordable and a great investment (which is an impossible contradiction), and we tend to think the laws of supply and demand somehow don’t apply to one of the biggest sectors of the economy (housing). At their root, our housing problems–and their solutions–are about understanding the economics at work here. So in our view, it’s definitely time to talk about supply and demand.

Kickstarting your local creative economy

One of the cleverest adaptations of web technology is the development of crowd-sourced funding for new products and business ideas.  The biggest of these crowdfunding platforms is Kickstarter, which since its launch in 2009 has generated funding for ideas like the Pebble smartwatch, the “Coolest” cooler and a revival of the Mystery Science Theater 3000 television series.  Most Kickstarter campaigns are for relatively small amounts, raising between $1,000 and $10,000, and are popular ways of funding creative projects ranging from music albums and films to games and art.  Since its inception, Kickstarter has raised more than $2.6 billion for more than 112,000 projects.

The website Polygraph.cool has used data on the location and industry of Kickstarter campaigns to create a city by city, industry-by-industry visualization of these business plans.  Using data from more than 90,000 Kickstarters, their interactive infographics display the relative size of the campaign (based on money pledged to each project), and projects are color coded by industry (music, film and video, design, publishing, art, theater and games, among other categories).

In these diagrams, the overall size of each city’s constellation of dots corresponds to the volume of funding raised via Kickstarter, and the size of each dot represents the value of pledges.  The concentration of colors in each diagram is indicative of the industrial focus of Kickstarter campaigns in that city. You can mouse over any dot on the diagram to see details on the particular project.

kickstarter

Not surprisingly, the nation’s largest metro areas (New York and Los Angeles) account for the largest number of Kickstarter campaigns.  New York has more than 11,000 Kickstarters; Los Angeles has more than 8,000. The campaigns in some cities are heavily skewed toward local industrial specializations–Nashville’s, for example, consist mostly of music projects (the red dots in the chart above).  It’s worth exploring the infographics for different cities to see the number and variety of projects funded in different locations.

At City Observatory, we focus on metropolitan areas, so we’ve tabulated Polygraph.cool’s Kickstarter data by metro area (aggregating multiple cities within a metro, such as Tempe and Scottsdale along with Phoenix, and Cambridge and Lawrence along with Boston).  To get an idea of the relative importance of Kickstarter to each metro economy, we’ve computed the number of Kickstarter campaigns in each metropolitan area per 100,000 population. As you can see, there’s wide variation among metropolitan areas.

 

Perhaps unsurprisingly, many of the usual suspects of the creative economy show up at the top of the chart. On a per capita basis, Austin, Portland and San Francisco have the highest number of Kickstarter campaigns, with between 80 and 100 campaigns per 100,000 population. Among metropolitan areas with one million or more population, the median metropolitan area has about 20 campaigns per 100,000 population. The metro areas with the fewest Kickstarters per capita include Riverside, Virginia Beach and Hartford, each of which has fewer than 8 campaigns per 100,000 population.

As the authors of the Polygraph visualization note, many of our conventional yardsticks for measuring the creative economy are dominated by data sources that capture large-scale enterprises, but not grassroots and DIY activities. Because Kickstarter has few barriers to entry, and is accessible even to individual artists, it’s one way to measure creative efforts that simply don’t show up in other sources. So have a look at the Kickstarter data for your city, to see how it stacks up.

Portland considers inclusionary zoning

What should cities do to tackle growing housing affordability problems? Is inclusionary zoning a good way to provide more affordable housing, or will it actually worsen the constrained housing supply that’s a big cause of higher rents?  

Will Portland build more? (Flickr: A. Davey)

 

In the next few months, the city of Portland, Oregon will be considering the terms of a new inclusionary zoning (IZ) policy. Like similar policies in other cities, the Portland IZ proposal will likely require developers of new multi-family housing projects to set aside some portion of newly built units to be rented at a discount from market rates. Earlier this year, the Oregon Legislature repealed the state’s ban on inclusionary housing requirements. (Oregon and Texas were reportedly the only two states that explicitly prohibited mandatory inclusionary zoning).

On September 12, the Northwest Chapter of the Urban Land Institute held a forum to discuss inclusionary zoning.  I was one of the panelists speaking at this event:  here’s a quick synopsis of my remarks and some observations about the presentations and discussion.

First, despite the enthusiasm among legislators and housing advocates for inclusionary zoning, there’s precious little evidence that it’s had a meaningful impact on alleviating the shortage of affordable housing in major US cities.  As we’ve reported at City Observatory, at least to date, these programs have produced remarkably few units in some of the nation’s largest and strongest real estate markets.

The bigger concern about inclusionary zoning is that it tends to drive up the cost of building new housing, thereby restricting supply, and actually aggravating market-wide affordability problems.  While the comparative handful of new units set aside for low or moderate income households are visible, there is an invisible cost in the form of units not built, and consequently, higher market rents for everyone. Whether, and how much, inclusionary zoning drives up costs is a subject of intense debate.

Mike Wilkerson of ECONorthwest presented a summary of his firm’s recent analysis of inclusionary policies (commissioned by the Urban Land Institute). They constructed pro-forma financial assessments of several common housing types (three-story stacked flats, “four-over-one” podium apartments and concrete and steel apartment towers) and examined their financial feasibility under a range of assumptions about market rents, land costs, incentives (tax breaks and density bonuses), and set-aside levels. The full report is well worth a read–and we’ll explore it in detail in a future commentary.

In his remarks, Wilkerson used the ECONorthwest analysis to examine the likely impacts of the proposed inclusionary zoning in the Portland market.  Portland has experienced some of the fastest rental growth in the nation in the past year, especially in close-in urban neighborhoods.  A key takeaway from Wilkerson’s remarks:  inclusionary zoning requirements are likely to skew developer choices away from higher density construction (like apartment towers) and toward lower density development (stacked flats and podium development). The combination of higher construction costs and the higher market rents needed to make towers pencil out means that meeting IZ requirements is disproportionately burdensome for denser construction. This is especially important in Portland, and its downtown and central neighborhoods, where the city’s comprehensive plan envisions high-rise density as a key method for meeting expected population growth.  On its face, the ECONorthwest findings should give policymakers in Portland pause about moving forward with inclusionary zoning.

But there’s an additional wrinkle. As thoughtful and comprehensive as the ECONorthwest analysis is, it is still just one firm’s pro-forma model of development costs. And there’s a good deal of uncertainty about some key assumptions that necessarily drive this kind of analysis.  

The ECONorthwest study joins a growing body of research that attempts to model current development costs and the impacts of inclusionary (and other) requirements on the cost and likelihood of housing development. We profiled several of these at City Observatory in July. Each of these analyses represents a solid, fact-based effort, but they come to quite different conclusions about whether and under what circumstances inclusionary requirements are feasible. As a result, no one knows for sure what will happen when these policies are implemented.

Uncertainty is a big risk.

A big challenge with Portland’s proposed inclusionary zoning program is that no developer can know, with any certainty, how big a cost the inclusionary zoning requirements impose, or how easy or difficult it will be to get development approved under the new rules. As experience with New York’s new mandatory inclusionary zoning has shown, the entire program can be tripped up in the project-level approval process. As a result, many developers are likely to take a wait-and-see attitude–to let others go first, and then make an investment decision when the costs and contours of the new system are better understood.  The effect is almost certain to be a falloff in housing investment. Moreover, this could happen even if the program itself is well-designed and has incentives and cost offsets that lower developer costs; until this is proven in practice, it’s likely to be a deterrent to investment. And paradoxically, because less new housing will be built, prices are likely to be higher than they otherwise would be–worsening the affordability problem (except for those fortunate enough to get a reduced-price apartment).

It may be that the city will decide that inclusionary zoning requirements are fundamentally at odds with its development objectives (higher density in the urban core) or judge them to be counterproductive to solving the affordability problem.  But if it does decide to go ahead with an inclusionary zoning ordinance, there are several design features that might minimize the risk of the potential negative effects of such a policy. First, it could consider a long phase-in of the inclusionary requirements, to give developers time to carefully study and fully understand the costs and implications of the policy, and the effects of its incentives. Second, the city could establish simple and clear approval standards for inclusionary projects; discretionary approvals and complex review processes are likely to magnify uncertainty.  Third, at least initially, it should set a relatively low in-lieu fee for developers who want to opt out of building inclusionary units on-site. The availability of the in-lieu fee gives developers financial certainty–they know the costs the ordinance imposes–and is therefore likely to lessen the chilling effect on investment associated with the uncertainty of a new program. The fee could escalate over time to increase the incentive to build housing on site, once the costs and effectiveness of the program are demonstrated.

As we’ve frequently said at City Observatory, the underlying problem of affordability in Portland (and around the country) stems from our shortage of cities.  Simply put, the demand for urban living is growing much faster than the supply of great urban spaces. This is a strong market signal that we need to be improving urban neighborhoods in more places, and building more housing in the places that are already in high demand. There’s growing evidence in many markets that the supply response–building more apartments–is beginning to blunt the rate of rent increases. The last thing any city concerned about affordability should do is get in the way of increasing housing supply.

A standard rule of medicine–which turns out to be good advice for public policy, as well–is “first, do no harm.” Portland’s City Council would be well advised as it considers an inclusionary zoning policy to design one that doesn’t, even inadvertently, sidetrack the current supply response in the housing market.

Cities are powering the rebound in national income growth

Behind the big headlines about a national income rebound: thriving city economies are the driver.

As economic headlines go, it was pretty dramatic and upbeat news:  The US recorded a 5.2 percent increase in real household incomes, not only the first increase since 2007, but also the biggest one-year increase ever recorded. It’s a signal that the national economy is finally recovering from the Great Recession (the worst and most prolonged economic downturn in eight decades).

Fittingly, The Wall Street Journal headline proclaimed the good news:

wsj_income_gain_headline

But dig deeper into the data, and there’s an even more interesting development: The big growth in US incomes was powered by the growth of incomes in cities.  The following chart shows the inflation-adjusted change in incomes between 2014 and 2015 for the nation’s cities, suburbs and rural areas.  The key numbers here are seven, four and minus two:  the average city household’s income grew seven percent, the average suburban household’s income grew four percent, and the average rural household’s income declined by two percent. (NOTE: This two percent decline appears to be an error based on changed geographic definitions of what constitutes rural areas; see our comment below.)  The more urban you were in 2015, the faster your income rose.

median_income_city2015

Source:  Census Bureau, Income and Poverty in the United States: 2015

For those who follow this data closely, this is yet another strong piece of evidence that the US national economy is being powered by what’s happening inside cities.  If the nation’s incomes had grown only as fast as those in rural and suburban areas, the national income increase would have been cut roughly in half, to an underwhelming 2.5 percent. The gain in city incomes hasn’t escaped the attention of other analysts. At Vox, Tim Lee flagged the disparity between city and suburban and rural income gains, summarizing it as “a fundamentally urban recovery.”
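The back-of-the-envelope logic here is a population-weighted average: overall growth is the sum of each geography’s growth rate times its share of households. A minimal sketch, using the seven/four/minus-two growth rates from above but purely illustrative household shares (the actual Census weights differ):

```python
# National income growth as a household-weighted average of city,
# suburban, and rural growth. Weights are illustrative assumptions.
growth = {"city": 0.07, "suburb": 0.04, "rural": -0.02}
weight = {"city": 0.33, "suburb": 0.50, "rural": 0.17}  # assumed shares

national = sum(weight[g] * growth[g] for g in growth)

# Counterfactual: what if city incomes had only grown as fast as suburbs?
cf_growth = dict(growth, city=growth["suburb"])
national_cf = sum(weight[g] * cf_growth[g] for g in cf_growth)

print(f"weighted national growth:  {national:.1%}")
print(f"if cities matched suburbs: {national_cf:.1%}")
```

Even with these stand-in weights, erasing cities’ outperformance knocks a full percentage point off the total; with the actual household shares, the gain is cut roughly in half, as noted above.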

As we pointed out last year, urban centers are, for the first time in decades, gaining jobs faster than their surrounding peripheries.  Measured by job growth, large metropolitan areas–those with a million or more population–have grown much faster than smaller metros and rural areas. The shift to the center is also reflected in housing prices; homes in vibrant urban centers have registered significant increases relative to the price of suburban homes.

There’s an unfortunate tendency to portray this data in a “winners” and “losers” frame: Vox headlines its story as cities getting richer and rural areas getting left behind. But really what’s at work here is a fundamental shift in the forces that are propelling national economic growth. The kinds of industries that are growing today, in technology, software and a range of high value services, are industries that depend on the talent, density and vibrancy of city economies for their success. It’s not that we’ve somehow simply reallocated some activities that could just as easily occur in rural areas to cities; much of this growth is uniquely the product of urban economies.

A particularly misleading connotation of the word “recovery” is that it seems to suggest that in the wake of a recession, economies rebound simply by restoring exactly the kinds and patterns of jobs and industries they lost. What really happens is what Joseph Schumpeter famously called “creative destruction”: the economy grows by creating new ideas, jobs, and industries, often in new locations. As we shift increasingly to a knowledge-driven economy, that process is occurring most and fastest in the nation’s cities, where talented workers are choosing to live, and where businesses seeking to hire them are starting, moving and expanding.

This is not your father’s or your grandmother’s recovery. The US economy is changing in a fundamental way to be more urban-centered and urban-driven. It’s an open question whether we’ll recognize that this is now the dynamic that drives the national economy, and fashion policies that capitalize on cities as a critical source of economic strength.

A few technical notes

The data for these estimates come from the Current Population Survey which is used to generate national estimates, rather than the more fine grained geographies reported in the American Community Survey. For its annual report on income and poverty, the Census Bureau provides only a limited geographic breakdown of income data. Specifically, they report the differences in income and poverty for metropolitan and non-metropolitan areas, and within metropolitan areas, the differences between “principal cities”–generally the largest and first named city in a metro area–and the remainder of the metropolitan area.  Although city boundaries are less than ideal for making geographic comparisons at the national level, it is a rough, first-order way of charting the different trajectories of cities and suburbs.

When the 2015 ACS data becomes available later this year, we and others will want to examine it more closely to better understand the broad trends from this week’s report.

UPDATE: September 19:  Census Report likely under-estimated rural income growth

The New York Times Upshot points out that the reported decline in incomes in rural areas is probably an error, due to the changing definition of what constitutes “rural” areas in the Current Population Survey.

 

 

Counting women entrepreneurs

Entrepreneurship is both a key driver of economic activity and an essential path to economic opportunity for millions of Americans. For much of our history, entrepreneurship has been dominated by men. But in recent decades, women have overcome many of the social and other obstacles to entrepreneurship, and as a result, the number of women active in starting and growing their own businesses has been increasing.

A new survey, conducted by the Census Bureau in cooperation with the Ewing Marion Kauffman Foundation, provides a rich source of data about the economic contributions of women-owned businesses. The Annual Survey of Entrepreneurship is the first iteration of a survey that asks detailed questions about key demographic characteristics of business owners, including gender, race and ethnicity, and veteran status. And unlike other business data, the entrepreneurship survey reports data by age of business, allowing us to examine separately the economic contributions of newly formed businesses.

The survey focuses on businesses with paid employees, and so generally excludes self-employed individuals working on their own. The survey reports that in 2014 there were more than 5.4 million businesses with a payroll in the United States. Of these, about 270,000 businesses were public corporations (or other business entities for which the gender or other demographic characteristics of owners could not be ascertained). These businesses employed almost 60 million workers (52 percent of total payroll employment).  The remaining 5.1 million firms with identifiable owners employed about 55 million workers.  The survey finds that nearly 1.1 million businesses, or 20.4 percent of those with individually identifiable owners, were owned exclusively by women, and employed about 8.5 million workers.  About 10.8 percent of these women-owned businesses had started in the past two years, compared to about 8.9 percent of all employer firms.  Women-owned businesses are found in all economic sectors, but are disproportionately represented in education, health and social services, where they comprise about 28 percent of all employer businesses.

The report also offers data on business ownership patterns for the 50 largest US metropolitan areas.   We thought it would be interesting to see how different areas ranked in terms of the share of all businesses with employment that were owned by women.

Here’s a listing of the number of women-owned businesses, and the share of total businesses owned by women, for these fifty metropolitan areas.

 

Among the metropolitan areas with the highest proportions of women-owned businesses with a payroll are Denver, Atlanta and Baltimore, where nearly 1 in 4 businesses (for which demographic characteristics of owners could be identified) are owned by women.  The metropolitan areas with the lowest fraction of women-owned businesses include Salt Lake City, Memphis, and Birmingham, where only about 17-18 percent of businesses are owned by women.

When we map the fraction of women-owned businesses, some geographic patterns become apparent.  In general, the proportion of women-owned businesses is higher in Western metropolitan areas, and in many Southern metropolitan areas, particularly in Florida, Texas and Georgia.  In the Northeast, the Midwest and much of the rest of the South, the share of women-owned businesses tends to be much smaller. Washington and Baltimore appear to be outliers in their geographic region, as do St. Louis and Kansas City. From Philadelphia to Boston, the Northeast corridor has below-average shares of women-owned businesses.

In addition to identifying the gender of business owners, the survey also provides insight on other ownership characteristics, including race and ethnicity; we’ll examine some of these findings in a future commentary. The Census Bureau plans to conduct its new survey of entrepreneurs on an annual basis. This promises to be a useful way of benchmarking efforts to draw more Americans of every stripe into business ownership.

McMansions Fading Away?

Just a few months ago we were being told–erroneously, in our view–that the McMansion was making a big comeback. Then, last week, there was a wave of stories lamenting the declining value of McMansions. Bloomberg published: “McMansions define ugly in a new way: They’re a bad investment–Shoddy construction, ostentatious design—and low resale values.”  The Chicago Tribune chimed in: “The McMansion’s day has come and gone.” So where are these monster homes headed?

downton_abbey
Even “Downton Abbey” is past its heyday (Highclere Castle)

First, as we’ve noted, it’s problematic to draw conclusions about the state of the McMansion business by looking at the share of newly built homes of 4,000 square feet or larger (one of the standard definitions of a McMansion). The problem is that in weak housing markets (such as what we’ve been experiencing for the better part of a decade in the wake of the collapse of the housing bubble), the demand for small homes falls far more than the demand for large, expensive ones. So the share of big homes increases (as does the measured median size of new homes). And indeed, that’s exactly what happened post-2007: the number of new smaller homes fell by 60 percent, while the number of new McMansions fell by only 43 percent, so the big homes were a bigger share (of a much smaller housing market).  Several otherwise quite numerate reports gullibly treated this increased market share as evidence of a rebound in the McMansion market; it isn’t.
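The composition effect described above is easy to verify with a toy calculation. The starting counts here are hypothetical; the 60 and 43 percent declines are the ones cited above.

```python
# Apply the cited declines to hypothetical starting counts to show how
# the McMansion *share* can rise even as McMansion *numbers* fall.
small_before, mc_before = 1_000, 100     # hypothetical counts of new homes
small_after = small_before * (1 - 0.60)  # smaller homes fell 60 percent
mc_after = mc_before * (1 - 0.43)        # McMansions fell 43 percent

share_before = mc_before / (small_before + mc_before)
share_after = mc_after / (small_after + mc_after)

print(f"McMansion share before: {share_before:.1%}")  # about 9.1%
print(f"McMansion share after:  {share_after:.1%}")   # about 12.5%
```

Fewer McMansions were built, but they made up a larger slice of a much smaller market–exactly the pattern that was misread as a comeback.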

We proposed a McMansions-per-millionaire measure as a better way of gauging the demand for these structures, and showed that the ratio of big new houses to multi-millionaire households did indeed peak in 2002, and has failed to recover since. We built about 16 McMansions per 1,000 multi-millionaires in 2002, and only about 5 in 2014.

Another way of assessing the market demand for behemoth homes is by looking at the prices they command in the market. What triggered these recent downbeat stories about McMansions was an analysis entitled “Are McMansions Falling Out of Favor?” by Trulia’s Ralph McLaughlin, looking at the comparative price trajectories of 3,000 to 5,000 square foot homes built between 2001 and 2007 and all other homes in each metropolitan area.  McLaughlin found that since 2012, the premium that buyers paid for these big houses fell pretty sharply in most major metropolitan markets around the country.  Overall, the big-house premium fell from about 137 percent in 2012 to 118 percent this year.


In a way, this shouldn’t be too surprising. Part of the luster of a McMansion is not just its size, but its newness. Like new cars, McMansions may have their highest value when they leave the showroom (or when the “Street of Dreams” moves on). According to the Chicago Tribune’s reporting on this story, today’s McMansion buyer wants dark floors, gray walls, and white kitchen cabinets: very different materials and color schemes from last decade’s big houses. As they age, we would expect all vintage-2005 houses to depreciate relative to the market. This gradual decline in value is essential to the process of filtering–housing becomes more affordable as it ages. (And at some point, usually many decades later, when the surviving old homes acquire the cachet of “historic,” they may begin appreciating again, relative to the rest of the housing stock.)

There’s another factor working against the McMansion, in our view. These large homes have generally been built on the periphery of the metropolitan area, in suburban or exurban greenfields. As we’ve shown, the growing demand for walkability and urban amenities has meant an increase in prices for more central housing relative to more distant locations. It’s likely that this trend is also hastening the erosion of the big-house premium.

Finally, there is a financial angle here, too. McMansions were at the apex of the housing price appreciation frenzy of the bubble years. You took the sizable appreciation in your previous house, and rolled it over into an even larger house–hoping to reap further gains when it appreciated. The move-up and trade-up demand that fueled McMansion demand has mostly evaporated. Despite gains in recent months, nominal home values in most markets haven’t recovered to pre-recession levels, and adjusted for inflation, many home owners have yet to see a gain on their real estate investment. According to Zillow, the effective negative equity rate (homeowners who have less than 20 percent equity in their homes) was 35 percent.

There will always be people with more money than taste, so there will always be a market for McMansions (or whatever fashion they might evolve into next). But many of the market factors that combined to boost their fortunes a decade ago have changed. Consumers now know that home prices won’t increase without fail, and interest in exurban living has waned. Homeownership overall is down, and much of the growth in homeownership will be among older adults (who probably won’t be up-sizing).

Where are African-American entrepreneurs?

Entrepreneurship is both a key driver of economic activity and an essential path to economic opportunity for millions of Americans. Historically, discrimination and lower levels of wealth and income have been barriers to entrepreneurship by African-Americans, but that’s begun to change. According to newly released data from the Census Bureau, it’s now estimated that there are more than 108,000 African-American-owned businesses with a payroll in the U.S.

The new survey, conducted by the Census Bureau in cooperation with the Ewing Marion Kauffman Foundation, provides a rich source of data about the economic contributions of African-American-owned businesses. Called the Annual Survey of Entrepreneurs, this is the first iteration of a survey that asks detailed questions about key demographic characteristics of business owners, including gender, race and ethnicity, and veteran status. And unlike other business data, the entrepreneurship survey reports data by age of business, allowing us to examine separately the economic contributions of newly formed businesses.

The survey focuses on businesses with paid employees, and so generally excludes self-employed individuals working on their own. In 2014, the survey reports that there were more than 5.4 million businesses with a payroll in the United States. Of these, about 270,000 businesses were public corporations (or other business entities for which the gender or other demographic characteristics of owners could not be ascertained). These large corporate businesses employed almost 60 million workers (52 percent of total payroll employment).  The remaining 5.1 million firms with identifiable owners employed about 55 million workers.

The survey concludes that about 108,000 businesses, or roughly two percent of those with individually identifiable owners, were owned exclusively by African-Americans. Together these businesses employed more than 1 million workers nationally.  On average, African-American-owned businesses are younger than other businesses; about 14.1 percent of them had started in the past two years, compared to about 8.9 percent of all employer firms. African-American-owned businesses are found in all economic sectors, but are disproportionately represented in health and social services.  About 28 percent of African-American-owned businesses are engaged in health and social services, compared to about 12 percent of all individually owned businesses.

The report also offers data on business ownership patterns for the 50 largest US metropolitan areas.   We thought it would be interesting to see how different areas ranked in terms of the share of all businesses with employment that were owned by African-Americans.

Here’s a listing of the number of African-American owned businesses per 1,000 African-Americans in the population in each of the fifty largest US metropolitan areas. Think of this as an indicator of the likelihood that an African-American owns a business with a payroll in each of these places. Overall, about three in one thousand African-Americans in these fifty large metropolitan areas own a business.
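The measure itself is straightforward to compute. Here’s a minimal sketch; the function is ours, and the example inputs are invented, chosen so the result matches the roughly three-per-1,000 overall figure cited above.

```python
def owners_per_thousand(num_businesses, population):
    """Employer businesses owned per 1,000 residents of a group."""
    return 1000 * num_businesses / population

# A hypothetical metro with 1,500 African-American-owned employer
# businesses and 500,000 African-American residents:
rate = owners_per_thousand(1_500, 500_000)
assert round(rate, 1) == 3.0
```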

Among the cities with the highest proportions of business owners among the African-American population are San Jose, St. Louis, Denver and Seattle. Each of these cities has about six or seven African-American entrepreneurs per 1,000 African-American residents. San Jose is famously the capital of Silicon Valley, which may explain why such a relatively high fraction of its African-American residents own businesses with a payroll. In contrast, Louisville, Buffalo, Memphis and Cleveland have much lower rates of African-American entrepreneurship; each of these metro areas has fewer than two African-American entrepreneurs per 1,000 African-American residents.

Another way to think about this data is to compare the share of the population in each metropolitan area that is African American with the share of entrepreneurs who are African American. The following chart shows this information. As one would expect, as the share of the African-American population increases, so too does the fraction of entrepreneurs who are African-American. There are some clear outliers. As shown on the chart, St. Louis has somewhat more African-American entrepreneurs than one would expect, given the size of its African-American population, and conversely, New Orleans has fewer. But on average, entrepreneurship is much less common among African-Americans than the overall population, in every metro area. On average, the share of African-Americans who are entrepreneurs is about one-fifth their share of the population of a given metropolitan area.

In a previous post, we examined the geography of women-owned businesses.   The Census plans to conduct its new survey of entrepreneurs on an annual basis. This promises to be a useful way of benchmarking efforts to draw more Americans of every stripe into business ownership.

The Week Observed: Sept. 9, 2016

What City Observatory did this week


1. Counting Women Entrepreneurs.  The Census Bureau has just released the results of its new survey of entrepreneurs, and we report its key findings on the extent and geography of women-owned businesses.  There are more than 1.1 million women-owned businesses with more than 5 million employees; about one in five businesses is now headed by a woman owner.  Female-headed businesses are still more common in the West and in some Southern cities than in the Northeast and Midwest.

2.  Back to School:  Three charts that make the case for cities.  Recognizing that students may not be the only ones who have taken a summer break, City Observatory offers a quick refresher on the economic case for urbanism. It comes in the form of three charts that show the growing importance of walkability to commercial land values, the key role that transit access plays in promoting residential values, and more data showing the movement of young adults to city centers.  These three charts underscore the economic momentum behind the move to cities.


3.  Why the median rent is a misleading indicator of housing affordability. It’s pretty standard practice to use median rents (or housing prices) to compare the affordability of housing in different locations.  But this misses the key fact that while some cities and neighborhoods have a wide variety of housing, others are much more homogeneous. It turns out that having more variety generally means greater affordability. We show how to use data on the 25th percentile of rents as an alternative measure of affordability.


The week’s must reads

1. Welcome to Uberville. Are transportation network companies (aka ride-sharing) the solution to transit’s last-mile problem or an existential threat to public transit service? Altamonte Springs, Florida, a suburb of Orlando, has formed a partnership with Uber and is subsidizing some trips. At The Verge, Spencer Woodman examines how this new public-private arrangement is playing out, and what it might mean for other communities.

2. Parking reform hits bureaucratic resistance in Seattle.  Parking requirements have increasingly been identified as a key contributor to rising rents, and fights over neighborhood parking impacts are a key objection to higher density. As part of its housing affordability “grand bargain,” Seattle endorsed the establishment of parking benefit districts, which would establish pay parking in some areas and return a portion of the net revenues to local neighborhoods. The idea is to promote more efficient use of on-street parking, and buffer local residents from the impact of pay parking. Pilot implementation of the idea has been stymied by bureaucratic opposition from the city’s transportation department. Sightline Institute’s Alan Durning describes the controversy and offers up a point-by-point rebuttal of the objections that the agency has raised to the parking benefit district plans.

3.  The geography of poverty in Boston. Concentrated poverty amplifies all of the negative effects of poverty on the well-being and economic opportunities of the poor. The Boston Globe explores how the location of public housing and the allocation of Section 8 housing vouchers actually reinforces the segregation of the poor. More than two-thirds of public housing and vouchers go to house families in neighborhoods classified as “low” or “very low” opportunity areas. The opposition to a more balanced distribution of affordable housing is a key factor in perpetuating these historic patterns. The Globe story is a terrific combination of data-based reporting, first-hand accounts of how a better neighborhood can change a family’s opportunities, and insights into the seemingly insurmountable obstacles that reluctant jurisdictions can employ to block affordable housing.

 


New knowledge

 

1. Forty years of zip code level home price data. The Federal Housing Finance Agency has posted four decades worth of zip code level data on housing prices.  FHFA has calculated a quality-adjusted, repeat-sales home price index for 1975 through 2015. The data also include an interactive map that allows quick comparisons of home price changes. This map shows changes since 2000, with the biggest increases colored red. Dark green represents the biggest declines.


2. The Transport DataBook.  Yonah Freemark of Transport Politic has compiled an impressive list of trend and ranking data for urban transportation. You’ll find data and clearly presented charts on vehicle miles traveled, modes of travel, transit ridership, operations and finances. Yonah regularly updates these charts as new data becomes available.

3.  Where the kids are.  Chicago public radio station WBEZ has taken block-level Census data and used it to identify the pattern of households with and without children under 18.  You’ll see a recurring pattern here:  households without children (light blue dots) tend to predominate in city centers; households with children (yellow dots) are disproportionately in less central locations.  You can zoom to any location in the country.

The Week Observed is City Observatory’s weekly newsletter. Every Friday, we give you a quick review of the most important articles, blog posts, and scholarly research on American cities.



Our goal is to help you keep up with—and participate in—the ongoing debate about how to create prosperous, equitable, and livable cities, without having to wade through the hundreds of thousands of words produced on the subject every week by yourself.

If you have ideas for making The Week Observed better, we’d love to hear them! Let us know at jcortright@cityobservatory.org or on Twitter at @cityobs.

Transatlantic advice on city development strategies

We’ve all been paying a lot more attention to developments in Britain since June’s Brexit vote. As we noted at the time, some of the same kinds of political divides that play out in America—between globally-integrated, knowledge-driven cities and rural areas that are older and less educated—also happen in Britain. (Population density helps explain the red/blue division in the US and the leave/remain divide in England.)

A city across the pond (Flickr: Salerie)

The UK’s Centre for Cities, a London-based think tank, studies many of the same issues on the other side of the Atlantic that we find so interesting at City Observatory. We were particularly struck by one of their recent reports looking at economic development strategies for cities. Much of what is said in this report could be said with equal force — if in slightly different English — for cities in the U.S.

Paul Swinney and Elli Thomas of the Centre have written a historically grounded description of urban economies, entitled A Century of Cities: Urban Economic Change Since 1911. Looking at economic trends over the past 100 years, they draw a stark contrast between cities that were dominated by mass production, and which have clung to older manufacturing industries, and those cities that embraced services and built a knowledge economy.

In Britain, this plays out as a North/South divide. London and other smaller cities in the South, like Reading and Brighton, developed a strong service sector and thriving new industries. The North, the Midlands, and Wales clung to manufacturing and to relatively low-skill jobs like call centers and distribution. Not only did incomes in the South outpace those in the rest of the country, so did job creation: for every job created in the North, Midlands and Wales, 2.3 jobs were created in the South.

Swinney and Thomas offer sharp and clear policy advice. Three takeaways from their report make every bit as much sense on this side of the Atlantic as in the UK. From their report:

  1. Improving the skills of the workforce. Knowledge businesses require high skilled workers. The ease with which they can recruit these workers is a key determinant of where they locate.
  2. Supporting innovation. High-skilled workers don’t just work anywhere – they cluster in successful cities. This is because a worker isn’t more productive just because of the qualifications that he or she holds, but also because of the workers he or she works with and the institutions that he or she works in. The ‘knowledge networks’ that workers are part of are place specific, and cities need to be able to facilitate innovation and the creation of new ideas via their knowledge networks to increase long-run productivity.
  3. Dealing with the scars of industrial legacy. The 21st century economy requires less employment space – an office has a smaller footprint than a factory. And this employment space tends to be in a different part of the city – jobs in our most successful cities have been concentrating in their city centres. This shift has left large swathes of empty land and buildings in some cities, so encouraging density of employment should be done alongside dealing with land remediation.

All three of these points mirror analyses we’ve done here at City Observatory. Businesses are increasingly choosing locations based on worker availability, and, for the first time in decades, jobs in city centers are growing as fast as or faster than those in the suburbs.

The critical factor is a city’s ability to reinvent itself and its economy in the face of major technological or economic changes. In many ways the argument made here is similar to one that Harvard’s Ed Glaeser has made about New York and Boston compared to other rust-belt cities in the US. Places that embraced trade and openness and had a well-educated population have been much more successful in adapting to industrial change than more insular, less-educated places.

Or, more simply put: Nostalgia is not an economic strategy. Economies don’t go backwards, and efforts to forestall or reverse fundamental economic changes are usually costly and ineffective. The challenge of economic strategy is to plan for the kind of economy we’re likely to have in the future, not pine for the restoration of the often-imaginary glory of an economy past.

And that, in a way, is a sub-text of the Brexit vote: a majority of voters, dismayed with the changes wrought by globalization and epitomized by the European Union, and with their patience worn thin by the lingering effects of the worst economic downturn in eight decades, not surprisingly voted for the past. Meanwhile, the young, the best-educated, and those living in cities voted to remain, and move forward. The question of whether we go forward, or try to go back, is one that is equally relevant on both sides of the Atlantic.

 

Who patronizes small retailers?

 

Urban developers regularly wax eloquent over the importance of local small businesses.  But ultimately, businesses depend on customer support. So, in what markets do customers routinely support small businesses? Getting data that bears on this question is often very difficult. A new source of “big data” on consumer spending patterns comes from the JPMorgan Chase Institute, which uses anonymized credit and debit card data from more than 16 billion transactions by the bank’s 50 million customers to measure consumer spending patterns across the United States.  Their “Local Consumer Commerce Index” reports detailed data on spending patterns in 15 major metropolitan areas across the country.

Small Business (Flickr: La Citta Vita)

The company’s proprietary credit and debit card data aren’t complete or perfect, of course. To the extent there are demographic variations in the bank’s market share in different metropolitan areas, or different patterns of credit and debit card use compared to cash purchases (or checks), these data won’t be completely representative. But they do represent a sizable sample of consumer spending, and the Institute’s analysis shows that it is roughly congruent with government measures of retail sales activity. The data cover many daily purchases of a wide range of non-durable goods and services; it’s likely that they under-report purchases of major durables, like cars and appliances, which are frequently financed through bank- or store-credit, rather than purchased with credit or debit cards.

The Institute is now publishing a monthly analysis of its index data that looks at changes in retail sales by metro market, by age, by income group, and by major product category (restaurants, fuel, etc). The report also estimates how much people spend in their home metropolitan area, as opposed to purchases in other metropolitan areas.

The Institute also classifies purchases according to size of business. We mined these data–which the Institute makes freely available here–to examine what fraction of consumer spending in each covered metropolitan market goes to “small” businesses.  The JPMC Institute classifies as “large” all those firms that have a market share of 8 percent or greater in a particular product category, and then divides the remaining businesses into “medium” and “small” establishments.
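That classification can be sketched as a simple rule. The 8 percent threshold for “large” firms is from the description above; the medium/small cutoff below is a hypothetical placeholder, since the Institute’s actual boundary isn’t given here.

```python
def classify_firm(category_market_share, medium_cutoff=0.01):
    """Classify a firm by its share of sales in its product category.

    The 8 percent "large" threshold is the Institute's; medium_cutoff
    is an invented placeholder for illustration.
    """
    if category_market_share >= 0.08:
        return "large"
    return "medium" if category_market_share >= medium_cutoff else "small"

assert classify_firm(0.10) == "large"
assert classify_firm(0.03) == "medium"
assert classify_firm(0.001) == "small"
```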

So what do these data tell us about where consumers are most likely to patronize smaller businesses?

First, there’s considerable variation among metropolitan areas.  Overall, small businesses account for about 32.6 percent of retail sales, according to the Institute’s estimates.  In New York City (think bodegas and boutiques) small establishments account for 36 percent of sales.  In Columbus, the comparable figure is 23 percent.  Here are the data for the 15 metropolitan areas covered in the JPMorgan Chase Institute’s study:

Second, the Institute reports that its own tabulations of retail spending data show that people who live in urban centers spend a larger fraction of their retail dollars at smaller businesses than those who live in suburbs.  They conclude: “central cities uniformly have more spending at small and medium enterprises than do their surrounding metropolitan areas.”  Their data show that purchases at small and medium-sized firms are 10 to 15 percentage points higher in central cities than in their surrounding suburbs.

The JPMC Institute data are an interesting and useful new window into consumer spending patterns. You can learn more about the data, and read the insights from the Institute’s analysts in their report that describes the methodology and key findings:  https://www.jpmorganchase.com/corporate/institute/document/jpmc-institute-local-commerce-report.pdf.

 

The Economic Value of Walkability: New Evidence

One of the hallmarks of great urban spaces is walkability–places with lots of destinations and points of interest in close proximity to one another, buzzing sidewalks, people to watch, interesting public spaces–all things that the experts and market surveys tell us people want.

Walkable places. (Flickr: TMImages PDX)

It’s all well and good to acknowledge walkability in the abstract, but tough-minded economists (and those with an interest in public policy) really want to know: what’s it worth?  How much, in dollars-and-cents terms, do people value walkable neighborhoods?  Thanks to the researchers at RedFin, we have a new set of estimates of the economic value of walkability.

Redfin used an economic tool called “hedonic regression” to examine more than a million home sales in major markets around the country, and to tease out the separate contributions of a house’s lot size, age, number of bedrooms and bathrooms, square footage and neighborhood characteristics (like average income). In addition, the RedFin model included an examination of each property’s Walk Score.  Walk Score is an algorithm that estimates the walkability of every address in the United States on a scale of 0 to 100, based on its proximity to a number of common destinations like schools, stores, coffee shops, parks and restaurants.

What they found is that increased walkability was associated with higher home values across the country. On average, they found that a one point increase in a house’s Walk Score was associated with a $3,000 increase in the house’s market value. But their findings have some important nuances.
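For readers curious about the mechanics, hedonic regression is just ordinary least squares on home prices, with walkability as one explanatory variable among several. Here’s a toy version in pure Python: the prices are generated from invented coefficients (not RedFin’s estimates), and the regression recovers the per-point value of Walk Score.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Each home: (square feet, Walk Score). Prices follow an invented rule:
# price = 50,000 + 100 * sqft + 3,000 * walk_score
homes = [(1500, 40), (2000, 55), (2500, 70), (1800, 90), (3000, 65)]
prices = [50_000 + 100 * s + 3_000 * w for s, w in homes]

X = [[1.0, s, w] for s, w in homes]  # design matrix with an intercept

# Normal equations: (X'X) beta = X'y
XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
Xty = [sum(row[i] * p for row, p in zip(X, prices)) for i in range(3)]
intercept, per_sqft, per_walk_point = solve(XtX, Xty)

assert round(per_walk_point) == 3000  # the regression recovers $3,000/point
```

A real hedonic model would of course have many more variables (and noise), but the logic of separating the Walk Score contribution from square footage and the rest is the same.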

First, the value of walkability varies from city to city. It’s much more valuable in larger, denser cities, on average, than it is in smaller ones. A one point increase in Walk Score is worth nearly $4,000 in San Francisco, Washington and Los Angeles, but only $100 to $200 in Orange County or Phoenix.

Second, the relationship between walkability and home value isn’t linear: a one point increase in the Walk Score for a home with a very low score doesn’t have nearly as much impact as an increase for a home with a high Walk Score.  This suggests that there is a kind of minimum threshold of walkability.  For homes with Walk Scores of less than 40, small changes in walkability don’t seem to have much effect on home values. In their book, Zillow Talk, Spencer Rascoff and Stan Humphries reached a similar conclusion by a somewhat different statistical route, finding that the big gains in home value were associated with changes toward the high end of the Walk Score scale.
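One simple way to express that threshold pattern is a piecewise-linear value schedule. The shape mirrors the finding described above, but the threshold and dollar figures below are invented for illustration.

```python
def walkability_premium(walk_score, threshold=40, low=100, high=3000):
    """Piecewise-linear toy: points below the threshold add little value;
    points above it add much more. All dollar figures are invented."""
    if walk_score <= threshold:
        return walk_score * low
    return threshold * low + (walk_score - threshold) * high

# Below the threshold, 10 extra points add $1,000; above it, $30,000.
assert walkability_premium(30) - walkability_premium(20) == 1_000
assert walkability_premium(70) - walkability_premium(60) == 30_000
```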

For their benchmark comparison of different cities, RedFin computed how much a home’s value might be expected to increase if it went from a WalkScore of 60 (somewhat walkable) to a WalkScore of 80 (very walkable). The results are shown here.


Among the markets they studied, the average impact of raising a typical home’s Walk Score from 60 to 80 was to add more than $100,000 to its market value. In San Francisco, the gain is $188,000; in Phoenix, only a tenth that amount.

Redfin’s estimates parallel those reported by their real estate data rivals at Zillow. Rascoff and Humphries looked at a different set of cities, and examined the effect of a 15-point increase in Walk Score.  They found that this increased home values by an average of 12 percent, with actual increases ranging from 4 percent to 24 percent.

We think the new RedFin results have one important caveat. We know from a wide variety of research that proximity to the urban core tends to be positively associated with home values in most markets. And it turns out that there is some correlation between Walk Scores and centrality (older, closer-in and more dense neighborhoods tend, on average, to have higher Walk Scores). RedFin’s model didn’t adjust its findings for distance to the central business district. What this means is that some of the effect that their model attributes to Walk Score may be capturing the value of proximity to the city center, rather than just walkability.  So as you read these results, you might want to think of them as representing the combined effect of central, walkable neighborhoods.  (Our own estimates, which controlled for centrality, still showed a significant, positive impact for walkability on home values.)

The RedFin study adds to a growing body of economic evidence that strongly supports the intuition of urbanists and the consumer research: Americans attach a large and apparently growing value to the ability to live in walkable neighborhoods.  The high price that we now have to pay for walkable places ought to be a strong public policy signal that we should be looking for ways to build more such neighborhoods. Too often, as we’ve noted, our current public policies–like zoning–effectively make it illegal to build the kind of dense, interesting, mixed-use neighborhoods that offer the walkability that is in such high demand.

More Driving, More Dying (2016 First Half Update)

More grim statistics from the National Safety Council:  The number of persons fatally injured in traffic crashes in the first half of 2016 grew by 9 percent.  That means we’re on track to see more than 38,000 persons die on the road in 2016, an increase of more than 5,000 from levels recorded just two years ago.

Motor Vehicle Fatality Estimates – 6 month trends

 

Just two weeks ago, we wrote about the traditional summer driving season as a harbinger of the connection between the amount of driving we do and the high crash and fatality rates we experience. And these data show that, for the first half of the year, things are not going well.  As alarming as these statistics are, the bigger question they pose is why crash rates are rising.  And what, if anything, can we do about it?

It’s not the economy, stupid.

There are undoubtedly many factors at work behind the rise in crashes and crash deaths. There’s clearly much more we can do to make our city streets and roadways safer for all travelers.

We have to disagree with the National Safety Council on one key point: we shouldn’t mindlessly blame the economy for our safety woes. In their press release, they attribute the increase in fatalities to  an improving economy, saying:

While many factors likely contributed to the fatality increase, a stronger economy and lower unemployment rates are at the core of the trend.

That’s an unfortunate, and probably incorrect, framing, in our view. Chalking the rise in traffic deaths up to an improving economy seems a bit fatalistic: it implies that more traffic deaths are a sad but inevitable consequence of economic growth, which might prompt some people to shrug off the increase in deaths.  That would be tragically wrong, because, at least through 2013, the nation experienced both a decrease in traffic deaths and an improving economy.

What has changed since 2014 is not the pace of job growth or the steady decline in the unemployment rate (both of which have been proceeding nicely since the economy bottomed out in 2009), but rather the sharp decline in gasoline prices that began in the summer of 2014.  As we pointed out a few weeks ago, gas prices have been steadily declining, and as a direct result, Americans have begun driving more.

Now it would be fair to point out that a three percent increase in driving has been accompanied by a nine percent increase in traffic deaths. But we have good reason to believe that the additional driving (and additional drivers and additional trips) prompted by cheaper gasoline is exactly the kind that involves some of the highest risks.  A study of gas prices and crash rates found that the relationship was indeed “non-linear”–that small changes in gas prices were associated with disproportionately larger changes in crash rates.

Higher gas prices not only discourage driving generally, they also seem to reduce risky driving, and thus produce a safety dividend. It’s time to do more than just lament tragic statistics: if we want to make any progress toward Vision Zero, we ought to be putting in place policies that bring the price of driving closer to the costs it imposes on society. If people reduce their driving–as they did when gasoline cost more than it does today–there will be fewer crashes and fewer deaths.

The role of mixed income neighborhoods in lessening poverty

It’s a truism that the zip code you are born in (or grow up in) has a lot to do with your life chances. If you’re born into a poor household, a neighborhood with safe streets, good schools, adequate parks and public services, and especially some healthy and successful peers and neighbors, has a material impact on whether you rise out of poverty or are trapped in it.  While this message often gets presented in fatalistic tones, there’s a growing body of evidence suggesting that how we build our cities, and specifically how good a job we do of building mixed-income neighborhoods that are open to everyone, can play a key role in reducing poverty and promoting equity.

The big problem the US confronts is that poverty has grown increasingly concentrated over the past four decades. Our Lost in Place report showed that the number of poor people living in urban census tracts with a poverty rate of 30 percent or higher has doubled since 1970; today, more of the urban poor live in neighborhoods where a high fraction of their neighbors are also poor. As difficult as it is to be poor, it’s harder growing up in a place where many or most of your neighbors also face these challenges.

New research shows that neighborhood effects—the impact of peers, the local environment, neighbors—contribute significantly to success later in life. Poor kids who grow up in more mixed income neighborhoods have better lifetime economic results. This signals that an important strategy for addressing poverty is building cities where mixed income neighborhoods are the norm, rather than the exception. And this strategy can be implemented in a number of ways—not just by relocating the poor to better neighborhoods, but by actively promoting greater income integration in the neighborhoods, mostly in cities, that have higher than average poverty rates.

In the New York Times, economist Justin Wolfers reported on groundbreaking work by Eric Chyn of the University of Michigan that found previous research may have understated the effect of neighborhoods on lifetime earnings and employment. The paper shows that moving low-income children in very poor neighborhoods to less poor neighborhoods can have a major positive effect on their life chances.

Most media outlets have covered this story as reinforcing the importance of “mobility programs”: that is, policies that encourage residents of very low-income neighborhoods to move to more economically integrated areas, usually with some form of direct housing assistance like vouchers. And the ability to move to neighborhoods with good amenities and access to jobs, without having to pay unsustainable amounts for housing or transportation, is a crucial part of creating more equitable, opportunity-rich cities.

But the coverage may be missing the other half of the policy equation: Chyn’s paper adds to the evidence about the value of mixed-income neighborhoods in general, not just mobility. That means it’s just as important that cities find a way to invest in low-income neighborhoods to bring opportunity to them, rather than simply trying to move everyone out.

Why this research is so important

The results of the voucher demonstration illustrate that there can be large benefits from even modest changes in economic integration. The average household moved about 2 miles from their previous public housing location, and still lived in a neighborhood that had a higher than average poverty rate. Chyn’s results show the effects of moving from neighborhoods dominated by public housing (where the poverty rate was 78% on average), to neighborhoods that had poverty rates initially 25 percentage points lower, on average. Most participants still lived in neighborhoods with far higher levels of poverty than the typical American neighborhood. But compared to their peers who remained in high poverty neighborhoods, they enjoyed better economic results later in life.

[Figure: chyn_employment]

This chart shows that children who moved out of very low-income neighborhoods were about 5-10 percentage points more likely to be employed as adults.

[Figure: chyn_earnings]

This chart shows the growing earnings benefit, in their adult years, to children who left very low-income neighborhoods.

This study—on the heels of a widely-cited study led by Harvard economist Raj Chetty released last year—adds even more heft to the growing body of evidence that helping people with lower incomes move to mixed-income neighborhoods can play a huge role in spreading economic opportunity.

The new research improves on older studies by more closely replicating a true “natural experiment,” thereby eliminating an important confounding factor that affected some earlier research.

The experiment was made possible by the decision to demolish large scale public housing in Chicago in the early 1990s. The families dislocated from the old style public housing—which were in neighborhoods of extremely concentrated poverty—had to find new housing. The Chicago Housing Authority (CHA) provided the families with vouchers to move to privately operated rental housing, typically in neighborhoods with far lower levels of poverty. The kids who moved to new lower-poverty neighborhoods saw a significant increase in their lifetime earnings compared to otherwise similar kids who remained in the public housing that wasn’t torn down.

This natural experiment has an important advantage over the “Moving to Opportunity” (MTO) housing experiment conducted by the federal government in the 1990s. In MTO, public housing households had to apply for a voucher lottery. This created the possibility that the people who had applied were particularly motivated and able to make the transition to a new neighborhood. That would mean that even those households that lost the lottery might have better-than-average outcomes, reducing the gap between those who moved and those who didn’t, and making the effect of moving appear smaller than it really was.

But unlike MTO, the participants in the CHA relocation program were not self-selected. They represented a more or less random cross-section of public housing residents, and so the differences between the outcomes of treatment groups (those who got vouchers) and those who didn’t (control groups) could be treated as purely the result of the voucher program.
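The logic of this selection problem can be made concrete with a toy simulation. Everything here is hypothetical–the earnings levels, the size of the true effect, and the 30 percent crossover rate are invented purely for illustration–but it shows how a lottery among self-selected, motivated applicants can understate the true effect of moving when some lottery losers manage to move on their own:

```python
import random

random.seed(0)

TRUE_EFFECT = 2000   # hypothetical annual earnings gain from moving to a lower-poverty area
BASE = 10_000        # hypothetical baseline earnings
N = 100_000

def earnings(moved):
    """Adult earnings as a function of whether the child's family moved."""
    return BASE + (TRUE_EFFECT if moved else 0)

# CHA-style natural experiment: displaced families (treatment) all move,
# and families whose buildings weren't torn down (control) stay put.
cha_gap = earnings(True) - earnings(False)

# MTO-style lottery: applicants self-select and are unusually motivated,
# so a share of lottery *losers* find a way to move anyway
# (the 30% crossover rate is assumed purely for illustration).
mto_treat = sum(earnings(True) for _ in range(N)) / N
mto_control = sum(earnings(random.random() < 0.3) for _ in range(N)) / N
mto_gap = mto_treat - mto_control

print(f"true effect / CHA-measured gap: ${cha_gap}")
print(f"MTO-measured gap (attenuated):  ${mto_gap:.0f}")
```

Because roughly 30 percent of the lottery-losing control group captures the benefit of moving on its own, the measured MTO-style gap shrinks to about 70 percent of the true effect, while the CHA-style comparison recovers it fully.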

The policy implication: Mixed-income neighborhoods promote opportunity

But it’s important to put this finding in a broader context. Evidence about mobility programs is, in turn, part of a larger body of research showing that neighborhoods matter for economic opportunity. While the focus has been on helping people leave neighborhoods with high concentrations of poverty, it’s also possible to bring investments and resources to those communities.

Of course, when that happens, it often happens in conjunction with—or even because of—a return of middle- and upper-income people to the neighborhood. In other words, gentrification.

For some, that’s enough to reject that policy avenue. But some research suggests we ought to give it another look. News from neighborhoods in San Francisco and Brooklyn, where incredibly high levels of demand and tight supply have led to spiraling housing costs, makes it sound as if gentrification inevitably and utterly displaces all of a neighborhood’s residents. But other research suggests that displacement is far less widespread than commonly thought. While housing costs can be an issue, a recent study from the Philadelphia Federal Reserve suggests that displacement is much less common than we might expect–and another study of New York public housing residents in gentrifying areas showed an increase in earnings and school test scores.

This research also occurs against a backdrop of widening inequality and economic segregation. And inequality has an important spatial dimension: low-income and high-income households are increasingly segregated from one another in separate neighborhoods.

While the spatial response, as we’ve said, has focused on mobility, enabling the poor to move to higher income neighborhoods is challenging for a number of reasons. The raison d’etre of many suburbs is exclusion—using zoning requirements to make it essentially impossible for low income households to afford housing—and efforts by outside organizations or governments to reduce these barriers have been difficult. If we want to make the biggest difference in economic integration, we need to try to integrate low-income neighborhoods as well as high-income neighborhoods.

Neighborhoods for everyone

Taken together, the new Chyn results add to the growing body of literature on neighborhood effects and strongly suggest that we ought to be looking for all kinds of opportunities, large and small, to promote more mixed-income neighborhoods. Even small steps–like lowering the poverty rate in a kid’s neighborhood from 75 percent to less than half–pay clear economic dividends.

But we also need to remember that integration isn’t just about moving around people with low incomes. We can reinvest in neighborhoods of concentrated poverty in ways that improve quality of life and enhance opportunity in place.

The myth of revealed preference for suburbs

If so many people live in suburbs, it must be because that’s what they prefer, right? But the evidence is to the contrary.

One of the chief arguments in favor of the suburbs is simply that that is where millions and millions of people actually live. If so many Americans live in suburbs, this must be proof that they actually prefer suburban locations to urban ones. The counterargument, of course, is that people can only choose from among the options presented to them. And the options for most people are not evenly split between cities and suburbs, for a variety of reasons, including the subsidization of highways and parking, school policies, and the continuing legacies of racism, redlining, and segregation. One of the biggest reasons, of course, is restrictive zoning, which prohibits the construction of new urban neighborhoods all over the country.

But does zoning really act as a constraint on more compact, urban housing? Sure, some skeptics might say, it appears that local zoning laws prohibit denser housing and walkable retail districts. But in fact, city governments pass such strict laws because that’s what their constituents want. Especially within a metropolitan region with many different suburban municipalities, these governments are essentially competing for residents and businesses. If there were real demand for denser, walkable neighborhoods, wouldn’t some municipalities figure out that they could attract those people by allowing that type of development?

A 2005 study by Jonathan Levine—and explored further in Levine’s 2006 book, Zoned Out—seeks to answer this question. Are local governments just responding to “market” demand in ensuring that new development is low-density and auto-oriented? Or is there really pent-up demand for more urban neighborhoods that can’t be satisfied because of zoning?

Atlanta. Credit: Brett Weinstein, Flickr

Levine looks for the answer in two contrasting metropolitan areas: Boston and Atlanta. Boston, as a much older region, has a relatively higher number of dense, walkable neighborhoods, while in Atlanta, which mostly boomed after World War Two, urban neighborhoods are much more scarce. Levine hypothesizes that if dense housing is adequately supplied to match people’s preferences, you should find a pretty good match between the kinds of places people say they’d like to live, and the kinds of places they actually do live. But if zoning really creates a “shortage of cities,” then the greater the shortfall of urban neighborhoods, the worse the matchup between stated preferences and actual living arrangements.

This is an important wrinkle to the “revealed preference” arguments of many defenders of the suburban status quo. Recent Census population figures sparked what were only the latest of a long line of scuffles over whether, or to what extent, the “back to the city” movement is real. But if Levine’s argument is correct, measuring demand for urban areas simply by how many people end up living there is flawed, because some people who would like to live in more compact neighborhoods can’t do so because there aren’t enough to go around.

To begin his analysis, Levine classified neighborhoods in both the Boston and Atlanta metro areas according to their level of “urban-ness” on a five-point scale, with “A” neighborhoods being the densest and most urban, and “E” being the most sprawling and exurban. Levine and his researchers then conducted a survey of residents in each of the zones, asking about their housing preferences and satisfaction with their current housing situation.

In Boston, about 40 percent of respondents said they preferred denser, more pedestrian-friendly neighborhoods, while in Atlanta, just under 30 percent of respondents did so. (Auto-oriented neighborhoods were preferred by 29 percent of people in Boston and 41 percent of people in Atlanta, with remaining respondents neutral.)

And how well did these preferences match actual behavior? Well, in Boston—where neighborhoods in the three most urban categories made up over half of all housing—83 percent of people with strong preferences for urban neighborhoods lived in one of these three urban zones. In Atlanta—where the same top three urban categories make up barely over 10 percent of all housing—just 48 percent of people with strong preferences for urban neighborhoods lived in an urban zone.

In fact, all down the line, people whose stated preferences were more urban were much more likely to actually live in an urban neighborhood in the Boston area than in the Atlanta area–suggesting that in Atlanta something might be preventing them from satisfying their preferences. At the same time, people who expressed preferences for the most auto-oriented neighborhoods were able to satisfy that demand the vast majority of the time in both regions–about 95 percent of those in Atlanta, and 80-90 percent of those in Boston. More rigorous tests confirm that this difference is statistically significant.
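The article doesn’t reproduce Levine’s actual test statistics, but a standard two-proportion z-test sketches how a difference in match rates like this would be checked. The sample sizes below are hypothetical stand-ins, not Levine’s survey counts:

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two sample proportions."""
    # pooled proportion under the null hypothesis of no difference
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Urban-preference match rates from the article: 83% in Boston, 48% in
# Atlanta. The n = 400 per region is an assumed illustration, not Levine's N.
z, p = two_proportion_z(0.83, 400, 0.48, 400)
print(f"z = {z:.1f}, p = {p:.3g}")
```

Even with these modest assumed samples, an 83-percent-versus-48-percent gap sits many standard errors away from what chance would produce.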

[Figure: levin]

This seems like strong evidence that there is a “shortage of cities” in Atlanta. Why, otherwise, would there be such a gap between the number of people who satisfy their preferences for urban neighborhoods in the Boston and Atlanta metro areas—and much smaller gaps between people who can satisfy their preferences for more car-oriented areas?

If this is correct, it helps explain a number of other issues we see. If urban neighborhoods are undersupplied compared to demand for them, we would expect to see urban housing go to the people willing to outbid other households, increasing prices relative to auto-oriented neighborhoods, which are more plentiful. In a place like Atlanta, lots of urban housing would have to be built before this bidding war ended and prices returned to a “normal” market level.

It’s also notable that this kind of “shortage of cities” can occur even where there is no overall housing shortage. Atlanta, for example, is not a particularly high-cost region, but it has mostly added new housing on the suburban periphery. So while there’s no bidding war for housing in the metropolitan area as a whole, there is a bidding war for more urban housing, making walkable neighborhoods more expensive than they would have to be. Boston is almost the opposite: walkable neighborhoods appear to be less undersupplied relative to auto-oriented neighborhoods, but the region as a whole has very expensive housing, suggesting that the total supply of housing is too low. Boston could help bring down housing prices by building any housing at all—auto-oriented or more walkable. (Though walkable housing would have lower total location costs.)

Levine’s study ought to be known by anyone who works in urban planning or housing. It’s one of the strongest pieces of evidence that “revealed preferences”—the choices that people actually make about where to live—actually reveal the limited choices that people are given as a result of restrictive land use laws.

Counting People and Cars: Placemeter

We confess: we’re data geeks. We love data that shows how cities work, and that gives depth and precision to our understanding of policy problems. But truth be told, most data we — and other analysts — work with is second-hand: it’s data that somebody else gathered, usually for some other purpose, using definitions and methods that don’t quite capture what we’re trying to get at. Information about employment and wage trends, for example, is gleaned from tax and administrative records. Working with this “secondary” data means that you’re always trying to extract meaning from something that doesn’t exactly measure what you’re interested in; this is why data analysis is often as much an art as it is a craft.

So when a new technology comes around that lets us generate our own personalized, customized data, we get pretty excited. (For you foodies, it’s kind of like going from only having store-bought industrial tomatoes to growing sun-ripened heirloom varieties in your own backyard–sort of.) What we’re talking about here is our own, home-grown, artisanal data.

Placemeter, a New York-based startup, has developed image processing software to count cars, people, and bikes in urban environments. Their technology uses existing cameras (it can analyze a stream of real-time video), or you can buy one of their Placemeter devices (currently $90) or use any fairly standard wi-fi enabled webcam. Placemeter then sells its monitoring and data services for between $30 and $90 per month per location counted (a location can be a street, sidewalk, building entrance, or area).

For the past several months we’ve been experimenting with the company’s Placemeter device. Placemeter is a small plastic box, slightly larger than a deck of playing cards. It has a camera, a wi-fi transceiver, and an embedded processor. It attaches to the inside of a window and is powered by a small AC power brick. You point it out the window, connect it to your local wi-fi network, and then log into a web page that lets you see the camera’s-eye view of the area outside your window. Using your web browser, you identify cordons (Placemeter calls these “turnstiles”) by using a mouse to draw lines on the image from the camera. Then Placemeter starts counting the number of people, bikes, and vehicles that pass through each turnstile. It logs this information by turnstile and by direction, and also provides a handy set of diagnostics that let you view counts by hour or by day and by direction. (You can also export a CSV-formatted log file that can be used by any standard spreadsheet or analysis program.) The company has squarely addressed privacy concerns: the Placemeter analyzes the video stream in real time, and doesn’t store personally identifying information or images–just the results of its own computations of the numbers of people and vehicles it has counted.
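To give a flavor of what working with the exported log looks like, here is a short sketch that aggregates a CSV export by hour and by direction. The column names and layout are assumed for illustration and may differ from Placemeter’s actual export format:

```python
import csv
import io
from collections import defaultdict

# A Placemeter-style CSV export. These column names and sample rows are
# invented for illustration; check your own export for the real layout.
raw = """timestamp,turnstile,direction,count
2016-07-01 08:00,street,eastbound,120
2016-07-01 08:00,street,westbound,137
2016-07-01 09:00,street,eastbound,95
2016-07-01 09:00,street,westbound,101
"""

hourly = defaultdict(int)        # total vehicles per hour of day
by_direction = defaultdict(int)  # total vehicles per direction

for row in csv.DictReader(io.StringIO(raw)):
    hour = row["timestamp"][-5:-3]   # "08", "09", ... from "HH:MM"
    hourly[hour] += int(row["count"])
    by_direction[row["direction"]] += int(row["count"])

print(dict(hourly))        # {'08': 257, '09': 196}
print(dict(by_direction))  # {'eastbound': 215, 'westbound': 238}
```

The same few lines of grouping logic extend naturally to counts by day, by turnstile, or by any other column in the export.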

For our City Observatory test, we positioned the sensor to face a residential street with two-way traffic and paved sidewalks on either side. As shown in the grainy, low-resolution photo below–captured through the Placemeter device camera–we established three turnstiles: one for the street, and one for each sidewalk. Turnstiles are shown as bright green lines. A vehicle is just crossing eastbound through the turnstile that spans the roadway. Our camera has an oblique view across the street.

Placemeter Turnstiles on a Residential Street (webcam image)

Placemeter Turnstiles (Author)

A key limitation of our version of Placemeter is that it doesn’t distinguish between the types of objects that pass through a turnstile: whether it’s a car, truck, or bicycle, any vehicle traveling through our street-centered cordon is recorded as a single vehicle. On a street with a marked bike lane, it would be possible to establish separate turnstiles for the general travel lane and the bike lane, but accurate counts would require a camera angle that let you see both lanes clearly. Placemeter has recently come out with a new version of its software that is designed to distinguish between people, bikes, and cars; we have not tested that version yet.

Placemeter’s dashboard shows hour by hour counts and provides a historical baseline computed from the previous three weeks of data.  Here’s a chart for early July.  The weekday traffic shows a common “double hump” pattern, reflecting morning and evening rush hours and a mid-day dip.  Weekend traffic is lower, with a single mid-day to evening peak. Traffic on this residential street falls to nearly zero over night.  The much more subdued traffic (especially in the morning hours) is apparent on the July Fourth holiday:  Instead of the usual 257 vehicles between 8am and 9am, there were just 50.
[Figure: Placemeter_Week]

In addition to these hourly counts, Placemeter’s dashboard also summarizes counts by day and direction in a bar chart format. Here’s what that looks like:

[Figure: Placemeter_Counts_Jan]
Placemeter Daily Counts

To judge the accuracy of the Placemeter vehicle counts, we compared them to a vehicle count conducted by the City of Portland. The most recent city data were from January 2015, recorded on five weekdays on the same street and block face covered by our Placemeter setup. For comparison purposes, we examined our daily data for the same week in January 2016 (one year later). The city statistics show an average daily traffic of 5,233 vehicles, while our Placemeter count is 4,843 ADT. The two counts also show a similar directional pattern (the city figures show a 52%/48% westbound-to-eastbound split; our figures also show 52%/48%).
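For the record, the arithmetic behind that comparison is simple. Using the figures above, the Placemeter count comes in about 7.5 percent below the city’s count, with identical directional splits:

```python
city_adt = 5233        # City of Portland weekday average, same block (Jan 2015)
placemeter_adt = 4843  # our Placemeter weekday average, same week one year later

# relative difference between our count and the city's
diff = (placemeter_adt - city_adt) / city_adt
print(f"Placemeter relative to city count: {diff:+.1%}")

# directional split (westbound, eastbound) reported by each source
city_split = (0.52, 0.48)
our_split = (0.52, 0.48)
print("directional splits match:", city_split == our_split)
```

Note that the two counts are a year apart, so some of the 7.5 percent gap may reflect real changes in traffic rather than measurement error.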

We’re awash in claims about how big data will transform the world. But there’s also an important role for little data, especially when it’s data that can be precisely focused on the places and issues you find interesting.

It levels the playing field for bikes and pedestrians. As we’ve pointed out, some modes of transportation–especially biking and walking–are effectively invisible, and therefore disenfranchised from policy discussions, simply because we have no data about them. We’re awash in data about how many cars are traveling and how fast–or slowly–they travel, but have precious little information about the use of active modes.

Placemeter can be a boon to planners and transportation agencies. It’s less expensive than–and by our very crude reckoning, roughly as accurate as–other methods of traffic counting. It provides basic analysis tools and archived data. Placemeters are flexible and easy to set up, and it’s possible to use existing webcams to generate data.

It can be a terrific tool for evaluating projects and events.  One of the toughest evaluation tasks is judging the impact of limited duration events (rallies, festivals and concerts, traffic associated with football games).  Placemeter lends itself to recording activity in very small geographies–like the entrance to a building or a single pathway in a park.

Finally, while it’s not free, it’s also not terribly expensive. So it provides an opportunity to democratize access to data. If a civic group, a neighborhood organization, or just an individual citizen wants to get their own data on how a roadway or other public space is being used, they don’t have to depend on anyone else to get it. Easy access to lots of “little data” may be just as disruptive as the much-ballyhooed “big data.”

 

Back to school: Three charts that make the case for cities

It’s early September, and most of the nation’s students are (or shortly will be) back in the classroom. There may be a few key academic insights that are no longer top of mind thanks to the distractions of summer, so, as good teachers know, now is a good time for a quick refresher–something that hits the highlights and reminds us of the lesson plan. So it is today at City Observatory.

Pay attention class (Flickr: Jeff Warren)

There’s a growing tide of data illustrating the economic importance of vibrant urban centers. Here are three charts we’ve collected in the past year that underscore the importance of city centers, walkability and transit access—some of the critical factors behind city success.

Chart 1: Walkability drives commercial land values

Real Capital Analytics tracks commercial real estate values in cities across the United States. Like many of us, they’ve noticed the growing importance that businesses place on being located in walkable areas—because that’s where their customers and workers increasingly want to be. And the desirability of walkable areas gets directly reflected in land values. RCA constructed a price index for US commercial real estate that compares  how values are growing in highly walkable areas compared to car-dependent ones. No surprises here: over the past 15 years commercial real estate located in the most walkable areas has dramatically outperformed less walkable areas.

[Figure: RCA_Walkability]

RCA uses a repeat sales index to track changes in property values over time. Their data show that not only have property values in highly walkable central business district locations fully recovered since the 2008 recession, they’ve gained more than 30 percent over their previous peak. Meanwhile, commercial property values in car-dependent suburbs languish at pre-recession levels. As we’ve noted at City Observatory, the growing disparity between central and suburban property values is a kind of “Dow of Cities” that illustrates the economic importance of centrality.
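For readers curious about the mechanics, a repeat sales index is built from pairs of sales of the same property, so that shifts in the mix of properties sold don’t contaminate the index. The sketch below uses invented prices and the simplifying assumption that every sale pair spans consecutive periods, which reduces the usual regression approach (Bailey-Muth-Nourse) to chaining the geometric mean of price relatives; RCA’s actual methodology is more involved:

```python
from math import log, exp
from collections import defaultdict

# Toy repeat-sales data: (property_id, period_bought, period_sold,
# price_bought, price_sold). All prices are hypothetical.
sales = [
    ("A", 0, 1, 100_000, 110_000),
    ("B", 0, 1, 200_000, 214_000),
    ("C", 1, 2, 150_000, 168_000),
    ("D", 1, 2, 300_000, 342_000),
]

# Collect the log price relative of each pair under its selling period.
growth = defaultdict(list)
for _, t0, t1, p0, p1 in sales:
    growth[t1].append(log(p1 / p0))

# Chain the geometric mean of price relatives, starting from 100.
index = {0: 100.0}
for t in sorted(growth):
    mean_log = sum(growth[t]) / len(growth[t])
    index[t] = index[t - 1] * exp(mean_log)

for t, v in sorted(index.items()):
    print(f"period {t}: index = {v:.1f}")
```

Because only same-property sale pairs enter the calculation, the index captures appreciation of the existing stock rather than, say, a shift toward selling more expensive buildings.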

Chart 2: Transit access boosts property values

In addition to walkability, another aspect of great urban spaces—transit accessibility—is also a strong predictor of property values. The Center for Neighborhood Technology looked at trends in residential real estate values in Boston, Chicago, Minneapolis-St. Paul, Phoenix, and San Francisco between 2006 and 2011, and found that property in transit-served locations dramatically outperformed property values in places with limited transit. They found strong evidence to support the view that “access to transit” is the new real estate mantra. Over this five-year period, transit-served locations outperformed the typical property in their region by about 40 percent, while property values in non-transit-served areas underperformed the regional average.

[Chart: Center for Neighborhood Technology, residential values in transit sheds vs. regional average]

Chart 3: People are increasingly moving to urban centers

Luke Juday at the University of Virginia’s Demographics Research Group has done a terrific job of compiling Census data to map the relationship between population trends and centrality—how close people live to the center of the central business district. Juday’s work can show whether specific population groups are, in the aggregate, moving closer to the urban center or decentralizing. His interactive charts show data for the top 50 metropolitan areas, and clearly illustrate the centralizing trend among well-educated young workers—something that we’ve explored in our reports on the Young and Restless. For example, consider this chart of the location of 22 to 34 year olds by distance from the central business district in 2000 and 2012.

[Chart: share of 22 to 34 year olds by distance from the central business district, 50 largest metros, 2000 and 2012]


The data in this chart are a composite of the 50 largest metropolitan areas in the U.S. In 2012, the fraction of the population within a mile of the center of the central business district (the darker line) in this key young adult demographic approached 30 percent, a substantial increase from 2000 (the lighter line). Meanwhile, the share of the population in more outlying areas declined. This is powerful evidence of the growing preference of young adults for urban living.

At City Observatory, we’re data-driven. These three charts, taken together with four others we highlighted earlier, make a strong case for the growing economic importance of cities. Walkability, transit access and the movement to city centers are big economic drivers. That’s the lesson that all of us–students and urban leaders alike–need to be keeping in mind.


The limits of data-driven approaches to planning

City Observatory believes in using data to understand problems and fashion solutions. But sometimes the quantitative data that’s available is too limited to enable us to see what’s really going on. And incomplete data can lead us to the wrong conclusions.

The light's so much better here (Flickr: C. Chana)

Our use of data is subject to what we call the “drunk under the streetlamp” problem: An obviously intoxicated man is on his hands and knees on the sidewalk, under a streetlamp. A passing cop asks him what he’s doing. “Looking for my keys,” the man replies. “Well, where did you drop them?” the cop inquires. “About a block away, but the light’s better here.”

When it comes to transportation, we have copious data about some things, and almost nothing about others. Plus, there’s an evident systematic bias in favor of current modes of transportation and travel patterns. The car-centric data we have about transportation fundamentally warps the field’s decision-making. Unless we’re careful, big data will only perpetuate that problem—if not make it worse.

Sometimes qualitative data is more informative

To understand why qualitative data can sometimes tell us more, let’s look at some documentation about the way one American transportation system performs.

Three recent essays from people walking in Houston make it clear that the infrastructure and land use patterns that facilitate safe walking simply don’t exist there. The following excerpts are snapshots from a large body of qualitative evidence showing that, in many U.S. cities, walking is a hellish experience.

Writing in Texas Monthly, in an essay entitled “Where the Sidewalks End,” Sukhada Tektel describes her experiences adapting to Houston after living in Mumbai and Toulouse:

Nothing could have prepared me for the disconnectedness of this oil-and-gas mecca: no clear city center, pitiable public transportation, and, most strikingly, no place to walk…For as far as the eyes can see, there are only cars and not a single person on foot.

David Yearsley wrote a different essay, albeit with a similar title (“Where the Sidewalk Ends”), describing wandering about Houston’s downtown and Third Ward while visiting for an organ music gathering. Even traversing the city’s upscale River Oaks district, he describes long, sidewalk-less stretches outside the walled enclaves along busy four-lane San Felipe Avenue. In ten miles of walking, he encountered only two other pedestrians, both walking their dogs.

At the Houston Chronicle, David Dorantes wrote, “I want to walk, but Houston won’t let me.” Like many migrants to the Bayou City, he has lived in other places where walking is a normal part of everyday life. But not in Houston:

Nowhere else have I ever experienced such fear when walking in the street. I don’t mean that I’m afraid of the people who I meet on the sidewalk. I mean that walking in Houston is a horrific adventure, a pleasure endangered.

It’s unfair to pick on Houston. Large parts of most American cities, and especially their suburbs, constitute vast swaths of hostile territory to people traveling on foot. Either destinations are too spread out, or there just aren’t sidewalks or crosswalks to support safely walking from point to point. Moreover, walking is so uncommon that drivers have become conditioned to behave as if pedestrians don’t exist, making streets even more foreboding.

From the standpoint of the data-reliant transportation engineer, the problems encountered by Dorantes, Yearsley, and Tektel are invisible—and therefore “nonexistent.” Because we lack the conventional metrics to define and measure, for example, the hardships of walking, we don’t design and enforce solutions or adopt targeted public policies.

But when it comes to car traffic, we have parking standards, traffic counts, speed studies, and “level of service standards.” There is simply no comparable vocabulary or statistics for walking or cycling. Traffic engineers will immediately tell us when a road is substandard, or its pavement has deteriorated, or its level of service has become (or might someday become) degraded. We have not collected a parallel array of metrics to tell us when it isn’t safe, convenient, or desirable to walk or bicycle to common destinations. The American Society of Civil Engineers’ Infrastructure Report Card grades roads chiefly on vehicle congestion and delay (using dubious data, in our estimation). And, as we’ve pointed out, the U.S. DOT’s proposed performance measures for urban transportation further codify this bias by making vehicle delay the chief indicator of how well roads work. The logical result, as Smart Growth America has argued, is that we will end up with a system that optimizes every street for fast-moving cars, with—predictably—negative effects on walking.

The personal stories of pedestrians in Houston are rich and compelling in their detail, but lack the technocratic throw-weight of quantifiable statistics or industry standards to drive different policies and investments in our current planning system.

Will the move to “smart” cities make this worse?

Last month, the U.S. Department of Transportation announced that Columbus, OH, was the winner of its Smart Cities Challenge, beating out six other cities around the country. Google’s city planning subsidiary, Sidewalk Labs, promised to work with the winning city to deploy a wide array of big data and communication tools in order to better plan and operate transit systems. While the Guardian speculated that Google is securing a central position for its technologies in urban transportation markets, we have a different concern.

Sidewalk Labs has sketched out Flow, a flashy new data system for transportation. According to its own descriptions and press reports, it will help cities optimize traffic and parking. Clearly, Flow is primarily concerned with vehicles (cars and transit vehicles alike). But there’s no indication how it will address the movement of people on foot and on bicycles. It’s ironic that an entity called Sidewalk Labs appears more concerned with cars than with pedestrians.

As the old adage goes: If you don’t count it, it doesn’t count. That premise becomes vastly more important the more we define problems in big-data terms. New technology promises to provide a firehose of data about cars, car travel, car delay, and roadways—but not nearly as much about people. This is a serious omission, and should give us pause about the application of “smart” principles to cities and transportation planning.

It will likely amplify the bias that already favors counting cars, but not people. Consider New York City, perhaps the most pedestrian-oriented place in the nation. New York gathers data on pedestrian activity in a twice-annual survey (which counts pedestrian traffic on two different days in May and September at 100 locations). Contrast that with its system that reports vehicle traffic speeds in real time at more than 300 locations.

This isn’t simply a matter of somehow instrumenting bike riders and pedestrians with GPS and communication devices so they are as tech-enabled as vehicles. An exacting count of existing patterns of activity will only further enshrine a status quo where cars are dominant. For example, a perfectly instrumented count of pedestrians, bicycles, and cars in Houston would show—correctly—little to no bike or pedestrian activity. And no amount of calculation of vehicle flows will reveal whether a city is providing a high quality of life for its residents, much less meeting their desires for the kinds of places they really want to live in.

The fundamental problem is that we’ve designed our cities for the people moving through them, rather than for the people living, working, and being in them. We’re obsessed with getting there rather than being there.

If we want cities that are truly walkable and bikeable–that can become great places to be rather than easy corridors to travel through–we have to listen to more than big data. We need a framework that considers a wide array of evidence of what we’ve done and what we’ve left undone; of what we are, and what we aspire to be. Simply grafting more technology on to today’s imbalanced system will not accomplish this.

Talent: The key to metro economic success

Educational attainment explains two-thirds of the variation in economic success among metropolitan areas.

Each additional percentage point increase in the four-year college attainment rate increases metro per capita income by $1,500

We’re increasingly living in a globalized, knowledge based economy.  In that world, the single most important factor determining a region’s economic success is the education and skills of its population. If you’re concerned about urban economic development, the one thing you should be focusing on, laser-like, is educational attainment.  Raising a metro area’s educational attainment is the key to raising productivity, living standards and incomes. Our core metric for assessing the importance of a well-educated population is to look at the relationship between per capita incomes and the four-year college attainment rate, a relationship we call the “Talent Dividend.”

We’ve been tracking these data, and today we’re updating them to reflect the latest information from the Census Bureau’s just-released 2018 American Community Survey.  We’ve paired this information with the Bureau of Economic Analysis’ estimates of per capita income in each metropolitan area for 2018. The following chart plots the relationship between per capita personal income (on the vertical axis) and the fraction of the adult population who have completed at least a four-year college degree (on the horizontal axis).  Each dot on the chart represents one of the nation’s metropolitan areas with at least 1 million population (53 of them).  You can mouse-over a dot to see the corresponding metropolitan area and its educational attainment rate and per capita income.

There’s a strong, positive correlation between educational attainment and per capita income.  The metro areas with the highest levels of education have the highest levels of per capita personal income.  Cities like San Francisco, Boston and Washington have the highest levels of per capita income and the best-educated populations. Cities like Riverside and Las Vegas have low levels of educational attainment and correspondingly lower levels of per capita income. The coefficient of determination of the two variables–a statistical measure of the strength of the relationship–is .65, which suggests we can explain about two-thirds of the variation in per capita personal income among metropolitan areas simply by knowing what fraction of their adult population has a four-year degree. Most cities lie close to the regression line; a few outliers have plausible explanations for their over- or under-performance. San Francisco and San Jose lie far above the regression line, and are super-charged (and expensive) high performers.  Raleigh and Austin have incomes lower than their educational attainment would predict, but also have populations that skew very young, and therefore have lower incomes.
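For readers who want to see the mechanics behind these figures, here is a minimal sketch of the calculation: an ordinary least squares fit of per capita income on college attainment, with the slope read as the talent dividend and the coefficient of determination as the share of variation explained. The metro values below are hypothetical stand-ins, not the actual ACS and BEA figures.

```python
# Hypothetical (attainment %, per capita income $) pairs for a few metros;
# the real analysis uses ACS and BEA data for all 53 large metro areas.
data = [(25, 42_000), (30, 50_000), (35, 57_000),
        (40, 66_000), (45, 72_000), (50, 80_000)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Ordinary least squares: slope = cov(x, y) / var(x)
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in data) / n
var_x = sum((x - mean_x) ** 2 for x, _ in data) / n
slope = cov_xy / var_x            # dollars of income per percentage point
intercept = mean_y - slope * mean_x

# Coefficient of determination: share of income variation explained
ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in data)
ss_tot = sum((y - mean_y) ** 2 for _, y in data)
r_squared = 1 - ss_res / ss_tot
print(f"Talent dividend: ${slope:,.0f} per point, R^2 = {r_squared:.2f}")
```

With these made-up numbers the fit is much tighter than the real .65; the point is only to show how the slope and the coefficient of determination are derived from the same scatter of metros.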

This chart tells you the most important thing you need to know about urban economic development in the 21st century: if you want a successful economy, you have to have a talented population. Cities with low levels of educational attainment will find it difficult to enjoy higher incomes; cities with higher levels of educational attainment can expect greater prosperity. As Ed Glaeser succinctly puts it: “At the local level fundamentally the most important economic development strategy is to attract and train smart people.” And critically, because smart people are the most mobile, building the kind of city that people want to live in is a key for anchoring talent in place. And, importantly, the economic research shows that the benefits of higher educational attainment don’t just accrue to those with a better education: people with modest education levels have higher incomes and lower unemployment rates if they live in metro areas with higher average levels of education.

The data presented here imply that a 1 percentage point increase in the four-year college attainment rate is associated with about a $1,500 per year increase in average incomes in a metropolitan area, an increment we refer to as the Talent Dividend.  This cross-sectional relationship suggests that if a metropolitan area were to improve its educational attainment by one percentage point on a sustained basis, it would see a significant increase in its income.

Over time, the strength of this relationship and the size of the talent dividend effect have been increasing.  When we computed the relationship using 2010 data, the correlation coefficient was .60 and the size of the talent dividend was $860 (in current dollars).  These data suggest that educational attainment has become even more powerful in determining economic success than just a few years ago.

Education is a stronger predictor of economic success today than ever before. That’s true for individuals, for private businesses, for communities, and for metropolitan economies.  The better educated you are, the more likely you are to be prosperous in a knowledge-based economy. Not only do well-paid and fast growing technology jobs go disproportionately to the better educated, but better educated workers tend to be more adaptable and more innovative, which better prepares them to cope with a changing economy.  The policy lessons for city leaders are clear: a successful economy depends on doing a great job of educating your population, starting with your children, and also building a community that smart people will choose to live in.

Court: Don’t spend billions on outdated travel forecasts

[Image: Maryland Purple Line]

Last week, the Washington Post reported that the U.S. District Court in Washington, D.C., has ordered new ridership projections for the proposed Purple Line light rail line, which will connect a series of Maryland suburbs. Like any multi-billion dollar project that serves a densely settled metropolitan area—and this one connects some of its wealthiest suburbs—there’s bound to be controversy. But today, we’ll ignore the substantive debate over the merits of the proposed alternative and focus instead on the technical issue of projecting future ridership on which this case turned.

The court’s decision was based on the fact that the state and the FTA have failed to update ridership projections since 2009. The plaintiffs argued that rail ridership on the Washington Metropolitan Area Transit Authority’s Metro system has declined every year since then, and that the system’s recent safety, budget, and operational woes are threatening to push ridership even lower.

[Chart: WMATA Metrorail ridership]

The state of Maryland and the Federal Transit Administration, the Purple Line’s project sponsors, argued in response to the plaintiffs that because WMATA is a separate transit system, its ridership is not germane to assessing the environmental impacts of the Purple Line, which will be operated by the Maryland Transit Administration. The court rejected these claims, noting that more than a quarter of the expected riders of the Purple Line are also expected to use the WMATA Metro system.

U.S. District Court Judge Richard J. Leon determined that the failure to update the ridership forecasts in light of WMATA’s troubles was “arbitrary and capricious.” In his opinion, the two agencies were “cavalier” and failed to update ridership forecasts in the face of changed circumstances, which “raises serious concerns about their competence as stewards of nearly a billion dollars of federal taxpayer’s funds.”

The case is Friends of the Capital Crescent Trail v. FTA, Civil Case #14-01471. The decision was handed down on August 3. A copy of Judge Leon’s memorandum of opinion from August 3 is available here.

Maryland officials have said they’ll appeal the District Court’s ruling, so it’s not yet certain that this case will be the law of the land. But in our view, if this ruling stands, it has an important implication for transportation planning—especially the construction of highway expansion projects. The underlying principle here is that big investments ought to be based on forecasts—whether of future traffic or ridership—that fully reflect all the information we have at hand on how travel patterns are changing, and are likely to change in the future. The track record for highway traffic forecasts has, if anything, been far worse than the problems flagged for WMATA’s Metro.

Around the nation, state departments of transportation have routinely overestimated the growth of automobile traffic and used these exaggerated projections to justify billions of dollars of new roads. The State Smart Transportation Institute analyzed an aggregation of state traffic forecasts prepared annually by the USDOT. SSTI’s analysis showed that the 20-year projections overestimated future traffic volumes in every year for which states’ reports could be compared against data on actual miles driven by Americans.

[Chart: SSTI analysis, state traffic forecasts vs. actual vehicle miles traveled]
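To see what an SSTI-style comparison looks like in miniature, the sketch below checks a set of 20-year projections against observed travel. Every number here is hypothetical, not SSTI’s data; the pattern of uniform overestimation simply mirrors the finding described above.

```python
# Illustrative comparison of 20-year traffic forecasts against actuals.
# All figures are hypothetical, in trillions of vehicle miles traveled.
forecast_vmt = [3.10, 3.18, 3.26, 3.34]   # projected
actual_vmt   = [3.00, 2.97, 2.96, 3.02]   # observed

for f, a in zip(forecast_vmt, actual_vmt):
    error = (f - a) / a
    print(f"forecast {f:.2f}T vs actual {a:.2f}T: {error:+.1%}")
```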

As we and others have noted, there’s been a sea-change in American travel habits, influenced by changing gasoline prices and a generational change in attitudes toward travel and urban living. Yet many state forecasting models are still grounded in unexamined extrapolations of trends dating back to the 1990s. In Friends of the Capital Crescent Trail v. FTA, Judge Leon cited press reports on the impact of Metro safety problems on ridership. If courts take judicial notice of the contemporaneous developments that are materially changing travel patterns, we hope that they will cast a similar light on the abundant evidence about changing travel behavior—and insist that highway projects also fully consider these changing circumstances. There’s a precedent: As we reported last year, a federal court in Wisconsin struck down the approval of a highway project there based on flawed and outdated traffic projections.

An overwhelming amount of assumption, inertia, and potential bias is buried in traffic projections. At a time of rapid shifts in travel behavior, markets, and technology, and especially in response to climate change, we should be especially wary of projects based on models that are opaque extrapolations of outdated trends. It’s a good thing that the National Environmental Policy Act requires project sponsors to take note of changing circumstances. As Judge Leon observed in his opinion, this ought to be simply a matter of common sense.

The Talent Dividend: Updated

Educational attainment explains two-thirds of the variation in economic success among metropolitan areas.

Each additional percentage point increase in the four-year college attainment rate increases metro per capita income by $1,250

For a long time, we’ve been exponents of what we call “The Talent Dividend,” the idea that raising a metro area’s educational attainment is the key to raising productivity, living standards and incomes. Our core metric for assessing the importance of a well-educated population is to look at the relationship between per capita incomes and the four-year college attainment rate.

We’ve been tracking these data, and today we’re updating them to reflect the latest information from the Census Bureau’s just-released 2016 American Community Survey.  We’ve paired this information with the Bureau of Economic Analysis’ estimates of per capita income in each metropolitan area for 2016. The following chart plots the relationship between per capita personal income (on the vertical axis) and the fraction of the adult population who have completed at least a four-year college degree (on the horizontal axis).  Each dot on the chart represents one of the nation’s metropolitan areas with at least 1 million population (53 of them, according to the 2015 Census tabulations).  You can mouse-over a dot to see the corresponding metropolitan area and its educational attainment rate and per capita income.

As you’ll immediately notice, there’s a strong, positive correlation between educational attainment and per capita income.  The metro areas with the highest levels of education have the highest levels of per capita personal income.  Cities like San Francisco, Boston and Washington have the highest levels of per capita income and the best-educated populations. Cities like Riverside and Las Vegas have low levels of educational attainment and correspondingly lower levels of per capita income. The coefficient of determination of the two variables–a statistical measure of the strength of the relationship–is .67, which suggests we can explain two-thirds of the variation in per capita personal income among metropolitan areas simply by knowing what fraction of their adult population has a four-year degree. Most cities lie close to the regression line; a few outliers have plausible explanations for their over- or under-performance. San Francisco and San Jose lie far above the regression line, and are super-charged (and expensive) high performers.  Raleigh and Austin have incomes lower than their educational attainment would predict, but also have populations that skew very young, and therefore have lower incomes.

This chart tells you the most important thing you need to know about urban economic development in the 21st century: if you want a successful economy, you have to have a talented population. Cities with low levels of educational attainment will find it difficult to enjoy higher incomes; cities with higher levels of educational attainment can expect greater prosperity. As Ed Glaeser succinctly puts it: “At the local level fundamentally the most important economic development strategy is to attract and train smart people.” And critically, because smart people are the most mobile, building the kind of city that people want to live in is a key for anchoring talent in place. And, importantly, the economic research shows that the benefits of higher educational attainment don’t just accrue to those with a better education: people with modest education levels have higher incomes and lower unemployment rates if they live in metro areas with higher average levels of education.

The data presented here imply that a 1 percentage point increase in the four-year college attainment rate is associated with about a $1,250 per year increase in average incomes in a metropolitan area, an increment we refer to as the Talent Dividend.  This cross-sectional relationship suggests that if a metropolitan area were to improve its educational attainment by one percentage point on a sustained basis, it would see a significant increase in its income.

Over time, the strength of this relationship and the size of the talent dividend effect have been increasing.  When we computed the relationship using 2010 data, the correlation coefficient was .60 and the size of the talent dividend was $860 (in current dollars).  These data suggest that educational attainment has become even more powerful in determining economic success than just a few years ago.

Education is a stronger predictor of economic success today than ever before. That’s true for individuals, for private businesses, for communities, and for metropolitan economies.  The better educated you are, the more likely you are to be prosperous in a knowledge-based economy. Not only do well-paid and fast growing technology jobs go disproportionately to the better educated, but better educated workers tend to be more adaptable and more innovative, which better prepares them to cope with a changing economy.  The policy lessons for city leaders are clear: a successful economy depends on doing a great job of educating your population, starting with your children, and also building a community that smart people will choose to live in.

The Summer Driving Season & The High Price of Cheap Gas

Cheaper gas comes at a high price: More driving, more dying, more pollution.

We’re at the peak of the summer driving season, and millions of Americans will be on the road. While gas prices are down from the highs of just a few years ago, there’s still a significant price to be paid.

Vacation Traffic (Flickr: Lunavorax)

As the Frontier Group’s Tony Dutzik noted, earlier this month marked the 103rd consecutive week in which gasoline prices were lower than they were in the same week a year previously.  Two years ago, the price of gas averaged about $3.75 per gallon. Last week, according to the US Department of Energy, it stood at just under $2.40.

While cheaper gas has been a short run tonic for the economy–lower gas bills mean consumers have more money to spend on other things–the lower price of gas has provoked predictable behavior changes.

We’re driving more, reversing a decade-long trend in which Americans drove less. Ever since the price of gas went from less than $2 a gallon in 2002 to $4 a gallon in 2008, Americans have been driving less and less every year. Vehicle miles traveled per person per day peaked in 2004 at 26.7, and declined steadily through 2013. But in 2014, with the plunge in gas prices, driving started going back up.


Price matters, but driving still exhibits a relatively low price elasticity. The 39 percent decline in gas prices over the past two years has (so far) produced an increase in driving of only about 4 percent.
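That back-of-the-envelope elasticity can be computed directly from the two figures cited above, dividing the percent change in driving by the percent change in price:

```python
# Rough price elasticity of driving, using the figures cited above:
# a 39 percent decline in gas prices and a roughly 4 percent rise in driving.
price_change = -0.39
vmt_change = 0.04

elasticity = vmt_change / price_change
print(f"Implied price elasticity of driving: {elasticity:.2f}")
```

An elasticity of roughly -0.1 means a 10 percent change in the price of gas moves driving by only about 1 percent, which is what economists mean when they call demand for driving inelastic in the short run.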

We’re dying more

Earlier this month the National Highway Traffic Safety Administration reported that highway fatalities rose nearly 8 percent in the past year.  While there’s a lot of speculation that distracted driving may contribute to many crashes–though there’s little evidence it’s associated with the uptick in fatalities–it’s very clear that more driving is the biggest risk factor in producing more crashes, and more deaths.  There’s also some statistical evidence that cheaper gas actually facilitates more driving by more crash-prone drivers, which is consistent with a rise in fatalities that is greater than the increase in miles driven.

We’re using more energy and polluting more

More driving means more energy consumption, and more pollution as well.  Not only are we driving more miles–which burns more fuel–but we’re also buying less efficient vehicles.  According to Michael Sivak at the University of Michigan, the sales-weighted average fuel economy of new cars purchased in the US has declined over the past two years from 25.7 miles per gallon to 25.3 miles per gallon.

Each gallon of gasoline burned generates about 20 pounds of carbon emissions, so the increase in driving also means more greenhouse gas emissions.
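The 20-pounds-per-gallon figure makes the fuel economy arithmetic easy to sketch. The annual mileage below is a hypothetical illustrative figure, not a statistic from this post:

```python
# Rough carbon arithmetic from the figures above: ~20 pounds of CO2 per
# gallon burned, and sales-weighted fuel economy slipping from 25.7 to
# 25.3 mpg. The annual mileage is a hypothetical illustrative figure.
LBS_CO2_PER_GALLON = 20
annual_miles = 12_000          # hypothetical driver

for mpg in (25.7, 25.3):
    gallons = annual_miles / mpg
    co2_lbs = gallons * LBS_CO2_PER_GALLON
    print(f"{mpg} mpg: {gallons:,.0f} gallons, {co2_lbs:,.0f} lbs CO2/year")
```

Even before counting the extra miles driven, the small slip in fuel economy alone adds on the order of 150 pounds of CO2 per year for a driver like this one.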

The bottom line is that prices matter, and many of the key attributes of driving are under-priced.  Vehicles don’t pay for the pollution they emit, for their contribution to climate change, or even for the cost of wear and tear on roads. If the price of driving more accurately reflected the costs it imposes on everyone, there’d be less congestion, less pollution, fewer traffic deaths, and we’d save money. The fluctuations in gas prices over the past few years are powerful economic evidence of how this works. It’s a lesson we’ve paid for, so it would be good if we learned from it.

Why Talent Matters to Cities

The biggest single factor determining the success of a city’s economy is how well-educated its population is. As the global economy has shifted to knowledge-based industries, the jobs with the best pay have increasingly gone to those with the highest levels of education and skill.

For a long time, we’ve been talking about the talent dividend–how much an area’s college attainment rate is correlated with its per capita income. Since it’s such an important touchstone for policy, we think it’s worth taking a close look at what the data say about the strength and importance of this relationship.

Today, we’ve pulled together the latest metro area data–for 2014–from the Census Bureau (on educational attainment) and from the Bureau of Economic Analysis (on per capita income).  The following chart plots the relationship between per capita personal income (on the vertical axis) and the fraction of the adult population who have completed at least a four-year college degree (on the horizontal axis).  Each dot on the chart represents one of the nation’s metropolitan areas with at least 1 million population (53 of them, according to the 2014 Census tabulations).  You can mouse-over a dot to see the corresponding metropolitan area and its educational attainment rate and per capita income.

As you’ll immediately notice, there’s a strong, positive correlation between educational attainment and per capita income.  The metro areas with the highest levels of education have the highest levels of per capita personal income.  Cities like San Francisco, Boston and Washington have the highest levels of per capita income and the best-educated populations. Cities like Riverside and Las Vegas have low levels of educational attainment and correspondingly lower levels of per capita income. The coefficient of determination of the two variables–a statistical measure of the strength of the relationship–is .67, which suggests we can explain two-thirds of the variation in per capita personal income among metropolitan areas simply by knowing what fraction of their adult population has a four-year degree.

This chart tells you the most important thing you need to know about urban economic development in the 21st century: if you want a successful economy, you have to have a talented population. Cities with low levels of educational attainment will find it difficult to enjoy higher incomes; cities with higher levels of educational attainment can expect greater prosperity. As Ed Glaeser succinctly puts it: “At the local level fundamentally the most important economic development strategy is to attract and train smart people.” And critically, because smart people are the most mobile, building the kind of city that people want to live in is key to anchoring talent in place. And, importantly, the economic research shows that the benefits of higher educational attainment don’t just accrue to those with a better education: people with modest education levels have higher incomes and lower unemployment rates if they live in metro areas with higher average levels of education.

The data presented here imply that a 1 percentage point increase in the four-year college attainment rate is associated with about a $1,100 per year increase in average incomes in a metropolitan area.  This is what we call the Talent Dividend.  This cross-sectional relationship suggests that if a metropolitan area were to improve its educational attainment by one percentage point on a sustained basis, that it would see a significant increase in its income.
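The talent dividend figure is just the slope of a bivariate regression of per capita income on attainment. Here’s a minimal sketch of that computation; the numbers below are made-up illustrative data, not the actual Census and BEA figures for the 53 large metro areas:

```python
# Sketch: estimating the "talent dividend" as a simple OLS slope.
# The data below are illustrative placeholders, NOT the real metro figures.

def ols(xs, ys):
    """Return (slope, intercept, r_squared) for a regression of ys on xs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    syy = sum((y - mean_y) ** 2 for y in ys)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    r_squared = (sxy ** 2) / (sxx * syy)
    return slope, intercept, r_squared

# attainment: percent of adults with a BA; income: per capita income ($)
attainment = [25.0, 30.0, 35.0, 40.0, 45.0]
income = [38_000, 44_500, 49_000, 56_000, 60_500]

slope, intercept, r2 = ols(attainment, income)
# slope: dollars of per capita income per percentage point of attainment
```

With the real 53-metro dataset, the same computation yields a slope near $1,100 and a coefficient of determination of about .67.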

Over time, the strength of this relationship and the size of the talent dividend effect have been increasing.  When we computed the relationship using 2010 data, the correlation coefficient was .60 and the size of the talent dividend was $860 (in current dollars).  These data suggest that educational attainment has become even more important in determining economic success than just a few years ago.

We’ve also mapped the metro level data on the talent dividend relationship.  On this map, the color of each circle corresponds to a metropolitan area’s level of educational attainment (red circles have lower than average educational attainment among these metros, blue circles higher educational attainment).  The size of circles is proportional to an area’s per capita income; larger circles indicate higher per capita income.  (You can mouse over any metro area to see the educational attainment and per capita income figures for that metro area).

Education is a stronger predictor of economic success today than ever before. That’s true for individuals, for private businesses, for communities, and for metropolitan economies.  The better educated you are, the more likely you are to be prosperous in a knowledge-based economy. Not only do well-paid and fast growing technology jobs go disproportionately to the better educated, but better educated workers tend to be more adaptable and more innovative, which better prepares them to cope with a changing economy.  The policy lessons for city leaders are clear: a successful economy depends on doing a great job of educating your population, starting with your children, and also building a community that smart people will choose to live in.

Patents, place, and profit

Readers of the Aug. 19 Week Observed: here’s the piece you’re looking for.

Here’s a puzzle: If 89 percent of Apple’s ideas are invented in the U.S., why is 92 percent of its profit overseas?

The link between local economies and tax bases has long been obvious and physical. Companies paid property taxes on their buildings and equipment, people and businesses paid sales taxes on their purchases, and employers and employees paid taxes on their income. But the economy is changing, and the link between economic activity and taxes has changed.

Designed by Apple in California (Flickr: ECastro)

Perhaps no products are better examples of this trend than Apple’s iPhone and MacBook Air laptop computer. If you look at the polished aluminum backside of any Air, you’ll find an emblazoned slogan: “Designed by Apple in California.” That imprint has a strong basis in fact.

Between 2011 and 2015, Apple got 7,527 patents worldwide, according to the U.S. Patent and Trademark Office. Of these, 6,716 were in the United States, and 6,143 were in California.

The next line on the back of the MacBook Air is “Assembled in China.” Stories of the globalization of production of high-tech devices abound. Many reports have chronicled the conditions of the tens of thousands of workers employed by contractors like China’s FoxConn, which produce all manner of technological devices. But the labor and even the components in most tech devices reflect only a small portion of their value. The cost and profitability of devices stem not from their physical elements or the labor used to assemble them, but from the superior software and design features—that is, the intellectual property generated by Apple.

As The Economist explains, “Apple, it’s worth pointing out, continues to capture most of the value added in its products. The most valuable aspects of an iPhone, for instance, are its initial design and engineering, which are done in America.” InformationWeek estimates, “The cost of parts and labor to assemble an iPhone 6S, for example, cost an estimated $160, compared to a selling price of $399.”

However, even though Apple does 90 percent of its creative work in the United States, it pays very little in U.S. taxes. Instead, Apple attributes a disproportionate share of its earnings to the Republic of Ireland. Nobel laureate and former World Bank economist Joseph Stiglitz has called Apple’s accounting “fraudulent,” telling Bloomberg:

“Here we have the largest corporation in capitalization not only in America, but in the world, bigger than GM was at its peak, and claiming that most of its profits originate from about a few hundred people working in Ireland—that’s a fraud.”

How does Apple do it? The exact mechanics are hidden in the company’s tax filings in the U.S. and Ireland, but the basic mechanism at work here is transfer pricing. Apple has transferred ownership of its intellectual property (its patents and software designs, and perhaps even its brand and logo) to its overseas subsidiaries. Obviously, Ireland is not a major center for Apple’s technology development. According to USPTO, between 2011 and 2015, Apple received exactly zero patents for work performed in Ireland. By contrast, at least some other U.S. technology companies, including IBM, Intel, Microsoft, and Google, all reported Irish patents.

Nonetheless, Apple’s operations in the U.S., and throughout the rest of the world, make payments (including royalties and purchases) to foreign affiliates in, say, Dublin, in order to make use of the intellectual property. The foreign affiliate records its income and pays any applicable taxes in its low-tax or no-tax jurisdiction.

This strategy shifts profits from the United States, where they might be taxed at the high statutory 35 percent corporate tax rate, to places like Ireland, where taxes are much lower and can be deferred. As a result, Apple’s profits have piled up overseas. Today, $215 billion of Apple’s $232 billion cash hoard is located outside the United States.
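The mechanics amount to simple arithmetic: route profit to a lower-rate jurisdiction via royalty payments, and the combined tax bill falls. A stylized sketch of that arithmetic; the 35 percent U.S. statutory rate comes from the text, while the 12.5 percent Irish rate and all the dollar amounts are illustrative assumptions, not Apple’s actual filings:

```python
# Stylized transfer-pricing arithmetic. The 35% US statutory rate is from
# the text; the 12.5% Irish rate and the dollar amounts are illustrative
# assumptions, not figures from any company's actual tax filings.

US_RATE = 0.35
IRISH_RATE = 0.125

def tax_bill(profit, royalty_to_affiliate):
    """Combined tax owed when `royalty_to_affiliate` of `profit` is paid to
    a foreign IP-holding affiliate and taxed there instead of in the US."""
    us_taxable = profit - royalty_to_affiliate
    return us_taxable * US_RATE + royalty_to_affiliate * IRISH_RATE

profit = 10_000_000_000  # $10B of worldwide profit (assumed)

no_shifting = tax_bill(profit, 0)                     # all profit taxed in US
with_shifting = tax_bill(profit, 9_000_000_000)       # 90% booked abroad

savings = no_shifting - with_shifting                 # the incentive to shift
```

On these assumed numbers, shifting 90 percent of the profit cuts the combined tax bill from $3.5 billion to roughly $1.5 billion, which is the whole incentive in a nutshell.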

Apple is hardly alone in pursuing such strategies to avoid U.S. taxes. Facebook is facing an IRS claim for billions in back taxes for a single year, based on its strategy of shifting its intellectual property to the Cayman Islands.

Because so many companies are doing this, the cumulative reduction in American taxes paid by American corporations is significantly reducing federal revenue. The most comprehensive analysis of tax avoidance is by University of California’s Gabriel Zucman, author of The Hidden Wealth of Nations. Zucman estimates that these tax avoidance tactics cost the U.S. Treasury $130 billion annually in lost revenue.

A bizarre side effect of these tax-avoiding machinations showed up last month in the form of a breathtaking report that in 2015, the Gross Domestic Product of Ireland had increased an astonishing 26.3 percent over the prior year. This Irish miracle, as it turns out, has almost nothing to do with local economic activity and everything to do with the nation’s tax haven status, which artificially inflates the size of Ireland’s measured GDP.

The economy is increasingly dominated by global, knowledge-based companies whose profits are largely attributable to their intellectual property. The current system of taxation rewards Apple, Google, Facebook, and others for offshoring their patents, brands, or proprietary designs.

While the primary implications are for the U.S. Treasury, it’s also likely that shifting taxable income outside of the United States shortchanges states and cities as well. Most state tax systems piggyback on important aspects of the federal system, and state revenue departments are even more hamstrung in dealing with the high-powered lawyers and accountants of global firms than is the IRS.

As more sectors of the economy are disrupted by big firms using brands and patents, it may be harder for states and cities to collect taxes. This could also put smaller local firms at a competitive disadvantage. A city or state may have the legal ability to collect taxes from a local coffee shop, a bookstore, or a taxi company, but if a Starbucks, an Amazon, or an Uber attributes its profits to foreign domiciled intellectual property, that’s likely to be beyond the understanding, much less the reach, of local tax collectors.

This is a global problem that affects local economies and local governments, but it’s almost entirely beyond their ability to influence. At some point national governments—and international cooperation—will be required to establish clear ground rules and a level playing field in a world increasingly characterized by global, knowledge-based competition.

Suburban Renewal: Marietta demolishes affordable housing

Just say the words “urban renewal” and you immediately conjure up images of whole neighborhoods–usually populated by poor families and people of color–being dislocated by big new publicly funded development projects. It seems like a relic of the past.  But it appears to be getting a new lease on life in the suburbs. For a couple of years now, we’ve been following the story of Marietta, Georgia, where local officials used $65 million in taxpayer funds to buy up and begin demolishing some 1,300 apartments along Franklin Road. This is a striking case where the displacement of low income families was an explicit objective of public policy, rather than the side-effect of a changing real estate market. The tale raises some interesting questions about how we talk about neighborhood change, and whether we’re really open to economic integration in all places–city and suburb.

In a few weeks, the new Atlanta United franchise of Major League Soccer will kick off its inaugural season. The team’s been preparing at the verdant practice facility in the Atlanta suburbs.

Atlanta United’s Marietta Practice Facility (Google Maps)

What isn’t apparent from this picture is the fact that hundreds of old but serviceable apartments were demolished to make way for these spacious fields. The Marietta apartment complexes had been built in the 1960s, and, when new, were a preferred upscale location for single professionals and young married couples. Over the decades, the apartments aged and the mix of occupants changed. Franklin Road shifted to families with increasingly modest incomes—a process housing economists call filtering, which is the primary source of affordable housing throughout the nation. Along the way, the economic and racial makeup of the apartments transformed from nearly 90 percent white in 1980, with a poverty rate around five percent, to 20 percent white in 2010, with a poverty rate of nearly 25 percent.

Despite the usefulness of filtering, which increased the diversity of suburban Marietta, the city perceived these units as growing concentrations of poverty and, thus, a problem. So it used the proceeds of a voter-approved bond measure to purchase and begin demolishing the housing complexes. It’s worth noting that no one ever claimed that the buildings themselves were a problem, despite their age. Rather, it had everything to do with the demographics of their occupants. In all, the city’s plan calls for acquiring and demolishing almost 10 percent of the city’s multi-family housing stock.

Caution:  This post contains graphic images of housing displacement. Viewer discretion is advised.

Marietta’s plan is proceeding apace. The city re-christened Franklin Road as “Franklin Gateway” to signal change. It began demolishing the apartments last year. Here, we’ve used imagery from Google Maps to show what’s happened where one of these complexes—the Woodland Park Apartments—once stood.  This is what they looked like in 2011.

2011 (Google Streetview)
Google Maps, September 2011

Early this year, the demolition was nearly complete. All that remains of  the old apartment complex is its driveway, a partial brick wall and metal gate, and two patches of evergreen shrubbery, flanked by a stand of pine trees.

Google Maps, March 2016

The latest Google imagery shows the city’s unfolding plan for development. The site will become the training facility for Atlanta United, the area’s new major league soccer team. For use of the 32 acres formerly covered by apartments, the team will pay the city $1 per year for the first ten years of a thirty-year lease.

Google Maps, May 2016

The demolition of the apartments on Franklin Road represents a kind of national blind spot when it comes to talking about neighborhood change. In any large city—say New York, Los Angeles or Washington—the wholesale demolition of affordable housing to provide discounted land for new businesses would undoubtedly be treated as the most pernicious form of gentrification. But because it happens in a suburb, somehow it doesn’t count, or at least isn’t objectionable.

Perhaps this reflects a deeply ingrained but seldom-voiced bias in our views about place: Suburbs are for rich, mostly white people. Cities are for poorer people and people of color. Any change that runs counter to this worldview (like gentrification of a Brooklyn neighborhood, or efforts to build affordable apartments in suburbs like Marin County) is an affront to the order of things. The apparent prevalence of this outlook shows just how hard it will be to make progress on economic integration.


Five consecutive years of job growth: a clear cause for optimism in Detroit

Back in 2009, in the darkest days of the Great Recession, Federal Reserve Chair Ben Bernanke attempted to reverse the economic pessimism that gripped the nation. He pointed to what he called “green shoots,” small bits of good news around the country. To him, the green shoots showed that the economy was turning around, the economic winter was ending, and spring was around the corner. Bernanke was right. That year, job losses slowed, then stopped, and the economy began growing again.

Image via Flickr user Velikodniy

 

Today, we turn our attention to Detroit to look for evidence of green shoots in its economy. Detroit, with its long period of wrenching industrial change and urban decline, has long been a locus of national conversation. More recently, the media has emphasized efforts to revive Detroit. The city has embarked on the construction of a downtown light rail line. Quicken Loans conspicuously moved its headquarters downtown in 2010 and has steadily expanded its footprint. In the past few months, a Niketown has opened and a Shake Shack has been announced. These are visible signifiers of progress and renewal.

At City Observatory, we judge the success of revitalization efforts based on whether they move the needle of key economic statistics. And there’s growing evidence based on our analysis of employment statistics for Wayne County (which encompasses the city) that things are changing in a positive direction in Detroit, confirming the anecdotal signs on its streets.

A core measure of economic growth is the number of jobs in the local economy. By that standard, the late 1990s and the early 2000s were an unrelenting tide of bad news for Detroit. Between 2001 and 2010, Detroit lost more than 200,000 jobs. Total payroll employment declined by 24 percent. Even though the national economy and employment expanded for most of the decade, employment in Detroit declined every year between 2001 and 2010. The Great Recession simply amplified these job losses.

detroitchange

During this time, the city was in dire straits, both politically and financially: its mayor, Kwame Kilpatrick, had been removed from office in a scandal, the city’s financial administration had been turned over to a state-appointed administrator, and in 2013, the city entered the nation’s largest municipal bankruptcy.

But since 2010, Detroit’s economy has turned around. Employment totals for Wayne County bottomed out at about 690,000 jobs, then started growing again. Detroit has recorded year-on-year increases in employment every year since then, and is continuing to do so in 2016. Today the city has 50,000 more jobs than it did in the depths of the recession.
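The rebound is easy to put in percentage terms. A quick back-of-the-envelope calculation using the rounded figures in the text (a trough of about 690,000 jobs and roughly 50,000 added since), not the exact BLS series values:

```python
# Back-of-the-envelope rebound arithmetic using the rounded figures from
# the text (not the exact BLS values for Wayne County).

trough = 690_000   # approximate employment at the 2010 low point
gained = 50_000    # approximate jobs added since the trough

current = trough + gained
pct_rebound = 100 * gained / trough  # growth since the trough, in percent
```

That works out to roughly a 7 percent gain from the bottom, which still leaves Wayne County far below its 2001 employment level.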

detsector

 

There’s encouraging news, too, in the kind of growth that’s occurring. The details of Detroit employment growth point to a small rebound in the traditional manufacturing sector, as well as some much-welcomed growth in other knowledge-based sectors of the economy.

The industry detail shows that some sectors of the Detroit economy (Wayne County in particular) are now performing better than their counterpart industries nationally over the past year (March 2015 to March 2016). For example, while manufacturing employment has slipped nationally, it’s up nearly 1 percent in Detroit over the same time period—positive news after so many years of decline. Information industries and professional and business services, two stalwarts of the knowledge economy, have both recorded faster job growth in Detroit than the country overall in the past year. And financial services employment has grown at a 7.7 percent clip, four times faster than in the nation as a whole.


Still, not everything’s rosy: Wayne County is growing more slowly (+1.4 percent) than the surrounding suburban counties (+2.7 percent) and the nation as a whole (+2.0 percent). Some of this reflects continuing, difficult adjustments. For example, government employment is down 1.0 percent in Detroit, and up about 0.5 percent nationally.

Altogether, the employment data provide hopeful signs that the Detroit economy has put its worst economic days in the rearview mirror, and is starting to build a stronger economic future. Some of this undoubtedly has to do with the national economic recovery. However, it’s important to remember that during the last national economic growth cycle (2001 to 2008), Detroit was actually losing jobs when the nation was gaining them—evidence of the region’s structural economic problems. Even though the city and region have a long way to go, they are now headed in the right direction.

About the data

Please note that for this analysis, we use federal data for Wayne County, Michigan, which is centered on Detroit and includes the adjacent cities of Dearborn and Livonia, but does not include the surrounding suburban counties that make up the balance of the Detroit metropolitan area. The Bureau of Labor Statistics, which compiles these data, doesn’t compile monthly or annual city level statistics that would let us track these changes. These data are BLS Series ID SMS26198040000000001, and are available from the BLS website.

A To-Do List for Promoting Competitive Ride-Sharing Markets

Making a market for shared mobility services

Yesterday, we urged cities to think hard about how they can craft the rules for the transportation network companies that offer “ride sharing” systems to maximize competition, and encourage innovation and low prices.  “Let a thousand Ubers bloom,” we said.

Car-Sharing Club Poster by Weimer Pursell (1943), US Government Printing Office, via Flickr (John December)

 

The rules and regulations that cities set for ride sharing — everything from taxes, to accessibility requirements, to safety measures (like fingerprinting drivers) — can have an impact on whether ride sharing ultimately becomes an effective monopoly or cozy duopoly that is profitable for firms but offers limited choices for riders, or whether there’s an open system where new entrants can continually be as disruptive to incumbents as Uber and Lyft currently are to inefficient taxi systems.

Here’s our initial list of things cities should be considering to encourage a competitive, dynamic marketplace for ride sharing services.

  1. Assure ride sharing information is open and accessible. Ride share operators make their money using a scarce and expensive public asset: streets. They should be required to provide information about the trips they take, the areas they serve, and the fares they charge so that the public can understand how ridesharing affects the community—for good and for ill. Disclosing information on fares and service can also encourage competition, enabling comparisons between cities to see if consumers are getting a good deal. It can also provide a basis for deterring anti-competitive practices, like price discrimination. Boston and San Francisco have negotiated agreements with Uber to provide data on trip origins and destinations.
  2. Set some basic ground rules, including certification for drivers. If there are some minimal standards that assure rider safety and transparent information and pricing, consumers may be more willing to try smaller and newer firms. One possible reason that Uber and Lyft opposed fingerprinting drivers was that it would dilute the value of their brand reputation by leveling the playing field, assuring riders that anyone using a ride sharing service would be driven by a similarly vetted driver.
  3. Encourage bidding for travel. We’re all used to using services like Kayak to shop for the best airfares or bidding on eBay in real time auctions. Uber and Lyft act as price-setters, dictating a single price for trips, regardless of who’s traveling or who’s providing the ride. They then allocate drivers to riders and riders to drivers. It’s possible to imagine a more free-wheeling auction system, where riders could bid for trips, and drivers could compete to provide service. Such services might help make it easier for smaller rivals to break into the market, and would give customers more leverage.
  4. Don’t inadvertently privilege large size with regulatory setups. Virtually any regulatory requirement will impose a higher burden on small firms and startups than on larger, established businesses.  Consider, for example, requirements that all ride share operators offer a certain number or proportion of wheelchair-accessible vehicles. A provision that assesses a fee to cover the cost of these services—or gives operators the option of paying a fee in lieu of equipping vehicles—would lower the barriers to entry.
  5. Insist on multiple options when integrating ride sharing with public transit. It’s increasingly apparent that ride sharing services could be a logical complement to fixed route transit in lower density locations and during off peak hours.  Transit operators should resist the temptation to enter into exclusive deals with a single ride share provider. Public transit is, in most places, a monopoly—but it’s under public control, and subject to a fair degree of scrutiny.
  6. Restrict or prohibit fare and driver compensation schemes that lock users into a single service. In Boston, Uber is offering 1 cent rides on UberPool—provided you buy a $40 monthly pass. While undoubtedly a money loser, this UberPool pricing model effectively discourages those who sign up from using alternative providers. Similarly, Uber and Lyft frequently offer much more favorable compensation to drivers who drive full time for only one service or the other: again, the idea being to lock drivers into one service, and reduce their competitors’ market share. If drivers truly operate as “independent contractors,” they should be free to secure business through any network, and to set their own prices.
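The bidding idea in item 3 could work like a simple double auction: riders post the most they’ll pay, drivers post the least they’ll accept, and compatible offers get paired. Here’s a minimal, hypothetical sketch; the matching rule, names, and prices are our own invention, not any existing service’s system:

```python
# Hypothetical sketch of the ride-bidding idea: riders post maximum bids,
# drivers post minimum asks, and compatible pairs are matched greedily.
# All names, prices, and the matching rule itself are illustrative.

def match_rides(rider_bids, driver_asks):
    """Greedily pair the highest rider bid with the lowest driver ask.
    A pair trades at the midpoint price when bid >= ask."""
    bids = sorted(rider_bids.items(), key=lambda kv: -kv[1])  # high to low
    asks = sorted(driver_asks.items(), key=lambda kv: kv[1])  # low to high
    matches = []
    for (rider, bid), (driver, ask) in zip(bids, asks):
        if bid >= ask:
            matches.append((rider, driver, round((bid + ask) / 2, 2)))
    return matches

riders = {"ana": 14.00, "bo": 9.00, "cy": 11.50}   # max willingness to pay
drivers = {"d1": 8.00, "d2": 10.00, "d3": 13.00}   # min acceptable fare

pairings = match_rides(riders, drivers)
```

In this toy run, two of the three riders get matched, each at a price between their bid and the driver’s ask; a rider whose bid is below every remaining ask simply goes unmatched, which is the competitive pressure a single price-setting platform removes.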

Recently, we profiled Paul Romer, who has just been appointed as the Chief Economist for the World Bank.  One of the key insights of his New Growth Theory is that we ought to be open to a range of different institutional set-ups in order to encourage experimentation and promote economic growth.  That’s exactly the attitude we ought to bring to ride sharing.  Having different cities around the country develop various ways of regulating this industry is likely to prompt a range of different business models and a faster pace of innovation.

While hardly an exhaustive list, these six ideas illustrate the important ways that setting the rules of the game for the ride sharing industry is likely to influence competition, innovation and customer choice.  Ride sharing and urban transportation are evolving quickly; we should aim to build on this momentum.

 

Reversed Polarity: Bay Area venture capital trends

The greater San Francisco Bay area has been a hotbed of economic activity and technological change for decades, bringing us ground-breaking tech companies from Hewlett-Packard and Intel, to Apple and Google, to AirBNB and Uber. It’s a great place to spot trends that are likely to spread elsewhere.  One such trend is the growing tendency of new technology startups to locate in cities.  Today we explore some new data on venture capital investment that are indicative of this trend.

Credit: David Yu, Flickr

As we noted in July, there’s always been a dynamic tension between the older, established city of San Francisco in the north, and the new, upstart, tech-driven city of San Jose in the south.  From the 1950s onward, San Jose and the suburban cities of Santa Clara county grew rapidly as the tech industry expanded. Eventually the area was re-christened Silicon Valley.

One of the hallmarks of the Valley’s growth was the invention and explosion of the venture capital industry: technology-savvy, high-risk investors who would make big bets on nascent technology companies with the hopes of growing them into large and profitable enterprises.  Sand Hill Road in Menlo Park became synonymous with the cluster of venture capital that financed hundreds of tech firms.

Silicon Valley’s dominance of the technology startup world was clearly illustrated each year with the publication of the dollar value of venture capital investments by the National Venture Capital Association and PricewaterhouseCoopers MoneyTree.  Silicon Valley startups would frequently account for a third or more of all the venture capital investment in the United States.  But since the Great Recession that pattern has changed dramatically.

By 2010, according to data gathered by the NVCA, the San Francisco metro area had pulled ahead of Silicon Valley in venture capital investment.  In the past two years (2014 and 2015) venture capital investment in San Francisco has dwarfed VC investment in Silicon Valley.  In 2015, San Francisco firms received about $21 billion in venture capital investment compared to about $7 billion in Silicon Valley.

bay_area_vc

 

VC investment is important both in its own right–because we are talking about billions of dollars, which gets spent on rent, salaries and other purchases, initially at least in these local economies–but perhaps more importantly because venture capital investment is a leading indicator of future economic activity. While individual firms may fail, the flow of venture capital investment is indicative of the most productive locations for new technology driven businesses.

What these data signal is that it is an urban location–San Francisco–that is now pulling well ahead of Silicon Valley, which is still mostly characterized by a suburban office park model of development.  Some of this may have to do with the kind of firms that are drawing investment. Much of the current round of VC investment is going to software and web-related firms, not the kinds of semiconductor-driven hardware firms that have been Silicon Valley’s superstars in the past.  But unlike the 1970s and 1980s, when technology was a decidedly suburban activity, focused primarily in low-density “nerdistans,” today new technology enterprises are disproportionately found in cities.  And today, companies are increasingly choosing to locate their operations in more urban neighborhoods and more walkable suburbs.

What’s driving firms to cities is the fact that the workers they want to hire–well educated young adults in their twenties and thirties–increasingly want to live in dense, walkable urban environments like San Francisco, and not the sprawling suburbs of Silicon Valley.  Further evidence of this trend is, of course, the famous “Google buses” that pick up workers in high-demand neighborhoods in San Francisco and ferry them, in air-conditioned, wifi-enabled comfort, to prosaic suburban office campuses 30 or 40 miles south.

The movement of workers, investment, and new startup firms to San Francisco is another indicator of the growing strength of cities in shaping economic growth. And as has been the case over the past half-century or more, trends that start in the Bay Area tend to ripple through the rest of the country. What we see here is a shift in economic polarity, from the suburban-led growth of the past, to more city-led growth. That’s one of the reasons we think the reversal of the long process of job decentralization has just begun.

Finally, some background about geography.  The Bay Area has several major cities, including San Francisco, Oakland and San Jose.  To the Office of Management and Budget, which draws the boundaries of the nation’s metropolitan areas after each decennial census, the three cities were all part of a single metropolitan area up through the classification created following Census 2000.  In 2010, with new data, and slightly different rules for delineating metro areas, San Jose was split off into its own separate metropolitan area, consisting of Santa Clara and San Benito counties at the south end of San Francisco Bay.  (If it hadn’t been hived off into its own metro area, the name of the larger metropolis would have been the “San Jose-San Francisco-Oakland” metropolitan area, inasmuch as the city of San Jose’s population had passed that of San Francisco.)


The triumph of the City and the twilight of nerdistans

This is a story about the triumph of the City—not “the city” that Ed Glaeser has written about in sweeping global and historic terms—but the triumph of a particular city: San Francisco.

For decades, the San Francisco Bay Area’s economy has been a microcosm and a hot house for studying the interplay between innovation, economic prosperity, urban form and social impacts. It gave us the quintessential model of technological geography, Silicon Valley. And today, it’s showing us how that geography is changing—and shifting towards cities.

As a graduate student at the University of California, Berkeley, more than three decades ago, one of the first things I learned about living in the Bay Area was that the large city between us and the Pacific Ocean was not “San Fran” nor “SF,” and especially not “Frisco.” San Francisco was simply “the City.”

In the late ’70s and early ’80s, San Francisco was the queen of her little geographic universe, the center of arts, culture, and commerce in Northern California. That was the heyday of San Francisco Chronicle columnist Herb Caen, the martyred Harvey Milk, and George Moscone in City Hall. In the wake of Prop. 13, California’s voter-adopted property tax limitation measure, there was a lot of political unrest that led to, among other things, rent control in the City.

Down Highway 101, there was Silicon Valley—or, to those in the Bay Area, simply “the Valley.” Santa Clara County, on the peninsula south of San Francisco, was long regarded as an agricultural hinterland—much as the Central Valley or Salinas are thought of today. The Stanford University campus, the South Bay’s major intellectual center, was (and still is) nicknamed “the Farm”; the area was historically famous for its fruit orchards. But all that changed. San Jose and its surrounding communities grew steadily in the 1960s, 1970s, and 1980s to become the economic hotbed of the region. The personal computer was essentially invented in Silicon Valley garages. Hewlett-Packard, Intel, and Apple all got their start in The Valley. Cities and states across the nation and the world set about trying to replicate what they perceived to be the elements of Silicon Valley’s success: research universities, science parks, technology transfer offices, entrepreneurship programs, and venture capital investment. But no matter how many emulators emerged, Silicon Valley remained the dominant epicenter of new technology firms in the U.S.

As the Valley grew, the City seemed quaint and dowdy by comparison. In the 1990s, it lost some of its corporate crown jewels, as Bank of America decamped its headquarters to—shudder—North Carolina. Sure, the City had its counter-cultural cred with the Jefferson Airplane and, later, Dead Kennedys and others, but the Valley was where the work got done.

The technology wave, particularly the personal computer and the Internet, seemed to bypass San Francisco: the big new firms–the Ciscos, the Oracles, the Googles–got their start in Silicon Valley and grew there. Measured by gross domestic product per capita, San Jose blew by San Francisco in the 1990s, and never looked back. It was, as Joel Kotkin famously argued, the victory of the suburban nerdistans. Engineers and businesspeople wanted to live in split-level houses on large lots in suburbs and drive, alone, to work each day. While Kotkin admitted that some creative types might gravitate toward Richard Florida’s boho cities, he predicted that most job growth would be in sensible suburbs:

“Today’s most rapidly expanding economic regions remain those that reflect the values and cultural preferences of the nerdish culture — as epitomized by the technology-dominated, culturally undernourished environs of Silicon Valley. In the coming decade, we are likely to see the continued migration of traditional high-tech firms to new nerdistans in places like Orange County, Calif., north Dallas, Northern Virginia, Raleigh-Durham and around Redmond, Wash., home base for Microsoft.”

But for the past decade or so, and most notably since the end of the Great Recession, a funny thing has happened. Tech has been growing faster in the City than in the Valley. Lots of new firms working on new Internet technology plays—the Ubers, the AirBnBs, the SalesForces—started up in San Francisco and grew there. At the same time, more and more young tech workers, not unlike the young workers nationally, had a growing preference for urban living. The City is a lot more urbane than the Valley. As Richard Florida has chronicled, venture capital investment, perhaps the best leading indicator of future technology growth, has shifted from the suburbs to the cities—nowhere more strikingly than in the San Francisco Bay Area.

And so, to accommodate the needs and desires of their most precious input—the human capital of their workers—Silicon Valley companies started running their own subsidized, point-to-point transit services. The “Google buses” pick up workers in high-demand neighborhoods in San Francisco and ferry them, in air-conditioned, wifi-enabled comfort, to prosaic suburban office campuses 30 or 40 miles south. These buses became the flashpoint for protests about the changing demographics and economic wave sweeping over the city, as Kim-Mai Cutler explained in her epic TechCrunch essay, “How Burrowing Owls Led to Vomiting Anarchists.” In the past 12 years, the number of workers commuting from San Francisco to jobs in Santa Clara County has increased by 50 percent, according to data from the Census Bureau’s Longitudinal Employer-Household Dynamics (LEHD) data series.

Those trends came to their logical culmination this week. The San Francisco Business Times reported that Facebook, now headquartered in the Valley’s Menlo Park, is exploring the construction of a major office complex in San Francisco. According to the Times’ reporting, the company’s decision is driven by the growing desire of its workers to live in urban environments. Additionally, Facebook has faced competition and poaching for talent from San Francisco-based companies, including Uber.

Facebook’s interest in a San Francisco office is just one harbinger of the northward movement of the tech industry. Apple, which has famously insisted that its employees work at its campus in Cupertino, has recently leased office space in San Francisco’s SoMa neighborhood. Google now has an estimated 2,500 employees in San Francisco, and has purchased and leased property in the city’s financial district.

The miserable commute to Silicon Valley from San Francisco means that busy tech workers find it more desirable to work closer to where they live. Paradoxically, as Kim-Mai Cutler warned, the protests and obstacles to Google and other tech buses are prompting tech companies to expand their operations in The City, which brings in even more tech workers to bid up the price of housing there. As she tweeted on July 25:

[Embedded tweet from Kim-Mai Cutler, July 25]

As we’ve chronicled at City Observatory, jobs are moving back into city centers around the country, reversing a decades-long trend of employment decentralization. Companies as diverse as McDonalds, which is relocating from suburban Oak Brook to downtown Chicago, and GE, which will move from a suburban Connecticut campus to downtown Boston, all cite the strong desire to access talented workers. Those workers are increasingly choosing to live in cities. While we view the resurgence of city center economies as a positive development, it also poses important challenges, especially concerning housing supply and affordability. For economic and equity reasons, it is critical that we tackle the nation’s growing shortage of cities.

Our apologies to Ed Glaeser for borrowing the title of his excellent book, Triumph of the City: How Our Greatest Invention Makes Us Richer, Smarter, Greener, Healthier, and Happier, for this commentary. We’re deeply indebted to Dr. Glaeser for outlining many of the forces at work in America’s cities, including agglomeration economies and the theory of the consumer city. These are chief among the explanations for the recent triumph of San Francisco over Silicon Valley.

Let a thousand Ubers bloom

Why cities should promote robust competition in ride sharing markets

We’re in the midst of an unfolding revolution in transportation technology, thanks to the advent of transportation network companies. By harnessing cheap and ubiquitous communication technology, Uber and other firms organizing what they call “ride sharing” services have not only disrupted the taxi business, but are starting to change the way we think about transportation.  While we think of disruption here as being primarily driven by new technology, the kinds of institutional arrangements–laws and regulations–that govern transportation will profoundly determine what gains are realized, and who wins or loses.

Many thousands of Irises (Flickr: Oregon Department of Agriculture)

Right now, Uber has an estimated market value (judging by what recent investors have paid for their stake in the company) of nearly $70 billion. That’s a whopping number, larger, in fact, than the market value of carmakers like Ford and GM.  It’s an especially high valuation for a company that has neither turned a profit nor gone public–a step that would subject its financial results to more outside scrutiny.  Uber’s generous valuation has to be based on the expectation that it’s going to be a very, very large and profitable firm, and that it will be as dominant in its market as other famous tech firms–like Microsoft or Google–have been.

The importance of competition

For a moment, it’s worth thinking about the critical role of competition in shaping technology adoption and maximizing consumer value. Take the rapidly changing cell phone industry, which has increasingly replaced the old wire-line telephony of the pre-digital era. Back in the day, phone service—especially local phone service—was a regulated monopoly. It barely changed for decades—the two biggest innovations were princess phones (don’t ask) and touch-tone dialing.

But when the Federal Communications Commission auctioned off wireless radio spectrum for cellular communications it did so in a way that assured that there would be multiple, competing operators in each market. Though there’s been some industry consolidation, critical antitrust decisions made in the past few years have kept four major players (AT&T, Verizon, Sprint, and T-Mobile) very much in the game. T-Mobile has acted as the wildcard, disrupting industry pricing and service practices and prompting steady declines in consumer voice and data costs. In the absence of multiple competitors, it’s unlikely that a cozy duopoly or even triopoly would have driven costs down.

Or consider the case of Intel, which because of a quirk of US Defense Department requirements was obligated to “second source” licenses for some of its key microprocessor technologies to rival Advanced Micro Devices (AMD). Second-sourcing required Intel to share some of its intellectual property with a rival firm so that the military would have multiple and redundant sources of essential technologies. This kept AMD in the market as a “fast follower” and prompted Intel to continuously improve the speed and capability of its microprocessors.

How much does being first count for?

Uber’s first-mover advantages and market share arguably give it a market edge; drivers want to work for Uber because it has the largest customer base, and customers prefer Uber because it has more drivers. More cars mean shorter waits for customers, which attracts more customers to Uber and therefore generates more income for its drivers. This positive feedback loop can help drive up market share for Uber at the expense of its competitors. Whether this happens depends on two things: how powerful these network effects are and whether effective competitors emerge.
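The feedback loop described above can be sketched in a few lines of code. This is a deliberately crude toy model–every parameter is an illustrative assumption, not an estimate for Uber or any real firm–but it shows why economists worry about tipping: once a firm has any lead, the loop amplifies it, while a perfectly even split sits on a knife edge.

```python
# Toy model of a two-firm ride-hailing market with network effects.
# All parameters are illustrative assumptions, not data on any real firm.

def simulate(share_a=0.6, strength=0.3, periods=20):
    """Each period, riders drift toward the firm with more drivers,
    and drivers drift toward the firm with more riders; the larger
    firm's lead therefore compounds period over period."""
    for _ in range(periods):
        lead = share_a - 0.5          # advantage over an even split
        share_a = min(1.0, max(0.0, share_a + strength * lead))
    return share_a

# A modest initial edge snowballs into dominance...
print(simulate(share_a=0.55))   # winner takes the whole market
# ...while an even split is a knife-edge equilibrium.
print(simulate(share_a=0.50))   # neither firm ever gains ground
```

In this sketch the `strength` parameter stands in for how powerful the network effect is; set it near zero and the initial shares barely move, which is the scenario where effective competitors can survive.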

Some economists think that these network externalities tend inevitably to lead to winner-take-all markets, and that once established, dominant market positions are difficult or impossible to overcome. That’s a major factor behind Uber’s high valuation: investors think the company will continue to have a dominant position in the industry and will eventually reap high profits as a result.

Antitrust is a live issue with Uber. The company has famously disclaimed that Uber drivers are its employees—asserting, instead, that they are “independent contractors”—businesses separate from Uber. But this has led some to argue that Uber is collaborating with its drivers to fix prices, which may constitute a violation of antitrust laws. The argument is that the Uber app —which presents all customers with the same rate and gives supposedly independent drivers no opportunity to offer different prices (and no opportunity for customers to bargain) — represents technology-enabled price fixing. This may especially be a problem for “surge pricing,” when every driver effectively raises her price at the same time (something that would be impossible to accomplish absent the technology).

But others question whether the market power afforded by these network externalities extends beyond local markets. Bloomberg View’s Justin Fox argues that the scope of network effects probably doesn’t exceed metropolitan markets. Except possibly for business travelers or tourists, Uber’s market share in some far away city is of little importance to travelers in Peoria. This may increasingly become true as these services become more widespread—it’s still the case that 85 percent of Americans have never used Uber or Lyft, and the 15 percent who have used the services are wealthier, better educated, and probably less price sensitive than those who haven’t used these services yet.

A policy shock in Austin

The big news in the ride sharing business this year was a referendum in Austin on the city’s proposed requirement that all contract drivers be fingerprinted. Uber and Lyft went to the political mat, spending $8.4 million on a campaign to defeat the requirement (the most expensive local political campaign in Austin history by far). The centerpiece of their campaign was a threat to pull out of Austin if the requirement took effect. They lost 56 to 44 percent, and both have followed through on their threat. But in their wake, a number of smaller companies and startups have stepped into the gap. (It’s hard to think of a place that is more entrepreneurial and tech savvy; other communities might not have seen such a response.) According to the Texas Tribune, there are now a half dozen companies offering ride sharing services, with a range of pricing, technology, and business models. The lucrative New York market has also attracted a new entrant, Juno. It aims to attract Uber and Lyft’s highest rated drivers by offering them a chance to own equity in the firm. Nothing guarantees that any of these competitors will survive. Already, Uber’s largest domestic rival, Lyft, has put itself up for sale.

In the long run, the social benefits of a new technology will depend, in large part, on whether the technology is controlled by a monopolist, or is subject to dynamic competition. New evidence suggests that the economic harm of monopolies may be much larger than previously recognized, and that a key method monopolists use to earn high profits (or “economic rents”) is to try to shape the rules of the game to their advantage. In a recent research paper published by the Federal Reserve Bank of Minneapolis, James Schmitz writes:

. . . monopolists typically increase prices by using political machinery to limit the output of competing products—usually by blocking low-cost substitutes. By limiting supply of these competing products, the monopolist drives up demand for its own.

There’s nothing foreordained about what the marketplace for transportation network companies or ride sharing will look like. There could be one dominant firm—Uber—or many competing firms. It’s actually very much in the interests of cities to encourage a large number of rivals. Economically, competition is likely to be good for consumers and for innovation. Having lots of different firms offer service—and also compete for drivers—is likely to drive down the share of revenue that goes to these digital intermediaries. And as the experience of Austin shows, having just one or two principal providers of ride sharing services means that they can credibly threaten to pull out of a market, and thereby shape public policy. With more competitors, such threats are less credible and effective, as pulling out would usually just mean conceding the market to those who remain.

As municipal governments (and in some cases, states) look to re-think the institutional and regulatory framework that guides transportation network companies and taxis, they should put a premium on rules and conditions that are competition-friendly, and that make it particularly easy for new entrants to emerge. An open, competitive marketplace for these services is more likely to promote experimentation, provide better deals and services for customers, and give communities an equal voice to that of companies in shaping what our future transportation systems look like.

Paul Romer to the World Bank

Today we’re getting really wonky. Paul Romer, who’s currently at New York University’s Marron Institute, has just been appointed to be the chief economist for the World Bank. Personnel decisions involving technocratic positions at global NGOs are about as wonky as it gets, of course. But this is a genuinely interesting development, especially if you’re passionate about cities, and economies, as we are.

Paul Romer (James Duncan Davidson / TED)


First, an introduction:  Romer is a prolific and wide-ranging economist.  He’s most famous for his work in creating what’s come to be called “New Growth Theory” (NGT).  It bears a much longer and more precise description, but briefly, NGT focuses on the critical role that creating new ideas plays in driving long term economic growth.  The reason we become more prosperous over time is not because we accumulate more stuff, but because we continuously generate new and better ways of making use of the finite materials around us.  A critical property of ideas is that they are “non-rival”–you and I can equally make use of an idea without diminishing its utility to one another.  Romer pointed out that non-rivalry is crucial for driving growth, and for some pretty technical reasons, it also means that unfettered free markets can’t automatically generate the conditions that produce long term growth.  As a result, the kinds of institutional arrangements we create, both nationally and locally, are very important to whether we experience growth or not.

New Growth Theory has an important implication for cities.  Cities are, as Jane Jacobs argued decades ago, the crucibles and laboratories where new ideas — what Jacobs called “new work” — get created.  The combination of a diverse population, frequent interaction, and the right set of rules or institutions is what makes economies grow–and these forces play out most dramatically in cities.  (For a much longer explanation of NGT and its policy implications you can read a report I wrote for the US Economic Development Administration).

Romer explained this all in a quite accessible article written for the World Bank 25 years ago, entitled “Two Strategies for Economic Development:  Using Ideas and Producing Ideas.”  This article demonstrates Romer’s keen ability to translate complex economic arguments into simple and powerful metaphors.  He illustrates the difference between traditional views of economic growth and the knowledge-driven growth of new growth theory by contrasting two children’s toys.  The conventional model is the Play-Doh Fun Factory.  It combines capital (the plastic press and dies) with labor (a child’s arm) and raw materials (clay) to produce tubes, I-beams, and other shapes.  This model (and the very math-ey versions of it used by economists) is good for thinking about production efficiency and allocation, but doesn’t help much to explain how growth happens.

Conventional Growth Theory

In contrast, New Growth Theory visualizes the growth process much as if it were a child’s chemistry set.  It turns out that there are so many different possible combinations of even a few handfuls of ordinary chemicals that it’s simply impossible for a manufacturer to verify that all of the ways they might be mixed would turn out not to be hazardous or explosive (which for many children is the chief motivation for playing with chemicals).  That’s a downside for chemical companies, their risk analysts, attorneys and insurance companies, but it has a surprisingly optimistic implication for long-run economic growth.

New Growth Theory (Flickr: Russel Oskay)

The point is that prosperity is driven by the nearly inexhaustible opportunities to create new ideas–new combinations of things–that produce useful products and services.  The trick is figuring out the kinds of institutional arrangements that will prompt people to undertake the experiments that will generate these ideas. This has a critical implication for cities, as Romer explains:

“As the world becomes more and more closely integrated, the feature that will increasingly differentiate one geographic area (city or country) from another will be the quality of public institutions. The most successful areas will be the ones with the most competent and effective mechanisms for supporting collective interests, especially in the production of new ideas.”

In recent years, Romer has been a strong advocate of cities, and has pointed out the direct relationship between urbanization and economic and productivity growth.  Across countries, within countries, and over time, a higher degree of urbanization is strongly correlated with greater economic output.

More urban, more productive. (www.paulromer.net)

The question going forward is what we might do to harness the growth potential of cities as places that offer new ways to do things and develop ideas.  Romer has proposed the idea of “Charter Cities” — de novo city-states that would experiment with new institutional arrangements, looking to generate the kind of growth that we’ve seen in places like Singapore, Hong Kong and Shenzhen.  An attempt to actually try this out in Honduras died stillborn–for reasons that illustrate what it will take to really make such a proposal work.

More recently, he’s also made the case that creating new cities would be one of the ways that Europe might better respond to its refugee crisis.  His argument, in a nutshell:

1.  It takes only a few cities, on very little land, to accommodate tens or hundreds of millions of people.

2. Building cities does not take charity. A city is worth far more than it costs to build.

3. To build a city, do not copy Field of Dreams. (“Build it and they will come.”) Copy Burning Man. (“Let them come, and they will build it.”)

The World Bank is the dispenser not just of billions of dollars in loans to less developed nations, but also of the conventional wisdom, especially when it comes to cutting-edge development strategies.  Its notions about the processes and strategies that can stimulate economic growth have immediate, practical and widespread implications.

Romer is a brilliant and original thinker, and is an economist who is willing to fully explore the policy implications of his theoretical work, and regularly comes up with ideas that make us look differently at the world.  He’s somebody who sees cities at the center of the solution to many of the globe’s most pressing problems. We’re excited to see what he does at this new job.


Housing Cost Calculators

Suddenly, we’re awash in calculators. Housing calculators.

If you’re a Baby Boomer, you remember the day you saw your first electronic calculator. It had an electronic display–red or green light-emitting diode segments, usually eight or ten of them that would display numbers, arithmetic operators and a decimal point. They had a few hard-to-press chiclet-type keys, but they would add, subtract, multiply and divide with a speed and accuracy that was previously unavailable. Precise math suddenly became easier. (And if you typed in 07734 and turned it upside down it looked like it was saying “hELL0.”)

Calculators (Flickr: Marcin Wichary)

In the past few months, we’ve seen the advent of a new generation of calculators–housing calculators, aimed at helping us understand the complex dynamics of financing and affording housing. Like the early days of the electronic pocket calculator, there are a lot of competing brands and different designs. Each of these calculators looks at the interplay of different factors that influence the feasibility of building new housing, embracing a range of purely private sector considerations (construction costs, interest rates, rents) and some public policies as well (inclusionary zoning, parking requirements, height limits, planning processes). All are designed as generalized “what if” models, and specifically disclaim their use for investment purposes.

The latest of these is the Urban Institute’s new “Affordable Housing: Does it Pencil Out” website, released today. In theory, these tools ought to give us a clearer picture of the factors influencing housing affordability and how we might make some progress in tackling this problem. Here’s a quick thumbnail of it, and three other examples of the genre.

Urban Institute: Does it Pencil Out?

The Urban Institute

The Urban Institute’s calculator estimates construction costs and rents for apartments built in Denver, which it characterizes as a fairly typical metropolitan area. You are given the choice of modeling a 50 or 100 unit apartment building, and you can see how varying the level of rent charged and some key development costs (like interest rates, land costs, construction costs, and operating costs) influence the profitability of a proposed development. The site’s key conclusion: It’s very difficult to build housing that’s affordable to anyone below 100 percent of area median incomes without some sort of subsidy.

Terner Center, UC Berkeley: Will Housing be Built

Terner Center, UC Berkeley

The Terner Center’s calculator takes a slightly different approach from the other calculators presented here, and as its name suggests, offers up its estimates of the probability that a particular housing development will go forward under different assumptions about financing, affordability requirements, rents, construction costs and approval processes. It’s calibrated using data from Oakland, California. The Terner Center model has a rich set of controls that let you explore the impacts of varying inclusionary zoning requirements, parking requirements, and uniquely, the impact of a more attenuated approval process.

Cornerstone Partnership: Inclusionary Calculator

Cornerstone Partnership

Cornerstone Partnership, a housing advocacy network, has created its own tool which lets the user select the number of units to build, construction costs, the cost of land, parking requirements, interest rates and rents, and other variables. The model then estimates the total cost of the project and whether it is profitable. Projects that generate more than a ten percent rate of return for the investor are judged feasible. Unlike the other calculators presented here, you have to register with the website to use this calculator.
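The arithmetic behind all of these tools is a development pro forma. The sketch below is a deliberately stripped-down version–the cost and rent figures, the 35 percent operating-expense share, and the simple return-on-cost measure are all placeholder assumptions of ours, not values taken from the Cornerstone tool or any of the others–but it shows how the ten percent rule of thumb turns inputs into a feasible/infeasible verdict.

```python
# Minimal development pro forma in the spirit of these calculators.
# Every number here is a placeholder assumption for illustration.

def annual_return(units, land_cost, cost_per_unit, monthly_rent,
                  operating_share=0.35):
    """Net operating income as a share of total development cost."""
    total_cost = land_cost + units * cost_per_unit
    gross_rent = units * monthly_rent * 12
    net_income = gross_rent * (1 - operating_share)  # after operating costs
    return net_income / total_cost

def feasible(rate, hurdle=0.10):
    # Cornerstone-style rule of thumb: a better-than-10-percent
    # return means the project is judged feasible.
    return rate > hurdle

# 50 units at $250k each on a $2M site: $2,000 rents don't pencil out,
# while $4,000 rents just clear the hurdle.
low = annual_return(50, 2_000_000, 250_000, 2_000)
high = annual_return(50, 2_000_000, 250_000, 4_000)
print(round(low, 3), feasible(low))    # 0.054 False
print(round(high, 3), feasible(high))  # 0.108 True
```

Even this crude version illustrates the core dynamic the calculators explore: anything that raises `total_cost` (land, parking, delay) or caps `monthly_rent` (affordability requirements) pushes the return below the hurdle unless a subsidy fills the gap.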

Citizen’s Housing and Planning Council (NY): Inside the Rent

Inside the Rent

This calculator, built for New York City, allows the user to see the factors that influence the rental cost of new apartment construction in different neighborhoods in New York City. Between land, construction, soft costs and financing, new apartments in a mid-rise building in a typical neighborhood have a sticker price of around $500,000; and unsubsidized rents for these newly constructed units run at more than $4,000 a month. A unique feature of this calculator is its effort to estimate the cost of paying prevailing wages for construction and upkeep.

Some thoughts on the state of the art in housing calculators

While billed as calculators or tools, each of these is actually a gussied-up, html-coded quantitative model. Like all models, each is only as good as the assumptions it’s based on. An ideal model is transparent about what its assumptions are, and enables the user to test those assumptions. But frequently models–especially complex models–make it difficult to know exactly which assumptions are driving their conclusions. Different modelers will choose different assumptions–assumptions that may be buried deep in a model’s structure, and that may unfortunately conceal biases.

These calculators have some similarities: They let you vary key financial parameters, like the price of land, rent levels, interest rates, construction costs, the amount of affordable housing included, and the extent of the public subsidy. They’re particularly useful for exploring the tradeoffs and costs of different policies; parking requirements and construction delays can move an otherwise likely and feasible development into the risky or unprofitable categories.

What’s maddening, though not surprising, is how difficult it is to compare the results of the different calculators. They use varying terminology and definitions, and seem to embody a wide range of assumptions. They are individually complex, and produce results that are framed differently, so that one can’t easily say how two calculators would appraise a project with the same inputs.

For example, the Urban Institute’s Will it Pencil and Cornerstone’s Inclusionary Calculator seem to produce very different messages about housing affordability. The authors of the Cornerstone Partnership model use it to support their claim that developers can build additional units of affordable housing with modest or no subsidies while remaining profitable. CityLab summarized the model’s conclusions as: “A new tool shows that developers can profit by building affordable housing almost anywhere.” In contrast, the authors of the Urban Institute’s model essentially say that it’s unprofitable to build any amount of affordable housing without substantial subsidies. They say: “Without the help of too-scarce government subsidies for creating, preserving, and operating affordable apartments, building these homes is often impossible.”  It’s not immediately apparent from looking at the two calculators which of these conclusions is more accurate.

What’s needed here is a kind of Consumer’s Guide to housing calculators. It would be useful, for example, if we had a standardized “benchmark” development, with a set number of units and specified land cost, rent level and other parameters, that could be plugged into each model; then we could see what kind of outputs each model produced for the same development.

The best that can be said at this point is that while the calculators we have don’t definitively answer the question, they do a good job of framing the variables that we need to pay attention to in discussing affordability. They’re also helpful in exploring the tradeoffs between policy objectives, for example, how increased parking requirements or longer approval processes lower the likelihood that housing projects will move forward. These calculators are in their infancy–two of them are self-described “betas”–and we hope that the people who’ve built them continue to develop and refine them, and research and debate the assumptions on which they’re based.

Homeownership can exacerbate inequality

In yesterday’s post, we described why homeownership is such a risky financial proposition for low income households, who tend disproportionately to be people of color. From a wealth-building standpoint, lower income households tend to buy homes at the wrong time, in the wrong place, face higher financing costs, and have less financial resilience to withstand the fluctuations of housing and economic markets. Yet we continue to persist in the belief that homeownership is a universal elixir for wealth building. In fact, there’s some strong evidence that our excessive investment in housing–and our subsidies for homeownership–have worsened our income inequality problems. This suggests it might be time to rethink our national outlook on housing and wealth building.

Has Homeownership Actually Heightened Inequality?

New research from Zillow’s Svenja Gudell shows that the collapse of the housing bubble actually worsened inequality. Modestly priced homes saw the biggest price declines, and the households who owned these homes lacked the equity to cope with the downturn, and were much more likely to be foreclosed upon: “When the bubble popped, less-expensive homes—often bought by low-income homeowners—were more likely to be foreclosed on than higher-end homes.”

In many important respects, the case for home-ownership as wealth creation is a circular argument: We proclaim that housing is a great investment, and encourage families to go heavily into debt to purchase homes, and then use the fact that so much household wealth is tied up in housing to justify additional subsidies and regulations to drive up home values. These regulations include local zoning (which limits the supply of housing, helping drive up prices, or as it’s usually expressed, “to protect property values”), but go much further. The federal government directly or indirectly provides or guarantees most home mortgages (at prices lower and on terms more favorable than would be the case in a purely private market). And the federal tax code provides something on the order of a quarter of a trillion dollars in annual subsidies to homeownership. If homeownership is a good investment, it’s substantially because government policies have made sure that it pays off.

From a distributional standpoint, it’s clear that the emphasis on homeownership has actually led to a greater concentration of wealth, and not greater equality. As Matthew Rognlie showed, virtually all of the increase in wealth inequality in the United States in the past four decades is accounted for by the increase in the share of capital in housing. Mian and Sufi plotted the ratio of the amount of home equity owned by the highest income quintile compared to the middle quintile of the US population. In the 1990s, a household in the highest income quintile had about 5 times as much housing equity as the average middle-quintile household. By 2010, this difference had nearly doubled, to 9 times as much housing equity.

Particularly over the past decade, housing has had a poor record as a wealth creator. Overall, homeowners collectively lost something on the order of $7 trillion in the collapse of the housing bubble. To put that number in some perspective, consider the average home equity of a household in the middle of the income distribution, with a household head aged 35 to 44 years. Data compiled from the Fed’s Survey of Consumer Finances by David Rosnick and Dean Baker show that while inflation-adjusted home equity for this group grew from 1992 through 2007, since then it has fallen sharply; today the households in the middle quintile of this age group have less than half as much home equity as in 2007.


Time to Rethink Homeownership?

The collapse of the housing bubble erased all of the growth in the homeownership rate in the United States since 1980. On the upswing, the bubble generated lots of (paper) wealth, and drew millions of households into ownership. The homeownership rate peaked at more than 69 percent in 2007, then plunged to less than 64 percent, as millions of households lost their homes.

The aftermath of the bubble should remind us that homeownership is a risky endeavor, and that for a substantial portion of the population, it’s not a feasible or prudent strategy for trying to build wealth. It’s time to re-think the role of homeownership in promoting wealth, especially for the poor. There are three big takeaways here:

  1. Pushing homeownership as a universal wealth building strategy for the poor is a snare and a delusion. It’s likely to hurt many families. Policies that lower the bar for home purchases, like very low down payment loans, may actually expose those least able to handle the risks of homeownership to even greater probability of loss.
  2. The efforts to extend homeownership down the economic spectrum in many ways simply constitute a way of providing political cover for subsidies like the mortgage interest deduction that chiefly benefit upper income households, thus actually worsening income inequality.
  3. As a nation, we have no substantial policy for helping renters build wealth. More than a third of our population, including its youngest and poorest members and many people of color, are, and will continue to be, renters. We might, for example, consider repurposing some of the $250 billion annually in federal tax subsidies to homeownership to help reduce rental costs or subsidize savings programs for renters.

Homeownership: A failed wealth-creation strategy

It’s an article of faith in some quarters—well, most quarters—that in the United States, owning a home ought to be a surefire way to build wealth. Whether it’s presidents, anti-poverty groups, foundations, or realtors, we’re always being told that homeownership is the foundation of the American dream, and a key way to secure one’s financial future.

For a long time, it certainly worked out that way for a lot of households. Real home prices in the US outstripped inflation by a wide margin. Home equity rose. If you bought a house in the 1960s, pretty much anywhere in the US, it was worth a lot more two or three decades later. Collectively, and after adjusting for inflation, the real value of home equity owned by US households increased from $1 trillion in 1960, to $2 trillion in 1975, to $8 trillion, before finally peaking at more than $14 trillion in 2006.

 

Because of this record, we’re told that promoting more widespread homeownership is an effective way to lift low-income families out of poverty, and to help racial and ethnic minorities—who’ve long had lower than average rates of homeownership, to build wealth.

The implication is that the 40 percent or so of American households that don’t now own homes would be better off, all things equal, if they were able to buy a home. But as the standard investment disclaimer goes, “past performance is no guarantee of future results.” That warning is especially true for low income households and minorities who are now renters.

Why Homeownership is Risky for Low Income Households

Housing can be a good investment if you buy at the right time, buy in the right place, get a fair deal on financing, and aren’t excessively vulnerable to market swings. Unfortunately, the market for home-ownership is structured in such a way as to assure that low-income and minority buyers meet none of these conditions. For these Americans, there’s no guarantee that homeownership builds wealth; in fact, it tends to be a risky proposition that often produces financial hardship.

First, you have to buy at the right time. The old adage is to buy low and sell high. Jordan Rappaport of the Kansas City Federal Reserve Bank estimates that buying has outperformed renting on a financial basis only about half the time since the 1970s; those who buy during the “wrong” half stand to be worse off. And low-income and minority buyers tend to be disproportionately drawn into the market at these “wrong” times.

That’s because the best time to buy, from a wealth-building perspective, is when housing prices are low and growing sluggishly. But generally, such times coincide with limited credit availability: home lenders ration credit according to credit score, and only the “best” borrowers have access to home loans when prices are low.

Credit: Dan Moyle, Flickr

As the experience of the last housing bubble showed, low-income and minority buyers came into the market most strongly as lending standards were relaxed, relatively late in the cycle. Those who bought in 2001 (when the market was depressed) fared very differently than those who bought in 2006 (at the height of the bubble). Easy credit nominally made homes more affordable, but also drew ever more borrowers into the market, bidding up the prices of homes until the bubble popped. Because of this inherent quality of the credit cycle, the poorest borrowers are drawn into the market at the worst time to buy—when prices are at their highest.

Second, you have to buy in the right place. Opportunities for home appreciation vary enormously, not only by region of the country, but by neighborhood within metro areas. Ethnic minorities tend to buy in neighborhoods that have lower rates of home price appreciation. Zillow’s Skylar Olsen analyzed the data on home price trends by race and ethnicity. The data show that Black and Hispanic households experienced bigger declines in home values as the housing bubble collapsed, and a slower rebound as it recovered, leaving them worse off than the typical white homeowner. And that’s not a result of Black and Hispanic buyers being poor judges of neighborhood quality: in segregated housing markets, whites’ avoidance of Black and Hispanic neighborhoods makes it much more difficult for those communities to see consistently rising home values.


Third, you have to get a good deal on credit. The evidence is that low income borrowers and ethnic minorities pay, on average, higher interest rates. A 2006 study for HUD found that after controlling for household, property and loan characteristics, Black households pay interest rates that are 21 to 42 basis points higher than whites, and Hispanics pay rates that are 13 to 15 basis points higher. Federally guaranteed home mortgages must pay fees based on their riskiness, as measured by the mortgage’s loan-to-value ratio and the borrower’s credit score. Because minority buyers tend to have lower down payments and worse credit scores, it’s estimated that they pay guarantee fees that are 50 percent higher on average than white buyers. In addition, we know that low income and minority borrowers were the targets of predatory lenders. If you pay more for your mortgage, that raises the cost and lowers the returns to homeownership.
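To give a sense of scale for those basis-point gaps, here’s a quick sketch using the standard fixed-rate amortization formula. The $200,000 loan size and the 4.0 percent base rate are illustrative assumptions of ours, not figures from the HUD study:

```python
# What does a 30-basis-point rate premium cost on a 30-year fixed
# mortgage? Loan amount and base rate are illustrative assumptions.

def monthly_payment(principal, annual_rate, years=30):
    """Standard fixed-rate mortgage amortization formula."""
    r = annual_rate / 12        # monthly interest rate
    n = years * 12              # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

base = monthly_payment(200_000, 0.040)
bumped = monthly_payment(200_000, 0.043)   # 30 basis points higher
extra_over_life = (bumped - base) * 30 * 12
print(f"${bumped - base:.0f} more per month; "
      f"${extra_over_life:,.0f} over the life of the loan")
```

Under these assumptions, a few tens of basis points translates into roughly five figures of additional interest over the life of the loan.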

Finally, in order to build wealth with housing, you have to have the ability to weather economic cycles. Low income and minority families often have limited financial resources beyond the equity in their homes, and therefore are poorly positioned to cope with financial setbacks—loss of a job, a major medical expense or home repair—and missed mortgage payments can quickly push them into default. And highly leveraged home buyers (those with 90 percent or greater mortgages) stand to lose their entire investment in the face of even a modest decline in home prices. As Atif Mian and Amir Sufi have documented, conventional US mortgage loans are a risky, one-sided bet for borrowers.
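The leverage arithmetic behind that one-sided bet is easy to sketch, with hypothetical numbers (ours, not Mian and Sufi’s):

```python
# With 10 percent down, a 10 percent price decline wipes out the
# buyer's entire stake. Ignores amortization and transaction costs;
# all figures are hypothetical.

def equity_after_decline(price, down_share, decline):
    """Owner's equity after a market-wide price decline."""
    loan_balance = price * (1 - down_share)
    new_value = price * (1 - decline)
    return new_value - loan_balance   # negative means underwater

print(equity_after_decline(250_000, 0.10, 0.10))  # 0.0: the $25,000 down payment is gone
```

Because the loan balance doesn’t shrink when the home’s value does, losses fall entirely on the owner’s equity first, which is what makes high-leverage purchases so risky for households with thin savings.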

As Patrick Bayer, Fernando Ferreira, and Stephen Ross have demonstrated, minority homebuyers are drawn into the market relatively late in the credit cycle, have limited financial resources on which to draw in the event of economic problems and are disproportionately more likely to default on their loans, even after controlling for differences in credit scores, down payments, and neighborhood characteristics. As a result, they conclude:

“Our study raises serious concerns about homeownership as a vehicle for reducing racial wealth disparities. …Homeownership may be especially risky for households with a low initial level of wealth (savings) or fewer family resources on which to draw when hit with an adverse economic shock.“

If you buy at the wrong time, if you buy in the wrong place, if you pay too much for the money you borrow, and if you don’t have the financial wherewithal to weather economic turbulence, chances are that home ownership could turn out to be a wealth-destroying, not a wealth-building, proposition. It clearly was in the aftermath of the collapse of the housing bubble.

The Week Observed: July 29, 2016

What City Observatory did this week

1. Economist Paul Romer Joins the World Bank.  Paul Romer, a leading exponent of New Growth Theory, has been hired as chief economist for the World Bank. We explore how his thinking about the role of knowledge-driven growth and the key role of cities in fostering institutional and technological innovation might influence the Bank’s strategies. The good news here is that Romer is one of the most plain-spoken economists around, illustrating his theories with play dough and a children’s chemistry set (Really!).

Paul Romer at TED University. TED2011. February 28 – March 4, Long Beach, CA. Credit: James Duncan Davidson / TED

2. Housing Cost Calculators. There’s a growing array of web-based tools that let you dig into the cost components of housing construction and development to better understand why the rent is, as they say, “so damn high.” We review four different housing cost calculators, including the latest entry from the Urban Institute, plus models from UC Berkeley’s Terner Center and the Cornerstone Partnership. Our verdict: while the models are useful at illustrating some of the key variables and tradeoffs in the development process, there’s a huge amount of complexity here, and many critical assumptions built into the models are not readily apparent, especially for casual users.

3.  The Party Platforms on Housing. Housing and housing affordability are top of mind for local governments, but where do they show up in the national political agenda?  We take a close look at what the Republican and Democratic party platforms have to say about housing.  The parties both express support for homeownership, but seemingly part company on zoning.  The Republican platform lambastes the Affirmatively Furthering Fair Housing Rule as “social engineering” (an epithet which could easily be applied to most zoning).  The Democratic platform makes a vague call for easing local barriers to building affordable rental housing.

4. The Triumph of the City and the Twilight of the Nerdistans.  The big news in San Francisco this week is that Facebook is exploring a major expansion in the city.  The center of gravity of the Bay Area’s tech sector is decidedly shifting north, propelled by the desire of tech workers to live in great urban neighborhoods–and the fierce competition by tech firms to hire these workers. The wave of corporate expansions in downtown areas, and the growing flow of venture capital to startups in city centers signals the ebbing of the suburban nerdistan model of development.

 


The week’s must reads

1.  After a century, is zoning obsolete? This past week marked the 100th anniversary of zoning in the US, dated from the adoption of New York City’s first zoning code. At BloombergView, Justin Fox notes that, as it has been applied, particularly since the 1970s, zoning has been implicated as a contributor to segregation, inequality and housing unaffordability.  He looks back at the historical roots of zoning, unearths some of the archaic views on which it is based, and concludes that if it’s not time to kill zoning altogether, then a radical overhaul is probably in order.

2.  Cities Alive: Towards a walking world.  London-based ARUP architects have produced a colorful, encyclopedic guide that builds a strong case for promoting more walkable cities, along with case studies of 80 successful projects. This compendium draws from the work of Jan Gehl, Janette Sadik-Khan, Jeff Speck and others, to explain why walking is beneficial to health, communities and the economy, and to demonstrate the practical steps toward promoting walking. That said, the report is primarily about tactics and projects; it only briefly touches on how we might make strategic changes to the policies and institutions that lead to auto-dominated urban landscapes.  While there’s copious detail on parklets, there’s only passing mention of pricing and economic incentives, and no reference to eliminating or reducing parking requirements.

3. More evidence for Portland’s Green Dividend.  At BikePortland, Michael Andersen shows that per capita car ownership in Portland has declined, taking nearly 40,000 cars off the roads, compared to the rate of vehicle ownership in 2007.  He estimates that less driving is saving Portland residents $138 million annually in vehicle operating costs, money that mostly gets re-spent in the local economy.


New knowledge

 

1. The Outlook for the Homeownership Rate. In the aftermath of the collapse of the housing bubble, homeownership has fallen to rates not seen in decades. The University of Pennsylvania’s Susan Wachter and Arthur Acolin conclude that a combination of demographics and economics may continue to depress homeownership rates going forward. Homeownership is down most for young adults, who will represent a growing share of the population over the next two decades.  The report also points to tougher lending standards as a brake on homeownership.

2.  The impact of Seattle’s minimum wage on earnings, employment and business activity. A team at the University of Washington, led by economist Jake Vigdor, has been using employment tax records to track the impacts of the City of Seattle’s minimum wage, which rose to $11 per hour for many employers. They find that since the wage increased, hourly earnings for low wage workers are up by more than $1 per hour, but much of the effect is attributable to the strong local economy, rather than the wage law. There’s been a small reduction in hours worked: the employment rate for low wage workers in Seattle declined about 1 percent.  Despite fewer hours of work, average worker earnings are up. The study finds no net impact on business success: the failure or exit rate of businesses hasn’t changed appreciably, and the closure of local businesses has been more than offset by new business starts.

3.  Has the Great Recession depressed the rate of productivity growth? A new paper from the National Bureau of Economic Research shows that in the face of weak demand, businesses have limited incentives to incur the up-front costs associated with developing R&D, and deploying new ideas and technology. The result is that the state of the macro-economy lowers the rate of productivity growth and decreases the total capacity of the economy–something economists call an “endogenous” effect, which is a key reason why recovery from financial crises is so slow.


The Week Observed is City Observatory’s weekly newsletter. Every Friday, we give you a quick review of the most important articles, blog posts, and scholarly research on American cities.

Our goal is to help you keep up with—and participate in—the ongoing debate about how to create prosperous, equitable, and livable cities, without having to wade through the hundreds of thousands of words produced on the subject every week by yourself.

If you have ideas for making The Week Observed better, we’d love to hear them! Let us know at jcortright@cityobservatory.org or on Twitter at @cityobs.

The Week Observed: July 22, 2016

What City Observatory did this week

1. Homeownership:  A failed wealth creation strategy.  It’s an article of faith that owning a home is the most reliable route to wealth building in the US.  But this hasn’t been true over the past decade, and it’s especially problematic for low income households and minorities. The housing market is structured so that they buy at the wrong time, in the wrong place, pay a higher price and face far greater risk.

2. Homeownership can worsen inequality.  Policies that promote homeownership have worked far better for the wealthy than the poor, with the result that wealth inequality related to housing has actually grown in the past decade.  Low income households experienced greater value declines and more frequent foreclosures; higher income households continue to experience gains in home equity, with the result that the top twenty percent of households had, on average, 9 times as much home equity as the typical household in 2010, up from a five-fold difference in 1990.

3.  Housing can’t be a good investment and affordable.  There’s a fundamental contradiction between the two pillars of US housing policy. For housing to be a good investment, it has to increase steadily in value over time.  But rising home values are synonymous with diminished affordability. Until we confront this contradiction squarely, we’ll have real difficulty making progress on either front.

4.  Changes.  We bid adieu and bonne chance to our colleague Daniel Hertz, who’s taken a new position with a public policy think tank in Chicago. His thoughtful analysis and clear voice have defined City Observatory, and we’re delighted that he’s agreed to continue to provide monthly contributions. Thanks, Daniel!

 


The week’s must reads

1. Car-sharing reduces traffic.  Susan Shaheen and her research team at the University of California Berkeley used fleet data and surveys of users of Daimler’s Car2Go car-sharing service.  They estimate that each additional vehicle in the car-sharing fleet leads to 9 to 11 fewer cars on the road, and greenhouse gas emissions per user decline substantially. Users report selling existing cars, or avoiding buying cars, and can utilize car-sharing on an as-needed basis to supplement transit, biking and walking. This is strong evidence that “transportation as a service” will be more efficient than our current model of having each household own one or more cars.

2. Making the buses run on time in NYC.  For the past several years, bus ridership in New York City has been dwindling, even as subways have become increasingly crowded.  The Transit Center released a detailed report, “Turnaround,” making recommendations on how to get buses moving faster, including re-designing and straightening routes, electronic payment, all-door boarding, and dedicated lanes. It’s a list of ideas that could be applied in nearly all cities. Conspicuous by its absence is one other idea: charging private vehicles using the public roadway in peak hours to reduce traffic and speed buses.

3. Demographic Headwinds for Job & Housing Growth?  A new report from John Burns Real Estate, a consultancy, predicts much slower job growth in the years ahead, due to lower growth in the working age (20-64) population. They forecast that this group, which grew 1.0 to 1.5 percent per year in the past decade, will grow at less than 0.5 percent annually through 2024. The result:  labor shortages and likely subdued demand for housing. Some provocative data here.

 


New knowledge

1. Segregation and the Financial Crisis.  NYU’s Furman Center has posted another in its series of well-structured discussions of key urban policy topics. This one explores an essay by Jacob Faber which suggests that the size of the housing market collapse was shaped heavily by segregation.  In reply, Steve Ross acknowledges the key role of segregation in shaping housing market outcomes, but argues the decline was much more broad based.

2. Public Support for Road Finance Alternatives.  Everyone, it seems, wants more and better roads, but no one actually wants to pay for them: that’s the short summary of a paper by Indiana University’s Denvil Duncan exploring survey evidence on support for road finance alternatives.  Of the five options considered, none gets majority support; higher gas taxes (favored by 29%) and tolls (34%) are the least unpopular, and raising the income tax is the least favored. The title of the paper, “Searching for a Tolerable Tax,” gives away the game; the full paper is available (paywall) from the Public Finance Review.

3. How Housing Market Fluctuations Affect Young Households.  In hot housing markets, high prices keep younger households in the rental market, and also seem to lead to lower rates of marriage and child-bearing, according to research from economists Luc Laeven and Alexander Popov. They examine variation in home prices across US markets, and conclude that housing booms tend to disadvantage young households, while benefiting older homeowning households.

 


The Week Observed is City Observatory’s weekly newsletter. Every Friday, we give you a quick review of the most important articles, blog posts, and scholarly research on American cities.

Our goal is to help you keep up with—and participate in—the ongoing debate about how to create prosperous, equitable, and livable cities, without having to wade through the hundreds of thousands of words produced on the subject every week by yourself.

If you have ideas for making The Week Observed better, we’d love to hear them! Let us know at jcortright@cityobservatory.org, dkhertz@cityobservatory.org, or on Twitter at @cityobs.

Rules of the road

Earlier, we wrote about the first fatal crash of a partly-self-driving car. A Tesla, operating on autopilot mode, failed to detect a semi-trailer crossing in its path, and the resulting collision killed its human driver.

The crash has provoked a great deal of discussion in the media about safety data, the potential for future technology, and the problems of human interaction with partially automated systems. A key challenge is that current (and at least foreseeable) implementations of self-driving technology are likely to require human supervision and intervention, and the man-machine combination likely poses its own risks. The seeming inerrancy of the machine systems in routine circumstances may lull humans into a sense of complacency or inattention, with disastrous consequences when the machine fails. (Indeed, according to at least one account, the victim of the Florida crash was watching a Harry Potter video at the time of the crash.)

Tesla (price $101,500) Parking Space (Free!). Credit: AJ Batac, Flickr

 

While we may instinctively regard the safety issues surrounding autonomous vehicles as being primarily technological in nature, they also depend critically on institutional arrangements we establish and the policy choices we make about transportation and public space.  Safety will be determined as much by rules of the road as by any safety device.

To many, achieving safety will come in the form of fully automated vehicles that eliminate any human role in the driving process. Not only are such vehicles still years—if not decades—away, but there’s also the challenge of navigating a transitional period in which self-driving vehicles share roads populated mostly by human-piloted vehicles.

To avoid these problems altogether, Jerry Kaplan, writing in the Wall Street Journal, has suggested that self-driving vehicles be given their own infrastructure: lanes on highways would be reserved for self-driving vehicles.

Taking the initiative in this way would better foster innovation and let the free market work its magic. What might such a plan look like? Perhaps we could start by reserving high-occupancy-vehicle lanes or certain roads at specific times for automated vehicles.

It’s breathtaking that neither Kaplan—nor apparently his editors at the Wall Street Journal—grasped the glaring contradiction between “free market” and “government prohibiting non-automated vehicles from using roads.”

There’s no such thing as a free market for transportation: transportation hinges directly on public policy, particularly spending on roads and the rules that govern their use. A technologically advanced car is essentially useless without a network of public roads on which it can operate. The (largely) private market for vehicles depends directly on a public policy of building roads and regulating them in ways favorable to vehicle travel.

How we get from today’s technology to tomorrow’s, whatever form it takes, will be very much about public policy choices. That’s the way it’s always been.  It’s worth spending a moment thinking back about this process. In the early days of the automobile, the rules of the road were quite different. There were no traffic signals, and wagons, streetcars, and pedestrians freely mixed on city streets. In the earliest days, some cities required that a car be preceded by a flag-bearer to warn other travelers of its approach.

A key victory of what Vancouver’s Gordon Price calls “motordom” came in literally re-writing the rules of the road. By law and custom, other users were marginalized or completely excluded from the roadway. Pedestrians not crossing at marked crosswalks were branded “jay-walkers”—a derisive and entirely manufactured term, designed to shame what had long been common behavior on city streets.

There’s no such thing as a free market for roads

While it’s cloaked in the rhetoric of markets, Kaplan’s call to dedicate a portion of the public right of way exclusively to autonomous vehicles is really the latest incarnation of Asphalt Socialism: We ought to give massive public subsidies to private vehicle movement, privilege these cars over other forms of transportation, and generally subordinate the quality of place to the movement of vehicles.

It may make sense for fully automated vehicles to have their own right of way: but if they do, they ought to pay for the privilege. The root of many of our transportation and urban problems is the consistent under-pricing of, and consequent massive subsidies to, private vehicle transportation. Our decision to subsidize freeway construction in cities—ostensibly to speed travel—has chiefly led to much more sprawling development patterns. It’s an open question whether fleets of cheap, comfortable autonomous vehicles would lead people to choose to commute longer distances from even more far-flung locations. Subsidized infrastructure for automated vehicles would recapitulate the mistakes of the past with a whole new set of technologies.

Likewise, the advent of driverless vehicles raises important issues about the legal liability for damages when car crashes occur.  Will driverless vehicle manufacturers or software providers or map makers be subject to legal recourse under product liability laws?  At least one study has suggested a fairly sweeping legal framework that would shift liability to vehicle owners.  How the legal architecture of liability is re-written will likely have a profound impact on the deployment of this technology.

The advent of highly instrumented vehicles should instead be treated as an opportunity to revisit archaic and failed choices about how we regulate, price and pay for roads. If vehicles had to pay for the cost of the public roads on which they travel (and the public street space that is routinely used for free car storage), drivers would make very different decisions about when, where and how much to travel.  Realizing the safety and other benefits that can potentially come from self-driving vehicles will be just as much a matter of working out the right public policies as it is tackling the technological challenges.

Less than perfect

Last week, two tragic deaths marked unfortunate but predictable firsts in transportation. They are also reminders that despite the very real potential benefits of new technology, operating large metal objects at high speeds is an inherently dangerous activity, and public safety is best served by reducing people’s exposure to the risk—which means designing urban spaces to minimize necessary driving and to keep most vehicular traffic traveling at low speeds.

On May 7, Joshua Brown was killed when his Tesla sedan, operating in Autopilot mode, crashed into a semi truck that turned across his path on a four-lane Florida highway. Neither the driver nor the car reacted to the truck, the car’s self-driving system apparently fooled by the low contrast between the truck and a bright sky, or by its own programming, which led it to disregard large metal rectangles as highway signs. As Tesla put it: “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.” (The passive voice here suggests autonomous vehicles are well on their way to developing the key driving skill of minimizing responsibility when crashes occur.)

A Tesla. Credit: Lummi Photography, Flickr

Then on July 1, bike sharing recorded its first fatality. In Chicago, 25-year-old Virginia Murray, riding a Divvy bike, was struck by a flat-bed truck as they both turned right from Sacramento onto Belmont on the city’s Northwest Side.

Until these crashes, both technologies had enviable safety records. But as they become more widely used, it was a statistical certainty that both would fail. What lessons can we draw from these tragedies?

Gizmodo hastened to point out that the Tesla, while using its lane-keeping, object-detection and speed-maintaining functions, was not truly a fully autonomous vehicle. (Tesla’s system requires the driver to keep his or her hands on the wheel, or the car starts slowing down.) Arguably, the Tesla’s systems don’t have all of the functionality or redundancy that might be built into such vehicles in the future. To Gizmodo, the Tesla crash is just a strong argument for fully autonomous vehicles—humans can’t be counted upon to intervene correctly at critical moments, and in theory, vehicle-to-vehicle communication between the Tesla and the truck could have avoided the incident entirely.

In its press release on the crash, Tesla pointed out that its vehicles, collectively, now have recorded one fatality in 130 million miles of auto-piloted driving, which compares favorably with a US average fatality rate of one fatality per 94 million miles driven.

In a subsequent tweet exchange responding to press coverage of the Florida crash, Tesla CEO Elon Musk used that disparity to claim that if all cars worldwide were equipped with the auto-pilot function, it would save half a million lives per year. Musk upbraided a Fortune reporter, insisting he “take 5 mins and do the bloody math before you write an article that misleads the public.”

But the math on safety statistics hardly supports Musk’s view, for a variety of reasons, as pointed out in Technology Review. First, the sample size is very small: until Tesla has racked up several tens of billions of miles of driving, it will be hard to say with any validity whether its actual fatality rate is higher or lower than one in 94 million. Second, it’s pretty clear that current Tesla owners use the autopilot function only in selected, largely non-random driving situations, i.e., traveling on freeways and highways. Limited-access freeways, like Interstates, are far safer than the average road; in 2007, the fatality rate on Interstates was 0.70 per 100 million miles driven—about one fatality per 143 million miles. The most deadly roads are collectors and local streets, where auto-pilot is less likely to be used.
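The small-sample point can be made concrete with a quick back-of-the-envelope calculation. This is our own sketch, not math from the Technology Review piece; the interval bounds are the standard exact Poisson 95 percent confidence limits for a single observed event:

```python
# Why one fatality in 130 million miles tells us very little: with a single
# observed event, the exact Poisson 95% confidence interval for the expected
# number of events spans roughly 0.025 to 5.57.
miles = 130e6                          # Tesla's reported Autopilot miles
lo_events, hi_events = 0.0253, 5.572   # 95% CI for a Poisson mean, 1 observed

best_case = miles / lo_events   # ~5.1 billion miles per fatality
worst_case = miles / hi_events  # ~23 million miles per fatality

print(f"plausible range: one fatality per {worst_case/1e6:,.0f} million "
      f"to {best_case/1e6:,.0f} million miles")
```

The 94-million-mile US average sits comfortably inside that range, which is why no meaningful comparison is yet possible.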

Fatality Rates Per Million Miles Traveled, 2007


Statistically, it’s far too early to make any reasonable comparisons between this emerging technology and human drivers. But our experience with managing risk and safety in other technologies suggests that the problem will be daunting. As Maggie Koerth-Baker pointed out at FiveThirtyEight, the complexity of driving and of coping with every possible source of risk—and selecting the safest action—is mind-boggling. Plus, computers may not make the same mistakes as humans, but that doesn’t mean that they won’t sometimes act in ways that lead to crashes.

Part of the problem is that the very presence of safety systems may lull drivers into a false sense of security. Crashes, especially serious ones, are low-probability events. Humans may be very leery about trusting a machine to drive a car the first few times they use it, but after hundreds or thousands of repetitions, they’ll gradually believe the car to be infallible. (This logic underlies the Gizmodo argument in favor of full autonomy, or nothing).

This very process of believing in the efficacy of the safety system can itself lead to catastrophes. Maggie Koerth-Baker describes the meltdown of the Three Mile Island nuclear reactor. It had highly automated safety systems, including ones designed to deal with just the abnormalities that triggered its accident. But they interacted in unanticipated ways, and operators, trusting the system, refused to believe that it was failing.

While some kinds of technology—like vehicle-to-vehicle communication—might work well in avoiding highway crashes, there’s still a real question of whether autonomous vehicles can work well in an environment with pedestrians and cyclists—exactly the kind of complex interactions with un-instrumented vulnerable users that resulted in Virginia Murray’s death in Chicago.

Increasingly, safety problems affect these vulnerable users. Streetsblog reported that the latest NHTSA statistics show that driver deaths are up six percent in the past year, pedestrian deaths are up 10 percent and cyclist deaths are up 13 percent, reversing a long trend of fewer deaths, and making 2015 the deadliest year on the road since 2008.

For the time being, it’s at best speculative to suggest that all of these deaths can be avoided simply by the greater adoption of technology. And as many observers have noted, today’s technology, while impressive and developing quickly, is far from achieving the vision of full vehicle autonomy; some robotics experts predict self-driving cars may be 30 years away. With present technology, as we’ve noted, more driving means more deaths. The most reliable way to reduce crash-related deaths is to build environments where people don’t have to drive so much, and where cyclists and pedestrians aren’t constantly exposed to larger, fast-moving and potentially lethal vehicles even when making the shortest trips. That’s something we can actually do with the technology that exists today.

Review: State of the Nation’s Housing 2016

At City Observatory, we love fat reports full of data, especially when they shed light on important urban policy issues. Last week, we got the latest installment in a long-running series of annual reports on housing produced by Harvard’s Joint Center on Housing Studies (JCHS). The State of the Nation’s Housing, 2016—aka SONH2016—presents copious details on the housing situation. This year, it particularly highlights the growing problem of housing affordability. (We summarized last year’s report here.)

There’s a lot to digest in this report. Our review focuses on four big issues: how the report describes affordability, its analysis of recent developments in the housing market, the outlook for homeownership as a wealth-building strategy, and the implications of all this for housing policy.

Housing Affordability

SONH2016’s headline finding is the growing number of households spending half or more of their income on housing. Using the 30 percent of income standard, the report estimates the number of rental households with affordability problems has increased by 3.6 million since 2008—though the report continues to use that standard even though there are real questions as to whether it really reflects affordability, or addresses the housing/transportation cost tradeoff. And the number of households spending 50 percent or more of their income on rent has increased by 2.1 million, to 11.4 million, over that same time period (p. 4).

Credit: SONH2016

We agree that housing affordability is a widespread and serious problem. But what seems lacking from SONH2016 is a thorough diagnosis of the causes of the affordability problem and an appropriately scaled set of solutions. There’s passing reference to local land use controls as a contributing factor to the problem of housing affordability: “High rents reflect several market conditions, including a limited supply of land zoned for multifamily use and a complex approval process that adds to development costs.” (p. 4) But the report confines its discussion of land use controls to their impact on building affordable housing, and aside from this brief mention, offers little comment—and no policy recommendations—about how to ameliorate the effects of supply constraints by changing local zoning.

While SONH2016 does a thorough job of amassing statistics that chronicle the growth and extent of affordability issues, it spends relatively little time describing policy actions that might make a dent in the problem. On the last full page of the report, the authors offer their outlook on housing challenges. They note that only modest additional resources have been made available to support low income housing construction—we’re told (p. 36) the biggest program in the past decade is the recent authorization of the $174 million Housing Trust Fund (which works out to about $16 for each of the 11 million households spending 50 percent or more of their income on housing). In effect, we’re presented with a daunting problem without any commensurate solution.
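The parenthetical arithmetic is easy to verify (our own check, using the figures quoted in the text):

```python
# The report's biggest new program, divided across the households it is
# meant to help (figures as quoted in the text above).
trust_fund = 174_000_000   # Housing Trust Fund authorization, in dollars
households = 11_000_000    # households spending 50%+ of income on housing

per_household = trust_fund / households
print(f"${per_household:.2f} per severely cost-burdened household")
```

That works out to roughly $16 per household, as the report’s parenthetical suggests—a vivid measure of the gap between the problem and the response.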

The Housing Market

The report tries to sound some optimistic notes about single-family homeownership. But they come off as weak reeds in the face of what continues to be a highly depressed sector of the economy. The report highlights an increase in single-family housing construction (p. 1: “new home construction was up by a healthy 11 percent”) but later concedes that the total number of homeowners, and the homeownership rate, has continued to fall (p. 16).

On the plus side, the report helps dispel a commonly reported misperception about the growth of McMansions. The data show that though large homes (greater than 3,000 square feet) have become a bigger share of the market, that has more to do with the utter collapse of the market for small homes than it does with any expansion of the appetite for large ones. In fact, McMansion-sized homes are still being built at just half the rate of a decade ago (200,000 units per year compared to 400,000 at the peak). (See p. 8). As we’ve explained at City Observatory, the “McMansion/Multi-Millionaire” ratio has actually been falling.


Very much to its credit, SONH2016 calls out the growing economic segregation of US housing. It says that poverty in the US has become more concentrated, and bluntly calls out the location of public housing and exclusionary local zoning as major causes of this problem (p. 35).

Homeownership, Wealth and Home Prices

State of the Nation’s Housing continues to promote homeownership as a reliable wealth-creating strategy, what it calls “the wealth building potential of sustained home ownership” (p. 21). The report cites data for the net wealth gains of homeowners who managed to hang on to their homes, but combines data from those who bought over the entire decade (1999 to 2009), which conceals the key fact that those who bought in, say, 2001 or 2002 had a very different experience from those who bought at the height of the bubble. The report omits the fact that homeowners collectively lost something on the order of $7 trillion in the housing bubble, and that these losses fell disproportionately on low income and minority households. And, as Calculated Risk’s Bill McBride points out, a decade after the peak of the housing bubble, real, inflation-adjusted home prices are still 17 percent below their peak.

Credit: Calculated Risk

This series of State of the Nation’s Housing reports has been issued for nearly three decades. Over the years, there have been a number of recurring themes. Concerns about housing affordability have been hardy perennials. A decade ago, concerns about affordability were also in the report’s headlines, although for very different reasons. The 2006 State of the Nation’s Housing dealt primarily with how rising home prices were reducing affordability for homebuyers. (This particular aspect of the affordability problem was corrected by the popping of the housing bubble, something that that year’s SONH didn’t foresee.)

In 2006, with the nation’s housing bubble about to burst, it offered confident reassurance: “large house price declines seem unlikely for now . . . the long term outlook for housing is bright . . . STRONG DEMAND FUNDAMENTALS . . . with each generation exceeding the income and wealth of its predecessor, growth in expenditures on home building and remodeling should match if not surpass the current pace . . .” (State of the Nation’s Housing 2006, page 2). Now, nearly a decade later, the housing sector’s contribution to GDP remains fully one-third lower than its historic average (SONH 16, Figure 12).

The point here isn’t so much that the SONH made a bad forecast, but that it gave bad advice. Far from being a reliable source of wealth generation, housing was a very risky proposition that was about to impose huge costs on millions of households. Prior to 2007, one might argue that a 20 or 30 percent drop in home prices seemed improbable. But in light of the human and personal costs of the last housing bubble, it seems like the report ought to be a bit more cautious in its approach to the question of whether, for whom and under what circumstances home ownership is a good investment.

Federal Housing Policy

There are some surprising omissions from the report. You’ll find no mention of the biggest and most expensive federal housing subsidy programs: the favorable tax treatment of owner-occupied housing, in the form of the mortgage interest deduction, the exclusion of imputed rental income and favorable treatment of capital gains on housing. Collectively, these policies amount to a $250 billion annual subsidy for homeownership, which, thanks to a falling homeownership rate, is going to a smaller fraction of American households each year.

It’s also interesting to note that at a time when SONH16 frets that federal resources for housing are “dwindling” (p. 31), the value of the federal tax subsidies has increased sharply. According to the Congressional Budget Office, the combined value of favorable tax treatment for owner-occupied housing increased by $39 billion between fiscal years 2013 and 2015.

If we’re really concerned about housing affordability, we have to do more than periodically trumpet alarming statistics. We really ought to plainly identify root causes, and spell out the public policy choices that we’ve made — through local zoning and public subsidies — and talk about the scale of the effort required to materially change the situation.

A casual reader of the SONH16 may come away with the impression that federal housing policies are ineffectual and under-funded, but that’s hardly the case. We have a well-funded federal housing policy that works quite well—if you agree that its intended purpose is to provide generous support, particularly for the wealthiest households, to own their own homes. In our view, a more comprehensive framing of federal housing policy–one that encompasses tax subsidies to homeownership–provides a much more useful context for understanding our housing problems and formulating commensurate solutions.

If you’re interested in housing, particularly the tale of the tape when it comes to a wide range of housing statistics, SONH 16 is an invaluable resource. But when it comes to thinking about housing policy, and specifically, identifying the root causes of our affordability problem and the nature and scale of the policy solutions that would need to be undertaken to actually move the needle on the indicators presented here, you’ll have to look elsewhere.

Three challenges for the civic commons

In Philadelphia last week, the Gehl Institute convened Act Urban—a global group of leaders and practitioners in the field of the civic commons. After three days of fieldwork and observation, expert presentations and intense discussion, I was asked, along with other panelists, to sum up what we’d heard and what the challenges are for this emerging field going forward. Here’s an abbreviated summary of what I had to say.

Philadelphia's Chinatown. Credit: Mumu Matryoshka, Flickr

Like most of the attendees I spoke with, I found it hopeful and encouraging to see the breadth and impact of the projects underway in Philadelphia. As a regular visitor to the city over the past couple of decades, I can see that change is very much in the air, and that in important respects, the fabric of the city is beginning to be re-woven in ways that promise to bring Philadelphians closer together. Over the course of three days we saw numerous examples of a range of institutions re-thinking their roles and facilities to promote greater civic involvement and to cross, if not erase, long-established boundaries that divided the community.

From the presentations and discussions, we know that what’s happening in Philadelphia is starting to happen in other cities as well, both in the US and in other countries. All of this is exciting and encouraging.

But, in my view, three big challenges stand directly ahead.

They are the three “M’s”: Moving from micro to macro; Markets; and Metrics. I’ll address each of them in turn.

Micro to macro

Economists routinely make a distinction between micro-economics and macro-economics. Micro is the study of a single facet of the economy: learning from and describing the nature of a small, bounded and usually partial segment of economic activity. Macroeconomics is the converse—it is the study of the economy of nations and the globe, of how all kinds of small actions and activities add up at a large scale.

The field of civic engagement is strongly grounded in its “micro” phase. It’s about learning how to craft individual pieces of the public realm so that they function better, whether that’s parks, streets, libraries, swimming pools, or other public spaces. This is a logical starting place; it’s easier to mobilize, secure resources, make progress, learn from mistakes and move forward with small scale investment. And the success stories—many of which we heard described in Philadelphia—help inform practice and spread the message about the opportunities and merits of civic engagement.

Pop-up protected bike lane, Minneapolis. Credit: nickfalbo, Flickr

But at some point, the civic commons has to explicitly aim to achieve scale. Instead of being exceptional and innovative, changing and challenging the status quo, it has to come to define the regular way of doing things. This is the challenge of moving from micro to macro, of moving from projects to policies and institutions, and moving from tactical urbanism to a broader strategy.

Philadelphia’s decision to impose a tax on soda and to use the proceeds to help fund a multi-year bond to pay for capital improvements to parks, libraries and public spaces is an example of how to transition from micro to macro. Not only does this measure provide the resources to greatly increase the scale of activities, the funding mechanism—which is visible and broad-based—means that every citizen will know that they’re making a contribution, and that they have a stake in these investments.

Markets

The second “M” is markets. It may seem odd to invoke markets in the context of public space, but they matter a lot, and they’re telling us something important. When we speak of the civic commons and public realm, we tend to frame it as a largely government-led or public sector function. Municipal governments are primarily responsible for building, financing, operating and regulating public spaces. But viewed more closely, there’s an ambiguity and an interdependence between the public and private sectors on the ground in cities.

At the street level, great urban spaces are formed by mutually reinforcing public and private investments. Great streets, squares, and public spaces attract people, and the flow of people stimulates commerce. And the nearby presence of businesses—shops, bars, cafes, restaurants—reinforces the activity in the public realm. As we showed with our recent Storefront Index (which measures the number and concentration of customer-facing retail and service businesses in cities), the difference between an under-utilized park and an activated one is substantially explained by the presence and density of adjacent storefronts.

At a larger scale, it’s readily apparent that there’s a growing demand for great urban environments. Somewhat paradoxically, just as technology has, at least in theory, freed us from the need to be physically present in a particular place to work, access information, or obtain a wide range of goods, people seem to be craving the opportunity to live in places that afford a wide range of opportunities for easy personal interaction. The rows of people in coffee shops, independently working at their laptop computers, signal a strong desire to be in the public realm even while connected to the Internet.

Credit: brewbooks, Flickr

Just as technology is freeing us from place, there’s a growing demand to live and work in cities. Well-educated young adults are disproportionately moving to cities. Companies that hire these workers are moving to cities as well. The rent premium for central locations relative to suburbs has increased sharply in the past decade. All of these trends are a sign that there’s strong market demand for urbanity. At City Observatory, we’ve called this “the shortage of cities” because the demand for urban space and urban living is increasing far faster than we’ve been able to increase the supply. And a key element of the supply of cities is the public realm that makes city living and city neighborhoods so appealing. So as we think about how to expand the civic commons and activate public spaces, we should do so with a clear recognition that this is something that the market demands.

Metrics

My third “M” is metrics: how we measure the extent and activation of the public realm. The ability to measure the health and extent of public spaces and the activity that occurs within them is important to designing great spaces, to moving from “micro” to “macro,” and to harnessing the growing market demand for the civic commons.

Many of the obstacles to promoting the public realm stem from a severe disparity in the kinds of things we measure. Some disciplines and some sets of investments have well-developed metrics and copious statistics that make a strong case for their interests. This is very clear in the case of automobile transportation: every city has detailed measures of traffic volumes, vehicle speeds, vehicle delay times and the like. Almost no city has good data on the number of pedestrians, their convenience or comfort, or even good data on the use of parks or public spaces.

In public policy, it’s often the case that what counts is what gets counted. And the effect in the public realm is that great emphasis gets put on what we can count (the number and speed of vehicles moving through a place) and very little emphasis gets put on how many people actually use or inhabit spaces. In essence we often prioritize “traveling through” rather than “being in” urban environments.

The way to change this is to develop a range of metrics of the quality and use of public spaces. New data and new technology make possible a range of new metrics. In the past few years, Walk Score has emerged as a convenient, ubiquitous and easily understood tool for measuring the walkability of urban spaces. At City Observatory, we’ve developed the Storefront Index, which measures the number and concentration of customer-facing retail and service businesses that help frame walkable commercial neighborhoods. New technology lets us count the number of people walking in or using public spaces. We’re just in the infancy of these measures, but they can be useful tools for planning, and for elevating the health and use of the public realm in policy discussions.

Moving from exceptional innovation to commonplace adoption

One of the exciting things about visiting projects that are transforming neighborhoods and urban spaces is seeing the insight and creativity that designers, community groups and enlightened leaders have brought to bear on improving the public realm. Part of the sense of accomplishment from this kind of innovation comes from challenging the accepted norms, bending or negotiating the rules and doing something that hasn’t been done—or that people thought was impossible. While we should always continue to be innovative, the next big challenge for those with an interest in building cities by strengthening the public realm is to transform innovative breakthroughs into accepted, even commonplace practice. The keys to doing this will be to build on the visible evidence of success in particular projects, and use that to leverage institutional change: not breaking the rules for one project, but re-writing the rules for all projects. That’s why the three “M’s” are important: moving to system level change will require thinking about the “macro” rather than just the micro, harnessing the growing market demand for great urban places, and developing metrics that build a strong case for policy and investment.

Joe Cortright presented these remarks to the closing session of the Act Urban convening in Philadelphia on June 17, 2016. For more information about the Act Urban project, visit its website.

More evidence on the “Dow of Cities”

Last summer, we flagged a fascinating study by Fitch Investment Advisers which tracked twenty five years of home price data, stratified by the “urbanness” of housing. Fitch showed that particularly since 2000, home prices in neighborhoods in the center of metropolitan areas increased in value relative to all other metropolitan housing. We termed the price premium that central neighborhoods command “the Dow of Cities” because like the Dow Jones index, it serves as an indicator of the market valuation of urban locations.

Last month, the Federal Housing Finance Agency (FHFA) produced a new data series, a repeat sales index of housing. (For more about ways of measuring housing prices, check out our stats guide.) They analyzed over 100 million property transactions from 1975 through 2015 and produced an index of home prices that can be used to track neighborhood level price changes, and to disaggregate the effects of location from other factors (like home size). A new research paper—Local House Price Dynamics: New Indices and Stylized Facts—based on that data, authored by Alexander Bogin, William Doerner, and William Larson looks at the relationship between urban location and price increases within metropolitan areas. You’ll find a summary of the report and maps for several metropolitan areas in Emily Badger’s Wonkblog analysis.

Their key finding is that more centrally located homes—those closer to a metropolitan area’s central business district—have experienced higher rates of appreciation over the past twenty-five years. You can see those findings in the following chart, which shows the appreciation rate by zip code, between 1990 and 2015, based on distance to the central business district. The results are surprisingly strong: in large U.S. metropolitan areas, homes within 5 miles of the center have appreciated, in real terms, at about 1.5 to almost 2.0 percent per year over the past 25 years, while homes 10 or more miles from the center have appreciated at a fraction of one percent per year. (See the green line in figure four.) In smaller metropolitan areas (those with fewer than 500,000 housing units), the relationship is nearly flat, suggesting that the big gains in home values in the center have been concentrated in the largest metropolitan areas.
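To get a feel for what that gap compounds to, here’s a rough illustration using the rates quoted above; the 1.75 and 0.5 percent figures are our own stand-ins for “about 1.5 to almost 2.0 percent” and “a fraction of one percent,” not numbers from the paper itself:

```python
# Compound real appreciation over 25 years at the quoted rates.
years = 25
central = 1.0175 ** years     # midpoint of the 1.5-2.0%/year central range
peripheral = 1.005 ** years   # assumed 0.5%/year for peripheral homes

print(f"central homes:    {(central - 1) * 100:.0f}% total real gain")
print(f"peripheral homes: {(peripheral - 1) * 100:.0f}% total real gain")
```

Even a modest-looking difference in annual rates compounds into roughly a four-to-one gap in total real gains over a quarter century.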


Maps of several metropolitan areas make the pattern clear. In these maps, zip codes with the highest relative rates of housing appreciation have the darkest blue shading. Those zip codes with the lowest relative levels of appreciation have the lightest shading. In each case, appreciation rates are normed to metropolitan level averages. For example, the Washington Post prepared the following map of housing price appreciation in Portland, showing that the neighborhoods with the highest levels of appreciation are in or very near the center of the metropolitan area, and the rates of appreciation are lower on the suburban periphery. Similar patterns show for metropolitan areas like Houston, Minneapolis, Chicago, Denver and Phoenix.

[Map: relative home price appreciation by zip code, Portland metropolitan area]

Map Credit: Washington Post, from FHFA data.


The authors are making their complete data set of 5-digit zip code level data available for download. You can also explore on-line maps of individual metropolitan areas which allow you to see the rate of price appreciation since 1990 and 2000.

As the authors note, the size and detail of the FHFA transaction database gives it some technical advantages over other ways of measuring home prices. The FHFA data covers more transactions than private databases like the Case-Shiller data used by Fitch, and has the added advantage of being publicly available. And in contrast to analyses of area-wide home price data (such as Zillow's zip code level price estimates), it is able to distinguish between changes in home price and changes in home quality. That said, the FHFA analysis substantially confirms the trends spotted using these other data sources.

The effect is clear, but what’s the cause?

While there’s little question about the trend—housing closer to the city center is appreciating significantly relative to that in peripheral suburbs—there’s still some debate about the causes. One of the FHFA studies authors, William Larson, maintains that the shift in demand back to cities doesn’t necessarily represent a change in preferences, but is instead driven simply by growing higher incomes, traffic congestion, improved urban amenities, and reduced crime. While economists generally want to explain everything in the context of market variables—and we include ourselves in this category—this seems like a stretch. As we’ve shown in our studies of the Young and Restless, well-educated young adults are today dramatically more likely to choose to live in close-in urban neighborhoods than their predecessors of 10, 20 and 30 years ago. Moreover, the growth urban amenities is as much a result as a cause of the shift to urban living—the growing number of well-educated urban residents provides the demand for restaurants and the experience economy. In a forthcoming paper, Jesse Handbury and Victor Couture attribute the rise of central city property values directly to “a diverging preference for consumption amenities” particularly by well-educated workers.

It’s also doubly hard to square the evidence on accessibility, commute times, and travel modes with a claim that preferences haven’t changed. Central locations have always had an accessibility advantage to jobs, congestion and travel times have not increased appreciably—and in fact have decreased in recent years. And as one indicator of a fundamental change in preferences, far more Americans choose to cycle to work in cities today than a decade or two ago; something they could have easily done then—but didn’t, most likely because that generation had a different preference for driving as opposed to cycling; just as it has a different attitude toward urban living. Indeed, the willingness of consumers to pay higher prices to live in cities today, relative to the price of suburban living, is a key indicator of the change in values we attach to great urban places.

And, as we’ve noted before, it’s an indication that we have, nationally “a shortage of cities”—the demand for housing in urban neighborhoods is rising faster than the supply, resulting in steadily escalating relative home prices in dense, central neighborhoods. It’s both a major contributor to the growing challenge of housing affordability and an economic signal that we ought to be doing more to build additional housing in cities.

Cities and Brexit

Last week’s big news was Britain’s decision, via referendum, to leave the European Union. The results of the vote lead Prime Minister Cameron to resign and sent markets reeling, and it’s still unclear what the ultimate economic and political effects will be. For some keen, if depressing, insight on the ramifications of Brexit, you may want to read this essay by Fusion’s Felix Salmon.

Analyses of the vote in Britain point up the sharp generational, educational and geographic cleavages that produced the outcome. Older and less well-educated voters favored leaving; younger and more highly-educated voters preferred to remain in the European Union. As the BBC reported, those 18 to 24 voted almost 3 to 1 in favor of remaining; a majority of those over 45 voted to leave:

[Chart: BBC, referendum vote by age group]

There was a strong correlation between the education level of an area, and its vote on the referendum. According to data analyzed by The Guardian, the highest educated areas voted most strongly to remain; the least well-educated areas tended to vote to leave.

[Chart: referendum vote by area education level]
Credit: The Guardian


And similar to our own red state/blue state divisions—a topic we’ve explored at City Observatory—there’s a strong geographic split in the vote in the UK, especially in England. (All of Scotland and most of Northern Ireland favored staying in the EU, by sizable margins.) As Emily Badger has pointed out in the Washington Post, there’s an urban/rural divide in the election returns. London and its environs, and Liverpool and Manchester supported remaining by substantial margins. The more rural parts of England voted to leave. The Washington Post’s map casts the vote in familiar red/blue hues, with the remain vote shown as blue and the leave vote shown as red.

[Map: Washington Post, referendum results by area (blue = remain, red = leave)]


Another way to look at this question is to consider the relationship between population density and voting patterns. Studies of the US showed a strong correlation: higher density counties were much more likely to vote for Barack Obama than Mitt Romney in 2012. To dig deeper into this question, we gathered data on the population density of English electoral districts and compared it to the fraction of local electors voting to remain in the European Union. Density data are the number of eligible electors (voters) in each of England’s 380 electoral districts, divided by the number of hectares in the district (electors per hectare). These data come from the Local Government Boundary Commission. We obtained election returns for each district from the UK Electoral Commission.
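The density measure described above is simply electors divided by hectares, paired with each district's remain share. Here's a minimal sketch of that calculation; the district figures below are illustrative stand-ins, not actual election returns.

```python
# Illustrative sketch: electors per hectare for each electoral district,
# paired with its "remain" vote share. Figures are hypothetical.
districts = [
    # (name, electors, hectares, remain_votes, total_votes)
    ("Dense urban district", 180_000, 3_000, 70_000, 100_000),
    ("Suburban district", 90_000, 9_000, 48_000, 100_000),
    ("Rural district", 60_000, 60_000, 38_000, 100_000),
]

for name, electors, hectares, remain, total in districts:
    density = electors / hectares        # electors per hectare
    remain_share = 100 * remain / total  # percent of votes cast for Remain
    print(f"{name}: {density:.1f} electors/ha, {remain_share:.0f}% remain")
```

With real data, each district would contribute one bubble to the chart, sized by total votes cast.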

This chart shows the relationship between each electoral district’s density (shown on the horizontal axis, densest areas to the right of the chart) with the fraction of the voters favoring remaining in the European Union (vertical axis, higher values corresponding to a greater percentage of voters favoring remaining). Since there are varying numbers of voters in each district, we’ve displayed these results as a bubble chart, with the size of each bubble corresponding to the number of votes cast in each electoral district.

[Chart: remain vote share vs. electoral district density]

In general, the denser an electoral district, the more likely it was to vote to remain in the EU. All of the electoral districts with densities of 40 or more electors per hectare voted to remain. The pattern for lower-density districts was more diffuse, but on average those with the lowest densities cast their ballots for leaving.

The geographic, demographic and generational polarization around issues like Brexit may be a sign of our times. In a world still being reshaped by globalization and technological change—and still working to cope, fitfully, with the aftermath of the worst economic downturn in eight decades—there's a division between those who greet the future with optimism and those who are looking to return, or cling, to a seemingly fast-disappearing past. Many of the fault lines are defined by place, with many city economies attracting talent and flourishing with the growth of global connectedness. But others, especially in smaller towns and rural areas, haven't gained as much from change. As Britain's Brexit vote shows us, the divisions are deep. Addressing them will be a major issue for all of us.

Sprawl and the cost of living

Over the past three weeks, we’ve introduced the “sprawl tax”—showing how much more Americans pay in time and money because of sprawling urban development patterns. We’ve also shown how much higher the sprawl tax is in the US than in other economically prosperous countries, and how sprawl and long commutes impose a psychological, as well as an economic burden. Today, we’ll take a close look at how ignoring the sprawl tax distorts our view of the cost of living in different regions and neighborhoods.

As one old saying goes, an economist is someone who knows the price of everything and the value of nothing. It’s often claimed that some places, often sprawling Sunbelt cities, have a lower cost of living, based usually on observations about lower housing prices. And judged solely from the sticker price of new homes, the argument has some merit.

Phoenix. Credit: Al_HikesAZ, Flickr


But as our aphorism about economists implies, there is a lot more to this question than just one set of prices. If you've followed our series on the sprawl tax, you know that living in some cities—those with cheap average housing costs, like Houston or Dallas or Birmingham—also carries with it a heavy, and largely ignored, cost in the form of the "sprawl tax": much higher transportation costs. In short, we tend to fixate on the price of something we can easily measure (housing) and simply leave out the cost of something that is much less obvious (sprawl and longer commutes).

How big is the sprawl tax, relative to the supposed cost of living differences among metropolitan areas? Quite large, as it turns out: enough to erase much of the cost advantage that low-density settlement is supposed to offer.

We know that within metropolitan areas, there’s a strong tradeoff between rents and home prices and typical commute distances. Low density housing, at a long remove from the city center and most metro area jobs, commands lower prices, reflecting in large part the added transportation costs implied by more far-flung and less accessible locations. Others, notably the Center for Neighborhood Technology through its H+T calculator, have addressed the tradeoff between housing costs and transportation costs within metropolitan areas. Our analysis here uses the sprawl tax in concert with Bureau of Economic Analysis estimates of housing costs differences at the metropolitan level to examine this tradeoff.

The Sprawl Tax and Cost of Living Differences

In the following chart, the vertical axis shows the sprawl tax: higher values mean that a region’s residents pay more for transportation costs and travel time as a result of sprawl. The horizontal axis shows the rent differential: how much more (or less) the typical resident pays in annual rent/housing costs compared to the typical large metropolitan area. Areas to the right have higher rents; areas to the left have lower rents. And, as discussed in the post-script, at least some portion of the difference in rents reflects real quality of life differences between cities; but for now we use this as an index for housing cost comparisons.

[Chart: sprawl tax vs. rent differential, large metropolitan areas]

The chart illustrates a range of different combinations. The four metro areas with the highest sprawl taxes (Atlanta, Nashville, Dallas, Houston) all have lower than average housing costs. For example, according to the BEA estimates, the annual cost of housing in Houston is $850 per person less than the national average. But the typical Houston household would face a sprawl tax of about $2,900 per worker, based on longer commute distances, which would essentially wipe out the superficial price advantage from housing.

The nation's most expensive housing markets—including San Francisco, San Jose, New York, and Washington—have housing costs that are considerably higher than the national average, with per person annual rental costs exceeding the national average by $5,000, up to almost $10,000 in San Jose and San Francisco. But these cities have much lower than average sprawl taxes. Workers in San Francisco, San Jose and New York pay less than $500 per year in sprawl taxes. This amount doesn't fully offset the added rental costs, but combined with the quality of life differences between high-price and low-price cities, it makes the differences much smaller.

The Bronx, NYC. Credit: Dave Johnson, Flickr


And for many cities, adding the sprawl tax essentially erases the supposed cost of living advantage from cheaper housing. For example, Houston’s housing cost is about $1,100 per person less than Portland’s ($845 below the large metro average compared to $318 above). But Portland’s sprawl tax ($871) is $2,000 less, per worker, than Houston’s ($2,877), which for many households will more than eliminate the housing price advantage.
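The Houston-Portland arithmetic can be checked directly. A small sketch using the figures quoted above (housing cost differences per person relative to the large-metro average, and sprawl tax per worker):

```python
# Figures quoted above: BEA housing cost differences (dollars per person,
# relative to the large-metro average) and sprawl tax (dollars per worker).
housing_diff = {"Houston": -845, "Portland": 318}
sprawl_tax = {"Houston": 2877, "Portland": 871}

# Houston's housing is cheaper by the gap between the two differentials...
housing_advantage = housing_diff["Portland"] - housing_diff["Houston"]
# ...but its workers pay more in sprawl tax.
sprawl_penalty = sprawl_tax["Houston"] - sprawl_tax["Portland"]

print(f"Houston housing advantage: ${housing_advantage:,}")  # $1,163
print(f"Houston extra sprawl tax:  ${sprawl_penalty:,}")     # $2,006
```

For many households, the roughly $2,000 in extra sprawl tax more than offsets the roughly $1,100 housing price advantage—which is the point of the comparison.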

A sensible discussion of the real differences in the cost of living between places has to look past the prices of a few commodities and big-ticket items, to consider the intrinsic value consumers attach to great urban environments, the variety and convenience of consumption opportunities in compact urban centers, and the sprawl tax that consumers must pay.

Methodology: How we calculated living cost differentials

The best measure we have of inter-metropolitan differences in consumer prices is the Bureau of Economic Analysis Regional Price Parities (RPP) estimates. These estimates, prepared from the data used to construct the nationwide consumer price index, confirm some of our fundamental intuitions about differences in costs of living between communities. The cost of goods varies little among places. The difference between relatively expensive and relatively inexpensive cities (that is, the 75th percentile and 25th percentile) in the cost of goods is just 2.5 percent. For services, prices vary somewhat more, about 8 percent. So what’s really making up the difference in cost of living? No surprise: housing, which varies by more than 30 percent.

Regional Price Parities for Metropolitan Areas of Over 1 Million, 2013

Component | Weight | Mean | Interquartile Range (25%-75%)
Goods | 41.5% | 99.2 | 97.3 - 99.7
Services (excluding rent) | 37.9% | 99.3 | 94.3 - 102.1
Rent | 20.7% | 107.2 | 85.5 - 116.0

Source: Bureau of Economic Analysis, City Observatory calculations. Weight is the share of these items in the RPP calculation. Index values are for the entire nation, i.e. U.S.=100. Mean and interquartile range are weighted by metro area.

Here are BEA’s estimates of the rent price index for the 51 largest U.S. metropolitan areas (those with a population of one million or more). The data are indexed to the national average rent (U.S. = 100). On average, rents are about seven percent higher in large metropolitan areas (i.e. the index for the median metropolitan areas of the 51 largest is 107). Rental prices in San Jose and San Francisco are 80 to 90 percent higher than nationally; rents in Louisville and Birmingham are 25 to 30 percent lower than the national average.

Metro Area Rent Index
San Jose-Sunnyvale-Santa Clara, CA 194.4
San Francisco-Oakland-Hayward, CA 181.9
Washington-Arlington-Alexandria, DC-VA-MD-WV 169.9
Los Angeles-Long Beach-Anaheim, CA 166.9
San Diego-Carlsbad, CA 162.7
New York-Newark-Jersey City, NY-NJ-PA 157.2
Boston-Cambridge-Newton, MA-NH 140.6
Miami-Fort Lauderdale-West Palm Beach, FL 128.8
Seattle-Tacoma-Bellevue, WA 127.7
Riverside-San Bernardino-Ontario, CA 120.1
Sacramento—Roseville—Arden-Arcade, CA 120.1
Baltimore-Columbia-Towson, MD 117.4
Chicago-Naperville-Elgin, IL-IN-WI 116.9
Denver-Aurora-Lakewood, CO 115.1
Philadelphia-Camden-Wilmington, PA-NJ-DE-MD 113.1
Hartford-West Hartford-East Hartford, CT 110.9
Portland-Vancouver-Hillsboro, OR-WA 110.8
Austin-Round Rock, TX 110.1
Minneapolis-St. Paul-Bloomington, MN-WI 110.0
Virginia Beach-Norfolk-Newport News, VA-NC 109.1
Orlando-Kissimmee-Sanford, FL 104.3
Salt Lake City, UT 104.3
Tampa-St. Petersburg-Clearwater, FL 104.0
Las Vegas-Henderson-Paradise, NV 101.0
Providence-Warwick, RI-MA 100.7
Dallas-Fort Worth-Arlington, TX 100.2
Houston-The Woodlands-Sugar Land, TX 98.5
Milwaukee-Waukesha-West Allis, WI 97.6
New Orleans-Metairie, LA 97.4
Richmond, VA 97.4
Phoenix-Mesa-Scottsdale, AZ 97.3
Jacksonville, FL 96.3
Rochester, NY 95.4
Raleigh, NC 93.7
Atlanta-Sandy Springs-Roswell, GA 92.0
Detroit-Warren-Dearborn, MI 88.1
San Antonio-New Braunfels, TX 87.9
Nashville-Davidson—Murfreesboro—Franklin, TN 86.4
Charlotte-Concord-Gastonia, NC-SC 84.6
Kansas City, MO-KS 84.4
Columbus, OH 84.1
Indianapolis-Carmel-Anderson, IN 84.1
St. Louis, MO-IL 83.9
Cleveland-Elyria, OH 80.4
Cincinnati, OH-KY-IN 80.0
Oklahoma City, OK 79.6
Buffalo-Cheektowaga-Niagara Falls, NY 79.3
Memphis, TN-MS-AR 79.1
Pittsburgh, PA 78.8
Louisville/Jefferson County, KY-IN 75.0
Birmingham-Hoover, AL 70.3

Because housing costs are the biggest source of variation in the measured cost of living among large metropolitan areas, we use the BEA data to estimate how much more, or less, a typical household pays for housing, based on the difference in rents among metropolitan areas. BEA estimates rents as a combination of the actual rent paid by renters and the "imputed rent" (the value of housing services received by households that own their own homes). We compute the difference in income paid for housing among metropolitan areas by taking the difference between the average rental price parity for the 51 largest metropolitan areas (107) and the actual value for each metropolitan area. We multiply that difference by the per capita personal income of the area and the share of per capita income devoted to rent (estimated by BEA at 20.7 percent).
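As a sketch, the calculation described above runs like this. The rent share (20.7 percent) and the large-metro average parity (107) come from the BEA figures cited in this post; the per capita income in the example is a hypothetical round number, not an actual metro figure.

```python
AVG_RENT_PARITY = 107.0  # average rental price parity, 51 largest metros
RENT_SHARE = 0.207       # BEA's share of per capita income devoted to rent

def housing_cost_differential(rent_parity, per_capita_income):
    """Dollars per person a metro's residents pay for housing above (+)
    or below (-) the large-metro average."""
    return (rent_parity - AVG_RENT_PARITY) / 100 * per_capita_income * RENT_SHARE

# A metro with Houston's rent parity (98.5) and a hypothetical $50,000
# per capita income would pay roughly $880 per person less than average:
print(housing_cost_differential(98.5, 50_000))
```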

A Postscript: Two Big Challenges with Comparing Living Costs

We present the estimates of housing cost differentials here, and compare them to the magnitude of the sprawl tax, to illustrate how important urban form is to our economic and personal well-being. But it's also worth noting that conventional cost comparison measures understate two important economic advantages of cities.

First, as with other commodities, differences in prices often signal the value that consumers attach to different objects. Housing may be more expensive in San Francisco or Hawaii than in Omaha or Idaho, but much of this difference reflects the value that we attach to being in a vibrant city or a sunny, tropical climate. A two bedroom apartment near the beach in Maui or on Nob Hill in San Francisco represents an entirely different set of amenities than a similarly sized apartment in Fresno or Fargo. Indeed, a whole class of economic analysis—hedonic regression—uses these price differences to estimate the value of natural and manmade amenities.

Second, there's an inherent limitation in trying to compare places that have widely different sets of attributes and consumption opportunities. Typical estimates of cost-of-living differences look only at a single commodity (like housing) or a very limited market basket of simple goods and services, and use these to compute differences in living costs between places. The assumption is that consumers or households buy the exact same mix and quantity of goods and services wherever they live. Economists have cast serious doubt on the validity of these simple-minded price comparisons. It turns out that consumers attach value to having convenient access to a big range of goods and services, something you find mostly in cities. Columbia's Jessie Handbury has shown that the greater mix, variety and convenience of shopping opportunities in larger cities means that consumers actually enjoy lower prices for the particular market basket of goods they prefer in larger cities than in smaller ones, contrary to the notion that small town prices are lower. BEA's Regional Price Parities, which imply that New York's prices for goods are 8.8 percent higher than the national average, aren't adjusted for quality and variety. According to Handbury's estimates, when you make that adjustment, prices in larger cities are actually lower than in smaller ones.

The market cap of cities

What are cities worth? More than big private companies, as it turns out: The value of housing in the nation’s 50 largest metropolitan areas ($22 trillion) is more than double the value of the stock of the nation’s 50 largest corporations ($8.8 trillion).

Market capitalization is a financial analysis term used to describe the current estimated total value of a private company based on its share price. It’s a good rough measure of what a company is worth, at least in the eyes of the market and investors. The market capitalization—or “market cap,” as it is commonly called—is computed as the current share price of a corporation multiplied by the total number of shares of stock outstanding. In theory, if you were to purchase every share of the company’s stock at today’s market price, you would own the entire company.
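The market-cap arithmetic is simple; here's a quick sketch with hypothetical numbers (the share price and share count below are not any real company's):

```python
# Market cap = current share price x total shares outstanding.
share_price = 100.0         # dollars per share (hypothetical)
shares_outstanding = 5.2e9  # total shares outstanding (hypothetical)

market_cap = share_price * shares_outstanding
print(f"${market_cap / 1e9:,.0f} billion")  # $520 billion
```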

Checking up on your cities. Credit: OTA Photos, Flickr


In roughly similar fashion, we can compute the market capitalization of cities—or at least of their housing stock. We start with Zillow's estimate of the market value of owner-occupied housing in each of the nation's largest metropolitan areas, which is computed by estimating the current market price of each house in a metropolitan area and summing those values over all owner-occupied houses. We also estimate the value of rental housing: for rented units we use a commonly accepted technique of estimating current values based on the income generated from rent. (Americans paid about $535 billion in rent in 2015, according to data compiled by Zillow; we can use this data and a standard financial formula to estimate the value of rental housing. Details of this calculation are explained below.) Then we add together the value of all owner-occupied housing and the value of rental housing to compute the total market cap of housing in each metropolitan area in the US.

Together, the 50 largest publicly traded private corporations in the United States had a market capitalization of $8.8 trillion at the end of 2015. The total market value of housing in 2015 in the 50 largest metropolitan areas was $22 trillion. For reference, the gross domestic product—the total value of all goods and services produced in the US in 2015—was estimated at $18 trillion. It's hard to find things measured in trillions of dollars, so we've juxtaposed GDP against the market cap of housing and businesses. Keep in mind that GDP is a flow (trillions of dollars per year) while the value of corporations and housing is a stock (trillions of dollars in value at one point in time).

The following table shows the market value of housing in each of the nation’s 50 largest metropolitan areas and the current market capitalization of the nation’s 50 largest publicly-traded private sector businesses.

For metro areas, the value of housing is divided into two components (owner-occupied housing) shaded blue, and rental housing (shaded orange).

The most valuable company is Apple, with a market cap of $541 billion; the most valuable metro area is New York, where the market value of owner-occupied and rental housing is $2.9 trillion—more than five times higher. The current market value of Apple is about the same as the current market value of housing in Seattle (the twelfth most valuable market on our list).

Some modest-sized metros have housing that's worth as much as the entire value of some very well-known corporations: IBM's market cap ($128 billion) is about equal to Indianapolis housing ($138 billion). Orlando's housing ($208 billion) is worth more than 25 percent more than all of Disney ($164 billion). Three Seattle-based companies (Microsoft, at $418 billion; Amazon, at $285 billion; and Starbucks, at $84 billion) are worth more combined ($787 billion) than all the housing in Seattle (about $617 billion).

The differences are smaller at the bottom end of our two league tables. The fiftieth largest firm, the oil services company Schlumberger, is worth about $15 billion more than the fiftieth most valuable metro housing market, Buffalo: $82 billion versus $67 billion.

Buffalo! Credit: Zen Skillicorn, Flickr


It may seem strange to compare the market value of houses with companies, but this exercise tells us more than you might think. Just as the share price of a corporation reflects an investor’s expectations about the current health and future prospects of a company, the price of housing in a metropolitan area also reflects consumer and homeowner attitudes about the quality of life and economic prospects of that metropolitan area. So, for example, as the price of oil has fallen, weakening growth prospects in the oil patch, it’s quickly translated into less demand and weaker pricing for homes in Houston. Just as stock market investors purchase and value stocks based on the expectation of income (dividends) and capital gains from their ultimate sale, so too do homeowners (and landlords)—they count on the value of housing services provided by their home as well as possible future capital gains should it appreciate.

In fact, these two commodities—housing and stocks—are among the most commonly held sources of wealth in the United States. And while the financial characteristics of the two investments are dramatically different, the underlying principle is the same, making market cap a useful common denominator for assessing the approximate economic importance of each entity.

Each day, the financial press reports the market’s assessment of the value of individual firms, through their stock prices. But viewed through a similar lens, the housing markets of the nation’s cities are by this financial yardstick an even bigger component of the nation’s economy.

Technical Notes

How we computed the value of rental housing. In real estate, the value of rental housing is usually estimated using a "cap rate" (capitalization rate) that approximates the rate of return on capital that real estate investors expect from leasing out apartments. To estimate the current market value of apartments, we take Zillow's estimate of the total amount of rent paid in each market and deduct 35 percent to estimate "net operating income"—the amount the investor receives after paying maintenance, other operating expenses, and taxes—and then we divide this number by a capitalization (cap) rate of 6 percent. Both of these figures (net operating income and capitalization rates) are rough estimates—values vary across different types of properties, different markets, and over time with financial conditions (such as changes in market interest rates).
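A minimal sketch of that valuation, using the expense ratio (35 percent) and cap rate (6 percent) stated above:

```python
EXPENSE_RATIO = 0.35  # share of gross rent absorbed by expenses and taxes
CAP_RATE = 0.06       # assumed rate of return for rental real estate

def rental_housing_value(gross_annual_rent):
    """Estimate the market value of rental housing from total rent paid."""
    net_operating_income = gross_annual_rent * (1 - EXPENSE_RATIO)
    return net_operating_income / CAP_RATE

# Applied to the roughly $535 billion in total US rent cited above, this
# yields a rental stock value of roughly $5.8 trillion:
print(rental_housing_value(535e9) / 1e12)
```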

Many thanks to Zillow’s Chief Economist Svenja Gudell and Aaron Terrazas for doing the hard work here of estimating property values and rental payments. For more keen insights on housing markets, follow their work at Zillow’s Real Estate and Rental Trends blog.

The Week Observed, December 2, 2016

What City Observatory did this week

1. Does Rent Control Work: Evidence from Berlin.  Economists are nearly unanimous about rent control:  they think it doesn’t work. Berlin’s recent adoption of a new rent control scheme in 2015 provides a new test case to see if they’re right.  An early analysis of the Berlin program shows that it’s done little to reduce rents, and even though the program was intended to address affordability problems for low and moderate income households, most of the benefits have gone to those renting the most expensive apartments.

Nikolaiviertel, Berlin (Flickr: oh_berlin).

2. Does Cyber Monday mean package delivery gridlock Tuesday?  The growing volume of e-commerce has led some pundits to worry that city streets will be clogged by delivery vehicles.  But while we are getting many more packages at our homes, the growth of actual truck traffic has been much slower, in large part because growing volumes produce economies of scale for shippers.  More packages mean higher delivery density, more stops per mile traveled, and less energy, pollution and labor per package delivered.  In addition, e-commerce purchases mean fewer shopping trips. On balance, e-commerce is likely to reduce, rather than increase, overall traffic.

3. Destined to disappoint: Housing lotteries.  The demand for affordable housing is so great, and the supply of subsidized housing so small, that cities frequently have to resort to lotteries to allocate units to deserving households. An analysis of New York City's lotteries for the past three years showed that nearly half of all winners fell into the 25 to 34 year old age category, leading to speculation that the lottery is somehow tilted in favor of young adults.  We look at the population that's likely to be seeking a rental apartment in New York, and find little discrepancy between that population and the lottery winners.  The bigger problem with lotteries is that so few units are available: fewer than two-tenths of one percent of all the households moving to an apartment in New York in the past year were lottery winners.

4. Why biotech strategies are often 21st Century snake oil.  Cities and states around the nation have invested hundreds of millions of dollars in public funds in efforts to make themselves the next hub of biotechnology.  But like many biotech ventures themselves, this is a high-cost, high-risk undertaking. In one particularly epic example, a small town in Minnesota spent more than $30 million in state and federal funds on highway improvements for its biotech park, based in large part on assurances from a prominent national biotech analyst that he could provide a $1 billion venture fund.  You'll never guess what happened next.

Must Read

It’s been an incredibly prolific week for “must read” articles, so we’re highlighting a few more than usual.  We have two very insightful commentaries on road safety and inclusionary zoning, and four articles dissecting the results of the November 8 national election (hopefully we’ve reached peak political post-mortem).

1. The real reason the US has so many traffic deaths.  The surge in crashes and traffic deaths in the past few years has re-kindled concern about road safety, and prompted a wave of media reports pointing a finger of blame at texting while driving.  At City Observatory, we've been skeptical of this explanation. Now, Vox has published a comprehensive essay by Garrick and his co-authors, reminding us of the big structural reasons why American traffic deaths are so much higher than in other countries–and it has almost nothing to do with texting.  Not only do Americans drive many more miles (or kilometers, if you prefer), but added driving has been spurred on by cheaper gas prices.  Garrick and his co-authors conclude that the recent increase in crash rates and deaths is almost fully explained by the decline in gas prices and lower unemployment rates. That's not to say texting in a car is a good idea, but our road safety problems are more fundamental and deep-seated.

2. In many cities, inclusionary zoning–mandating that those building new housing include a fixed proportion of affordable units–is seen as an easy way to force developers to solve the affordability problem at no cost to the public.  Writing at the Sightline Institute, Dan Bertolet and Alan Durning consider whether inclusionary zoning is the most promising or the most counterproductive strategy for tackling this problem.  They argue that uncompensated inclusionary zoning–where the costs of the added units are borne entirely by the developer–simply pushes up the market price of housing, reduces the number of new units built, and actually makes housing affordability problems worse.  In theory, they say, if developers' costs are compensated or offset (by some combination of density bonuses, faster permit approvals, lessened parking requirements or tax breaks), these negative effects could be reduced or eliminated.  While that's likely to be true, the very practical question left unaddressed in this analysis–and in most IZ debates–is whether these "offsets" are large enough to truly cover the higher costs.  This article is a thoughtful exploration of many of the points that come up in debates over inclusionary zoning.  It's an absolute must-read for anyone who cares about housing affordability.

3. The election, by metro area.  Mapping how America's metro areas voted.  Richard Florida breaks out the election returns by metropolitan area, finding that most large metropolitan areas voted for Hillary Clinton, while most smaller ones voted for Donald Trump.  Clinton won more than three-quarters of the votes in the San Francisco metro area and more than two-thirds of the votes cast in the San Jose, Washington and New York metro areas.  Of the nation's largest metropolitan areas (those with a million or more population), ten (including Oklahoma City, Dallas, Pittsburgh and Cincinnati) awarded at least a majority of their votes to Donald Trump; in five others, Trump won a plurality of the presidential vote.

4. The election, by productivity.  In "Another Trump-Clinton Divide," the Brookings Institution's Mark Muro slices county-level election returns by gross domestic product.  He finds that the most economically productive counties in the US (again, overwhelmingly in large metro areas) tended to vote strongly for Hillary Clinton.  In all, the counties that voted blue in 2016 accounted for 64 percent of US GDP, compared to only 36 percent of GDP in red counties.  The economic disparity between red and blue counties has apparently widened.  In the similarly close 2000 presidential election, counties that voted for Al Gore produced 54 percent of US GDP, compared to 46 percent for the counties that voted for George W. Bush. (Imagine if electoral votes were apportioned by economic output.)


5. The election, by tech-based economic development.  The Economist’s “Graphic Detail” feature further sharpens this economic view of politics by looking at how tech-dominated counties voted in the election.  (There’s a fair amount of overlap here between high tech counties and high productivity ones).  Their summary:  “In counties that favoured Democratic presidential candidates between 2000 and 2016, employment in high-tech industries grew by over 35%. In Republican-leaning counties, such employment actually fell by 37%. Today, there are more than three times as many high-tech industry workers in places that voted for Hillary Clinton as there are in those that favoured Mr Trump.”  Something on the order of 90 percent of the nation’s employment in computer manufacturing, software publishing and information services is located in counties that voted Democratic in the 2016 election.


New Knowledge

1. The 800-pound gorilla in US retailing is the fast-growing e-commerce behemoth, Amazon.  There's little question that its growth has had a significant effect on the retail landscape, contributing first to the decline of independent bookstores, and more recently, it is argued, to the overall shrinkage of the number of retail establishments in the US.  A new report from the Institute for Local Self-Reliance–Amazon's Stranglehold: How the Company's Tightening Grip is Stifling Competition, Eroding Jobs, and Threatening Communities–takes a comprehensive and critical look at Amazon's growth and impacts.  There's a huge amount of information here, addressing everything from the growth of e-commerce and Amazon's market share, to working conditions in Amazon warehouses, to its competitive effects.  While the report's tone can be a bit hyperbolic, and its title and chapter heads leave little doubt as to the authors' feelings–"monopolizing the economy, undermining jobs and wages, weakening communities"–there's plenty of hard data as well.

2.  More evidence on lead and crime. A growing body of research points to the substantial role that exposure to lead played in determining crime rates in US cities. While much of the research examines the correlation between atmospheric lead (from burning leaded gasoline) and the rise and subsequent decline in urban crime rates, a new study looks at a different source of exposure: lead water pipes. Many cities routinely used lead water pipes at the end of the 19th century, and by comparing crime rates in cities with lead and iron water pipes, James Feigenbaum and Christopher Muller are able to tease out the connection between lead exposure and city crime. In their paper "Lead Exposure and Violent Crime in the Early Twentieth Century," they show that cities with lead water pipes had crime rates that were 24 percent higher than cities that didn't use lead.


21st century snake oil

Thanks to technological innovations, our lives are in many ways better, faster, and safer: We have better communications, faster, cheaper computing, and more sophisticated drugs and medical technology than ever before. And rightly, the debates about economic development focus on how we fuel the process of innovation. At City Observatory, we think this matters to cities, because cities are the crucibles of innovation, the places where smart people collaborate to create and perfect new ideas.

While the emphasis on innovation is the right one, like any widely accepted concept, there are those who look to profit from the frenzy of enthusiasm and expectation.

Around the country, dozens of cities and many states have committed themselves to biotech development strategies, hoping that by expanding the local base of medical research, they can generate commercial activity—and jobs—at companies that develop and sell new drugs and medical devices. There's a powerful allure to trying to catch the next technological wave, and using it to transform the local economy.

Over the past decade, for example, Florida has invested in excess of a billion dollars to lure medical research institutions from California, Massachusetts and as far away as Germany to set up shop in the Sunshine State. Governor Jeb Bush pitched biotech as a way to diversify Florida’s economy away from its traditional dependence on tourism and real estate development.

The historic Florida capitol. Credit: Stephen Nakatani, Flickr


Of course it hasn't panned out; Florida's share of biotech venture capital—a key leading indicator of commercialization—hasn't budged in the past decade. And several of the labs that took state subsidies are downsizing or folding up their operations as the state subsidies are largely spent. Massachusetts-based Draper Laboratories (which got $30 million from the state) recently announced it was consolidating its operations at its Boston headquarters and closing outposts in Tampa and St. Petersburg—in part because it was apparently unable to attract the key talent it needed. The Sanford-Burnham Institute, which got over $300 million in state and local subsidies, is contemplating leaving town and turning its Orlando facilities over to the local branch of the University of Florida.

And while Florida's flagging biotech effort might be well-meant but unlucky, in one recent case, the spectacular collapse of a development scheme has to be chalked up to outright fraud. As the San Francisco Chronicle's Thomas Lee reports, both private and public investors have succumbed to the siren song of biotech investment. Last month, the Securities and Exchange Commission issued a multi-million dollar fine, and a lifetime investment ban, to Stephen Burrill, a prominent San Francisco-based biotech industry analyst and fund manager. Burrill diverted millions of dollars meant for biotech startup funds to his personal use. Not only that, but Burrill was a key advisor to a private developer who landed $34 million in state and federal funds to build a highway interchange to serve a proposed biotech research park in rural Pine Island, Minnesota, based on Burrill's promise that he could raise a billion-dollar investment fund to fill the park with startups. In the aftermath of the SEC action, Burrill is nowhere to be found, and the Elk Run biotech park sits empty.

But puffery and self-dealing are nothing new on the technological frontier, or indeed in the world of economic development. The most recent example is biomedical equipment maker Theranos, which claimed that it had produced a new technology for performing blood tests with just a single drop of blood. The startup garnered a $9 billion valuation, and conducted nearly 2 million tests before conceding that its core technology didn't in fact work. Theranos has told hundreds of thousands of its patients that their test results are invalid. As ZeroHedge's Tyler Durden relates, the company rode a wave of fawning media reports that praised its disruptive "nano" breakthrough technology (WIRED) and lionized its CEO as "the world's youngest self-made female billionaire" and "the next Steve Jobs." All of that is now crashing to earth.

When it comes to biotech breakthroughs, consumers, investors and citizens are all easy prey for the hucksters that simultaneously appeal to our fear of illness and disease and our hope—borne from the actual improvements in technology—that theirs is just the next step in a long chain of successes. Investors pony up their money for biotech—even though nearly all biotech firms end up money losers, according to the most comprehensive study, undertaken by Harvard Business School’s Gary Pisano. And as my colleague Heike Mayer and I pointed out nearly a decade ago, it’s virtually impossible for a city that doesn’t already have a strong biotech cluster to develop one now that the industry has locked into centers like San Francisco, San Diego and Boston.

At first glance, biotech development strategies seemed like political losers: you incur most of the costs of building new research facilities and paying staff up front, and it takes years, or even decades, for the fruits of research to show up in the form of breakthroughs, products, profits and jobs. No mayor or governor could expect to still be in office by the time the benefits of their strategy were realized. But as it turns out, the distant prospects of success always enable biotech proponents to argue that their efforts simply haven't yet been given enough time (and usually, also resources) to succeed. And likewise, no one can pronounce them failures. When asked why the struggling Scripps Institute in West Palm Beach hadn't produced any of the spin-off activity expected, local economic developers had a ready explanation, reported the Palm Beach Post:

"Biotech officials urge patience and repeat the mantra that a science cluster needs decades to evolve. 'This takes a lot of time to develop,' said Kelly Smallridge, president of the Business Development Board of Palm Beach County."
“The biotech bonanza Jeb Bush hoped for? It didn’t go as planned,” Palm Beach Post, June 15, 2015

So rather than being a liability, the long gestation period of biotech emerges as a political strength. Apparently, you’ve got to give the snake oil just a little bit more time to kick in.

Equity and Parks

Last week, our friend and colleague Carol Coletta delivered a "master talk" to the 66th Annual Conference of the International Downtown Association. Carol is President and CEO of the Memphis River Parks Partnership, and a recognized thought leader on urban issues. Here are her reflections on the role of parks and public spaces in meeting the key challenges of our time, overcoming social distance and building stronger and more successful communities.

Great public spaces, especially parks in and near downtowns, can be an essential venue for social and economic mixing, promoting both vitality and empathy: Equity does not sit in opposition to a thriving, appealing city. It is central to it.

Successful downtowns increasingly depend on great public space.  And great public space located in a downtown is more likely to be equitable space because of its location, not despite its location.  


These past six months have been more challenging to downtowns than any I remember – and I’ve been working on, investing in and living in downtowns for almost half a century.  We are being asked to reconsider everything we believe about downtowns – why they are important, and how they work.  

Carol Coletta

Not one of us really knows how this will all turn out.  But I am going to go out on a limb here and tell you there are two sure bets: investing in parks and investing in equity.  And if we do it right, an investment in parks will be an investment in equity.

Here in Memphis, we are in the final stages of a capital campaign to build a 31-acre park on the Mississippi River adjacent to downtown.  This follows the completion of three other capital projects on the riverfront in the past two years – the remaking of two parks that carried Confederate names and creating a five-mile bike/ped trail along our riverfront.  

Two other major projects are currently underway:  the restoration of the largest historic Cobblestone Landing in America and a complete transformation of our original “Main” library, also on the riverfront.  

This work was sparked by Memphis’ participation in a groundbreaking initiative called “Reimagining the Civic Commons.”  The initiative is supported by the JPB Foundation, Knight Foundation, the Kresge Foundation, and the William Penn Foundation, along with, in our case, the Hyde Family Foundation and the City of Memphis. 

This initiative challenged us to think of civic assets as having purpose beyond the obvious – to lay claim to the reality that assets like parks, trails, libraries, cultural centers and the like can and should increase civic engagement, promote environmental sustainability, add value to their surrounding neighborhoods and promote socio-economic mixing.

You can think of it as the four E's:

  • Engagement
  • Environment
  • Economy
  • Equity

But to do that – to get these assets to perform in new ways — is a heavier lift than I imagined when we birthed this five years ago.  Why?  

We encountered four principal hurdles:

  • Parks, libraries, trails, and cultural centers are industries, whose leadership and employees have historically been trained, like most of us, to think narrowly and vertically about their work.
  • These assets are created and then operated with “minimum viable product” budgets that drive away people with financial options for where they spend time.
  • These assets are too often an afterthought for those who fund them.  They are considered “nice to have” but not essential infrastructure. 
  • The fear of “gentrification” looms so large that the desire to build and run great assets with the power to attract people across the income spectrum is immediately deemed suspect.

This last hurdle – fear of “gentrification” – is a special problem for civic assets built in downtowns.  Too often, downtown parks or cultural centers or libraries are considered “glamour assets.”  And they are located downtown.  Thus, they cannot possibly be equitable.  

I fervently disagree.  What we’re building in downtown Memphis demonstrates why.  

Yes, downtown Memphis is a neighborhood that has “turned around” from predominantly low income to higher income.  But it is surrounded on the east by a crescent of persistently poor neighborhoods – neighborhoods that are home to 40% of the city’s poor children.  The riverfront is within walking or biking distance of these kids.  And they come… every day.

In fact, the riverfront is some of the most equitable space in Memphis – it is free, it is open to everyone, it is one of the few public places in the region where you now find very poor people and very rich people sharing the same space.  The reason?  It feels like a vacation – special, elevated – because it is clean, it’s beautifully landscaped, it’s well designed and well managed, and it’s fun.  Turns out, if you create the right environment, people mostly enjoy being in the company of strangers.   

That’s in the DNA of any successful downtown.  You may have to work hard these days to get low income housing in downtowns.  But you typically don’t have to work hard at all to get low income people to public spaces in downtown.

Why does that matter?  Because sharing space regularly with strangers – including those who don’t look like you — breeds empathy.  Empathy is essential to community.  And community is essential to democracy.

As blues artist Keb Mo once put it, “You can’t feel ‘em, if you can’t see ‘em.”  The public space you create in downtown allows us to “see ‘em.”   

We know that if people of different incomes live in close proximity to one another, there is far more upward mobility for poor people.  The research on that is clear.

The problem is, we haven’t figured out how to make that alluring, to make that stick or to do it at scale.

We can’t force rich people and poor people to live near each other or send their kids to the same schools.  But we can encourage what I believe is the next best thing by seducing them into a shared, robust public life.  

Nothing about that is easy.  The design, maintenance and management of public space must be ambitious, sometimes clever and always resolute.  But public space that routinely attracts people across the income spectrum and across demographics feels to me like the “gateway drug” to shared community, a healthy democracy, and more equitable economies.

I wish we could all run the experiment on that.  Take the next five years and operate our public spaces and our downtowns through a mission lens of creating shared community, healthy democracy and equitable economies.  Sign me up for that!  

As ambitious as that may be, does our pursuit of equity end with providing space so alluring, so seductive that it attracts people of all incomes?  Hardly!

In addition to creating welcoming space for all, our equity strategy has three more parts.

It starts closest to home with staff and board.  How do we hire?  How do we pay?  How do we promote?  And how do we recruit to our board?  

The next layer is our contracting.  In our last two capital projects, we had Minority and Women-Owned Business Enterprise performance on one project at 43% and the other at 86%.  Our MWBE operating expense purchasing is at 38%, increasing from 8% in the two years since our current leadership team began. 

The final layer is connecting with the community.  And in Memphis, that means, in particular, African-Americans, because Memphis is a majority black city, in a majority black county, in a  metro area that is predominantly African-American.  If black Memphians aren’t showing up in big numbers in our workforce, on our board and in our parks, we are missing the market.  

We connect with the community in all the traditional ways, of course.  We show up at community fairs; we do talks, open houses, and public Zoom meetings; we invite students to help us build one park, volunteers to help us on special projects each month, and high school students to work alongside the design team on our new park.  We program specifically with socio-economic mixing in mind–different demographics occupying the same space at the same time.

But we are also working to establish welcoming physical connections to that crescent of disadvantaged neighborhoods just outside of downtown. It is striking how disconnected a neighborhood only 8 or 9 blocks away from the riverfront can feel because of missing sidewalks, dilapidated buildings and vacant lots.  We are working hard to change that. 

Is it enough?  No.  But the strategy I’ve described is mission-critical and is being accomplished with no major new “outside” funding.  

Convincing people to “live life in public” is one of the greatest services you and I can perform for our cities.  Because parks are not just places to unwind or recreate, just like downtowns are not simply places to conduct business.  They are deeply necessary platforms for equity.

Adam Gopnik, writing in The New Yorker, described the mixing we need in cities this way:  “Cities shine by bringing like-minded people in from the hinterland (gays, geeks, Jews, artists, bohemians), but they thrive by asking unlike-minded people to live together in the enveloping metropolis. While the clumping is fun, the coexistence is the greater social miracle.”  

So think of yourself as an alchemist trying to spark that social miracle of coexistence of unlike-minded [and unlike-looking] people.  When you believe that is at the heart of your mission, that’s where the equity work begins.  And in today’s very divided, very fraught, very threatened nation, it can, indeed, feel as daunting to achieve as a miracle.

But remember:  Equity does not sit in opposition to a thriving, appealing city. It is central to it.

The good news is that a commitment to equity should be the easy organizational choice.  

  • If you have a more diverse staff, you benefit from their diverse perspectives.  
  • If you grow the talent of your staff, you benefit and so do they.
  • If your board is more diverse, their broader networks benefit your mission.  
  • If you find more minority contractors, you have more choices on whom you hire.  Plus, you benefit from their support and their networks, and the community in which you exist and that you serve gets stronger.  
  • If your connections to the community are broadened (and deepened), you gain new perspective, new support, and in the best circumstances, you and your community get stronger.

We don’t know all we need to know about the future of our downtowns just yet.  We don’t know how they will change.  

But we do know this:  Great downtowns increasingly depend on great public space.  And great public space located in a downtown is more likely to be equitable space because of its location, not despite its location.   

If there is anything the past six months have taught us, it is this:  public space and the pursuit of equity are more important than ever.  They ought to be joined at the hip.  This is the moment for us to make big bets on both because they are the most certain bets we can make.

Why cities need to embrace change

This is the text of a speech delivered in Detroit to the Congress for New Urbanism conference by Carol Coletta, a senior fellow at the Kresge Foundation’s American Cities Practice.


Could there be a more apt place to observe “The Transforming City” than Detroit?

On behalf of Rip Rapson and my colleagues at the Kresge Foundation, welcome to Detroit. If you travel to Detroit regularly, as I have over the past 15 years, you see that Detroit changes quickly.

The speed of change here sometimes takes your breath away.

Carol Coletta

How many of you have walked the Detroit Riverfront or ridden the Dequindre Cut?

Visited the expanding Eastern Market?

Seen the Q Line construction on Woodward?

Eaten a meal at Selden Standard or Wright & Company, one of those meals so special that it deserves its own social media channel?

Walked the streets of downtown or Midtown and discovered Great Lakes Coffee, City Bird, or the El-Moore Lodge?

Or met Claire Nelson at the Urban Consulate, or any one of Detroit’s arts and civic innovators responsible for some of the most exciting urban work in the country?

This is the Detroit you can see right outside this theatre.

But there is another Detroit, one that is harder to see. It’s the Detroit that feels threatened by the pace of change in the city, suspicious of newcomers eager to be part of the change, and wondering when their loyalty to Detroit will be rewarded.

Such feelings are not unique to Detroit. Every morning my Google Alerts brings a new batch of headlines from around the country detailing the gentrification battles.

Because “new urbanism” is the butt of some of this criticism, I want to spend the next few minutes unpacking the myths and the realities of gentrification and what those of us who care about great places can do about it.

First, let me share some numbers.

In 1970, about eleven hundred urban Census tracts were classified as high poverty.

By 2010—40 years later—the number of high poverty Census tracts in urban America had increased from 1,100 to more than 3,000 (3,165, to be exact).

The number of people living in those high poverty Census tracts had increased from 5 million to almost 11 million. And the number of poor people in high poverty Census tracts had increased from 2 million to more than 4 million.

So over a 40-year period, the number of high poverty Census tracts in America’s core cities had tripled, their population had doubled, and the number of poor people in those neighborhoods had doubled.

Given that record, I'll bet a lot of people are hoping for a little gentrification—if gentrification means new investment, new housing, new shops without displacement.

The idea that places might benefit from gentrification runs against the popular narrative. But here's the really startling fact: only 105 of the eleven hundred Census tracts that were high poverty in 1970 had rebounded to below-poverty status by 2010. That's less than ten percent! Over 40 years!

A similar study of Philadelphia by Pew found almost exactly the same result in that city’s neighborhoods. There, ten times as many poor neighborhoods (164) experienced real declines in income as experienced gentrification since 2000.

It is the lack of gentrification that we rarely count and never see. The deterioration happens too slowly for us to notice. But it doesn’t mean the deterioration isn’t devastating. In fact, the high poverty neighborhoods of 1970 lost 40 percent of their population in 40 years.

You could make the case that poor people are displaced from poor neighborhoods because of their poor schools, their lack of jobs, their more chaotic public spaces, their lack of opportunity.

Understand, this is not the fault of the people who live there. This is a public policy failure.

But… when a combination of government intervention, philanthropic support, community development, and market forces combine to change a place as quickly as Detroit—even when that change means new residents, new jobs, and new places to live—it also rightfully generates concern.

See, we are conflicted about change. Many of us wish we could fix place in time.

But neighborhoods do change. You know that. You change them. And when change results in mixed income neighborhoods—in other words, when we achieve investment without displacement — it’s good for everybody.

The research on this is quite clear: The ability of people to improve their economic status from one generation to the next is strongly correlated with mixed-income neighborhoods.

Many of the public policy interventions to achieve economically integrated neighborhoods have supported poor people moving to wealthier neighborhoods. But that is an expensive, slow political slog that is hard to scale.

But what if we flipped that script? What if… we could lure people with financial options about where they live to disinvested neighborhoods—resulting in the kinds of places that enable opportunity?

And what if we also made a special effort to ensure that the people remaining in low-income neighborhoods—people without options about where they live—benefited from new people and new investment in their neighborhoods?

The research tells us that mixed-income neighborhoods benefit poor people naturally. But can we double down to accelerate those benefits?

Think of it this way: Can we get gentrification with broadly shared benefits?

I think so. But it’s not easy. Remember: Only 10 percent of high poverty neighborhoods “gentrified” over the past 40 years. And today we have triple the number of high poverty neighborhoods than we had 40 years ago.

Clearly, mixed income neighborhoods won’t happen if we don’t work at it.

So how can we do that?

First, let’s acknowledge that, for the first time in 50 years, the market is moving in our favor. People (and jobs) are moving to cities. We need to see that as the opportunity it is to get mixed-income neighborhoods and not fear good, thoughtful development.

That means we can’t let NIMBYs win the day. The same people who complain about high prices also complain when developers show up to build more supply. We have to make the connection between supply and demand for the protesters and the press.

But attention must be paid to creating more mixed income housing. Our success on this has been mixed, and I'm struck by the comparison of the methods used in New York City and in Portland, Oregon's Pearl District to create more affordable housing in mixed income settings.

As City Observatory reported, the City of New York, one of the nation's hottest housing markets, has had inclusionary zoning for the past 10 years. Over that time, the city has produced an average of 280 units per year, for a total of 2,800 units.

In contrast, Portland took a very different approach. Portland used additional property tax revenue from construction in one neighborhood to subsidize affordable housing. Using just a third of such revenues from The Pearl District (along with Low Income Housing Tax Credits), Portland has built more than 2300 units of affordable housing—almost as many units as the much larger New York.

Portland's Pearl District is an example of a desirable neighborhood. The cost of desirable neighborhoods goes up. And it is the fear of rising costs and new investment (and sometimes changing demographics) that spawned the "just green enough" movement.

Think about that: Disinvested neighborhoods lack access to parks and quality public space. But wait! Let’s not make it too nice for fear it will attract new investment. That’s craziness born out of legitimate frustration when prices start going up.

The fact that buyers and renters are willing to pay more for quality neighborhoods means we need to build more of them, not fewer of them.

How do we do that at scale?

When someone calls for new investments in infrastructure to stimulate the economy, will we be ready with a plan that defines infrastructure as something more than roads and bridges?

Why can’t “infrastructure” include new and redesigned parks and libraries, neighborhood community and cultural centers, trails and gardens—a reimagined civic commons? That’s the defining line I want to hear from our next president. I want so many desirable neighborhoods that people will have good choices at all price points.

The way we live today is changing so fast. We are decoupling and recoupling. We have mothers raising kids alone, and people delaying childbearing—some forever—who want to help. We are sharing jobs, cars and homes. We are retiring later and living longer. And our lives, increasingly, are lived in public.

We need to ready our cities for these changes. We need to figure out how to revalue what exists and give new life to the material, the buildings, the neighborhoods, the cities and the people we too often discard and write off.

Equity does not sit in opposition to a thriving, appealing city. It is central to it.

This is the work of CNU. This is your work. And that’s why I’m happy to be with you here in Detroit to celebrate and learn alongside you this week. Thank you for inviting me.

How sprawl taxes our well-being

In the first installment of our “Sprawl Tax” series, we explained how laws and patterns of development that make our homes, businesses, and schools farther apart cost us time and money—on average, nearly $1,400 a year per commuter in America’s 50 largest metropolitan areas. In the second installment, we showed how the Sprawl Tax is levied much more heavily on Americans than our international peers, with US commuters paying a much larger proportion of their income on transportation and spending much more time on their trips to and from work than people in other wealthy countries.

Today, we want to talk about another cost of sprawl, and the greater distances it forces us to travel: Our quality of life. Powerful evidence suggests that longer commutes make us individually less happy and less healthy, in addition to having detrimental effects on our communities. In recent years, behavioral economics has made great strides in determining how different factors influence our happiness. Consistently, this literature finds that long commutes are strongly associated with lower levels of “subjective well-being”—the technical term that researchers use to describe “happiness.”

One study from Germany, for example, calculated that reducing one’s daily commute time from 23 minutes each way (the German average) to zero minutes would produce an increase in happiness equal to about an 18 percent increase in income. Research in other countries, including the United States, has produced similar results.

In a survey of working women in Texas, behavioral economist Daniel Kahneman and his collaborators found that time spent commuting had the lowest positive ratings of all daily activities.

Other studies have confirmed that commute distances are correlated with happiness and health. The Gallup Healthways Index shows that Americans with longer commutes report lower levels of subjective well-being. The data also show that long commutes are correlated with a higher incidence of back pain, obesity, and high cholesterol.

Minutes from home to work    Average Index Score
0-10                         69.2
11-20                        68.3
21-30                        67.5
31-45                        67.1
46-60                        66.4
61-90                        66.1
91-120                       63.9

Source: Gallup

We also have detailed data from a survey taken by the state of Connecticut. For nearly every income group, self-reported well-being declined as commute distance increased. The chart below shows that relationship. The power of commuting distance was such that low-income households (making under $30,000) with a roundtrip commute of 40 minutes or less reported being as happy as households making roughly twice as much money (between $50,000 and $75,000), but with commutes of 80 minutes or more.

 

It’s not a surprise, then, that average commute times are also correlated with satisfaction with the local transportation system itself. Using data from a survey of homeowners commissioned by Porch, an online home improvement information firm, and the median commute length as calculated by the Brookings Institution, we can see a strong negative correlation between metro area commute times and satisfaction with the region’s transportation system: the longer the median commute, the less satisfied homeowners are.

 

 

Conversely, it turns out that transportation satisfaction is almost completely uncorrelated with “congestion”—at least as it’s often measured. As you can see below, the Urban Mobility Scorecard ratings of metropolitan traffic congestion calculated by the Texas Transportation Institute bear almost no relationship to whether homeowners report being satisfied with their region’s transportation system. If anything, congestion is associated with more satisfaction.

 

 

Taken together, this analysis suggests that overall commute distances—and not traditional measures of traffic congestion—are the chief factor influencing homeowner perceptions about transportation.

Finally, there is evidence that longer commutes have social, as well as personal, costs. Robert Putnam reported that each additional ten minutes of commute time reduces social capital—things like church-going, civic participation, club attendance—by 10 percent.

As we’ve shown, Americans around the country bear the financial burden of the sprawl tax. But sprawling, car-dependent development patterns don’t just end up costing us time and money. The long commutes they engender also make us less happy, are correlated with lower levels of mental and physical health, and reduce our social capital. Among metropolitan areas, long commutes—and not traffic congestion—are what we find least satisfactory about our transportation systems.

Sprawl Tax: How the US stacks up internationally

In our first post on the “Sprawl Tax,” we explored the ways that our decisions about how to build American cities have imposed significant costs—in money, time, and quality of life—on all of us. We pay more to drive more, spend more time traveling instead of being at our destinations, and, as a result, deal with more stress than we would if our destinations weren’t so widely separated from one another.

It’s a good bet that the people who live here have long and expensive commutes. Credit: Kaizer Rangwala, Flickr

 

But the sprawl tax isn’t equally costly for everyone. We’ve shown how cities that are more compact enable their residents to spend significantly less time and money on transportation than less compact cities.

And what about beyond our borders? When we compare the typical American to her counterpart in other rich countries, it’s clear that the sprawl tax is a national concern, dragging down our disposable income and free time relative to residents of other countries.  To illustrate the international dimension of the sprawl tax, we draw on data on travel time and transportation spending compiled by Stephen Redding and Matthew Turner in a recent paper on the connections between transportation infrastructure and urban form.

Household Budgets

The table below shows the fraction of household income that people in 15 European countries, Canada, and the US spent on transportation from 2005 to 2009, as well as how much time the average worker in each of these countries spends on a daily roundtrip commute. Let’s take the spending side first.

During this time, the average American household spent approximately 18 percent of its budget on transportation. Among the other 16 countries, none spent more than 16 percent, and the average spending level was just 12.8 percent. That means a typical US household spent about five percentage points more of its income on transportation than the residents of other developed countries—which translates to about $1,500 every year.

Here’s how we arrive at this figure: In 2007 (the median year of the estimates presented above) disposable household income in the United States was $31,000 compared to an average of $21,000 in these other nations, according to estimates from the World Bank, based on purchasing power parities that adjust for price differences between countries. If US households spent the same share of their incomes on transportation as did the households in the typical high income country, they would have spent about $1,500 less (.05 * 31,000) on transportation.
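
The arithmetic can be sketched in a few lines (the constant names are ours; the inputs are the figures quoted above):

```python
# Figures quoted in the text: 2007 disposable household income (World Bank,
# PPP-adjusted) and average 2005-2009 transportation budget shares.
US_DISPOSABLE_INCOME = 31_000   # average US disposable household income, dollars
US_TRANSPORT_SHARE = 0.18       # US share of household budget on transportation
PEER_TRANSPORT_SHARE = 0.128    # average share in the 16 peer countries

# The text rounds the ~5.2-point gap down to 5 points before multiplying.
share_gap = round(US_TRANSPORT_SHARE - PEER_TRANSPORT_SHARE, 2)   # 0.05
extra_spending = share_gap * US_DISPOSABLE_INCOME

# About $1,550, which the text rounds to roughly $1,500 per household per year.
print(f"Extra annual transportation spending: ${extra_spending:,.0f}")
```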

Commute Time

The average American worker spent about 51 minutes commuting between home and work and back again—more than all but one other country in the sample. That exception was Canada, where the typical worker spent 63 minutes on their commute—but the rest were lower, all the way down to Portugal, where roundtrip commutes took up just 29 minutes per worker per day. For the other 15 nations examined the average commute time was 39 minutes—about 12 minutes less per day than in the US.

This means that over the course of a year with 250 working days, the typical American commuter spends about 51 more hours (a bit over 12 minutes a day, times 250 days) commuting to and from work than workers in other high income countries. Valued at $15 per hour, the additional cost of commuting to US workers comes to about $770 per worker per year.
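
A quick sketch of that calculation, using the rounded figures quoted above (the text’s 51 hours and $770 evidently reflect the unrounded commute-time gap):

```python
# Rounded inputs from the text.
US_ROUNDTRIP_MIN = 51      # average US daily roundtrip commute, minutes
PEER_ROUNDTRIP_MIN = 39    # average across the other 15 countries, minutes
WORKING_DAYS = 250         # working days per year
HOURLY_VALUE = 15          # assumed value of travel time, dollars per hour

extra_minutes_per_year = (US_ROUNDTRIP_MIN - PEER_ROUNDTRIP_MIN) * WORKING_DAYS
extra_hours = extra_minutes_per_year / 60
time_cost = extra_hours * HOURLY_VALUE

# Rounded inputs give 50 hours and $750; the unrounded minute gap in the
# underlying data pushes these to the text's ~51 hours and ~$770.
print(extra_hours, time_cost)
```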

The International Sprawl Tax

Compared to other high income nations, Americans spend about $1,500 more per household on transportation and about $770 more per worker on commute time. While there are many reasons for these disparities, in large part they reflect American cities’ more sprawling development patterns, and they represent a sprawl tax paid by Americans.

Achieving scale in affordable housing

There’s little question that housing affordability is a growing problem in many cities around the country. Rents have been rising faster than incomes, especially for low- and moderate-income households.

One of the most widely touted policy responses is “inclusionary zoning,” which requires developers who build new housing to set aside a portion (typically 10 to 20 percent) of new units to be sold or rented for less than the market price.

In many respects, inclusionary zoning seems like a win-win, free-lunch policy: by making developers pay for new affordable housing, these new homes don’t directly cost taxpayers a dime. But developers have to make up the cost of these below-market units somewhere, and typically they do so by passing the costs on to the buyers of the market-rate units in their development. At least one study* suggests that this results in higher prices. In some cases, cities offer density bonuses to developers to ease the financial burden of constructing below-market units, but it’s far from clear that the bonuses cover the additional costs, and the uncertainty and negotiation that attend these frequently discretionary approval processes add to costs.

But the larger problem with inclusionary zoning requirements is that they may simply not be up to the scale of the problem. Although dozens of jurisdictions have enacted inclusionary zoning requirements, they simply haven’t produced many units of housing. Consider New York City’s decade-old policy. In many ways, New York ought to be a perfect place for inclusionary zoning, which tends to do best in hot real estate markets. But in one of the nation’s hottest housing markets, in its largest city, inclusionary zoning produced about 2,800 units of affordable housing in its first decade—about 280 per year, in a metropolis of over eight million people.

Credit: Josh Liba, Flickr

 

Most inclusionary zoning programs are much smaller, and cities have less leverage with developers because market-rate development is not nearly as profitable as it is in robust markets like New York. A recent compendium of inclusionary zoning programs showed that only six cities nationally operated inclusionary zoning programs that had produced more than 100 units per year, and just one jurisdiction—Montgomery County, Maryland, a high income suburb of Washington, DC—accounted for nearly half of all inclusionary zoning units.

The fundamental problem with inclusionary zoning is also its primary advantage: it asks for, and receives, virtually no taxpayer money. But skimming off the top of developer profits is almost by definition an inadequate source of funding for affordable housing, particularly in places like New York and San Francisco where the need is most acute. Newly built housing generally amounts to a fraction of one percent of a city’s housing stock in any given year; housing that triggers inclusionary requirements is less than that; and you then have to reduce that number by 80 to 90 percent to get to the 10 to 20 percent set-aside of affordable units. It’s not an accident that Montgomery County has built so much inclusionary housing, relatively speaking—it’s also built vastly more housing, period, than most cities, nearly doubling its population since 1970. How many inclusionary housing advocates in other parts of the country are eager for such a breakneck pace of development?
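
To see why the yield is so small, consider a stylized back-of-the-envelope calculation. Every number here is hypothetical, chosen only to illustrate how the small shares compound:

```python
# A hypothetical city, purely illustrative.
housing_stock = 300_000          # total housing units in the city
new_construction_rate = 0.01     # new units per year, as a share of the stock
share_subject_to_iz = 0.5        # assumed share of new units that trigger IZ
set_aside = 0.15                 # assumed 15% affordable set-aside

new_units = housing_stock * new_construction_rate            # ~3,000 per year
affordable_units = new_units * share_subject_to_iz * set_aside

# Three small fractions multiplied together: roughly 225 affordable units
# a year in a city of 300,000 homes.
print(round(affordable_units))
```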

Solutions, then, are likely to require some actual tax money. One possibility: dedicate a portion of the added property tax revenue from new real estate construction to subsidizing affordable housing. Portland, Oregon has dedicated about a third of such revenues to affordable housing, and has built more than 2,300 units of affordable housing in one neighborhood near downtown—nearly as much as New York’s affordable housing ordinance has produced across all five boroughs. Portland has committed $67 million in tax increment funds over the next decade to support affordable housing in the city’s fast-changing neighborhoods. And unlike inclusionary zoning, tax increment financing doesn’t have the undesirable side effect of driving up the price of market-rate housing or constricting the supply of market-rate units.

Ultimately, a solution that addresses the scale of the nation’s affordability problems will have to tackle the nation’s highly skewed subsidies to homeownership by higher-income households. The combination of the mortgage interest deduction, property tax deduction, capital gains exemption and the non-taxation of imputed rents amounts to a federal subsidy to owner-occupied housing on the order of $250 billion per year, most of which goes to the nation’s highest-income households. There’s a lot we could do: expand funding for rental vouchers, which reach only 22 percent of those who qualify, or tap the capital gains that accrue to homeowners (in substantial part due to the constriction of housing supply by zoning regulations). But it should be clear that feel-good programs like inclusionary zoning are mostly a token response to a problem of much more substantial dimension.


* See: Schuetz, Meltzer & Bean, Silver bullet or trojan horse? The effects of inclusionary zoning on local housing markets in the United States, Urban Studies 2011;48(2):297-329. The authors note that most inclusionary zoning programs have had a modest scale relative to housing markets, and conclude: “Results from suburban Boston suggest that IZ has contributed to increased housing prices and lower rates of production during periods of regional house price appreciation. In the San Francisco area, IZ also appears to increase housing prices in times of regional price appreciation, but to decrease prices during cooler regional markets. There is no evidence of a statistically significant effect of IZ on new housing development in the Bay Area.”

Neighborhood change in Philadelphia

Last week, the Pew Charitable Trusts released a fascinating report detailing neighborhood change in Philadelphia over the past decade and a half. “Philadelphia’s Changing Neighborhoods” combines a careful, region-wide analysis of income trends with detailed profiles of individual neighborhoods.

Using tract-level income data, Pew researchers classified Philadelphia neighborhoods according to their median income in 2000 and the change in their median income between 2000 and the 2010-2014 five-year American Community Survey.

A tract counted as “gentrifying” if its income was below 80 percent of the regionwide average in 2000, but grew by at least 10 percent in real terms by 2014, and its income was then in the top half of all the neighborhoods in the city of Philadelphia.
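
Pew’s three-part test can be sketched as a simple predicate. The function and argument names are ours, and the actual analysis surely handles inflation adjustment and neighborhood ranking in more detail:

```python
def is_gentrifying(income_2000, regional_avg_2000, income_2014, in_top_half_2014):
    """Sketch of Pew's rule, per the description above.

    Incomes are assumed to be in constant (inflation-adjusted) dollars;
    `in_top_half_2014` flags whether the tract's 2014 income ranked in the
    top half of Philadelphia neighborhoods.
    """
    started_low = income_2000 < 0.8 * regional_avg_2000   # below 80% of region in 2000
    grew = income_2014 >= 1.10 * income_2000              # at least 10% real growth
    return started_low and grew and in_top_half_2014

# A tract at 60% of the regional average that grew 13% and climbed into
# the city's top half qualifies; one that started at 90% does not.
print(is_gentrifying(30_000, 50_000, 34_000, True))   # → True
print(is_gentrifying(45_000, 50_000, 60_000, True))   # → False
```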

Credit: Tom Ipri, Flickr

 

A couple of key conclusions emerge from this work.

Though it gets a lot of press attention and generates controversy, gentrification in Philadelphia has been rare, and is concentrated in just a few neighborhoods. By Pew’s reckoning, just 15 of the region’s 371 Census tracts (or about four percent) experienced gentrification.

For low-income neighborhoods, a continuing decline in income was a far more common outcome. In Philadelphia, ten times as many poor neighborhoods (164) experienced real declines in income as experienced gentrification since 2000.

These findings for Philadelphia echo our own analysis of neighborhood change from 1970 through 2010, presented in our report “Lost in Place.” (Lost in Place used poverty rates to identify low income neighborhoods and identified gentrification as a decline in poverty rates to below the national average in formerly high poverty neighborhoods.) Our key conclusion—that gentrification affected just five percent of those living in high poverty neighborhoods, and that most high poverty neighborhoods remained poor for decades—is very similar to Pew’s Philadelphia analysis.

Much of the controversy surrounding gentrification stems from the widespread belief that gentrification automatically results in the displacement of long-time neighborhood residents. Implicitly, many people seem to visualize neighborhood change as a kind of zero-sum game: each new resident moving in must mean that one previous resident moved out. The published academic literature, however, mostly fails to find widespread displacement. While the Pew study doesn’t address displacement directly, their research provides an interesting sidelight to this question.

The authors of the study graciously provided us with unpublished data on the population levels for each of the Census tracts in their study, sorted according to their classification of neighborhood change. Like many cities, Philadelphia has begun to experience a population increase since 2000, and gentrifying neighborhoods played an outsized role in that growth. Between 2000 and 2014, the 15 gentrifying neighborhoods grew by 13.4 percent, adding 7,000 new residents. Citywide, the population increase was only 2 percent. These 15 tracts accounted for 22 percent of citywide population growth.
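
As a rough consistency check, we can invert the percentages in that paragraph to recover the implied base populations (the derived figures are approximate and ours, not Pew’s):

```python
# Figures quoted in the text.
gentrifying_growth = 7_000        # new residents added by the 15 gentrifying tracts
gentrifying_growth_rate = 0.134   # 13.4% growth in those tracts, 2000-2014
citywide_growth_rate = 0.02       # ~2% growth citywide
share_of_city_growth = 0.22       # those tracts' share of citywide growth

# Implied 2000 population of the 15 tracts: about 52,000, or roughly
# 3,500 per tract -- consistent with typical census tract sizes.
tract_base = gentrifying_growth / gentrifying_growth_rate

# Implied citywide growth (~32,000 people) on a base of ~1.6 million,
# in the neighborhood of Philadelphia's actual 2000 census count.
citywide_growth = gentrifying_growth / share_of_city_growth
citywide_base = citywide_growth / citywide_growth_rate

print(round(tract_base), round(citywide_growth), round(citywide_base))
```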

Meanwhile, poor neighborhoods that didn’t gentrify only managed to tread water in terms of population. Overall, population in these neighborhoods increased only 0.2 percent between 2000 and 2014, and some 40 percent of all poor neighborhoods lost population. The different growth trajectories of poor neighborhoods that gentrify and those that don’t are a good reminder that neighborhood change is seldom a zero-sum game.

Special thanks to Emily Dowdall for sharing the tract level data.

Schools and economic integration

There’s a growing body of evidence that economic integration—avoiding the separation of rich and poor into distinct neighborhoods—is an important ingredient in promoting widely shared opportunity. The work of Raj Chetty and his colleagues shows that poor kids who grow up in mixed income communities experience far higher rates of economic success than those who live in neighborhoods of concentrated poverty.

We know that one of the principal channels through which this process works is the quality of local schools. Schools in mixed-income neighborhoods tend to have students from both high-income and low-income strata, and benefit from the generally higher levels of parental involvement and resources that higher-income families are able to lavish on schools. Massey and Rothwell have shown that one’s neighbors’ educational level is nearly half as powerful as one’s own parents’ level of educational attainment in explaining children’s long-term economic success, and they hypothesize that much of this effect is transmitted through the school system.

At the same time, the composition and quality of urban schools has been a critical challenge for cities around the country. For decades, as higher income families decamped cities for the suburbs—in part to get access to what were perceived as better schools—urban school districts have faced a triple whammy of declining enrollments, a growing concentration of students from poor families, and declining fiscal resources. The results are chronicled in a new Government Accountability Office report.

GAO compiled data from the National Center for Education Statistics, and classified schools as low poverty, high poverty, or all other, based on the fraction of students in each school eligible for free and reduced-price school lunches. (Low poverty schools were those where no more than 25 percent of students were eligible; high poverty schools had at least 75 percent of students eligible.) The data show that in little more than a decade the share of students enrolled in low poverty schools has fallen by half (from 39 percent to 20 percent), while the share of students in high poverty schools has increased from 14 percent to 25 percent. As the GAO report details, students of color are much more likely to attend high poverty schools: 48 percent of black students and 48 percent of Latino students attend high poverty schools, compared to only 8 percent of white students.
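
The GAO thresholds reduce to a simple rule on the free or reduced-price lunch share (the function name is ours):

```python
def classify_school(frl_share):
    """GAO poverty category from the share of students eligible for
    free or reduced-price lunch (expressed as a fraction, 0.0 to 1.0)."""
    if frl_share <= 0.25:
        return "low poverty"
    if frl_share >= 0.75:
        return "high poverty"
    return "all other"

print(classify_school(0.10))  # → low poverty
print(classify_school(0.50))  # → all other
print(classify_school(0.80))  # → high poverty
```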

Share of K-12 Students Enrolled by Poverty Status of School

School type    2000-01   2005-06   2010-11   2013-14   Change, 2000-01 to 2013-14 (pct. pts.)
Low Poverty      39%       33%       24%       20%       -19
All Other        47%       51%       56%       54%        +8
High Poverty     14%       16%       20%       25%       +11

Source:  GAO, K-12 EDUCATION: Better Use of Information Could Help Agencies Identify Disparities and Address Racial Discrimination, April 2016, GAO-16-345.

 

In the past couple of decades, as we’ve long noted, there’s been a revival in the fortunes of urban centers. In many cities, population growth has been rekindled, particularly by the movement of well-educated young adults into urban centers. But the long-term resilience of this trend depends on whether young adults will stay in cities once they start having children, a question that hinges directly on the quality of urban schools.

Against this backdrop comes news that test scores in the Washington, DC school system have chalked up some impressive gains in recent years. According to the National Assessment of Educational Progress (NAEP), reading and math scores for fourth and eighth graders have seen significant increases.


 

As the population of the District of Columbia has changed in recent years, that’s begun to alter the demographic characteristics of the students in DC schools. More kids are from wealthier and whiter families, fewer are from poor families, immigrant households and families of color. But as we’ve written, the growing wealth of urban centers has not yet entirely converted them into the sort of playgrounds for the white and wealthy that is sometimes supposed: it’s still the case that two out of every three school age kids in the District of Columbia are black, and an even higher fraction of public and charter school students. Some have feared that the increase in test scores is solely a result of these demographic changes—that scores are higher simply because different students are taking the tests.

A new analysis from the Urban Institute challenges that view. After controlling for changes in the race and ethnicity of the student body, they find that scores have increased much faster than can be explained by demographic changes. The analysis also concludes that the gains in test scores can’t be explained solely by changes in parental educational levels—one key measure of socioeconomic status. The data show gains in both conventional public schools and charter schools. The Urban Institute findings echo an earlier analysis by the District of Columbia’s Office of Revenue Analysis, which showed that adjusting for race and ethnicity does little to change the increase in test scores.

While the Urban Institute study confirms that test score increases are real, it doesn’t answer the question of why scores improved. A series of changes has been enacted in the District in the past decade: new educational management under Michelle Rhee and her successors, a stronger mayoral role in school governance, increased resources, and more widespread adoption of charters.

And while the increase in test scores isn’t simply a product of the district’s changing population, it could well be that gentrification in the district has a synergistic or interactive effect with these other forces. Education reform measures may be more successful if they’re undertaken with a slightly different mix of students, in schools where a higher fraction of families have the resources to support learning and engage with the schools. Those families may also be more politically effective in holding the city and schools accountable for results.

While the data gathered so far can’t definitively answer these questions, the noticeable improvement in educational results in the District of Columbia is an encouraging sign for the city’s future growth. It suggests that city schools can improve, and that, in turn, makes the city a more attractive home for young families who might have felt compelled to move to the suburbs to get better schools. And for city students who otherwise might have been isolated in economically segregated, under-performing schools, it means better educational opportunities and, in the long run, better economic opportunities. We’re looking forward to future research that can help measure and sort out these explanations.

Nothing about pedestrian safety that more technology won’t fix

In auto-land, pedestrians are just one more patented gimmick away from being safe

The dominant approach to automobile safety has, for many years, been the quintessential technical fix. Some combination of new technologies (anti-lock brakes, collapsible steering columns, crush zones, multiple air bags, etc) will make cars safer and safer (well, at least for their occupants).  And soon, self-driving cars will (it is hoped) eliminate human errors that produce most crashes. 

Still, the humble pedestrian remains under-engineered for this brave new world of technologically assured safety. But that’s changing.

Some months back, we noted that Google unveiled drawings of a novel plan to coat the exterior of self-driving cars with a special adhesive that would cause any pedestrians the vehicles struck to adhere to the car rather than being thrown by the impact.  Now, Automotive News is sharing reports of a new General Motors patent that would put airbags on the outside of cars to deflect errant pedestrians.

Whether it would be better to find oneself stuck to the car that struck you, or being pushed aside by an exploding airbag, is far from clear. But let it not be said that automotive engineers and major corporations are the only ones who can come up with such far-fetched ideas. Here at City Observatory, we’ve come up with our own concepts for, if you will, lessening the impact of cars on pedestrians. In the interest of safety and advancing the state of the art, we’re putting our ideas into the public domain, and not patenting any of them.

 

Personal airbags. Airbags are now a highly developed and well-understood technology. Most new cars have a suite of frontal impact, side curtain and auxiliary airbags to insulate vehicle passengers from collisions. The next frontier is to deploy this technology on people, with personal airbags. Personal airbags could have their own sensors, inflating automatically when the pedestrian was in imminent danger of being struck by a vehicle.

 

google car diagrams-02

Rocket Packs. While a sufficiently strong adhesive might keep a struck pedestrian from flying through an intersection and being further injured, perhaps a better solution would be to avoid the collision entirely by lifting the pedestrian out of the way. If pedestrians were required to wear small but powerful rocket packs, again connected to self-driving cars via the Internet, then in the event of an imminent collision the rocket pack could fire and lift the pedestrian free of the oncoming vehicle.

 

google car diagrams-03

We offer these ideas partly in jest, but mostly to underscore the deep biases we have in thinking about how to adapt our world for new technology.  Let’s be clear: while these are labeled “pedestrian safety” technologies, they are really about making the environment better for cars and driving. Like crosswalks and electronic “walk-don’t walk” signs, these are not technologies that are needed in pedestrian-only environments. No matter how crowded, no mall, stadium, or concert hall has stoplights for pedestrians.

It has long been the case with private vehicle travel that we’ve demoted walking to a second-class form of transportation. The advent of cars led us to literally re-write the laws around the “right of way” in public streets, facilitating car traffic, and discouraging and in some cases criminalizing walking. We’ve widened roads, installed beg buttons, and banned “jaywalking” to move cars faster, in the process making the most common and most human way of travel more difficult and burdensome, and making cities less functional. And at some point, the existence of these kinds of “pedestrian protection” technologies becomes a basis for rationalizing making the physical environment even more hostile to actual humans. The ultimate objective of much engineering practice seems to be to exterminate walking (if not walkers). As one Washington State engineer bluntly described Seattle’s 1950s-era transportation plan: “Pedestrians, who are a constant hazard to city driving, are entirely removed.”

Everywhere we’ve optimized the environment and systems for the functioning of vehicle traffic, we’ve made places less safe and less desirable for humans who are not encapsulated in vehicles. A similar danger exists with this kind of thinking when it comes to autonomous vehicles; a world that works well for them may not be a place that works well for people.

 

Not all of our problems can be solved with better technology. At some point, we need to make better choices and design better places, even if it means not remaking our environment and our communities to accommodate the more efficient functioning of technology.


Thanks to Matt Cortright for providing the diagrams for our proposed pedestrian protection devices.

Self-driving cars versus pedestrians

For many, it’s all but a certainty that our world will soon be full of self-driving cars. While Google’s self-driving vehicles have an impressive safety record in their limited testing, it’s just a matter of time until one is involved in a serious crash that injures someone in a vehicle, or a pedestrian.

So, in a way, it’s good news that Google is devoting some of its considerable intellectual energy to figuring out ways that we might lessen the seriousness of pedestrian injuries in the event of such collisions. Earlier this month, Google unveiled a novel plan to coat the exterior of self-driving cars with a special adhesive that would cause any pedestrians the vehicles struck to adhere to the car rather than being thrown by the impact.

Whether it would be better to find oneself stuck to the car that struck you, rather than being pushed aside, is far from clear. But pedestrian safety in a world of self-driving cars is clearly an issue that needs to be dealt with.

Here at City Observatory, we’ve come up with our own concepts for, if you will, lessening the impact of autonomous cars on pedestrians. In the interest of safety and advancing the state of the art, we’re putting our ideas into the public domain, and not patenting any of them.

Pedestrian Shock Bracelets. Most pedestrians are already instrumented, thanks to cell phones, and a large fraction of pedestrians have fit-bits, apple watches and other wearable, Internet-connected devices. We propose adding a small electroshock device to these wearables, and making it accessible to the telematics in autonomous vehicles. In the event that the autonomous vehicle’s computer detected likelihood of a car-pedestrian collision, it could activate the electroshock device to alert the pedestrian to, say, not step off the curb into the path of an oncoming vehicle.

google car diagrams-01

Personal airbags. Airbags are now a highly developed and well-understood technology. Most new cars have a suite of frontal impact, side curtain and auxiliary airbags to insulate vehicle passengers from collisions. The next frontier is to deploy this technology on people, with personal airbags. Personal airbags could have their own sensors, inflating automatically when the pedestrian was in imminent danger of being struck by a vehicle.

 

google car diagrams-02

Rocket Packs. While a sufficiently strong adhesive might keep a struck pedestrian from flying through an intersection and being further injured, perhaps a better solution would be to avoid the collision entirely by lifting the pedestrian out of the way. If pedestrians were required to wear small but powerful rocket packs, again connected to self-driving cars via the Internet, then in the event of an imminent collision the rocket pack could fire and lift the pedestrian free of the oncoming vehicle.

 

google car diagrams-03

We offer these ideas partly in jest, but mostly to underscore the deep biases we have in thinking about how to adapt our world for new technology.

It has long been the case with private vehicle travel that we’ve demoted walking to a second-class form of transportation. The advent of cars led us to literally re-write the laws around the “right of way” in public streets, facilitating car traffic, and discouraging and in some cases criminalizing walking. We’ve widened roads, installed beg buttons, and banned “jaywalking” to move cars faster, in the process making the most common and most human way of travel more difficult and burdensome, and making cities less functional.

Everywhere we’ve optimized the environment and systems for the functioning of vehicle traffic, we’ve made places less safe and less desirable for humans who are not encapsulated in vehicles. A similar danger exists with this kind of thinking when it comes to autonomous vehicles; a world that works well for them may not be a place that works well for people.

Consider this recent “Drivewave” proposal from MIT Labs and others to eliminate traffic signals and use computers to regulate the flow of traffic on surface streets. The goal is to allow vehicles to never stop at intersections, but instead travel in packs that create openings in traffic on cross streets, allowing crossing traffic to flow through without delay. Think of two files of a college marching band crossing through one another on a football field.

It’s entirely possible to construct a computer simulation of how cars might be regulated to enable this seamless, stop-free version of traffic flow. But this worldview gives little thought to pedestrians—the video illustrating Drivewave doesn’t show any pedestrians, although the project description implies they might have access to a new form of beg button to part traffic flows to enable crossing the street. That might be technically feasible, but as CityLab’s Eric Jaffe pointed out, “it would be a huge mistake for cities to undo all the progress being made on human-scale street design just to accommodate a perfect algorithm of car movement.”

Not all of our problems can be solved with better technology. At some point, we need to make better choices and design better places, even if it means not remaking our environment and our communities to accommodate the more efficient functioning of technology.


Thanks to Matt Cortright for providing the diagrams for our proposed pedestrian protection devices.

The technological fix for our pedestrian problem

What the obsession with technological fixes says about how we fail to prioritize people in cities

In the best traditions of engineering, clever minds are working on new technologies that can prevent or reduce the carnage on our nation’s roadways. A couple of years ago, we noted that Google had patented a technology to coat self-driving cars with a special adhesive that would cause any pedestrians the vehicles struck to adhere to the car rather than being thrown by the impact.

Appalled, and with tongue firmly in cheek, we offered up three of our own equally absurd technological fixes for the pedestrian “problem,” including vehicle-activated shock bracelets that would paralyze pedestrians before they could jaywalk, rocket packs that could lift errant pedestrians out of a vehicle’s path, and perhaps most fancifully, pedestrian-mounted airbags that could cushion pedestrians struck by vehicles.

 

[Diagram: google car diagrams-02]

But, as is often said, truth is stranger than fiction. Because it’s now apparently the case that General Motors (and several other automobile manufacturers) is patenting pedestrian airbags, although their concept is that the airbags be mounted on the outside of vehicles, rather than on the pedestrian. Here’s the drawing for GM’s idea, from the Automotive News:

Ultimately, the purpose of such technologies is not to make cities safer for people, but to make them more universally available to cars and car travel. It acknowledges that we’re going to design our urban space around stroads that inherently put fast-moving cars in conflict with people on foot. It has long been the case with private vehicle travel that we’ve demoted walking to a second-class form of transportation. The advent of cars led us to literally re-write the laws around the “right of way” in public streets, facilitating car traffic, and discouraging and in some cases criminalizing walking. We’ve widened roads, installed beg buttons, and banned “jaywalking” to move cars faster, but in the process we’ve made the most common and human way of travel more difficult and burdensome, and made cities less functional.

Everywhere we’ve optimized the environment and systems for the functioning of vehicle traffic, we’ve made places less safe and less desirable for humans who are not encapsulated in vehicles. A similar danger exists with this kind of thinking when it comes to autonomous vehicles; a world that works well for them may not be a place that works well for people. As CityLab’s Eric Jaffe pointed out, “it would be a huge mistake for cities to undo all the progress being made on human-scale street design just to accommodate a perfect algorithm of car movement.”

Not all of our problems can be solved with better technology. At some point, we need to make better choices and design better places, even if it means not remaking our environment and our communities to accommodate the more efficient functioning of technology.


Thanks to Matt Cortright for providing the diagrams for our proposed pedestrian protection devices.

Cities are adding people, jobs and businesses

A trio of reports released in the past week provide new data showing the economic strength of the nation’s cities.

Whether we look at population growth, new business formation, or job creation, big cities, urban centers and close-in urban neighborhoods are big drivers of national growth. While the data are drawn from different sources and use slightly different geographies, the messages are quite similar.

The Economic Innovation Group, a Washington-based think tank, used county-level data from the Census Bureau’s County Business Patterns program to chart net new business formation and job growth. Over the period 2010-2014, the 20 counties with the largest increase in new businesses (all big urban counties) accounted for a majority of all net new business formation in the current economic expansion. The 20 counties with the largest job increases accounted for 28 percent of all new jobs. This represents a dramatic turnaround from previous expansions in the 1990s and early 2000s, when smaller, less populous counties tended to grow faster. The dwindling rate of new firm formation is a topic of growing concern as we think about the nation’s long term growth prospects; in the current recovery, a few large metropolitan areas have played a dramatically disproportionate role in fueling new business activity.

[Chart: EIG_Job_Growth_Chart]
Credit: Economic Innovation Group

 

In the 1990s, there was a strong negative correlation between county population and job growth rates, meaning that less populous counties grew much faster than more populous ones. But in the past four years, that relationship has reversed. Small counties are growing the most slowly; larger counties are growing more rapidly. Counties with a million or more residents grew only half as fast as those with fewer than 100,000 residents in the mid-90s (7.7 percent vs. 16 percent); since 2010, the larger counties have grown more than twice as fast (9.9 percent vs. 4.4 percent).

The Brookings Institution’s Bill Frey crunched the numbers from the latest (2015) Census population estimates to track population growth in the nation’s 50 largest metropolitan areas, comparing growth rates in the principal city in each metropolitan area with the remaining jurisdictions. As Frey notes, principal cities—the largest municipality in each metropolitan area—are growing faster than the remaining portion of metropolitan areas, reversing a long-standing pattern of suburban growth outpacing city growth. Cities have grown faster than suburbs since 2010.

[Chart: Frey_City_Suburb_Pop]

 

And at City Observatory, we’ve released our updated figures on central city job growth. Using fine-grained establishment level data from the Census Bureau’s Local Employment and Housing Dynamics (LEHD) database, we’ve plotted employment change for the three mile radius surrounding the center of the central business district in the nation’s large metropolitan areas. While job growth in urban centers lagged well behind suburbs a decade ago, job growth rates in urban centers are today very similar to those in more suburban locations. Bloomberg View columnist Justin Fox addressed the findings of our report in a recent column, and highlighted the pattern of change over time:

[Chart: Fox_CCJ_Graphic]

 

While these three reports draw on different data sources, and use somewhat different geographies (large counties, principal cities and a three-mile radius) they tell very similar stories about the persistence of urban-led population and economic growth in the US at least through the middle of the present decade.

More jobs, more businesses, more people. These three reports add to a growing body of data suggesting that large metropolitan areas and urban-centered economies are increasingly driving national economic prosperity.

As we’ve noted before, using city and county boundaries to measure differences between “cities” and “suburbs” and particularly to make comparisons across metropolitan areas can be problematic. City and county boundaries often don’t correspond well to patterns of urbanization, and the scope of the largest county or principal city varies widely across metropolitan areas.

The demand for city living is behind the urban rent premium

The US faces a shortage of cities. More and more Americans, especially talented, young workers with college degrees, are looking to live in great urban locations. As we’ve explored at City Observatory, the demand for urban living has increased faster than the supply of great urban spaces—with the predictable result that the price of land is appreciating faster in cities. We’ve pointed to a growing body of data—stronger residential price growth in urban cores, faster appreciation for homes in more walkable areas, a more rapid growth of office rents in walkable locations—all of which signal the growing market value of living and working in cities.

This trend also clearly manifests itself in the residential rental marketplace. New data from multifamily market analyst RealPage shows that apartments in big cities have seen higher rent increases than in smaller ones.

The RealPage data enable us to divide the top 100 US markets into three broad groups: the nine hottest coastal markets, the remaining 41 of the 50 largest metropolitan areas, and 50 smaller metropolitan areas. Data shown here are for the fourth quarter of 2008 and 2015 and are expressed in inflation-adjusted dollars. Percentages shown are the percent increase in real, inflation-adjusted rents over the entire seven-year period.

 

The most obvious point is that the large, hot markets are showing the biggest increases, at all price points. The nine markets are all on the East Coast (New York, Boston, Washington) and West Coast (Southern California, the San Francisco Bay area and Seattle). In the nine hot markets, median rents increased about 13 percent, compared to 8 percent for other metro areas in the top 50 and just 2 percent for metros ranked 50th through 100th in size.

As interesting as the pattern of change is across markets, the change within metro markets is also instructive. In every size market, rents are increasing fastest in the highest priced segment of the market place. RealPage reports the average rents for the median apartment, as well as the apartments representing the 25th and 75th percentiles of the marketplace. Think of the 75th percentile as representing the cutoff for the top one-quarter “nicest” apartments, and the 25th percentile representing older, less desirable and more affordable apartments. (We explored the importance of looking at all segments of the rental marketplace to understand affordability a few months back; in many ways, having a broad range of price points within the market is a better indicator of affordability than the median).
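To make those percentile cutoffs concrete, here is a minimal sketch using Python’s standard library and entirely invented rents (none of these figures come from RealPage; they are illustrative only):

```python
from statistics import quantiles

# Entirely hypothetical asking rents for one market, dollars per month --
# for illustration only, not RealPage data.
rents_2008 = [800, 900, 1000, 1100, 1200, 1400, 1600, 1900]
rents_2015 = [850, 950, 1080, 1200, 1350, 1600, 1900, 2350]

# quantiles(..., n=4) returns the 25th, 50th and 75th percentile cutoffs.
q08 = quantiles(rents_2008, n=4, method="inclusive")
q15 = quantiles(rents_2015, n=4, method="inclusive")

for label, old, new in zip(("25th pctile", "median", "75th pctile"), q08, q15):
    change = 100 * (new - old) / old
    print(f"{label}: ${old:,.0f} -> ${new:,.0f} ({change:+.1f}%)")
```

In this made-up example the 75th-percentile cutoff rises fastest, echoing the pattern RealPage reports at the top of the market.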

The most expensive apartments in the most expensive cities are seeing the fastest rate of price appreciation. For example, in the top nine coastal markets, rents for the 75th percentile apartment increased by about 23 percent, while apartment rents for the median and 25th percentile apartments increased only about half as fast (10 percent and 12 percent respectively). What this signals is that the demand for urban living is being fueled by the preferences of high income households rather than simply a generalized increase in rents that affects every segment of the marketplace equally.

The next 41 largest metropolitan areas–those rounding out the list of the fifty largest–have lower rates of price appreciation overall, but exhibit the same within-metro pattern of greater price increases at the higher end of the market. Rents at the 75th percentile increased about 13 percent between 2008 and 2015, but only about 8 percent for the median apartment and 7 percent at the 25th percentile.

What this signals is robust demand for high-end apartments in the nation’s largest and hottest markets. As RealPage economist Shane Squires points out in his commentary on this data, much of the demand for new, high-end product is in walkable locations in cities, where it doesn’t compete directly with more affordable (but far less accessible) suburban locations. This is also consistent with the hypothesis offered by Edlund, Machado and Sviatchi, that time-pressured, dual income households are increasingly willing to pay higher rents in urban centers for better accessibility to jobs and consumption opportunities.

The RealPage data provide nuanced insights into rental price trends that aren’t fully reflected in the usual measures that talk only about the “median” apartment. They show that rental prices are changing at very different rates in different markets, and at different price points within metropolitan markets. This more detailed view of where–and why–rents are increasing should be a useful guide to policy makers as they consider how to define and grapple with housing affordability problems.

(RealPage makes its estimates based on data drawn from lease transactions covering about 10 million apartments nationwide.)

Nationally, apartment supply may be catching demand

There’s more evidence that housing market supply is beginning to catch up to demand in a way that is likely to moderate rent increases. Nothing, it seems, is more infuriating to those caught in a market of steady rent hikes than being lectured by some economist that what is needed to resolve the problem is an increase in supply. Nice to know, but that’s not going to pay the rent any time soon.

But just as after a long winter there are some early signs of spring, there are a few hopeful indicators from housing markets that the long promised relief from increased supply is starting to show up, at least in a small way. Today we look at two recent market reports, one national, and one quite local, that are beginning to indicate a market shift.

Credit: Kelly Sims, Flickr

 

There’s no denying that rents in the US have escalated over the past several years. Overall rents are up 4.6 percent in the past year, and the national rental vacancy rate has plunged from more than 10 percent to about 7 percent, signaling that there are relatively more tenants bidding for every available apartment. As a result the share of households spending more than 30 percent of their income on housing has increased.

For the past several years—and for a variety of reasons—we’ve seen a surge in demand for rental properties. Some of that had to do, especially initially, with the collapse of the housing bubble, which moved several million households, quite involuntarily, from the ownership market into renting. At the same time, younger adults have been much more likely to rent than previous generations, and seem especially enamored of centrally located, walkable apartments in great urban environments. The net effect is that the demand for rental housing has risen steadily.

At City Observatory, we think there are two fundamental, and widely under-appreciated facts about housing markets. First, that when supply does catch up to demand, rent increases soften. Second, supply almost always moves much slower than demand. The supply of rental housing has responded only slowly; and has mostly struggled to keep up with increasing demand.

In the past few months there’s growing evidence that supply is starting to catch up. The market analysts at REIS follow national trends in apartment construction, tracking delivery (the completion of new apartments) and absorption (how many newly completed apartments get leased). Absorption is a “net” figure: the difference between the number of previously vacant apartments that get leased and the number of previously occupied apartments that become vacant over any time period. The difference between completions and absorptions drives vacancy rates. When many more apartments get leased than new apartments are built, vacancy rates fall; when completions outpace absorptions, vacancy rates rise.
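The bookkeeping behind those vacancy-rate movements is simple enough to sketch in a few lines of Python; all of the numbers below are hypothetical, chosen only to illustrate the mechanics (they are not REIS data):

```python
# Sketch of the completions/absorptions arithmetic described above.
# All numbers are hypothetical, for illustration only -- not REIS data.

def update_vacancy(stock, vacant, completions, net_absorption):
    """Return updated (stock, vacant, vacancy_rate) after one period.

    Completions add to both the total apartment stock and the vacant pool;
    net absorption (newly leased units minus newly vacated units) removes
    units from the vacant pool.
    """
    stock += completions
    vacant += completions - net_absorption
    return stock, vacant, vacant / stock

# When completions outpace net absorption, the vacancy rate rises.
stock, vacant, rate = update_vacancy(stock=1_000_000, vacant=70_000,
                                     completions=40_000, net_absorption=25_000)
print(f"vacancy rate: {rate:.1%}")  # rises above the initial 7.0%
```

Run the same function with net absorption above completions and the vacancy rate falls, which is the dynamic that prevailed earlier in the recovery.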

REIS tracks these numbers on an annual basis, and their estimates for the past three decades are shown here:

[Chart: c_2016-04_Typecast_chart]

 

REIS analysts are reading the data to suggest that construction of new apartments is finally starting to have an impact on the market. REIS’s Scott Humphreys says: “It’s official: developers are finally building more apartments than there are renters to fill them.”

Similarly, at the blog Calculated Risk, Bill McBride reports on data from the National Multifamily Housing Council (NMHC), which tracks vacancy rates for apartments around the country. Their data show that “market tightness” has been trending downwards for the past couple of years, and leads McBride to conclude that “it appears supply has caught up with demand—and I expect rent growth to slow.”

These data illustrate an important fact about supply and demand: Demand is highly volatile and can change quickly, while supply responds only slowly. Consider: the first decade of the 2000s was mostly a pretty bad time to be a landlord. A steady supply of new apartments was being built, but net absorptions fell short of the number of completions. In fact, at the height of the housing bubble, net absorptions were negative: more households moved out of apartments than moved in.

In the last two quarters of 2015, according to REIS data, completions outstripped net absorption by about 13,000 units in the third quarter and by nearly 15,000 units in the fourth quarter, with the result that the national vacancy rate ticked upward.

While national trends provide a helpful background, like politics, all housing is local. In an important sense, there really is no “national” apartment marketplace: apartments built in Cheyenne, Wyoming really aren’t good substitutes for apartments in San Francisco. While there are broad national trends, the trajectory of supply and demand can play out differently in different local markets. The collapse of oil prices, for example, has dramatically altered the demand/supply balance in the oil patch town of Williston, ND, with the result that rents are off more than 20 percent from a year ago.

The evidence from one of the nation’s tightest housing markets, Boston, suggests that supply may be getting closer to catching up with growing demand. The Boston Globe reports that in the fourth quarter, rents in Boston increased by just one-tenth of one percent from the previous year, the smallest increase in five years. About 3,800 units came on line in 2015, and about another 5,000 are in the development pipeline. The news from Boston echoes reports from markets as diverse as Seattle, Denver, and Houston that the growing number of new properties being completed is producing at least temporarily higher vacancy rates and more favorable rental offerings for tenants.

Nobody’s predicting a glut of unoccupied apartments—either in Boston, or nationally—that will push rents down. But slowly, and inexorably, the supply of housing is catching up to the demand, both in the aggregate and in the specific places where demand has grown most rapidly. It’s a reminder that if policy enables housing supply to expand, relief from higher rents can be delivered through the market.

Supply starting to catch up with demand

Fundamentally, the nation’s housing affordability problems are due to demand outpacing supply: there’s more demand to live in some cities–and especially in great urban neighborhoods–than can be met from the current supply of housing, especially apartments. As demand surges ahead of supply, rents get bid up, which is the most visible manifestation of the affordability problem. But higher rents are also a signal to developers that they ought to build more housing, and when and where local zoning allows it, they do. Over the past two years, rising rents have produced a surge of new apartment construction, and there’s growing evidence that–in some markets, and at least for the short term–these increments to supply are moderating the rate of rent hikes.

There’s no denying that rents in the US have escalated over the past several years. Overall rents are up 4.6 percent in the past year, and the national rental vacancy rate has plunged from more than 10 percent to about 7 percent, signaling that there are relatively more tenants bidding for every available apartment. As a result the share of households spending more than 30 percent of their income on housing has increased.

Nothing, it seems, is more infuriating to those caught in a market of steady rent hikes than being lectured by some economist that what is needed to resolve the problem is an increase in supply. Nice to know, but that’s not going to pay the rent any time soon. But just as after a long winter there are some early signs of spring, there are a few hopeful indicators from housing markets that the long-promised relief from increased supply is starting to show up, at least in a small way. Today we look at recent market reports, national and local, that provide some evidence that the predicted market forces are at work.

First, in New York, home to some of the most expensive real estate in the nation, there’s growing evidence that rent inflation is easing, at least at the high end of the rental market.  According to Bloomberg, the inventory of vacant, for-rent apartments in New York has increased by between 20 and 30 percent in the past year.  As a result, nearly a quarter of all apartments listed for rent in Manhattan in October included landlord-offered incentives or concessions, the highest level recorded in the past decade. Figuring in the value of these concessions, brokers estimate that median rents in Manhattan fell by about one percent, year-over-year. Even in Brooklyn, where rents have been rising rapidly, the number of for-rent units with landlord concessions increased from 8 percent to 12 percent in the past few months.

Credit: Kelly Sims, Flickr

 

The same pattern observed in New York is starting to play out around the country.  Market analyses by Freddie Mac (the federal mortgage agency) and by real estate analytics firm Axiometrics suggest that in most metro areas around the country, rent increases are moderating, and that this trend is likely to continue in the near future.  Comparing first half 2016 rent growth with the same period in 2015, the data show that rent increases are slowing in most markets.  In 28 of the markets shown below, rent increases are slower now than in the previous year; in three markets they are about the same, and in eight markets, rents are accelerating.  (In the chart, white outlined bars indicate 2015 rent growth, and blue bars represent 2016 rent growth).  Rent increases are significantly lower in 2016 than the previous year in Portland, Oakland, San Jose, Boston, Austin, Houston and Philadelphia, to name a few.

[Chart: freddie_mac_rent_index2016]

Looking forward, Axiometrics predicts that for the next couple of years, rent increases will moderate.  Rents increased about 4.6 percent nationally in 2015, and are expected to rise a further 3.4 percent in 2016. But the outlook for 2017 is for an increase of 2.1 percent.

 

While national trends provide a helpful background, like politics, all housing is local. In an important sense, there really is no “national” apartment marketplace: apartments built in Cheyenne, Wyoming really aren’t good substitutes for apartments in New York. While there are broad national trends, the trajectory of supply and demand can play out differently in different local markets. But even in San Francisco, where rental inflation has been severe, and where it’s famously difficult to build new housing, there’s an indication that added supply is also moderating rent increases: according to Freddie Mac, rents there have increased only about 0.9 percent over the past year, just a fraction of the increases being recorded a year ago. And while much of this is attributable to rising supply, at least some of the slowdown reflects what the analysts call “renter fatigue”–renters simply can’t afford higher rents than those now being charged.

Nobody’s predicting a glut of unoccupied apartments that will push rents down. But slowly, and inexorably, the supply of housing is catching up to the demand, both in the aggregate and in the specific places where demand has grown most rapidly. It’s a reminder that if policy enables housing supply to expand, relief from higher rents can be delivered through the market.

How economically integrated is your city?

Last week, we looked at some of the growing body of academic evidence that shows that mixed income neighborhoods play a key role in helping create an environment where kids from poor families can achieve economic success.

One of our key urban problems is that economically, we’ve grown more segregated over time: the poor tend to live in neighborhoods that are substantially poor, and the better off live in neighborhoods with few poor residents. As a result, one of the key metrics we ought to be paying attention to is the level of and change in economic segregation in our metropolitan areas.

There are a variety of different facets to economic segregation.  It encompasses the segregation of poverty (the concentration of the poor in predominantly poor neighborhoods), the segregation of affluence (enclaves of high income households) and the separation of the middle class from high income and low income households. Also, in any metropolitan area, segregation levels will be influenced by the degree of overall income inequality.

The most comprehensive analyses of trends in economic segregation come from the outstanding research by Kendra Bischoff and Sean Reardon, whose report is worth diving into if you want more details.

Over the past four decades, economic segregation trends are extremely easy to summarize: they’re up. American cities are far more segregated by income today than they were in 1970 by every measure we’re aware of, indicating more “secession of the successful,” more concentrated poverty, and even more sorting among the lower-middle and upper-middle income tiers.

Credit: Kendra Bischoff and Sean Reardon

 

In large metro areas, in 1970, just 5.5 percent of families lived in “poor” neighborhoods (where median income is below 67 percent of the regional median), and 4.4 percent lived in “affluent” neighborhoods (where median income is more than 150 percent of the regional median). By 2012, those figures had both more than doubled, to 13.1 and 8.5 percent, respectively—meaning that over a fifth of all families lived in either poor or wealthy neighborhoods, as opposed to one in ten in 1970.

So that’s how things have changed over the last 40 years. What about the last five?  In their most recent paper, Bischoff and Reardon focus on changes between 2007 and 2012. (For sticklers, these are actually averages of 5-year American Community Survey results from 2005-09 and 2010-2014). Over that period, income segregation has continued its rise, but the trends look somewhat different than they have over the longer term.

A neighborhood in Atlanta. Credit: Chris Yunker, Flickr

 

Over the last five years, the proportion of families in low- and high-income neighborhoods has continued to increase—but a more sophisticated look at the numbers suggests that’s more about changing incomes than about actual segregation. Rather, Bischoff and Reardon show that most of the rise in income segregation between 2007 and 2012 came from the increasing segregation of lower-middle-income families (those between the 10th and 50th percentile of income) and upper-middle-income families (those between the 50th and 90th percentiles). The growing inequality of income overall is one factor fueling economic segregation.

There are several different ways to measure economic segregation–and the Bischoff and Reardon paper has measures for the segregation of the poor from everyone else, the segregation of the rich, and a combined measure showing how much the rich and poor are segregated from the middle class. Their most comprehensive measure of aggregate segregation is an indicator called “H”, which is an entropy index that captures the degree of dispersion from an even distribution at all income levels.  We use this measure as the single best indicator of overall levels of income segregation. While values of H don’t have a simple intuitive description, higher levels correspond to greater segregation; lower values correspond to less segregation.
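Bischoff and Reardon’s H is a rank-order information theory index, and their actual estimator is considerably more involved; but the intuition behind entropy-based segregation measures can be illustrated with the simpler two-group version of the index, computed here over a handful of hypothetical tracts (all populations and poverty shares are invented for illustration):

```python
from math import log2

def entropy(p):
    """Binary entropy of a two-group population share p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def info_theory_index(tracts):
    """Two-group information theory (Theil) segregation index H.

    tracts: list of (total_population, poor_share) pairs.
    H = 0 when every tract mirrors the metro's overall poor share;
    H = 1 under complete segregation.
    """
    T = sum(t for t, _ in tracts)                 # metro population
    P = sum(t * p for t, p in tracts) / T         # metro-wide poor share
    E = entropy(P)
    return sum(t * (E - entropy(p)) for t, p in tracts) / (T * E)

# Two hypothetical metros with the same overall poor share (25 percent),
# but very different sorting across tracts.
mixed      = [(4000, 0.25), (4000, 0.25), (4000, 0.25), (4000, 0.25)]
segregated = [(4000, 1.00), (4000, 0.00), (4000, 0.00), (4000, 0.00)]
print(f"mixed metro:      H = {info_theory_index(mixed):.3f}")      # 0.000
print(f"segregated metro: H = {info_theory_index(segregated):.3f}") # 1.000
```

As the two extremes show, higher values of H correspond to greater segregation, which is the property the rankings below rely on.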

Using Reardon and Bischoff data, we’ve ranked all of the 51 largest US metropolitan areas according to their degree of income segregation from 1970 to 2012.  The most segregated areas are shown at the top of the table (you can use tools in the table to re-sort rankings for different years).  The final column in the table shows the change in the value of “H” for each metro area between 1970 and 2012.

Several findings stand out. First, income segregation increased almost everywhere. Only two of the 51 largest metro areas–Raleigh and New Orleans–didn’t experience an increase in income segregation over the past four decades. In addition, the rankings of metro areas are relatively stable over time–income segregation is an enduring and slowly changing feature of the built environment. Among the metro areas with the highest levels of income segregation are Dallas-Fort Worth, Philadelphia and New York. The three metros with the lowest levels of income segregation are Portland, Orlando and Minneapolis-St. Paul.

To see how an individual metropolitan area has performed over time, you can also select it on the following chart. The chart shows, graphically, the value of H and other segregation indicators for a single metropolitan area for each of the years in the Bischoff-Reardon database. In addition to H (blue), the chart illustrates the percent of population in poor neighborhoods (red), the percent in high income neighborhoods (green) and the combined percent in high income and poor neighborhoods (orange). For each indicator, higher values indicate greater segregation. These other measures help show the extent to which segregation in any place is driven more by concentration of poverty or secession of the successful.

The Bischoff and Reardon data confirm both the prevalence and growth of income segregation in American metropolitan areas. These data are an important tool urban leaders can use to understand how their region performs on this important dimension, and they also let us see which communities might be good places to examine to understand the policies and characteristics that have fostered higher levels of economic integration.

Successful cities and the civic commons

 

At City Observatory, we’ve been bullish on cities. There’s a strong economic case to be made that successful cities play an essential role in driving national economic prosperity. As we increasingly become a knowledge-driven economy, it turns out that cities are very good at creating the new ideas of all kinds that propel economic progress.

But cities aren’t simply economic machines. Nor are they merely large and efficient collections of buildings, pipes, wires, asphalt and concrete. Cities are importantly a collective social endeavor, what Ed Glaeser calls mankind’s greatest invention. It is the opportunity for interaction with others in cities that is their special power and attraction. Some of those interactions are purely economic, but they are deeply embedded in a much wider set of connections and relationships.

Despite all the strength and energy of cities today, there’s growing evidence that the web of social connections that ties cities together, and that underpins much of their economic importance, is coming apart at the seams. For decades cities have been pulling apart: sprawl has moved us physically further from one another, and within metropolitan areas, our neighborhoods have become more segregated by income.

As we pointed out in our CityReport, Less in Common, the shared opportunities and spaces that let Americans of all different backgrounds easily interact with one another have been steadily eroded. We spend more time alone, we socialize with others less, we’re segregated from one another by income, and we generally spend less time in public or in the company of those who are different from us.

 

Click to see the full infographic.

 

These trends are mirrored in how we get around, relying more and more on cars as a mode of transportation, replacing walking and public transit—modes in which, outside a sealed, private machine, you might actually interact with neighbors or others. In fact, while about 30 percent of Americans reported spending time with their neighbors in 1970, that number was down to about 20 percent today.

This privatizing of our once-public lives has also manifested itself in further segregation of neighborhoods by economic status, a trend that has been well documented, and which we have explored at length at City Observatory. Rich and poor Americans have become more spatially divided as we sort into high income and low income neighborhoods. While only 15 percent of Americans lived in rich or poor neighborhoods in 1970, by 2012, that figure was up to 34 percent.

The good news is that there's a growing recognition of this challenge, and many people and communities are actively looking for ways to rekindle public life. There's some compelling evidence that the move back to cities is propelled, at least in part, by a hunger for greater opportunities to interact with one another—as our friend Carol Coletta puts it, to "live life in public." This shows up in the growing popularity of "third places"—coffee shops, bars, farmers markets, and other settings where people gather away from home and work.

It's exciting that Knight Foundation and four other foundations have launched their new initiative, "Reimagining the Civic Commons." Over the next five years, these foundations will fund a $40 million grant program to promote innovative projects in five cities: Akron, Chicago, Detroit, Memphis and Philadelphia. The project aims to fund a series of experiments that consider how we might better use a range of public spaces–parks, libraries, and even sidewalks–to foster greater civic engagement. This could turn out to be a vital bit of public-minded venture capital that helps further illustrate—and develop—the role that public spaces play in underpinning a successful city.

Fundamentally, this strategy makes sense because of the strong interdependence of the social and economic network effects in cities. Economists portray city economies as being driven by agglomeration effects–the increased intensity and productivity that occurs when large numbers of people can interact (especially when they have diverse knowledge and backgrounds). As Jane Jacobs argued, diverse, well-connected cities produce the "new work" that propels economic progress. And this diversity and connectedness pays dividends in the form of widely shared opportunity. Raj Chetty and his colleagues have shown that cities with lower levels of racial and economic segregation–where it's easier for people of different backgrounds to connect–have higher levels of intergenerational mobility, especially for children of low income families.

At their root, as Ed Glaeser has argued, cities are about the absence of distance between people, and that’s not simply physical distance, but social distance as well. Having a civic realm that works well, where people from throughout the city can easily interact is not a mere public amenity, but a vital component of a successful city.

 

Our infographic for thinking about the civic commons

City Observatory is about cities, and while much of the discussion of urban policy surrounds the physical and built environment, ultimately cities are about people. When cities work well, they bring people together. Conversely, when cities experience problems, it's often because we're separated from one another or driven apart.

A critical feature of cities is how people experience their neighborhoods as communities—as places where people gather, interact, and enrich each other's lives. In our 2015 report "Less in Common," we explored the ways in which increasingly auto-centric development has degraded this aspect of our urban life. Now, as we did with our report "Lost in Place," City Observatory and Brink Communication have put together an infographic to make these important ideas easy to share—and as always, this and all of our work is licensed under Creative Commons-Attribution, so feel free to incorporate it in your own presentations or reports.

The infographic presents many of the key findings of "Less in Common," showing how increasing sprawl has weakened our communities, and how a broader trend of Americans living more widely separated private lives has created space for smart urban planning to strengthen the public realm.

Click to see the full infographic.

 

Perhaps one of the clearest connections is in recreation: While Americans who went swimming in 1950 would probably have gone to a community pool, the number of private, in-ground pools has since increased from 2,500 to 5.2 million in 2009, as large-lot zoning and the construction of highways far into the suburban periphery have essentially subsidized the consumption of private land at the expense of public facilities. These trends are mirrored in how we get around, relying more and more on cars as a mode of transportation, replacing walking and public transit—modes in which, outside a sealed, private machine, you might actually interact with neighbors or others. In fact, while about 30 percent of Americans reported spending time with their neighbors in 1970, that number was down to about 20 percent today.

This privatizing of public life has also encouraged further segregation of neighborhoods by economic status, a trend that has been well documented, and which we have explored at length at City Observatory.  Rich and poor Americans have become more spatially divided as we sort into high income and low income neighborhoods. While only 15 percent of Americans lived in rich or poor neighborhoods in 1970, by 2012, that figure was up to 34 percent.

The erosion of the civic commons also has a profound impact on economic opportunity: In regions with more economic segregation, children from low-income households are much less likely to be able to improve their income status as adults.

As the rapper Ice Cube told National Public Radio earlier this year, reflecting on the school integration policies of his childhood:

I liked it because I was being bused with a lot of my homies. So we was, like, all going out there, and then it was a lot of different neighborhoods. So it was, like, buses from all these different neighborhoods all converging on this white school. And it was kind of cool because we had a chance to see different things, different people, have different conversations, hear different music and just get a chance to see that the world was bigger than Compton, South Central or, you know, whatever. You know, so we had a chance to really kind of open our horizons…

In other words, the strength of our public spaces and institutions is crucial both for educational and economic opportunity, as well as expanding our sense of collective potential and identities. That’s something we should all be able to get behind.

Click here to see the full infographic.

Storefronts and job growth

Earlier this week, we introduced the Storefront Index, a measure of the location and clustering of customer-facing retail and service businesses. A primary use of the index is to identify places that have the concentration of retail activity that we generally associate with a vibrant neighborhood commercial area, and that can support a high level of walkability.

It’s also possible to construct the Storefront Index at different points in time to measure the growth or change in neighborhood commercial activity. For example, we presented data for Portland’s Alberta Street commercial neighborhood in 1997 and 2014. This neighborhood is situated in Northeast Portland, which has historically been home to some of the city’s lowest income households, and which has had a high share of the region’s African-American population.


The data show the substantial growth in storefront businesses over that time period.

As we noted in our report, storefronts are not important simply as commercial destinations or focal points for pedestrian activity. These retail and service businesses are an important source of jobs. To explore the connection between storefronts and jobs, we gathered data from the Census Bureau’s LEHD program for this same neighborhood. LEHD uses administrative records to provide very geographically detailed (block-by-block) estimates of the number of persons employed throughout the U.S. These data show a dramatic increase in local jobs over this time period. Firms on and near Alberta Street employed about 650 workers in 2002; by 2014 that had nearly tripled to 1,838. (We excluded the administration sector from these tabulations to exclude employment reported by local temporary help and employment leasing firms whose employees actually work outside the local neighborhood).

 

In the case of Alberta Street, the flourishing of the local storefront businesses has been associated with a significant increase in local employment—not just in retail and services businesses, but in a wide range of other sectors. Tracking the presence of storefronts over time can be an indicator not just of a neighborhood’s sidewalk vitality, but employment strength.

A rebound in millennial car-buying?

Except for boomers, we’re all less likely to be buying new cars today

One of the favorite "we're-going-to-debunk-the-claims-about-millennials-being-different" story ideas that editors and reporters seem to love is pointing out that millennials are actually buying cars. Forget what you've heard about bike-riding, bus-loving, Uber-using twenty-somethings, we're told: this younger generation loves its cars, even if it's been a bit slow to realize it. Using a combination of very aggregate sales data and usually an anecdote about the first car purchase by some long-time carless twenty-something, reporters pronounce that this new generation is actually just as enamored of car ownership as its predecessors.

At least this generation still loves its cars.

The latest installments in this series appeared recently in Bloomberg and in the San Diego Union-Tribune. "Ride-sharing millennials found to crave car-ownership after all," proclaimed Bloomberg's headline. "Millennials enter the car market — carefully," adds the San Diego Union-Tribune. San Diego's anecdote is 32-year-old Brian, buying a used Prius to drive for Uber; Bloomberg relates market research showing that young car buyers especially like sporty little SUVs, like the Nissan Juke. Like other studies, Bloomberg relies on a vague reference to aggregate sales figures by generation: "Millennials bought more cars than GenXers," we are told.

Earlier this year, and previously in 2015, City Observatory addressed similar claims purporting to show that Millennials were becoming just as likely to buy cars as previous generations. Actually, it turns out that on a per-person basis, Millennials are about 29 percent less likely than those in Gen X to purchase a car. We also pointed out that several of these stories rested on comparing different sized birth year cohorts (a 17-year group of so-called Gen Y with an 11-year group of so-called Gen X). More generally, though, we know that there's a relationship between age and car-buying. Thirty-five-year-olds are much more likely to own and buy cars than 20-year-olds. So as Millennials age out of their teen years and into their thirties, it's hardly surprising that the number of Millennials who are car owners increases. But the real question is whether Millennials are buying as many cars as previous generations did at the same point in their life-cycle.

This is a question that economists at the Federal Reserve turned their attention to in a study published this past June. Christopher Kurz, Geng Li, and Daniel Vine used detailed data from JD Power and Associates to look at auto buying patterns over time, controlling for the age of car purchasers. (Their full study, "The Young and the Carless," is available from the Federal Reserve.) Here's their data showing the number of car purchases per 100 persons in each of several different age groups.

These data show a number of key trends. First, the data confirm a pronounced life-cycle to car purchasing: those under 35 purchase very few new cars; car purchasing peaks in the 35 to 45 age group, and then declines for those over 55. Second, the state of the economy matters. Especially compared to 2000 and 2005, auto purchasing declined sharply for all age groups in 2010 (coinciding with the Great Recession) and has rebounded somewhat since then as the economy has recovered. Third, as of 2015, auto purchasing was lower for all age groups under 55 years of age than it was in either 2000 or 2005. Fourth, the big factor driving car sales growth in the past decade was the over-55 group (increasingly swelled by the aging Baby Boom generation). Car sales for the over-55 crowd fell proportionately less during the Great Recession, and are at a new high (5.7 per 100 persons over 55). There's clearly been an aging of the market for car ownership. The authors summarize this data as follows:

In summary, the average age of new vehicle buyers increased by almost 7 years between 2000 and 2015. Some of that increase reflected the aging of the overall population, but some of it reflected changes in buying patterns among people of different age groups. The most relevant changes in new vehicle-buying demographics over this period were a decline in the per-capita rate of new vehicle purchases for 35 to 54 year olds and an increase in the per-capita purchase rate for people over 55.

Kurz, Li and Vine look at the relationship between the decline in auto sales to these younger age groups and other economic and demographic factors.  They find that declining sales are correlated with lower rates of marriage and lower incomes; that is to say:  much of the decline in car purchasing among these younger adults can be explained statistically by the fact that un-married people and people with lower incomes are less likely to buy new vehicles, and as a group, there are relatively more un-married people and lower incomes among today’s young adults.  

Their argument is essentially that if young adults today married at the rate of earlier generations, and earned as much as previous generations did, their car buying patterns would be statistically very similar to those observed historically. While the authors cite this as evidence that young adults' taste for car-buying may not be much different from that of previous generations, in our view this interpretation rests on some strong assumptions about the independence of marriage rates and attitudes toward car ownership. While those who do marry may exhibit the traditional affinity for car ownership, it may be that those who delay marriage (or who never marry) have different attitudes about cars. In addition, there's growing evidence that the relative weakness of generational income growth may persist for some time, lowering the demand for car ownership.

 

On the road again?

Hot on the heels of claims that Millennials are buying houses come stories asserting that Millennials are suddenly big car buyers. We pointed out the flaws in the home-buying story earlier this month, and now let’s take a look at the car market.

The Chicago Tribune offered up a feature presenting “The Four Reasons Millennials are buying cars in big numbers,” assuring us that millennials just “got a late start” in car ownership, but are now getting credit cards, starting families and trooping into auto dealerships “just like previous generations.”

Similar stories have appeared elsewhere. The Portland Oregonian chimed in: “Millennials are becoming car owners after all.”

Not quite a year ago, we addressed similar claims purporting to show that Millennials were becoming just as likely to buy cars as previous generations. Actually, it turns out that on a per-person basis, Millennials are about 29 percent less likely than those in Gen X to purchase a car.

We pointed out that several of these stories rested on comparing different sized birth year cohorts (a 17-year group of so-called Gen Y with an 11-year group of so-called Gen X). After applying the highly sophisticated statistical technique known as “long division” to estimate the number of cars purchased per 1,000 persons in each generation, we showed that Gen Y was about 29 percent less likely than Gen X to purchase a car.
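For the record, here is what that normalization looks like. The purchase counts and cohort sizes below are hypothetical placeholders, chosen only to be roughly the right order of magnitude; they are not the figures from the original stories. The point is that dividing by cohort population can reverse the raw-count comparison.

```python
def purchases_per_1000(purchases, cohort_population):
    """Normalize raw sales counts by cohort size -- the 'long division' step."""
    return 1000 * purchases / cohort_population

# Hypothetical illustrative figures, not the source data:
# a larger 17-birth-year Gen Y cohort vs. a smaller 11-birth-year Gen X cohort.
gen_y = purchases_per_1000(3_800_000, 75_000_000)
gen_x = purchases_per_1000(3_500_000, 49_000_000)

# Raw counts favor Gen Y (3.8M vs 3.5M), but the per-person rate reverses the ranking.
print(f"Gen Y: {gen_y:.1f} per 1,000; Gen X: {gen_x:.1f} per 1,000")
print(f"Gen Y's rate is {1 - gen_y / gen_x:.0%} lower")
```

With these made-up inputs the per-1,000 rate for Gen Y comes out about 29 percent below Gen X's even though Gen Y "bought more cars" in absolute terms, which is exactly the trap the raw-count headlines fall into.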

More generally though, we know that there’s a relationship between age and car-buying. Thirty-five-year-olds are much more likely to own and buy cars than 20-year-olds. So as Millennials age out of their teen years and age into their thirties, it’s hardly surprising that the number of Millennials who are car owners increases. But the real question—as we pointed out with housing—is whether Millennials are buying as many cars as did previous generations.

The answer is no.

Auto industry analysts at the National Automobile Dealers Association—who have a very strong stake in the outcome—are pretty glum about the sales prospects of the Millennial generation. NADA's economist Steven Szakaly predicts it will take four Millennials to equal the sales impact of a single Boomer. This is due to a combination of factors, including Millennials' weaker income and job prospects and their lower propensity to drive and own cars. It's also the case that waiting longer to buy one's first car means that one is likely to own fewer cars over a lifetime, and as with housing, there's no evidence that young adults are catching up to previous generations as they age.

As we said last year, the real gold standard for intergenerational comparisons would be to look at the rate of car ownership for different generations when they were at the same point in their life-cycle, i.e., look at the car buying habits of Boomers when they were in their late twenties and early thirties (during the seventies and eighties), and compare them with the habits of Gen X (25-34 in the nineties) and Millennials (25-34 from 2005 onward). Alas, we don't have that data.

But we do have another indicator: the fraction of young adults who have drivers licenses.

Michael Sivak and Brandon Schoettle at the University of Michigan's Transportation Research Institute analyzed US DOT data, and found big declines in the rate at which young adults get driver's licenses. At every age up to 50 years old, a smaller fraction of U.S. adults is getting a driver's license than a few decades ago. Today, only about 60 percent of 18-year-olds have a license, compared with about 80 percent in the 1980s. Even among those in their late twenties and early thirties, the licensing rate is down 10 or more percentage points from earlier decades. And the declines appear to be persisting as time goes on: Between 2008 and 2014, the rate of licensing declined from 82.0 percent to 76.7 percent among 20 to 25-year-olds.

 

Another bit of evidence comes from financial markets. The Federal Reserve Bank of New York studied credit records to determine what fraction of persons in each age group took out an auto loan. While the total volume of automobile lending declined sharply between 2003 and 2015, the decline was most pronounced among younger age groups. Lenders made about 25 auto loans per 100 persons in their mid-thirties in 2003, but only about 17 loans per 100 persons in the same age cohort in 2015. Only those 65 or older are more likely to take out a car loan today than in 2003.

 

When it comes to America’s much storied romance with the car, it’s apparent that ardor has cooled, especially among Millennials. We’re buying fewer cars, we’re taking out fewer car loans, we’re waiting longer to get a driver’s license, and fewer of us are ultimately doing so.

An infographic summarizing neighborhood change

One of City Observatory’s major reports is “Lost in Place,” which chronicles the change in high-poverty neighborhoods since 1970. In it, you’ll find a rich array of data at the neighborhood level showing how and where concentrated poverty grew.

We know it’s a complex and wonky set of data, so we’ve worked with our colleagues at Brink Communication to develop a compact graphic summary of some of our key findings. We’re proud to present that here. And like all material on City Observatory, it’s available for your free use under a Creative Commons-Attribution license, so feel free to incorporate it in your own presentations, email, and social media to help explain the processes of neighborhood change in your city.

You might find it especially useful paired with more local-specific content from “Lost in Place,” such as these interactive city-by-city, neighborhood-by-neighborhood maps. Further down this page, you can also find an interactive dashboard with full statistics for your city, including trends in high-poverty, low-poverty, rebounding, and “fallen star” neighborhoods, and the total number of people living in high-poverty neighborhoods from 1970 to 2010.


 

Click the thumbnail below for the full infographic. We’ve also included some further narrative context below.

Click for full infographic.

 

Neighborhood change has been a hot topic in many American cities—and, increasingly, on the national stage—for a number of years. At City Observatory, we’re especially interested in shifting community demographics as they relate to economic and racial integration, which have been shown to have profound impacts on people’s class mobility, longevity, and more.

But while most of the focus has been on gentrification—the process of middle- and upper-income people moving into lower-income neighborhoods—our own research shows that low-income communities are much more likely to suffer from the opposite problem: increasing poverty and severe population decline. Three-quarters of neighborhoods with a poverty rate twice the national average in 1970 still had very high levels of poverty in 2010, and had lost an average of 40 percent of their population. That represents a much larger number of people who have been “displaced” by a lack of opportunity or high-quality public services than have been displaced by gentrification.

Our perceptions of neighborhood change are often shaped by those places that are experiencing the greatest pace of change.  The data in “Lost in Place”—available for all of the nation’s 50 largest metro areas—lets anyone look to see how poverty has changed and spread in their city since 1970. And our new infographic helps explain the major components of change. We invite you to use these tools to explore and discuss the process of neighborhood change in your city.


Excessive expectations: A first look at the DOT’s new road performance rules

We’ve just gotten our first look at the new US Department of Transportation performance measurement rule for transportation systems. The rule (nearly three years in gestation, since the passage of the MAP-21 Act) is USDOT’s attempt to establish performance measures to guide investment and operation of the nation’s urban transportation system. One of the criticisms—fair, in our view—of the nation’s transportation system is that there are few, if any, quantitative standards against which performance can be measured, and against which the merits and results of alternative policies and investments can be judged. The new DOT standards aim to do just that. In the next few weeks, we’ll all have an opportunity to weigh in on whether these standards help address this problem.

Standards like these may seem like technocratic trivia. But we’ve routinely witnessed obscure and seemingly innocuous rules of thumb about things like road width or parking requirements end up profoundly shaping our cities—and often in ways we don’t like. Getting these performance measures right can help push transportation investment in a direction that supports more successful cities. Getting them wrong runs the risk of repeating past mistakes. The proposed rules are voluminous, running to more than 400 pages in all—and incredibly detailed. So it will take some time to dig into them. But at first glance, we have some reactions. We’ll take a closer look in the days ahead, and revise and extend our comments as we (and others) go through the minutiae here.

Two key words: “Excessive” and “Expectations”

Initially, we’re focusing on three standards that the USDOT is proposing apply to metropolitan areas of a million or more population. The USDOT has also proposed separate standards for travel time reliability and freight that we’ll look at in a future post.

Briefly, the standards are outlined in the following table and further analyzed below. They deal with total hours of excessive delay, and the share of the Interstate system or national highway system where travel times don’t exceed 150 percent of a locally established “expectation.”

Performance Measures for Large Metropolitan Areas

Objective | Indicator | Standard | Reference
Congestion | Annual hours of excessive delay per capita | Time in excess of what trips would take at 35 mph on freeways; 15 mph on other roads | 490.507(b)(1)
Interstate Performance | Percent of peak hour travel times that meet expectations | Not more than 150% of locally set expected travel time | 490.507(b)(2)
National Highway System Performance | Percent of peak hour travel times that meet expectations | Not more than 150% of locally set expected travel time | 490.707

Implementing these performance measures will require a heavy reliance on technology and a substantial investment. USDOT believes it will be able to use speed data collected from vehicle telemetry, including cell phones, to determine performance for individual street segments. While the technology is promising, it has limitations: in one case, an analysis of the performance of HOT lanes was flawed because the data couldn't distinguish between vehicles traveling in the tolled and free lanes. USDOT estimates that complying with the data collection and reporting requirements of this rule will cost states and local governments $165 million to $224 million over the next decade.

Congestion: “Excessive hours of delay”

The core measure of whether a metropolitan area is making progress in addressing its congestion problem is what USDOT calls "annual hours of excessive delay per capita." This congestion measure essentially sets a baseline of 35 mph for freeways and 15 mph for other roads. If cars are measured to be traveling more slowly than these speeds, the additional travel time is counted as delay. The measure calls for all delay hours to be summed and then divided by the number of persons living in the urbanized portion of a metropolitan area.
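In code, our reading of that calculation looks something like the sketch below. It's a simplification: the rule works from segment-level speed data in short time bins, while this version works from hypothetical whole trips. The trip tuples and the function name are our own illustrative framing, not anything in the rule itself.

```python
def excessive_delay_per_capita(trips, urbanized_population):
    """
    Sum travel time in excess of the rule's baseline speeds
    (35 mph on freeways, 15 mph on other roads), then divide
    by the urbanized-area population.
    Each trip: (observed_hours, distance_miles, is_freeway).
    """
    total_delay = 0.0
    for observed_hours, miles, is_freeway in trips:
        baseline_hours = miles / (35.0 if is_freeway else 15.0)
        # Only time beyond the baseline counts; trips faster than the
        # baseline add zero delay rather than a credit.
        total_delay += max(0.0, observed_hours - baseline_hours)
    return total_delay / urbanized_population

# A 10-mile freeway trip taking 30 minutes accrues 0.5 - 10/35, about 0.21 hours of delay;
# an 8-mile freeway trip at 40 mph accrues none.
trips = [(0.5, 10, True), (0.4, 5, False), (0.2, 8, True)]
print(excessive_delay_per_capita(trips, urbanized_population=3))
```

Note how the `max(0.0, ...)` clamp captures the measure's one-sidedness, which is part of what drives the criticisms that follow.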

The proposed measure is, in some senses, an improvement over other measures (like the Texas Transportation Institute’s Travel Time Index) that compute delay based on free flow traffic speeds (which in many cases exceed the posted speed limit). But despite its more realistic baseline, this measure suffers from a number of problems:

—This is all about vehicle delay, not personal delay. A bus with 40 or 50 passengers is weighted the same under this metric as a single-occupancy vehicle.

—This ignores the value of shorter trips. As long as you are traveling faster than 15 miles per hour (or 35 on freeways), the system is deemed to be performing well, no matter how long your trip is.

Interstate and National Highway System performance: “Peak hour travel times meet expectations”

If “meets expectations” sounds a bit squishy for a federal planning standard, it’s because it is.

Under the rule, state DOTs or Metropolitan Planning Organizations (MPOs) would establish “expectations” for how long (or how fast) trips would take on each segment of a metropolitan area’s major freeways and highways. Segments which experienced peak hour travel times that were 50 percent more than these “expected” travel times would be deemed to be congested. The metric would track the share of a region’s highway segments that didn’t experience this level of congestion-related delay.
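As we read the rule, the metric works something like the following sketch. The segment data is hypothetical, and we weight by segment length, matching the rule’s denominator:

```python
def percent_meeting_expectations(segments):
    """Share of system mileage whose peak travel time stays within 150% of
    the locally set expectation.

    segments: list of (length_miles, expected_minutes, measured_peak_minutes)
    """
    total_length = sum(length for length, _, _ in segments)
    ok_length = sum(
        length
        for length, expected, measured in segments
        if measured <= 1.5 * expected  # congested only above 150% of expectation
    )
    return 100.0 * ok_length / total_length

segments = [
    (5.0, 10.0, 12.0),  # within 150% of expectation: meets expectations
    (3.0, 10.0, 16.0),  # 160% of expectation: congested
    (2.0,  8.0, 12.0),  # exactly 150%: still counts as meeting expectations
]
print(round(percent_meeting_expectations(segments), 1))  # → 70.0
```

The pivotal inputs, of course, are the “expected” travel times themselves, which under the rule each state DOT or MPO sets for itself.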

The pivotal policy question here is what are “expected” travel times. Here the USDOT simply punts: It’s up to state DOTs or metropolitan planning organizations to make this call. As the DOT says: “Under this proposed approach, FHWA does not plan to approve or judge the Desired Peak Period Travel time levels or the policies that will lead to the establishment of these levels.”

In effect, this means that performance metrics are likely to vary widely from place to place. These performance measures sidestep the essential question of what constitutes a reasonable expectation of travel times. As we’ve pointed out, Americans routinely tolerate very different levels of delay for the same service at different times of day, for example, when they order their morning coffee, as we documented in the Cappuccino Congestion Index. On our first reading, it’s also not clear whether states and MPOs can adjust expectations over time. It’s an interesting question: should we adjust our expectations as conditions change, or, once established, should “expected” travel times be an unchanging baseline against which performance is measured?

Credit: Daniel Lobo, Flickr

The “expectations” terminology raises a larger question as well: congestion reduction measures are seldom free. It costs money to expand capacity, improve transit, or implement other measures that might reduce travel times at the peak hour. The big question is whether the value commuters attach to such potential travel time savings comes anywhere close to being commensurate with the cost of achieving those expectations. Because USDOT offers no guidance as to what might constitute reasonable expectations for travel times, and because those expectations are unmoored from any standard of cost effectiveness, this performance standard is likely to be of limited usefulness.

In addition, the “peak hour travel times meet expectations” measure is, of course, a variant of the classic travel time index that we—and others—have long critiqued. One of its chief problems is the denominator: in this case, the size of a region’s highway system (in DOT parlance, “segment length”). The indicator is the percent of Interstate (or NHS) roadways that aren’t congested. So—at least in theory—if a region expands its “segment length” by building new, under-utilized highway capacity, it can improve the ratio of uncongested to congested roads, and thus improve its measured performance. Conversely, a metropolitan area that doesn’t increase the segment length of its Interstate (or NHS) system in the face of increasing travel seems likely to see a decline in its rated performance. As a result, this measure seems to impart a strong “build, baby, build” bias to the indicators.
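A toy example of that bias, with invented mileage: holding congested mileage fixed, simply adding new uncongested lane-miles raises the “percent uncongested” score without relieving any existing congestion.

```python
def percent_uncongested(congested_miles, uncongested_miles):
    """Share of total system mileage that isn't congested."""
    total = congested_miles + uncongested_miles
    return 100.0 * uncongested_miles / total

before = percent_uncongested(30.0, 70.0)  # 70% of the existing system uncongested
after = percent_uncongested(30.0, 90.0)   # build 20 new, lightly used miles
print(round(before, 1), round(after, 1))  # → 70.0 75.0
```

The region’s drivers experience exactly the same congestion in both cases; only the denominator has changed.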

DOT whiffed on greenhouse gases

Despite some hopes that the White House and environmentalists had prevailed on the USDOT to tackle transportation’s contribution to climate change as part of these performance measures, there’s nothing with any teeth here. Instead—in a 425 page proposed rule—there are just six pages (p. 101-106) addressing greenhouse gas emissions that read like a bad book report and a “dog-ate-my-homework” excuse for doing nothing now. Instead, DOT offers up a broad set of questions asking others for advice on how they might do something, in some future rulemaking, to address climate change.

Three ideas for what DOT might have done

  1. Make VMT per capita a core measure. Vehicle Miles Traveled (VMT) per capita is strongly correlated with important transportation system outcomes: total system costs, costs to households, greenhouse gas emissions, crashes, injuries, and fatalities.
  2. Shift from excess travel time to total travel time. A total travel time measure, which recognizes the value of shorter trips even when they occur at somewhat lower speeds, better reflects the economic and environmental value of more compact development patterns. Such a measure would compute total travel time per resident, and give equal weight to measures that reduce the distance of trips and the need for travel, especially at the peak hour, when reductions have the greatest effect on congestion.
  3. Establish a separate methodology for transit delays. How much additional time do transit riders incur from transit systems that don’t achieve average running speeds of some reference number (like DOT’s 35 mph for freeways and 15 mph for other roads), or locally established expectations? The amount of this delay could easily be calculated from transit system operating records and ridership counts.
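To illustrate the second idea, here’s a hypothetical comparison (all trip data invented): on a total-travel-time-per-resident measure, a compact region with short, slower trips outperforms a sprawling one with faster but much longer trips, even though a speed-threshold delay measure would flag neither region as congested.

```python
def total_travel_time_per_resident(trips, population):
    """trips: list of (distance_miles, speed_mph, annual_person_trips).
    Returns annual hours of travel per resident."""
    total_hours = sum(dist / speed * n_trips for dist, speed, n_trips in trips)
    return total_hours / population

# A compact region: short trips at modest speeds...
compact = [(3.0, 20.0, 2_000_000)]
# ...versus a sprawling one: faster trips, but three times as long.
sprawl = [(9.0, 40.0, 2_000_000)]

pop = 100_000
print(round(total_travel_time_per_resident(compact, pop), 2))  # → 3.0
print(round(total_travel_time_per_resident(sprawl, pop), 2))   # → 4.5
```

Both regions travel faster than the rule’s speed floors, so a delay-based metric scores them identically; the total-time metric correctly credits the shorter trips.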

Our objective in writing about these standards is to encourage others to take a close look, and help provide a robust discussion of this important policy. We invite your comments and corrections—and we’ll update and add to this post as we learn more about the rule. Stay tuned.

Daytime and nighttime segregation

In cities, you’ll sometimes hear people talk about a “daytime population”: not how many people live in a place, but how many gather there regularly during their waking hours. So while 1.6 million people may actually live in Manhattan, there are nearly twice that many people on the island during a given workday.

Most studies of segregation deal with what you might call the “nighttime population,” or actual locations of residence. And that kind of segregation has been shown to have significant negative effects. But the focus on residence is also in large part a matter of convenience: the Census gives us detailed data on where people live. It’s much harder to get data on where people spend their time when they’re not at home.

But a fascinating study asks whether, and how, waking mobility affects patterns of segregation. The authors—Taylor Shelton, Ate Poorthuis, and Matthew Zook—used geotagged Twitter and Foursquare data in Louisville, KY to determine whether users likely lived in that city’s West End (predominantly black) or East End (predominantly white). Then they mapped the ratio of the number of tweets by East End residents to the number of tweets by West End residents all across the city.
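A rough sketch of how such a ratio map could be computed appears below. This is our reconstruction with invented counts; the authors’ actual methodology differs in detail:

```python
import math

def tweet_ratio(east_tweets, west_tweets):
    """Log ratio (with +1 smoothing) of East End to West End tweets in a cell:
    >0 means East Enders dominate, <0 means West Enders do, 0 is an even mix."""
    return math.log((east_tweets + 1) / (west_tweets + 1))

# cell -> (tweets by East End residents, tweets by West End residents)
cells = {"west_end_block": (2, 400), "east_end_block": (350, 90)}
for cell, (east, west) in cells.items():
    print(cell, round(tweet_ratio(east, west), 2))
```

Mapping this ratio cell by cell is what produces the pattern described below: solid one-sided color where one group’s residents almost never appear, mixed splotches where both do.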


The results are striking: While the West End is visible as a block of nearly solid purple, indicating virtually no tweets from East End residents, Louisville’s East End appears as various splotches of orange, grey, and purple—indicating a much greater mix of East and West End residents.

The implication is that West End residents, who are mostly black, are much more likely to cross boundaries of segregation than East End residents, who are mostly white. In part, that may be a matter of necessity, as the wealthier East End has more jobs, stores and services.

But it also fits in a pattern of racial stigma and avoidance described by other studies as well. What’s ironic here is that while racial segregation is often described as a limitation on the movement of the disadvantaged population—and in many important ways, from health outcomes to employment, it is—in terms of physical mobility, it turns out that the driver of the West End’s isolation isn’t that West Enders never leave, but that East Enders never visit.

A view of 9th Street, which divides the East and West Ends of Louisville, KY. Credit: Google Maps

That dovetails with research by Ed Glaeser, who suggests that since around 1970, the persistence of “nighttime” residential segregation has been driven primarily by whites’ decisions to avoid neighborhoods that have a significant black population, and to leave their own neighborhoods when blacks move in. It also resonates with research from Robert Sampson, who found a significant stigma attached to predominantly black neighborhoods, and Maria Krysan, who found that while blacks’ knowledge about predominantly white neighborhoods in Chicago depended on their distance and economic class, whites were much more likely to describe themselves as knowing nothing about black neighborhoods, regardless of other factors.

Shelton, Poorthuis, and Zook did find that a few specific activities could draw East End residents west: a cluster appeared near the Churchill Downs racetrack during horse racing season, but then disappeared when the season ended. In other words, while it’s a hopeful sign that some kinds of activities clearly generate geographic crossover, these visits appeared to be too few or too limited to have any wider spillover effect in increasing daytime integration elsewhere in the West End.

That suggests the remedy for this kind of separation will have to go deeper than just an occasional event that draws people from around the city for a few hours. But this paper helps underscore that when we think about segregation, we need to think about more than just where people sleep at night.

How great cities enable you to live longer

Low income people live longer in dense, well-educated, immigrant-friendly cities

Some of the most provocative social science research in the past decade has come from the Equality of Opportunity Project, led by Stanford economist Raj Chetty. The project’s major work looks at the factors contributing to intergenerational economic mobility–the extent to which different communities actually enable the American dream of people in the lowest income groups being able to move up economically. In another research project, Chetty and his colleagues have looked at how life expectancy varies by community.  

The bulk of the paper concerns the relationship between longevity and income, and has been well-reported elsewhere. It highlights patterns that anyone following issues of inequality in the US would have long suspected to be true—that life expectancy is strongly correlated with income, and that the gap in life expectancy between high- and low-income people has grown—but which are now confirmed, in detail, in hard numbers.

But because Chetty et al also analyzed their data by commuting zone (akin to a metropolitan area) and county, we can also draw important conclusions about the link between place and life expectancy, just as their earlier research linked place and economic opportunity. And it appears that strong urban environments can boost their residents’ longevity—especially for the low-income.


This wide variation in life expectancy by region provides some insights into the community characteristics that are most closely associated with longer lives. Here we’ve reproduced a key chart from Chetty’s paper, which shows the correlation between a series of regional characteristics and the life expectancy of people in the bottom income quartile.

Credit: Chetty et al

Dots correspond to the point estimate, lines represent the 95 percent confidence interval of the estimate. Positive values indicate that life expectancy increases with increases in the local characteristic; negative values indicate that life expectancy decreases as the value of the local characteristic increases.

In part, these statistics affirm what we already know: places where people smoke more and where obesity is more prevalent have shorter life expectancies; places where people exercise more have longer life expectancies. Regional variations in key health behaviors are reflected directly in the life expectancy of the poor. The report also casts doubt on some other factors that people think influence health and mortality. Chetty et al looked at the role of a range of health care measures, the presence of social capital, and the role of inequality and unemployment, and found that regional variations in these characteristics had weak, if any, correlation with regional variations in life expectancy.

The unexpected importance of place

The most interesting result of this paper is the strong, consistent positive contribution of several community-level variables to life expectancy. Strikingly, poor people tend to live longer in places with more immigrants, more expensive housing, higher local government spending, more density, and a better educated population. Consider each of the five characteristics in the category “Other Variables” at the bottom of Chetty et al’s Figure 8.

What these data show are a string of strong positive correlations. Places with more immigrants have longer life expectancy for the poor. The same holds for places with more expensive housing: here, too, the poor live longer. The poor also live longer in places with high levels of government spending, more density, and a better educated population. Taken together, these correlations suggest the importance of positive spillover effects from healthy urban places. Large cities tend to have higher levels of density. The most successful cities tend to attract more immigrants, have more expensive housing, and a better educated population. These data suggest that the poor have longer life expectancies in thriving cities.

The authors explain that their data make a strong case for a relationship between cities and greater longevity of the poor:

. . . the strongest pattern in the data was that low-income individuals tend to live longest (and have more healthful behaviors) in cities with highly educated populations, high incomes, and high levels of government expenditures, such as New York, New York, and San Francisco, California. In these cities, life expectancy for individuals in the bottom 5% of the income distribution was approximately 80 years. In contrast, in cities such as Gary, Indiana, and Detroit, Michigan, the expected age at death for individuals in the bottom 5% of the income distribution was approximately 75 years. Low-income individuals living in cities with highly educated populations and high incomes also experienced the largest gains in life expectancy during the 2000s.

As noted, these correlations don’t show causation; some of the effect may have something to do with those—like immigrants—who self-select to move to cities. But the strength of these correlations (and their absence for other variables like access to medical care) signals a need for further scrutiny. As always, this kind of broad statistical work comes with caveats: the paper takes only a first-pass, high-level look at correlations between geographic variables and life expectancy. This analysis shows the simple and direct relationship between each tested variable and life expectancy—but doesn’t measure any interactions among variables. And the standard caveat applies: correlation doesn’t prove causation. Still, by examining the correlation between selected local characteristics and life expectancy, we can begin to answer some of our questions about what aspects of place affect this aspect of quality of life.

There’s long been a good body of circumstantial evidence to support the proposition that cities are healthier. We know that people in cities and denser environments tend to walk more, a key factor associated with longevity. They also tend to drive less, and suffer less from the toll of crashes and the sedentary lifestyles associated with car-dependent living. We know that cities promote higher levels of innovation and productivity, and that city economic success is correlated with education; these data suggest that there may be important spillover benefits in terms of life expectancy even for those with relatively low incomes.

“Live long and prosper” was Spock’s famous admonition in Star Trek. Together with the earlier research on the connections between place and intergenerational mobility, this new work highlighting the role of community characteristics in influencing life expectancy signals that successful cities may be an important contributor to realizing those twin goals.

A surprising message about the connection between place and life expectancy

There aren’t many economists whose research findings are routinely reported in the New York Times and Washington Post. But Raj Chetty—and his colleagues around the country—have a justly earned reputation for clearly presented analyses with detailed findings and direct policy relevance. Last year, they released the most detailed study yet on how place affects intergenerational mobility. And the paper they released Monday is the latest to draw a link between the qualities of urban spaces and the most profound issues of opportunity—in this case, life expectancy.

The bulk of the paper concerns the relationship between longevity and income, and has been well-reported elsewhere. It highlights patterns that anyone following issues of inequality in the US would have long suspected to be true—that life expectancy is strongly correlated with income, and that the gap in life expectancy between high- and low-income people has grown—but which are now confirmed, in detail, in hard numbers.

But because Chetty et al also analyzed their data by commuting zone (akin to a metropolitan area) and county, we can also draw important conclusions about the link between place and life expectancy, just as their earlier research linked place and economic opportunity. And it appears that strong urban environments can boost their residents’ longevity—especially for the low-income.

Before exploring the details, an important note: the Chetty paper takes only a first-pass, high-level look at correlations between geographic variables and life expectancy. This analysis shows the simple and direct relationship between each tested variable and life expectancy—but doesn’t measure any interactions among variables. And the standard caveat applies: correlation doesn’t prove causation. Still, by examining the correlation between selected local characteristics and life expectancy, we can begin to answer some of our questions about what aspects of place affect this aspect of quality of life.


Here we’ve reproduced a key chart from Chetty’s paper, which shows the correlation between a series of regional characteristics and the life expectancy of people in the bottom income quartile.

Credit: Chetty et al

Dots correspond to the point estimate, lines represent the 95 percent confidence interval of the estimate. Positive values indicate that life expectancy increases with increases in the local characteristic; negative values indicate that life expectancy decreases as the value of the local characteristic increases.

We can split these findings into three categories:

Confirming the obvious. Places where people smoke more and where obesity is more prevalent have shorter life expectancies; places where people exercise more have longer life expectancies. Regional variations in key health behaviors are reflected directly in the life expectancy of the poor.

Little evidence for the expected. Chetty et al looked at the role of a range of health care measures, the presence of social capital, and the role of inequality and unemployment, and found that regional variations in these characteristics had weak, if any, correlation with regional variations in life expectancy.

Unexpected importance of place. Strikingly, poor people tend to live longer in places with more immigrants, more expensive housing, higher local government spending, more density, and a better educated population. Consider each of the five characteristics in the category “Other Variables” at the bottom of Chetty et al’s Figure 8.

What these data show are a string of strong positive correlations. Places with more immigrants have longer life expectancy for the poor. The same holds for places with more expensive housing: here, too, the poor live longer. The poor also live longer in places with high levels of government spending, more density, and a better educated population. Taken together, these correlations suggest the importance of positive spillover effects from healthy urban places. Large cities tend to have higher levels of density. The most successful cities tend to attract more immigrants, have more expensive housing, and a better educated population. These data suggest that the poor have longer life expectancies in thriving cities.

The authors explain that their data make a strong case for a relationship between cities and greater longevity of the poor:

. . . the strongest pattern in the data was that low-income individuals tend to live longest (and have more healthful behaviors) in cities with highly educated populations, high incomes, and high levels of government expenditures, such as New York, New York, and San Francisco, California. In these cities, life expectancy for individuals in the bottom 5% of the income distribution was approximately 80 years. In contrast, in cities such as Gary, Indiana, and Detroit, Michigan, the expected age at death for individuals in the bottom 5% of the income distribution was approximately 75 years. Low-income individuals living in cities with highly educated populations and high incomes also experienced the largest gains in life expectancy during the 2000s.

As noted, these correlations don’t show causation; some of the effect may have something to do with those—like immigrants—who self-select to move to cities. But the strength of these correlations (and their absence for other variables like access to medical care) signals a need for further scrutiny.

There’s long been a good body of circumstantial evidence to support the proposition that cities are healthier. We know that people in cities and denser environments tend to walk more, a key factor associated with longevity. They also tend to drive less, and suffer less from the toll of crashes and the sedentary lifestyles associated with car-dependent living.

“Live long and prosper” was Spock’s famous admonition in Star Trek. Together with the earlier research on the connections between place and intergenerational mobility, this new work highlighting the role of community characteristics in influencing life expectancy signals that successful cities may be an important contributor to realizing those twin goals.

Note to journalists: Stop quoting bogus rent numbers

Hey reporters! We know you love rankings, especially ones that show some measure of widely shared pain, like traffic congestion or rent increases.

And some people, armed with a database and an infographic are more than happy to feed your hunger for this type of analysis.

But please: Stop using Abodo’s rent numbers. They’re wrong. They’re meaningless.

We’ve documented why these numbers are wrong. Abodo computes an average based on the apartments contained in its listings database. But Abodo has only a partial, unrepresentative, and constantly changing sample of the marketplace. It doesn’t cover all apartments, and its monthly estimates are biased by composition effects: the apartments included in the sample in one month aren’t necessarily comparable to those listed the next month, so changes in average prices reflect not overall inflation but the different kinds of apartments that happen to be for rent each month. As a result, Abodo’s rental inflation estimates fluctuate wildly from month to month.
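A toy demonstration of the composition effect, with invented rents: no individual apartment’s rent changes between months, yet the “average rent” of the listed sample jumps anyway, simply because a different mix of units happens to be listed.

```python
def average(rents):
    return sum(rents) / len(rents)

# Month 1: the listing sample happens to skew toward cheaper units.
march = [900, 950, 1000, 1050, 2400]
# Month 2: same market, same true rents, but more luxury units are listed.
april = [1000, 2300, 2400, 2500, 2600]

change = (average(april) - average(march)) / average(march)
print(round(100 * change, 1))  # apparent "rent inflation," in percent → 71.4
```

That 71 percent one-month spike is pure sampling noise: nothing in the underlying market moved at all.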

Here’s a classic case of why you should ignore them:

Abodo said that in March, 2016, Portland, Oregon had the highest level of rent increases in the country, up 14 percent from February.

 


Then, the next month, Abodo reported that rents in April 2016 in Portland were down seven percent over the previous month, and this represented the fifth biggest decline in the nation.


And Colorado Springs, according to Abodo, showed exactly the reverse trajectory: it was one of the ten biggest losers in March 2016 (down 10 percent compared to February), but then had the fifth biggest increase in April (up 13 percent).

There’s nothing real or meaningful about either of these data points, or about the precipitous (and utterly absurd) changes they seem to imply. If you take Abodo at face value—and clearly no one should—Portland’s rental price inflation problem essentially disappeared in the space of four weeks, and Colorado Springs went from catastrophic free-fall to boom in the same brief period.

Abodo’s listings driven rent estimates are essentially a random number generator when used to calculate month-over-month changes in rental price inflation. Shame on Abodo for producing them in the first place. And shame on any journalist who credulously repeats them.

Acoustical engineers talk about a “signal-to-noise” ratio.  What we have here is almost all noise and no signal.

Housing affordability and rental price inflation are real issues, but they’re not ones that Abodo’s data sheds any light upon. There are lots of reliable sources of data on rent levels and on rent inflation. We’ve set about compiling a friendly user’s guide to these data. So reporters, if you care about helping people understand what’s going on in the local housing market, please use these, or similar, resources.

The limits of technology: Let’s hack an app

A Hollywood staple of the 1930s and 1940s was the story of a plucky band of young kids—usually led by Mickey Rooney and Judy Garland—who, their dreams of making it on Broadway dashed by some plot twist, decide to stage a show of their own. They would find a barn or a warehouse, sing and dance and tell a few jokes on a makeshift stage, and then some theatrical bigshot standing in the wings would snap his fingers—and in the final scene, we’d see Mickey and Judy opening on the Great White Way.

If they remade one of those movies today, the plot would be nearly the same, but would undoubtedly revolve around software.

Case in point: Last month, New York University’s Rudin Center and the Transit Center hosted a one-day hackathon to come up with ideas for improving bus service on Staten Island. Like many places, transit routes on the island are mostly a slightly evolved hodge-podge based on historical streetcar lines, with service levels and timetables that have changed incrementally over the years. New York City’s fifth borough is its most suburban, and its residents have long average commutes, and a wide range of destinations, including Manhattan, Brooklyn and New Jersey.  

The hackathon was held March 5 and drew a solid cadre of laptop-wielding, data-infused programmers, who labored all day to produce software and presentations showing how the buses could indeed be better. The competition’s $2,000 Grand Prize went to Sri Kanajan for a map that visualized the concentration of bus trip origins and destinations.

Credit: Sri Kanajan

While colorful and interesting, the map is chiefly a restatement of the problem, rather than a proposed solution. It’s useful to have a data-driven, visual representation of the Staten Island origins (dispersed) and Manhattan destinations (concentrated, mostly in the financial district and Midtown), of current bus riders. And, from all accounts, the hackathon generated a healthy discussion of the key problems that need to be addressed in improving transit service.

In our view, anything that gets people talking about buses is generally a good thing. Buses are the underappreciated work-horses of the public transit system, and anything we do that gets them running more efficiently can produce widespread benefits. Case in point: Houston’s rearrangement of its bus routes to implement a gridded, timed-transfer system, providing more frequent service to the most heavily patronized destinations.

And we’re the last ones to argue that only expensive experts can have good ideas. It’s often very productive to have bright outsiders with a fresh perspective re-frame questions and brainstorm new ideas. But at some point—and optimizing transit routes is one of them—the task demands a level of expertise and practical experience that even the best programmer isn’t going to deduce from even the biggest data, at least not on the first attempt.

But maybe the bigger problem is that there’s a certain tyranny in the limitations of existing data: current transit ridership data reflects, in part, the pattern and quality of the current service system. It takes some risk-taking and abstraction to apply the lessons of experience in, say, other cities, in order to imagine the potential of a service model for which there is no current local exemplar. In addition, the organizers of the hackathon decreed that all proposals for rearranging transit service had to be “cost-neutral”—a condition that dodges the question of what level of resources ought to be devoted to bus service (an especially ironic constraint, given the billion-dollar-plus price tags of other New York City transit investments, like the $1.4 billion Fulton Center and $4 billion Calatrava WTC station).

The blinders clamped on by the limits of existing data, or by institutionally siloed problem definitions, aren’t just the province of plucky volunteers. Last month we pointed out that design firm IDEO’s efforts to reimagine transportation on behalf of one of its corporate clients were plagued by a fundamental failure to correctly (and broadly) define the problem, which led them to overlook the important roles that urban form and location choices could play in addressing transportation needs.

A couple of years back, a Brooklyn-based startup, then called “Significance Labs,” proudly unveiled a smartphone app to help the poor prepare applications for food stamps, trumpeting it as a way for them to more readily access nutrition benefits. Its founder, a veteran of Facebook and LinkedIn, said he wanted to extend SNAP to the “80 percent of the poor” who own smartphones, helping them cope with the ordeal of applying for benefits. Next City related the business plan:

“We’ve all been annoyed at some point by navigating government services like waiting in line at the DMV, or filing taxes or applying for a passport,” says Chen. “For low-income Americans, it’s no longer just an annoyance, it’s a necessity. The system, despite being more important for these Americans, isn’t any easier for them.”

Never mind that, according to Pew, aggregate smartphone ownership for the entire population was 68 percent in 2015 (and considerably lower for the poor). People who are poor, and who have difficulty getting SNAP benefits, are pretty much the least likely among us to own and use smartphones. And there’s precious little evidence that the application process is a big barrier to accessing SNAP benefits: eligible non-recipients are disproportionately elderly, disabled, or limited in language skills—groups among the least likely to own a smartphone, much less use one to apply.

What’s especially egregious is the implication that it is the “app” that’s solving the hunger or poverty problem. The secret sauce here is not the app, but the dollar value of SNAP benefits. And the problem with SNAP benefits is not that there isn’t an app, but that Congress has been steadily whittling away at the population eligible for benefits and reducing the value of the benefits themselves. Defining this as primarily a technology problem distracts attention from this central issue.

More generally, it’s far from clear that privately developed apps are necessarily a good way to provide access to public services. (There are, of course, plenty of useful free apps: Google Maps has seamlessly integrated transit directions in most cities, thanks to the widespread adoption of open data standards for transit schedules.) One article praising Easy Food Stamps called it a “Turbo Tax” for the SNAP program. Turbo Tax has certainly simplified the process of filing one’s taxes, for a price. But the company has also fiercely defended its turf: lobbying Congress to prevent the IRS from offering automatic, simple, and free tax preparation services, something that could replace the 1040EZ form.

At some point, this kind of techno-centrism is more than just tone-deaf. Blogger Alex Payne diagnosed this particular behavior in claims that BitCoin—the cybercurrency—would help the un-banked poor:

Silicon Valley has a seemingly endless capacity to mistake social and political problems for technological ones, and Bitcoin is just the latest example of this selective blindness. The underbanked will not be lifted out of poverty by conducting their meager daily business in a cryptocurrency rather than a fiat currency.

Open data, apps and technology are all important tools for addressing a range of problems. We really ought to use technology to broaden the audience and inject fresh perspectives into a wide range of discussions. But we shouldn’t let our fascination with technology lead us to believe that there are cheap and easy solutions to complex problems. Technology is usually a complement to, not a substitute for, resources, programs and investments. Open tech needs to be viewed as a way of expanding the discussion, bringing in new perspectives and widening the range of considered alternatives, rather than being treated as a panacea.

So let’s definitely hack away. But if you’re looking to figure out a real challenge for programmers, here’s a thought: instead of spending a billion dollars widening I-75 in Detroit, let’s have a freeway hackathon.

What lifecycle and generational effects tell us about young people’s homebuying

It’s been debunked, right? Though we’ve long been told that millennials want to live in cities, renting rather than owning, and biking instead of driving, a new round of articles are here to tell us that all of that is a myth: as soon as they find their financial footing, young people are buying homes in the suburbs just like previous generations.

These latest claims about housing are based on a new National Association of Realtors study. Based on its annual surveys of recent homebuyers, the NAR reports that millennials now make up 35 percent of those who had purchased homes in the past year, up from 32 percent in 2014, and 31 percent in 2013.

But, if you’ll forgive the paraphrase, these numbers—they do not mean what the NAR thinks they mean.

The basic problem is that the NAR—and other, similar reports (for example, about car-buying habits)—are confusing lifecycle effects with generational effects. Lifecycle effects describe how people’s behavior changes predictably as they get older. A generational effect describes how a cohort of people born at a certain time are different from another cohort of people born at a different time: so, for example, how a millennial behaves at age 25, compared to how previous generations—gen x-ers or boomers—behaved at age 25.

Life Cycle vs. Generational Effects

So when the NAR reports that millennials are buying more homes as they get older, we have to ask if this is a lifecycle effect or a generational one. After all, even if millennials are less likely to buy homes compared to earlier generations, it would be very strange if millennials bucked the well-established lifecycle effect of being more likely to buy a home in their 30s compared to their 20s.

The NAR statistics about market share can’t answer this question, because they compare the home buying habits of 18- to 33-year-olds in 2013 with 20- to 35-year-olds in 2015. For market share to remain the same across these samples, the 34- and 35-year-olds added in 2015 would probably have to buy homes at the same rates as the 18- and 19-year-olds who were dropped. But people in their mid-30s have always bought more homes than people just out of high school, and that lifecycle effect doesn’t signal any shift away from the underlying generational decline in homebuying.

In effect, all the NAR has proven is that, when they are older, millennials buy more homes than when they were younger. But as we’ll show below, at any given age, millennials are still less likely to be homeowners than previous generations at the same age.
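To make the confusion concrete, here is a toy numerical sketch. All rates and cohort sizes below are hypothetical, not the NAR’s actual figures; the point is only to show how millennial purchases can rise between surveys even when every age group buys at a lower rate than before, simply because the sampled age window shifts upward.

```python
# Toy illustration of lifecycle vs. generational effects.
# All rates and cohort sizes here are hypothetical, for illustration only.

# Hypothetical annual home-buying rates by age band in two survey years.
# Every overlapping age band buys at a LOWER rate in 2015 (a generational decline)...
rates_2013 = {"18-19": 0.01, "20-29": 0.04, "30-33": 0.08}
rates_2015 = {"20-29": 0.035, "30-33": 0.07, "34-35": 0.10}

# ...but the "millennial" window shifts from ages 18-33 to ages 20-35,
# swapping out low-buying teens for high-buying 34- and 35-year-olds.
population = {"18-19": 100, "20-29": 500, "30-33": 200, "34-35": 100}

buyers_2013 = sum(r * population[age] for age, r in rates_2013.items())
buyers_2015 = sum(r * population[age] for age, r in rates_2015.items())

print(round(buyers_2013, 1))  # 37.0
print(round(buyers_2015, 1))  # 41.5 -- more buyers, despite lower age-specific rates
```

The lifecycle effect (older people buy more homes) swamps the generational effect (each cohort buys less than its predecessor did at the same age), which is exactly why market-share comparisons across shifting age windows mislead.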

Comparing Millennials to Previous Generations

To check for generational change, you have to compare identical age cohorts over time. And if you do that, it’s clear that millennials are, in fact, less likely to buy a home than earlier generations. Homeownership for every cohort under age 65 is depressed well below historic levels, especially for people now in their thirties, and this decline is continuing, not reversing.

 

This chart shows how homeownership rates varied by age in two years: 2001 and 2014. You can see that in both years, homeownership rates increase steeply for persons in their late 20s and early 30s, and then grow more slowly as people age into their 50s or 60s.

But it’s also clear that at every age below 60, homeownership rates are now significantly lower than they were in 2001. If we chart the difference itself, it looks like this:

 

For those aged 21 to 60, homeownership rates have fallen by six to 10 percentage points since 2001. The biggest declines have been for those in their thirties. (Note, too, that while some accounts make sharp distinctions between generations, this chart shows how continuous and consistent the relationship is between age and the decline in homeownership, with the behavior of the oldest millennials flowing into that of the youngest gen-Xers.)

To distinguish between a lifecycle change and generational change, it can also be helpful to focus on a single age group over a long period of time. In the following chart, we look at the homeownership rates of 32-year-old and 34-year-old heads of household in each year from 2001 to 2014.

 

There are two obvious takeaways. First, 34-year-olds are more likely to be homeowners than 32-year-olds (the blue line is higher than the red line). Second, over time, the homeownership rate of both 34-year-olds and 32-year-olds has gone down (both lines slope down to the right). Again: This signals a generational change in home buying tendencies. Both 32- and 34-year-olds are less likely in 2014 (by a wide margin) to be homeowners than they were in 2001. And if there were evidence that in the last few years millennial home buying tendencies were reverting to those of previous generations, it would show up here, as a reversal or upswing in these lines. But they continue to slope down, indicating that homeownership, far from rebounding to historic patterns, is continuing to become less common among this generation than its predecessors.

We can use the change in age-specific homeownership propensities to compute how many fewer millennials own homes today than would have been the case if they had behaved as previous generations did. To do this, we multiply the 2001 homeownership rate for each age group by the number of household heads in 2014, and compare the predicted number of homeowners in each age group with the actual number reported by the census. The calculation is shown in the following table. In all, we estimate that there are 1.7 million fewer millennial home-owning households today than would have been the case if homeownership were as prevalent today as it was in 2001.
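The counterfactual arithmetic is simple enough to sketch in a few lines. The numbers below are hypothetical placeholders, not the actual ACS rates and household counts used in the table.

```python
# Predicted homeowners = 2001 age-specific ownership rate x 2014 household heads.
# The gap between predicted and actual is the homeownership "shortfall."
# All figures below are hypothetical placeholders, not the ACS values in the table.
rate_2001 = {30: 0.45, 32: 0.50, 34: 0.55}    # ownership rate by age, 2001
rate_2014 = {30: 0.36, 32: 0.41, 34: 0.46}    # ownership rate by age, 2014
heads_2014 = {30: 2_000_000, 32: 2_100_000, 34: 2_200_000}  # household heads, 2014

predicted = sum(rate_2001[age] * heads_2014[age] for age in heads_2014)
actual = sum(rate_2014[age] * heads_2014[age] for age in heads_2014)
shortfall = predicted - actual   # "missing" homeowner households

print(f"{shortfall:,.0f}")  # 567,000 with these hypothetical inputs
```

Running this calculation across every single year of age with real ACS data is what produces the 1.7 million figure.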

“Just delaying” homeownership results in big change in real estate

Finally, there’s a statistical claim being made here that millennials are just like other generations in their home buying habits and preferences—they’re just buying homes and moving to the suburbs later in life. There are two problems with this argument. First, a one- or two-year increase in the average age of first home purchase permanently reduces the number of homeowners. It automatically means that those who own homes will be homeowners for about 2.5 to 3 percent less of their lifetime—over the whole population, over several decades, that translates into a permanent decline in the homeownership rate. Second, the implication is that somehow, though they wait longer to buy their first home, millennials will eventually catch up to historic levels of homeownership . . . someday. But there’s no evidence for this “catch up” theory—in fact, homeownership levels are dropping for everyone up through about age 60. Americans are both delaying homeownership until they’re older and buying fewer homes over their lifetimes—which together represent a huge change in how US housing markets work.

Far from debunking the story about millennial home-buying habits being different, the latest data confirm that serious, long-term changes are afoot. We’re well past the dark days of the 2009 recession. The economy has been growing again for six years, and home buying rates at all ages remain depressed well below historic levels—nowhere more so than for those millennials in their mid-30s, who are fully ten percentage points less likely to be homeowners than the Gen-Xers of 2001 were at the same age.

Data Notes:

These data are taken from the American Community Survey for the years indicated. There are many different definitions of what constitutes the “millennial” generation: for these tabulations we follow the NAR’s definition of those born between 1980 and 2000.

Introducing the Pedestrian Pain Index

America’s pedestrians are in pain.

Every day, tens of millions of Americans collectively waste millions of hours stuck waiting on the side of streets for car traffic to get out of their way. We estimate that the value of time lost waiting to walk totals $25 billion annually.

Today, City Observatory announces the launch of our latest data product: the Pedestrian Pain Index (PPI). Following the techniques developed over the past thirty years by the highway-oriented Texas Transportation Institute (TTI), the PPI uses similar methods and assumptions to calculate the amount of time pedestrians lose each year waiting their turn to cross streets so that cars may proceed.

Credit: Billie Grace Ward, Flickr

We attribute 100 percent of pedestrian wait time as “delay” due to automobiles for two reasons. First, our methodology mirrors exactly that used by the TTI, which counts traffic delay as any slowdown in traffic below the level that motorists enjoy at so-called “free flow speeds,” even if the free flow speed is higher than the posted speed limit. Second—and perhaps more importantly—pedestrians are only forced to wait at intersections because of vehicle traffic. In pedestrian-only environments, there is no need for “Don’t Walk” signs. In that sense, traffic lights and crosswalks are not walking infrastructure—in places without cars like inside shopping malls or in Venice, Italy, there is no need to have signals to tell people when they can walk or paint lines to show people where they can walk.

There’s little question that walking has been made a second-class form of transportation—and that pedestrians regularly feel the pain of being subordinated to automobiles. One of the best examples is the “beg button,” which can delay law-abiding pedestrians up to a minute and a half before they can cross a city street—a point illustrated by Gizmodo.

Here’s how we came up with our PPI estimate. According to data tabulated by John Pucher and his colleagues from the most recent National Household Transportation Survey, the typical American spends about 112 hours walking about 37.7 miles per year. We estimate that a pedestrian spends about five percent of a typical walk waiting for traffic, either while crossing the street at an un-signalized location or while waiting for a traffic signal. Our five percent estimate corresponds to waiting about 55 seconds during the average 18.5 minutes that each American walks on a daily basis. For those in low-traffic, low-density areas, these 55 seconds will likely be an overestimate; in urban settings with traffic lights on most corners—where a disproportionate share of walking occurs—55 seconds will be an underestimate.

We multiply our daily delay estimate of 55 seconds per person by 365 days and by the roughly 300 million Americans five years of age or older to come up with an estimate of about 1.6 billion hours of pedestrian delay experienced by Americans annually. Valuing that delay at $15 per hour—a figure somewhat lower than that used in studies of automobile congestion delay—produces a total estimate of $25.2 billion in time lost in pedestrian pain waiting for automobiles.
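The arithmetic can be replicated directly from the assumptions stated above (55 seconds per person per day, roughly 300 million Americans age five and older, and $15 per hour):

```python
# Pedestrian Pain Index arithmetic, using the assumptions stated in the text.
DAILY_WAIT_SECONDS = 55        # estimated wait per walker per day
WALKERS = 300_000_000          # Americans age 5 and older (approximate)
VALUE_PER_HOUR = 15            # dollars; below typical auto-delay valuations

hours_per_year = DAILY_WAIT_SECONDS * 365 / 3600 * WALKERS
annual_cost = hours_per_year * VALUE_PER_HOUR

print(f"{hours_per_year / 1e9:.2f} billion hours")   # 1.67 billion hours
print(f"${annual_cost / 1e9:.1f} billion")           # $25.1 billion
```

The small difference from the $25.2 billion figure comes from rounding in the inputs; the point is the order of magnitude, not the last decimal place.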

Credit: michael brooking, Flickr

The Pedestrian Pain Index is a first, rough approximation of the time lost by pedestrians due to automobile traffic. Constructing this index is complicated by the fact that, unlike the case for automobile travel, we have very limited data on walking travel. As Tom Vanderbilt put it in Slate, “Walking in America is a bit like sex: Everybody’s doing it, but nobody knows how much.” It’s a classic instance of the old adage “if you don’t count it, it doesn’t count.” Lacking any data about pedestrians in most settings, the costs and consequences of land use and engineering decisions on walking are simply invisible—and therefore ignored.

Traffic engineers have begun to recognize that the waits that signals impose on pedestrians carry major costs and discourage people from walking. The National Association of City Transportation Officials (NACTO) writes in the Urban Street Design Guide:

Long signal cycles, compounded over multiple intersections, can make crossing a street or walking even a short distance prohibitive and frustrating. This discourages walking altogether, and makes streets into barriers that separate destinations, rather than arteries that stitch them together.

According to the 2012 National Traffic Signal Report Card (yes, there really is such a thing: it gives us a D+), the United States has about 311,000 traffic signals (about 1 for every 1,000 Americans), with an estimated replacement cost of about $83 billion. Most of these signals control pedestrian travel, as well as vehicles. Pedestrians face delays not just at traffic signals, but when crossing roads at un-signalized intersections, and when crossing mid-block (as is frequently necessitated by the serpentine, uninterrupted roadways found in most US suburbs).

Those of you who regard this as a bit of early April data-whimsy, think again. If anything, the estimates presented here profoundly understate the travel time costs that our auto-centric transportation system imposes on those who would like to walk. Recent national survey data collected by Jennifer Dill and her colleagues at Portland State University show that walking is a highly valued form of transportation. Two-thirds of Americans of all ages agreed or strongly agreed with the statement “I like walking.” Younger Americans preferred walking to driving, with the share of Millennials saying they like to walk outpacing those who agreed they liked driving by 12 percentage points.

Significantly, the most commonly cited barrier to walking (identified by two-thirds of the entire sample) was the relative remoteness of destinations—and destinations are more remote because they are scaled to the size of automobile market-sheds, and because parking requirements (coupled with bans on mixed-use zoning) mean that it is uneconomical or illegal to build communities that are convenient for walking. The growing demand for walkable communities, coupled with their relatively short supply, is one of the key reasons that values for walkable residential and commercial areas have been rising faster than those for auto-dependent locations.

Credit: annshi, Flickr

Like last year’s Cappuccino Congestion Index, the Pedestrian Pain Index illustrates that, armed with a modicum of data and a few assumptions, one can easily craft an impressive (or at least impressive-sounding) estimate of the dollar cost of some delay that we face in our lives. But being able to monetize delay is not the same thing as saying it’s worth spending scarce public resources to remedy it.

In a complex, crowded, and interconnected world, no system can be designed so that no user ever experiences a moment of delay. While it is possible to tally and monetize the value of time spent waiting, that doesn’t necessarily mean the problem is a serious one. It would be—dare we say it—“foolish” to insist we ought to spend scarce public resources to lessen what are in many cases mostly private costs. That’s something to remember the next time you hear anyone quoting impressive-sounding numbers from the Texas Transportation Institute—or anyone else—about the billions and billions lost to traffic congestion.

How brain drain measures can mislead

A new measure purports to gauge city attractiveness by measuring whether local college graduates stick around. But these raw numbers can be a misleading indicator, and we’ll show how it can be adjusted to more accurately measure how good a job a city is doing of producing and retaining talent.

There’s powerful evidence that the educational attainment of population is the single most important factor affecting a region’s economic success. We’ve observed that you can explain about 60 percent of the variation in per capita incomes among metropolitan areas simply by knowing what fraction of the adult population has completed a four-year college degree. While there are many ways to increase a region’s talent base, one core strategy is doing a great job in educating your own young people—and then building the kind of community that they will want to stay in.

Is Detroit doing particularly well in fighting brain drain? Credit: Bryan Debus, Flickr

 

Does retaining local graduates mean you’re stemming brain drain?

But measuring the migration of talented workers can be tricky. In a recent article in CityLab, “The U.S. Cities Winning the Battle Against Brain Drain,” Richard Florida presents some findings on the tendency of college graduates to stay in the metropolitan areas where they got their degrees. Using data cleverly assembled by the Brookings Institution’s Jonathan Rothwell and Siddharth Kulkarni from LinkedIn profiles, Florida shows which cities have the highest and lowest levels of retention of college graduates.

Some of the results are, at least at first glance, surprising. According to the Brookings figures, the Detroit metro has retained 70.2 percent of its graduates—one of the highest figures in the nation. This seems surprising, because the Detroit metro area actually experienced a 10 percent decline in the number of 25- to 34-year-olds with a four-year degree between 2000 and 2012 (as we documented in our report, “Young and Restless”).

Conversely, fast-growing tech powerhouse and hipster haven Austin, Texas ranks among the ten lowest cities, hanging on to just 38.2 percent of its recent college graduates.

What’s going on here?

Well, it turns out that this particular set of college graduate retention statistics tells us a lot more about the size and characteristics of the local higher education system than it does about the attractiveness of the local city, either in terms of its amenities or its job prospects. In other words, it’s more about the supply of college graduates produced by local colleges and universities than the demand of college graduates for living in a particular city.

Different cities have different kinds of higher ed systems

To understand why, think about two kinds of cities. In a college town like Madison, WI or State College, PA—or even larger cities with high concentrations of college students, like Boston or Austin—the local colleges or universities are effectively a big export industry, producing far more degrees than the local economy demands, and then shipping them out to a statewide, regional or national market. Students come to Austin from all over to get a degree at the University of Texas, and many return to their hometowns—or relocate somewhere else for a job—immediately after graduating.

The University of Wisconsin exports graduates all over the state, country, and world. Credit: Ron Cogswell, Flickr

 

In other cities, the local colleges and universities aren’t so “export-oriented.” In these cities, local higher education mostly serves the local market. As a result, graduates in these cities are more likely to remain in the city where they studied, because that’s where they started out. These cities will have a much higher “retention rate” than export-oriented higher education markets, but that has everything to do with who’s coming in, and much less to do with how attractive graduates find the city when they get out. The key here is that the difference is in the higher education institutions, not the cities they’re located in.

As part of our research for the Talent Dividend Prize—a competition funded by the Kresge and Lumina Foundations to see which US metropolitan area could achieve the largest increase in the number of two- and four-year college degrees awarded to local students over a four-year period—we assembled data from the Integrated Post Secondary Education Data System (IPEDS) on the number of college degrees awarded in large metropolitan areas. Among the 50 largest metropolitan areas, the number of BA and higher degrees awarded per 1,000 population in 2012 varied from a low of 2.8 (in Riverside, California) to a high of 17.5 (in Phoenix, Arizona). The typical large metropolitan area grants about 8 BA or higher degrees per 1,000 population annually.

In the following table, we’ve matched our BA degree award rate data with the information provided by Brookings on the BA retention rate for the ten highest rated and ten lowest rated metropolitan areas.

 

There’s an obvious pattern here: Metropolitan areas with high levels of BA retention have very small higher education establishments (as measured by the number of BAs awarded per 1,000 population). Conversely, metro areas with low levels of BA retention have, on average, much higher rates of BA granting. This is exactly what we’d expect when thinking about cities that are home to large universities that attract many students from elsewhere.

To get a better sense of whether a metro area is experiencing a brain drain or a brain gain, we can combine these data. The final column of the table does that by multiplying the number of BA degrees issued per 1,000 population by the BA retention rate. This is a rough estimate of the number of additional BA degree holders (per 1,000 population) who reside in a metro area after graduation.

These data come closer to our intuition about which places are gaining talent. Larger metropolitan areas (New York, Los Angeles) have relatively high rates of local BA growth (5.9 and 5.2 per 1,000 population, respectively). Cities with strong tech economies, like San Jose (5.7), also do well on this measure. Conversely, economically challenged places don’t do as well—Detroit’s local BA per 1,000 population rate is 2.9; even though it does a relatively good job of retaining those who graduate locally, the output of local higher education institutions is so small (relative to the size of the region) that it is not gaining talent as much as the other areas on this list.
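The adjustment boils down to one multiplication per metro. Here is a sketch using the post’s approximate figures for Detroit (a full table would draw on the Brookings retention data and IPEDS degree counts):

```python
# Net new BA holders per 1,000 residents = degrees awarded per 1,000 x retention rate.
# Detroit's approximate figures from the text; other metros would substitute
# their own Brookings retention rates and IPEDS degree counts.
degrees_per_1000 = 4.0     # BA+ degrees awarded annually per 1,000 population
retention_rate = 0.702     # share of local graduates who stay in the metro

net_new_bas_per_1000 = degrees_per_1000 * retention_rate
print(round(net_new_bas_per_1000, 1))  # 2.8, close to the 2.9 figure cited above
```

The small gap from the 2.9 figure reflects rounding in the inputs; the exact calculation uses the unrounded degree counts.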

So, as it turns out, the retention rate of college graduates is at best an incomplete indicator of whether cities are stemming the tide of a brain drain (or not). If local higher education institutions are small, and chiefly serve students from local high schools, a high retention rate is not necessarily a sign of success. Conversely, if your area colleges and universities are large and attract students from around the nation, a low retention rate may not be a sign that you’re doing poorly.

Cities matter even before graduation

As these data make clear, the competition for talent begins long before students receive their bachelor’s degree. The number and kind of local colleges and universities is a decisive asset in positioning a city to attract talent.

At least a portion of the brain drain dynamic occurs when students decide where they are going to college. For metros like Detroit, which have relatively fewer local universities than the typical metropolitan area, more students are going to leave the local metro area to get a degree. And even if a high share of those who study locally stick around (as appears to be the case), that effect can easily be swamped by those who leave for college and never return. (Detroit’s colleges and universities award about 4 degrees per 1,000 population annually, compared to about twice that many for the typical metropolitan area.) Because local universities and colleges are so small relative to the average, Detroit has to retain 100 percent of its graduates to gain as many new BA degree holders, proportional to its population, as a typical metro that sees half of its graduates migrate away.

For some students, the city in which their university is located can be an important factor in deciding where to enroll. Part of Philadelphia’s “Campus Philly” recruiting program—aimed at out-of-town students—has been to promote the city’s urban amenities as one of the advantages of choosing one of its many local colleges and universities. The program follows up with activities and internships that look to connect students to the community while in school and after graduation.

Nurturing, attracting and retaining talent are all mutually reinforcing strategies for bolstering the regional economy. Cities need to pay attention to the size and quality of their colleges and universities, as well as to build the kind of communities that they (and other well-educated persons) will want to live in. Because this process is so multi-faceted, no single measure captures all of the dynamics at play. What we’ve provided here shows that a simple retention rate is not enough to understand whether a city is doing well in attracting and retaining college grads.

Data notes

The numbers for Phoenix are something of an anomaly. The University of Phoenix, the nation’s largest distance-learning institution, reports its BA degrees to IPEDS as being awarded in the Phoenix metropolitan area, even though nearly all of its students are located in other metropolitan areas. This greatly exaggerates the region’s local output of graduates.

Why mixed-income neighborhoods matter: lifting kids out of poverty

There’s a hopeful new sign that how we build our cities—and specifically, how good a job we do of building mixed-income neighborhoods that are open to everyone—can play a key role in reducing poverty and promoting equity. New research shows that neighborhood effects—the impact of peers, the local environment, and neighbors—contribute significantly to success later in life. Poor kids who grow up in more mixed-income neighborhoods have better lifetime economic results. This signals that an important strategy for addressing poverty is building cities where mixed-income neighborhoods are the norm, rather than the exception. And this strategy can be implemented in a number of ways—not just by relocating the poor to better neighborhoods, but by actively promoting greater income integration in the neighborhoods, mostly in cities, that have higher than average poverty rates.

In the New York Times, economist Justin Wolfers reports on groundbreaking work by Eric Chyn of the University of Michigan, which found that previous research may have understated the effect of neighborhoods on lifetime earnings and employment. The paper shows that moving low-income children from very poor neighborhoods to less poor neighborhoods can have a major positive effect on their life chances.

Most media outlets have covered this story as reinforcing the importance of “mobility programs”: that is, policies that encourage residents of very low-income neighborhoods to move to more economically integrated areas, usually with some form of direct housing assistance like vouchers. And the ability to move to neighborhoods with good amenities and access to jobs, without having to pay unsustainable amounts for housing or transportation, is a crucial part of creating more equitable, opportunity-rich cities.

But the coverage may be missing the other half of the policy equation: Chyn’s paper adds to the evidence about the value of mixed-income neighborhoods in general, not just mobility. That means it’s just as important that cities find a way to invest in low-income neighborhoods to bring opportunity to them, rather than simply trying to move everyone out.

Why the new research is so important

The results of the voucher demonstration illustrate that there can be large benefits from even modest changes in economic integration. The average household moved about 2 miles from their previous public housing location, and still lived in a neighborhood that had a higher than average poverty rate. Chyn’s results show the effects of moving from neighborhoods dominated by public housing (where the poverty rate was 78% on average), to neighborhoods that had poverty rates initially 25 percentage points lower, on average. Most participants still lived in neighborhoods with far higher levels of poverty than the typical American neighborhood. But compared to their peers who remained in high poverty neighborhoods, they enjoyed better economic results later in life.

This chart shows that children who moved out of very low-income neighborhoods were about 5-10 percentage points more likely to be employed as adults.

 

In this chart, you can see the growing earnings benefit to children who left very low-income neighborhoods in their adult years.

 

This study—on the heels of a widely-cited study led by Harvard economist Raj Chetty released last year—adds even more heft to the growing body of evidence that helping people with lower incomes move to mixed-income neighborhoods can play a huge role in spreading economic opportunity.

The new research improves on older studies by getting rid of an important confounding factor that affected some earlier research by more closely replicating a true “natural experiment.”

The experiment was made possible by the decision to demolish large scale public housing in Chicago in the early 1990s. The families dislocated from the old style public housing—which were in neighborhoods of extremely concentrated poverty—had to find new housing. The Chicago Housing Authority (CHA) provided the families with vouchers to move to privately operated rental housing, typically in neighborhoods with far lower levels of poverty. The kids who moved to new lower-poverty neighborhoods saw a significant increase in their lifetime earnings compared to otherwise similar kids who remained in the public housing that wasn’t torn down.

This natural experiment has an important advantage over the “Moving to Opportunity” (MTO) housing experiment conducted by the federal government in the 1990s. In MTO, public housing households had to apply for a voucher lottery. This created the possibility that the people who had applied were particularly motivated and able to make the transition to a new neighborhood. That would mean that even those households that lost the lottery might have better-than-average outcomes, reducing the gap between those who moved and those who didn’t, and making the effect of moving appear smaller than it really was.

But unlike MTO, the participants in the CHA relocation program were not self-selected. They represented a more or less random cross-section of public housing residents, and so the differences between the outcomes of treatment groups (those who got vouchers) and those who didn’t (control groups) could be treated as purely the result of the voucher program.
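The logic of the two designs can be sketched in a toy simulation (all numbers are invented for illustration, not taken from either study): when motivated lottery losers find ways to move on their own, the measured gap between winners and losers understates the true effect of moving, while a randomized relocation recovers it.

```python
import random

random.seed(1)
TRUE_EFFECT = 10.0  # invented: adult earnings gain from moving (arbitrary units)

def outcome(motivation, moved):
    # outcome = noise + motivation + gain from moving (all invented units)
    return random.gauss(0, 1) + motivation + (TRUE_EFFECT if moved else 0.0)

# MTO-style lottery: only motivated households apply, and motivated
# losers often manage to move to better neighborhoods on their own.
applicants = [random.gauss(2, 1) for _ in range(20_000)]
winners = [outcome(m, moved=True) for m in applicants[:10_000]]
losers = [outcome(m, moved=random.random() < 0.4) for m in applicants[10_000:]]
mto_gap = sum(winners) / len(winners) - sum(losers) / len(losers)

# CHA-style natural experiment: a random cross-section of residents,
# relocated (or not) without any self-selection.
residents = [random.gauss(0, 1) for _ in range(20_000)]
treated = [outcome(m, moved=True) for m in residents[:10_000]]
control = [outcome(m, moved=False) for m in residents[10_000:]]
cha_gap = sum(treated) / len(treated) - sum(control) / len(control)

print(f"true effect {TRUE_EFFECT}, lottery estimate {mto_gap:.1f}, "
      f"random-relocation estimate {cha_gap:.1f}")
```

In this sketch the lottery comparison recovers only part of the true effect, because 40 percent of the losers moved anyway; the random-relocation comparison recovers essentially all of it.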

The policy implication: Mixed-income neighborhoods promote opportunity

But it’s important to put this finding in a broader context. Evidence about mobility programs is, in turn, part of a larger body of research showing that neighborhoods matter for economic opportunity. While the focus has been on helping people leave neighborhoods with high concentrations of poverty, it’s also possible to bring investments and resources to these communities.

Of course, when that happens, it often happens in conjunction with—or even because of—a return of middle- and upper-income people to the neighborhood. In other words, gentrification.

For some, that’s enough to reject that policy avenue. But some research suggests we ought to give it another look. News from neighborhoods in San Francisco and Brooklyn, where incredibly high levels of demand and tight supply have led to spiraling housing costs, makes it sound like gentrification inevitably and utterly displaces all of a neighborhood’s residents. But while housing costs can be an issue, a recent study from the Philadelphia Federal Reserve suggests that displacement is far less widespread than commonly thought—and another study of New York public housing residents in gentrifying areas showed an increase in earnings and school test scores.

This research also occurs against a backdrop of widening inequality and economic segregation. And inequality has an important spatial dimension: low-income and high-income households are increasingly segregated from one another in separate neighborhoods. As we’ve documented in our research at City Observatory, the effects of this segregation on the poor, in the form of the growing concentration of poverty, are devastating, and the number of Americans living in neighborhoods of concentrated poverty in large metropolitan areas has more than doubled since 1970, from 2 million to 4 million.

While the spatial response, as we’ve said, has focused on mobility, enabling the poor to move to higher income neighborhoods is challenging for a number of reasons. The raison d’être of many suburbs is exclusion—using zoning requirements to make it essentially impossible for low income households to afford housing—and efforts by outside organizations or governments to reduce these barriers have been difficult. If we want to make the biggest difference in economic integration, we need to try to integrate low-income neighborhoods as well as high-income neighborhoods.

Neighborhoods for everyone

Taken together, the new Chyn results add to the growing body of literature on neighborhood effects and strongly suggest that we ought to be looking for all kinds of opportunities, large and small, to promote more mixed-income neighborhoods. Even small steps—like lowering the poverty rate in a kid’s neighborhood from 75 percent to less than half—pay clear economic dividends.

But we also need to remember that integration isn’t just about moving around people with low incomes. We can reinvest in neighborhoods of concentrated poverty in ways that improve quality of life and enhance opportunity in place.

Flood tide–not ebb tide–for young adults in cities

The number of young adults is increasing, not declining, and a larger share of them are living in cities.

Yesterday’s New York Times Upshot features a story from Conor Dougherty–”Peak Millennial? Cities Can’t Assume a Continued Boost from the Young.” It questions whether the revival in city living is going to ebb as millennials age, and the number of persons turning 25 decreases in the years ahead.

In our view, the story rests on two mistaken premises: first, that the growth of young adults in cities has been driven primarily by the large size of the Millennial generation, and second, that the affinity of young adults for cities is waning. Our research shows that neither of these premises is true. The movement of young adults to the city has been gathering steam for more than 25 years, and the number of young adults in cities was increasing during the 1990s–at a time when the number of 25 to 34 year olds nationally was actually declining. And the relative preference of young adults for city living continues to increase.

The argument that as millennials age they will increasingly move to the suburbs mistakenly conflates life-cycle effects with generational change. As individuals age, the likelihood that they’ll live in different places changes. After high school, there’s a large migration to college towns. Young adults, starting out in their careers, are disproportionately likely to rent and live in cities compared to other Americans. As they get older, find partners and have children, they’re more likely to own homes and live in the suburbs. This general pattern of succession holds for most recent generations as they age. But what’s different–and important–is how many people are in each generation, and how long they remain in each stage in this process. What’s happening now is that today’s young adults–the so-called Millennial generation–are both more numerous than the immediately preceding generation and are demonstrating a greater propensity to spend a larger share of their early adult years living in cities. That’s the essence of what our research (and that of many others) shows has been happening.

The Upshot story takes issue with this thesis in two respects. First, building on an argument advanced by USC’s Dowell Myers–which we addressed when it first came out–Upshot says that the fortunes of cities will wane because the number of persons turning 25 years of age will decline slightly in the next decade. Second, Upshot says that as individual millennials get older, they’ll tend to move to the suburbs in greater numbers.

The essence of the Upshot story is two claims: (1) that the impact of millennials on cities will decline because their numbers will decrease, and (2) that their propensity to choose to live in urban settings will decline. Let’s consider each of these ideas in turn.

Numbers

The number of 25-34 year old millennials will increase by about 3 million over the next 7 years; this is the stage in the life cycle when they are most likely to live in cities.

Is the move to cities being buoyed by the rising number of Millennials, and will a coming decline in their numbers cause a decline in cities?

What’s interesting is that in 2015—for the first time—Millennials (those born between 1980 and 2000) constitute all of the persons aged 25-34. At the time of the last decennial census (2010), about half of the 25- to 34-year-old age group was composed of people in the tail end of generation X, and half were the early wave of Millennials. So strikingly, what the Census data shows is that the total number of 25 to 34 year olds in the US will increase from now through 2024. This chart shows the Census Bureau’s estimates of population aged 25 to 34 based on historical data through 2014 and its projections through 2035.


During the 1990s the number of 25 to 34 year olds actually declined as Baby Boomers aged out of this age group and were gradually replaced by the numerically smaller Generation X. Between 2000 and 2010 the number of Gen-Xer 25-34 year-olds increased slowly, with all of the aggregate increase in this age group being recorded after 2008. So, to the extent there was a movement back to the city in the 1990s, and the first half of the 2000-2010 decade, it was propelled not by an aggregate increase in the number of 25-to-34 year olds in the nation, but the changing relative preference of young adults for urban locations.

So, what we—and others—have recorded as the movement of young adults back into cities has, until very recently, had little to do with the size (or preferences) of the Millennial generation. In fact, as they turn 25, and as they now dominate this age cohort, the next decade will be the time when the Millennial generation’s effect on cities will be most fully felt. Rather than declining, the number of 25-to-34 year olds in the United States will increase each year from now through 2024, rising from 44.1 million in 2015 to 47.6 million in 2024. In reality, the Millennial wave of urbanism is just now hitting the beach.

The outlook after 2024 (when the 25-34 year olds will increasingly be “post-Millennials”) is not for a dramatic demographic collapse. Rather than a peak, the young adult population stabilizes at a fairly high plateau above 47 million 25 to 34 year olds through 2035. So there is little basis for forecasting a decline in the key population group that has driven urban growth.

Preferences

With each passing year, 25-34 year olds, especially those with a four-year college degree or more education, are more likely to live in close-in urban neighborhoods than other Americans.

Are young adults becoming less likely to live in cities?

At City Observatory, we’ve carefully tracked the location of urban residents in America by age group over the past three decades. We’ve measured the relative preference of young adults for close-in urban neighborhoods (census tracts within three miles of the center of the central business district). The relative preference is the probability that a young adult will live in a close-in neighborhood compared to the probability that a resident of any age would live in such a neighborhood. These figures are drawn from Table 5 of our Young and Restless report; we’ve computed relative preference by dividing the probability that a person aged 25 to 34 lives within a three-mile radius of the center of the CBD of one of the 51 largest metropolitan areas by the probability that the average resident of a metropolitan area lives within this radius. If 11 percent of 25 to 34 year olds live in the 3-mile radius, and 10 percent of the population as a whole lives inside that radius, the relative preference ratio is 110 percent (11 percent/10 percent), meaning that a 25 to 34 year old is 10 percent more likely than the typical resident to live in this area.
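The relative-preference arithmetic can be expressed as a simple ratio; here is a minimal sketch using the hypothetical 11 percent and 10 percent shares from the example above:

```python
def relative_preference(young_share, overall_share):
    """Ratio of the probability that a 25-34 year old lives in a close-in
    neighborhood to the probability that the average metro resident does,
    expressed as a percentage (100 = no difference)."""
    return 100 * young_share / overall_share

# Hypothetical shares from the example in the text: 11 percent of young
# adults vs. 10 percent of all metro residents live within 3 miles of the CBD.
rp = relative_preference(0.11, 0.10)
print(f"relative preference: {rp:.0f} percent")
```

A value of 110 means young adults are 10 percent more likely than the typical resident to live close in; the 2010 figure of 151 on this scale means 51 percent more likely.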

Since 1980, the relative preference of young adults for close-in neighborhoods has increased steadily. In 1980, young adults were 10 percent more likely than all metro residents to live in these neighborhoods; in 1990 12 percent more likely, in 2000 32 percent more likely; and in 2010, 25 to 34 year olds were fully 51 percent more likely to live in close-in neighborhoods than other metro residents. The relative preference of 25 to 34 year olds with a four-year degree to live in such neighborhoods was even higher:  over 100 percent in 2010.

Another way of looking at this is examining census data on where young adults live. The University of Virginia’s Luke Juday has prepared a revealing set of charts that show the concentration of the young adult population by distance to the city center for all of the nation’s 50 largest metro areas. He has comparative data for 1990, 2012 and 2015. The data–just as in the statistics cited above–show that in the aggregate, the share of young adults living in close-in neighborhoods has increased and the share living in more distant neighborhoods has decreased. The orange line shows young adults (age 22 to 34) as a share of the population in 1990. The magenta and teal lines show the share of young adults in 2012 and 2015 (respectively). The share of young adults living in close-in neighborhoods (which had already been higher in the center in 1990) increased between 1990 and 2015. The share living in the suburbs declined. The steepening of this gradient is clear evidence of a growing relative preference of young adults for central locations.

This essential finding is actually the core of several well-regarded academic papers. Edlund, Machado and Sviatschi show that well-educated prime-age workers are increasingly concentrated in neighborhoods closer to the center of the metropolitan area. Couture and Handbury have replicated our findings, reporting:

A recent report by CEO for Cities (Cortright (2014)) – and covered extensively by the New York Times (Miller (2014)) – also uses 2000 census data and 2007-2012 ACS data, and shows that the 25-34 college-educated population are growing faster downtown than in the suburbs in the majority of the 51 largest MSAs. We confirm and expand this narrative to the older 35-44 college-educated group . . . Strikingly, we find that the college-educated 25-34 age group grows faster in the urban area of 23 of the 25 largest CBSAs. The exceptions are Riverside, which essentially lacks a downtown, and Detroit, which is famously struggling.

These papers, and Rebecca Diamond’s research, show that the attraction of cities is amplified by the growth of, and growing demand for, urban amenities.

More young adults are migrating to cities

Finally, almost in passing, the Upshot article resurrects a claim made in 2015 by FiveThirtyEight.com’s Ben Casselman, who asserted that more young adults were moving from cities to suburbs than vice versa: “Whether by choice or economic circumstance, young Americans are still more likely to leave the city for the suburbs than the other way around.” We had a lengthy back-and-forth with Casselman at the time, and the University of Virginia’s Luke Juday pointed out that the data Casselman used from the Current Population Survey effectively missed millions of young adults, disproportionately those living in cities. Juday’s analysis shows that the reverse is actually true–young adults are increasingly moving to cities:

Over the past 5 years about 3 million more Americans age 20-29 moved from suburbs to principal cities than from cities to suburbs, with last year being the largest net gain for cities yet.

The Takeaway: More Young Adult Urban Growth is Coming

The number of 25 to 34 year olds—the key group driving urban living—will not decline, but will grow between now and 2024. The urban wave we’ve experienced starting in the 1990s, and accelerating in the past decade, wasn’t propelled by generational growth so much as by a growing preference for urban living among young adults. The shift of young adults to cities, drawn by urban amenities, is increasingly confirmed by academic researchers, and is borne out by the latest Census data.

Data Notes

The Census data for our estimates of the 25 to 34 year old population come from three sources. Data for the period prior to 2010 comes from the archive of historical Census population estimates (https://www.census.gov/popest/data/historical/index.html).

Data for the period 2011 to 2014 comes from Annual Estimates of the Resident Population for Selected Age Groups by Sex for the United States, States, Counties, and Puerto Rico Commonwealth and Municipios: April 1, 2010 to July 1, 2014, Source: U.S. Census Bureau, Population Division, Release Date: June 2015. See: https://www.census.gov/popest/data/. Census projections of the population by year and age for the period 2015 through 2035 come from the 2014 Census Population Estimates series http://www.census.gov/population/projections/data/national/2014.html.

* Note: the discontinuity in the data between 1999 and 2000 reflects the disparity between the Census Bureau’s annual intercensal estimates of the 25-34 year-old population and the actually higher number of 25-34 year olds enumerated by the 2000 decennial Census. It’s likely that the actual number of 25-34 year olds was underestimated in the intercensal estimates during a period of significant immigration.

Not peak Millennial: the coming wave

It’s an eye-catching, convention-tweaking claim: We’ve reached peak Millennial. And, so the argument goes, because Millennials have hit their “peak,” it’s time to junk all these crazy theories about Millennials not wanting to own cars, and not buying homes, especially in the suburbs. Sure, they had a youthful dalliance with city living, and the number of city-dwellers was temporarily pushed up by a now-receding demographic wave, but city living is now bound for a fall.

That, in a nutshell, is the argument being made by USC Professor Dowell Myers, who made the “peak Millennial” call in a lecture at the University of Texas in February. His argument was picked up and amplified by the Kinder Institute in “What if City-Loving Millennials Are Just a Myth?”, and most recently echoed by CityLab’s “Have U.S. Cities Reached ‘Peak Millennial’?”

Have we hit peak Millennial? Does “peak Millennial” actually mean anything? Are we looking at a demographic ebb-tide for city living?

As it turns out, the answer to all of these questions is no.

We’ll explain the answers at greater length below, but the short synopsis is this:

The roughly 75-million-person group often called “Millennials” (those born 1980 to 1999) are now between 15 and 34 years of age. Just this year, for the first time, these Millennials made up 100 percent of those persons aged 25-34 (the age cohort that’s been fueling city growth). The number of 25 to 34 year-olds (all Millennials) will continue to increase from now through 2024, growing from 44.1 million to 47.6 million. Their impact on housing markets, in particular, is only beginning to be felt, and will grow in the decade ahead.

There’ll be more 25-to-34-year-old Millennials Every Year through 2024

Let’s start at the beginning. The core claim about the “peak” made by Professor Myers is based on one factoid: the highest number of births recorded in any year of the Millennial generation (those born between 1980 and 2000) occurred in 1990. The number of Millennials born in years after 1990 declines (slightly). So, by Myers’ math, the number of Millennials turning 25 peaked in 2015. Myers’ core graph—reproduced here from Ryan Holyfield’s Kinder Institute blog—has a peculiar representation of the Millennial “peak.”

Credit: Kinder Institute

What it shows is the number of births in the US in each year from 1960 to 2013. Between 1980 and 2000, there’s a very soft peak in 1990, when 4.2 million people were born, as compared to an average of 3.8 million over the rest of the period. The people born during that soft peak turned 25 last year, which is what Myers is referring to as “peak Millennial.”

But why Myers picked age 25 to represent the “peak” of anything is unclear. For young American adults, 25 is just the age by which most (though not all) have finished their formal education, fewer than half have married, and most still don’t have children. It may be for some the end of an extended adolescence, but for most it’s essentially early onset adulthood.

More importantly, one year is never representative—it’s better to look at a larger age cohort. So as we go forward with our analysis, we’ll look at a 10-year-wide age cohort, and lean on the Census Bureau’s forward-looking projections of the US young adult population. And in fact, we think it’s much more useful to talk about specific age cohorts (persons 25 to 34) than it is to talk about birth cohorts (those born in a particular time period), especially in undertaking time series analysis of economic trends. As we pointed out in our commentary debunking the National Association of Realtors claims about home buying trends, trying to deduce inter-generational changes by comparing Millennials when they are very young to those same people when they are older inherently produces misleading results.
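A toy calculation (using invented, stylized birth counts shaped like the soft 1990 peak, not actual Census figures) shows why cohort width matters: annual births can peak in 1990 while the 10-year-wide 25-34 cohort keeps growing for decades afterward.

```python
# Invented annual birth counts (millions): a soft peak of 4.2 in 1990,
# declining linearly to a 3.6 baseline over ten years on either side.
births = {year: 3.6 + 0.6 * max(0, 1 - abs(year - 1990) / 10)
          for year in range(1975, 2005)}

# Size of the 25-34 year-old cohort in a given year = sum of births
# 25 to 34 years earlier (ignoring migration and mortality).
def cohort_25_34(year):
    return sum(births[year - age] for age in range(25, 35))

peak_birth_year = max(births, key=births.get)
sizes = {y: cohort_25_34(y) for y in range(2009, 2030)}
peak_cohort_year = max(sizes, key=sizes.get)
print(peak_birth_year, peak_cohort_year)
```

In this stylized series, births peak in 1990, but the 25-34 cohort doesn’t peak until the peak birth year sits in the middle of the ten-year age window, roughly three decades later.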

At City Observatory, we’ve focused keenly on the key 25- to 34-year-old age group. They’re highly mobile, likely to change jobs and homes, have generally completed their education, and have what economists would call “recent vintage human capital.” For all these reasons—and because they generally command lower wages than more experienced workers—they compose the dream demographic of fast-growing companies. They’re the most likely to move across state lines, and their migration decisions play a disproportionate role in determining which places experience a brain gain, as opposed to a brain drain.

What’s interesting is that in 2015—for the first time—Millennials (those born between 1980 and 2000) constitute all of the persons aged 25-34. At the time of the last decennial census (2010), about half of the 25- to 34-year-old age group was composed of people in the tail end of generation X, and half were the early wave of Millennials. So strikingly, what the Census data shows is that the total number of 25 to 34 year olds in the US will increase from now through 2024. This chart shows the Census Bureau’s estimates of population aged 25 to 34 based on historical data through 2014 and its projections through 2035.


During the 1990s the number of 25 to 34 year olds actually declined as Baby Boomers aged out of this age group and were gradually replaced by the numerically smaller Generation X. Between 2000 and 2010 the number of Gen-X 25-34 year-olds increased slowly, with all of the aggregate increase in this age group being recorded after 2008. So, to the extent there was a movement back to the city in the 1990s, and the first half of the 2000-2010 decade, it was propelled not by an aggregate increase in the number of 25-to-34 year olds in the nation, but the changing relative preference of young adults for urban locations.

So, what we—and others—have recorded as the movement of young adults back into cities has, until very recently, had little to do with the preferences of the Millennial generation. In fact, as they turn 25, and as they now dominate this age cohort, the next decade will be the time when the Millennial generation’s effect on cities will be most fully felt. Rather than declining, the number of 25-to-34 year olds in the United States will increase each year from now through 2024, rising from 44.1 million in 2015 to 47.6 million in 2024. In reality, the Millennial wave of urbanism is just now hitting the beach.

The outlook after 2024 (when the 25-34 year olds will increasingly be “post-Millennials”) is not for a dramatic demographic collapse. Rather than a peak, the young adult population stabilizes at a fairly high plateau above 47 million 25 to 34 year olds through 2035. So there is little basis for forecasting a decline in the key population group that has driven urban growth.

The Preference of Young Adults for Urban Living is Increasing

As far as cities and city living are concerned, we’re just now seeing the full impact of Millennials in this key 25-34 year old age group. The 25 to 34 year old age group will be composed entirely of Millennials for the next decade; the youngest Millennials (those born in the late 1990s) will not age out of this young adult category until the early 2030s. So Millennials and their preferences—whatever they are—will essentially determine the behavior of the “young adult” demographic for the next 15 years or so.

While it’s fashionable to describe the trend toward city living as something caused by the unique preferences of Millennials, the shift toward city living both predates the maturation of the Millennial generation—and promises to continue as their places among the young adult age group are taken by the post-Millennials (or whatever name is attached to the succeeding generation).

The relative preference for urban living for 25 to 34 year-olds has been increasing over the past two decades.

  • 1980: 10 percent
  • 1990: 12 percent
  • 2000: 32 percent
  • 2010: 51 percent

(These figures are drawn from Table 5 of our Young and Restless report; we’ve computed relative preference by dividing the probability that a person aged 25 to 34 lives within a three-mile radius of the center of the CBD of one of the 51 largest metropolitan areas by the probability that the average resident of a metropolitan area lives within this radius. If 11 percent of 25 to 34 year olds live in the 3-mile radius, and 10 percent of the population as a whole lives inside that radius, the relative preference ratio is 110 percent (11 percent/10 percent), meaning that a 25 to 34 year old is 10 percent more likely than the typical resident to live in this area.)

Interestingly, the relative preference of young adults for urban living nearly tripled during the 1990s—a time when the total number of 25- to 34-year olds was actually in decline in the US (down about 7.7 percent). And even though the number of 25- to 34-year olds was increasing only slowly during the decade 2000 to 2010 (+3.2 percent), the preference for urban living grew substantially. In the coming decade (2015-2025), the 25-34 year-old cohort will grow by 7.7 percent.

Far from peaking, the Millennial generation is hitting the sweet-spot for urban living, plus their numbers will continue to grow, according to the Census, between now and 2024.

The implication of the Myers analysis is that the growth in urban living is tied somehow to the size of the Millennial generation rather than its growing relative preference for urban living. His thesis is that as the number of persons turning 25 declines by about 400,000, fewer Millennials will move to cities. But as the evidence of the 1990s shows, it’s entirely possible to see a sustained decline in the number of young adults and still observe an increase in the relative preference of those young adults for urban living.

The Takeaway: More Young Adult Urban Growth is Coming

The number of 25 to 34 year olds—the key group driving urban living—will not decline, but will grow between now and 2024. The urban wave we’ve experienced starting in the 1990s, and accelerating in the past decade, wasn’t propelled by generational growth so much as by a growing preference for urban living among young adults.

Data Notes

The Census data for our estimates of the 25 to 34 year old population come from three sources. Data for the period prior to 2010 comes from the archive of historical Census population estimates (https://www.census.gov/popest/data/historical/index.html).

Data for the period 2011 to 2014 comes from Annual Estimates of the Resident Population for Selected Age Groups by Sex for the United States, States, Counties, and Puerto Rico Commonwealth and Municipios: April 1, 2010 to July 1, 2014, Source: U.S. Census Bureau, Population Division, Release Date: June 2015. See: https://www.census.gov/popest/data/. Census projections of the population by year and age for the period 2015 through 2035 come from the 2014 Census Population Estimates series http://www.census.gov/population/projections/data/national/2014.html.

* Note: the discontinuity in the data between 1999 and 2000 reflects the disparity between the Census Bureau’s annual intercensal estimates of the 25-34 year-old population and the actually higher number of 25-34 year olds enumerated by the 2000 decennial Census. It’s likely that the actual number of 25-34 year olds was underestimated in the intercensal estimates during a period of significant immigration.

The beat goes on: More misleading congestion rankings from TomTom

Yesterday, TomTom released its annual rankings of the levels of congestion in world and US cities. Predictably, they generated the horrified, self-pitying headlines about how awful congestion is in the top-ranked cities. Cue the telephoto lens shots of bumper-to-bumper traffic, and tales of gridlock.

As we’ve long pointed out, there are big problems with the travel time index TomTom and others use to compare congestion levels between cities. Most importantly, some cities have much shorter commute distances than others—meaning that even if traffic moves more slowly at the peak hour, people spend less time commuting. For example, Houston has an average commute distance of 12.2 miles, while Portland has an average commute distance of 7.1 miles, according to the Brookings Institution. So even if Portland’s “congestion index number” is slightly higher (26 percent) than Houston’s (25 percent)—at least according to TomTom—average commute times are much shorter in Portland because of its more compact land use patterns. In effect, the travel time index, expressed as a percentage of total commute times, discounts the pain of traffic congestion in sprawling, car-dependent cities. That’s why it’s a lousy guide for talking about how well transportation systems work. The same problems plague the rankings released by Inrix two weeks earlier.
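A back-of-the-envelope calculation makes the point. The commute distances and index values below come from the text, but the 30 mph free-flow speed is an invented assumption for illustration:

```python
def peak_commute_minutes(distance_miles, travel_time_index, free_flow_mph=30):
    """Peak-hour commute time: free-flow time inflated by the index,
    where the index is extra travel time as a share of free-flow time."""
    free_flow_minutes = distance_miles / free_flow_mph * 60
    return free_flow_minutes * (1 + travel_time_index)

houston = peak_commute_minutes(12.2, 0.25)   # 12.2 miles, 25 percent index
portland = peak_commute_minutes(7.1, 0.26)   # 7.1 miles, 26 percent index
print(f"Houston: {houston:.1f} min, Portland: {portland:.1f} min")
```

Even with its slightly higher index, Portland’s peak commute works out to roughly 18 minutes against Houston’s roughly 30, because the index merely scales a much shorter base trip.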

Credit: Nick Douglas, Flickr

Plus, as Felix Salmon pointed out a couple of years ago, the TomTom data has a special bias: it’s chiefly gathered from people who’ve bought the company’s devices, who almost by definition are not typical commuters. It’s highly likely that they represent those who drive the most, and who drive most in peak traffic (hence the value of TomTom’s services); but data gathered from these devices is not necessarily typical of the experience of the average commuter.

TomTom, as you know, is in the business of selling real-time traffic data and navigation assistance to motorists. So in many respects, its rankings may be less a serious and balanced effort to assess congestion than a way to drum up demand for its product. The company’s top three suggestions for coping with traffic congestion are to use a real-time navigation system (like a TomTom device), to “dare” to try alternate routes suggested by . . . your TomTom device, and to “check traffic before you leave”—you get the picture.

Refreshingly, TomTom admits in its press release that building more highway capacity will do nothing to alleviate the congestion it identifies—we doubt that this is the part of the study that highway advocates will share with a wider audience. Nick Cohn, the company’s “traffic expert,” tells us:

Traffic congestion is a fact of life for every driver. And as we reveal the latest Traffic Index results this year, we can see that the problem is not going away.

We should not expect our transport authorities to simply ‘build away’ congestion. Studies have shown over the years that building new motorways or freeways does not eliminate congestion.

Even though Nick Cohn apparently knows better—and really just wants us to buy his company’s product—what TomTom and its peers are doing is feeding a profoundly distorted view of traffic congestion problems. Those in the highway lobby routinely use this kind of data to try to scare us into spending billions on new highway construction projects that are often unneeded, or which do nothing to reduce congestion.

It’s time for a “big short” in parking

Last year’s hit film The Big Short depicted various investors who, realizing that there was a housing bubble in the years before the 2008 crash, found ways to “short” housing, betting against the market and ultimately making a killing when the crisis hit. Looking forward, there’s a plausible case to be made that this might be the time for a “Big Short” in parking, as a confluence of the growing popularity of walkable neighborhoods and the arrival of self-driving cars may leave our current supply of parking dramatically larger than demand in the near future.

There’s a lot of speculation that the advent of self-driving vehicles could create a huge surplus of parking. A recent paper by University of Texas Professor Kara Kockelman and her colleagues estimates that in urban environments, self-driving cars could eliminate the need for about 90 percent of parking. The theory is that fleets of on-demand autonomous vehicles would substitute for most private car ownership, that cars would nearly always be in use—and when not in use could be stored in peripheral low value locations—with the result that the demand for parking, especially in urban centers would collapse. If that’s the case, a whole lot of private parking structures may suddenly find themselves with fewer customers, less revenue, and a badly broken business model: exactly the conditions for “shorting” this industry.

The parking garage of the future: empty? Credit: Joe Shlabotnik, Flickr

So who, exactly, is “long” in the parking market? Well, there are some private firms that build and operate parking lots. But in many places around the country, the entities that have made substantial future bets on parking are local governments. Since the 1930s, city governments have been borrowing money to build and operate municipal parking lots for public use. Most big cities operate a substantial parking enterprise. Not only do most communities provide copious amounts of under-priced parking in the public right of way—with devastating impacts on travel behavior and urban form—but many cities build off-street parking lots and structures, often in central commercial districts. The city of Los Angeles, for example, owns 118 parking facilities with more than 11,500 parking spaces. And cities have been regularly expanding the supply of parking, often relying on debt financing, on the expectation that parking revenues will be sufficient to cover bond interest and principal. The City of Miami Beach, for example, is issuing $67 million in revenue bonds to expand its convention center parking garage. Like home mortgages circa 1999, this mostly seems like a boring, low-risk business: cities borrow money on the bond market and then pay it back out of parking revenues. And so far, at least, municipalities have had little trouble making payments.

Given that the expected lifetime of parking structures—and perhaps even more critically, the repayment period for the bonds used to finance them—is measured in decades, the potential advent of autonomous vehicles is a live issue. So what happens if there’s a sea change in the market for parking, and if parking revenues fall—or perhaps fail to live up to municipal expectations? A couple of recent case studies show that shortfalls in parking demand are not purely an academic concern.
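To get a feel for the sums involved, here’s a rough debt-service sketch. The bond size echoes the Miami Beach example above, but the interest rate, term, and space count are purely assumed for illustration:

```python
def annual_debt_service(principal, rate, years):
    """Level annual payment on a fully amortizing bond (standard annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

# Hypothetical: a $67 million garage bond at an assumed 4% over 25 years.
payment = annual_debt_service(67_000_000, 0.04, 25)
print(f"Annual debt service: ${payment:,.0f}")

# If the garage held an assumed 1,500 spaces, each space would have to net
# this much per year just to cover the debt, before any operating costs:
print(f"Required net revenue per space: ${payment / 1500:,.0f}")
```

On these assumptions, the garage must clear over $4 million a year, every year, for a quarter century—which is exactly why a sustained drop in parking demand puts bondholders (or the taxpayers standing behind them) at risk.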

In New York, the $238 million in parking garages built next to Yankee Stadium have gone bankrupt—they failed to meet expected occupancy levels—and the local government is out more than $25 million so far in expected revenue from the garages, in addition to more than $100 million in public subsidies that supported their construction.

In Scranton, Pennsylvania, a local parking authority issued millions of dollars in bonds backed by the city’s guarantee of its full faith and credit. When demand for parking slumped, the parking authority could no longer pay debt service, and in 2012 it came to the city to make up the shortfall. Initially, the city balked at making the payments, but when it found its credit rating jeopardized, it ultimately relented, using other city funds to make the bond payments. Even so, the crisis hasn’t abated: demand is still depressed, the garages are deteriorating, and the city is now looking at demolishing the top levels of two of the older garages rather than repairing them.

The financial viability and implied risk of borrowing millions to build parking garages hinge directly on the accuracy of forecasts of the demand for parking. That issue is a live one in Portland, where the city’s urban renewal authority is issuing $26 million in bonds to finance a 425-space parking structure adjacent to the city’s convention center and a proposed headquarters hotel. The site is also adjacent to the city’s most traveled light rail lines and is served by the newly built streetcar. It is just a few blocks from an apartment building with the nation’s largest off-street bike parking facility.

But the big question, raised by the Portland Shoupistas, is whether, ten or 20 years from now, there will be any market for hundreds of additional off-street parking spaces in a neighborhood that already has 3,300 on-street and structured spaces.

Already, according to Bike Portland, car rental demand is lagging far behind growth in hotel occupancy. Visitors to Portland—and especially attendees at convention events—choose not to drive, and instead take advantage of the city’s diverse transit system. In a brilliant bit of statistical journalism, Bike Portland’s Michael Anderson pulled together data showing how even as the city has recorded increasing numbers of tourists and convention attendees, visitor car rentals have been in steady decline.

Credit: Bike Portland

In addition to growing uncertainty about future parking demand, the other factor that makes it hard to answer our question about whether now is the time for a “Big Short” in parking is the paucity of data about our public sector parking infrastructure. In a growing world of big data and smart cities, one thing that is surprisingly difficult to find is the total number of municipally owned and operated parking lots and structures. While some data sources, like Parkme.com, show the location of publicly accessible parking, they don’t provide data in a way that allows one to easily discern the total number of spaces in a city or their ownership.

One hint as to the scale of the municipal parking enterprise comes from the Census, which tabulates data on city budgets. It reports (2013 State and Local Government Finances) that in 2013, the total parking revenues of municipal governments nationally totaled $2.7 billion.

There’s a good chance that many of these parking lots will become stranded assets: expensive, debt-financed projects that no longer generate enough revenue to cover their costs of construction and operation. When we add in the considerable social costs of subsidized parking and driving, newly constructed parking structures in cities may be the urban equivalent of new coal-fired power plants: obsolete, value-destroying activities. There’s not a lot cities can do about previous decisions to take on debt to build parking garages, but going forward, it seems like they ought to take a very careful look at whether it’s a sound investment, or whether they’re setting themselves up to be on the wrong side of tomorrow’s “Big Short.”

When supply catches up to demand, rents go down

Today, we spend a few minutes reviewing the recent history of housing markets in rural North Dakota. In a microcosm, we can see how the interplay of demand and supply drive housing market cycles. The speed and scale of changes in North Dakota dwarf what we usually see, but provide an illustration of the forces at work in many cities around the country.

For most of the past decade, the real estate market in Williston, North Dakota has been on an amazing tear. The region, home to the Bakken shale formation, has been the epicenter of the U.S. oil fracking industry. With sustained oil prices in the $100-a-barrel range, everyone from global energy companies to independent producers has been drilling exploratory and production wells, and the state’s oil output increased by a factor of ten, from about 100,000 barrels per day to more than 1,000,000 barrels per day.

Williston, ND. Credit: Andrew Filer, Flickr

Job growth quickly overwhelmed the local housing supply, spiking rents, and leading landlords to rent out travel trailers, garages, storage units, and outbuildings to oil workers—and those in the local service industry that grew in response to the population influx. Williston even became famous for “man camps”—quickly assembled fields of trailers and modular housing units, inhabited almost entirely by male oil workers.

But in the past year, everything has changed. First, the oil market has gone bust, with prices falling from more than $100 to recent lows of less than $38. In response, oil companies have drastically cut back on exploration and new well-drilling. The industry is shedding jobs.

Second—and importantly for our story—the local market has seen an incredible surge of new housing construction. The number of building permits issued in Williston grew ten-fold between 2009 and 2013.

The combination of flagging demand and newly abundant supply has rental prices in Williston dropping like a rock. According to real estate analytics firm Zillow, average rents in the area declined by 23.4 percent over the twelve-month period ending in January 2016. Reuters reports that new apartments that were commanding rents of as much as $3,200 per month have now discounted rents sharply, added communal hot tubs, and begun providing free alcohol and snacks for residents.

The Williston experience provides a dramatic, but very clear, example of the dynamics of local real estate markets. The critical issue here is what you might call a “temporal mismatch” between demand and supply. Demand is the hare; supply is the tortoise. Demand can change in an instant—as quickly as new jobs open up, and as quickly as U-Hauls and moving vans deliver new residents to a city (or neighborhood). Supply takes time: planning, gaining financial and zoning approvals for new units, and then actually building and finishing out apartments and houses can take as much as 18 to 24 months. And when demand continues to change, supply can struggle to keep up.
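The hare-and-tortoise dynamic can be sketched as a toy simulation (all parameters are invented for illustration): demand jumps at once, supply closes only a fraction of the gap each period, and rents spike and then decline without demand ever falling.

```python
# Toy model: demand jumps 50% immediately (the hare); supply closes a
# quarter of the remaining gap each period (the tortoise); rent tracks
# the demand/supply ratio. All numbers are illustrative assumptions.

demand = [100, 100] + [150] * 10      # demand jumps 50% in period 2
supply = [100.0, 100.0]
rent = [1000.0, 1000.0]

for t in range(2, len(demand)):
    # Builders respond to last period's demand, but only gradually
    supply.append(supply[-1] + 0.25 * (demand[t - 1] - supply[-1]))
    # Rent rises sharply with scarcity (an assumed convex response)
    rent.append(1000 * (demand[t] / supply[t]) ** 2)

print([round(r) for r in rent])
# Rent spikes the moment demand jumps, then falls steadily as new
# supply arrives -- even though demand never declines.
```

The particular functional forms are arbitrary, but the qualitative pattern—an overshoot in rents followed by a slow decline as construction catches up—is the point.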

That’s just what happened in Williston. Developers did respond early on (building permits more than quadrupled in 2009 and then doubled in 2010), but demand grew even faster, with the result that rents continued to rise.

Now, finally, due to the combination of flagging demand and the relentless (if comparatively plodding) increase in supply, the market is much closer to balance.

Williston is, in a supercharged microcosm, a metaphor for the housing market in US cities. In the past decade, the demand for rental housing and urban locations has far outstripped the growth in supply. Many more people have decided—for economic reasons, or because they prefer city neighborhoods—that they want to rent in cities. And housing supply, at least initially, hardly budged—first because of the aftermath of the Great Recession, and still, in many cities, because local zoning restricts and slows the number of new housing units that can be built. But in true tortoise fashion, supply is beginning to catch up to demand. As we’ve seen in Denver, Seattle, and Washington, when a sufficient number of new apartments are built, they begin to shave rent inflation. The Williston story is also playing out in another oil town: Houston, where new apartments are going begging and landlords are offering a free first month’s rent to new tenants.

We think there are two key takeaways here. First, supply and demand do operate: building more housing is the key to addressing rental affordability. Second, housing markets are inevitably subject to a temporal mismatch between supply and demand. Unlike the neat whiteboard drawings of supply and demand curves that you may have seen in an undergraduate economics lecture, which can be erased and redrawn in a moment, in the real estate market, demand is fast, but new supply is slow.

Why the new Inrix Traffic Scorecard deserves a “D”

At City Observatory, we’ve long been critical of some seemingly scientific studies and ideas that shape our thinking about the nature of our transportation system and its performance and operation. We’ve pointed out the limitations of the flawed and outdated “rules of thumb” that guide our thinking about trip generation, parking demand, road widths and other basics. One of the most pernicious and persistent data fables in the world of transportation, however, revolves around the statistics that are presented to describe the size, seriousness and growth of traffic congestion as a national problem. This week saw the latest installment in a perennial series of alarming, but actually uninformative, reports about traffic congestion and its economic impacts.

The same old story

On March 15, traffic data firm Inrix released its 2015 Traffic Scorecard, ranking travel delays in the largest cities in Europe and North America. As is customary for the genre, it was trumpeted with a press release bemoaning the billions of hours that we waste in traffic. That, in turn, generated the predictable slew of doom-saying headlines:

But at least a few journalists are catching on. At the Los Angeles Times, reporter Laura Nelson spoke with Herbie Huff from the UCLA transportation center who pointed out that “Aside from an economic downturn, the only way traffic will get better is if policymakers charge drivers to use the roads.”

And GeekWire headlined its story “Study claims Seattleites spend 66 hours per year in traffic, but some say that number’s deceptive” and reported Greater Greater Washington’s David Alpert as challenging the travel time index methodology used in the Inrix report.

Headlines aside, a close look at the content of this year’s report shows that on many levels, this year’s scorecard is an extraordinary disappointment.

As we’re constantly being told by Inrix and others, we’re on the verge of an era of “smart cities,” where big data will give us tremendous new insights into the nature of our urban problems and help us figure out better, more cost-effective solutions. And very much to their credit, Inrix and its competitors have made a wealth of real-time navigation and wayfinding information available to anyone with a smartphone—which is now a majority of the population in rich countries. Driving is much eased by knowing where congestion is, being able to route around it (when that’s possible), and generally being able to calm down by simply knowing how long a particular journey will take given the traffic you are facing right now. It’s quite reassuring to hear Google Maps tell you “You are on the fastest route; you will arrive at your destination in 18 minutes.” This aspect of big data is working well.

By aggregating the billions of speed observations that they’re tracking every day, Inrix is in a position to tell us a lot about how well our highway system is working. That, in theory, is what the Scorecard is supposed to do. But in practice, it’s falling far short.

As impressive as the Inrix technology and data are, they’re only useful if they provide a clear and consistent basis for comparison. Are things measured in the same way in each city? Is one year’s data comparable with another’s? We and others have pointed out that the travel time index that serves as the core of the Inrix estimates is inherently biased against compact metropolitan areas with shorter travel distances, and creates the mistaken impression that travel burdens are lower in sprawling, car-dependent metros with long commutes.
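A stylized comparison (the trip times here are invented for illustration) shows how a travel time index can rank a compact metro as “more congested” even when its commuters lose less time per trip:

```python
# Travel Time Index (TTI) = peak travel time / free-flow travel time.
# Two hypothetical metros with assumed trip times:

def tti(peak_minutes, freeflow_minutes):
    return peak_minutes / freeflow_minutes

# Compact metro: short trips (15 min free-flow, 19 min at peak)
compact = tti(19, 15)    # index ~1.27; delay = 4 minutes per trip
# Sprawling metro: long trips (30 min free-flow, 36 min at peak)
sprawl = tti(36, 30)     # index = 1.20; delay = 6 minutes per trip

print(f"Compact metro:   TTI {compact:.2f}, 4 min delay, 19 min total")
print(f"Sprawling metro: TTI {sprawl:.2f}, 6 min delay, 36 min total")
# The index ranks the compact metro as worse, even though its commuters
# spend less total time traveling AND lose less time to congestion.
```

Because the index is a ratio, a metro where everyone drives twice as far can score “better” while imposing larger absolute travel burdens—the heart of the critique of this methodology.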

The end of history

For several years, it appeared that the Inrix work offered tremendous promise. They reported monthly data, on a comparable basis, using a nifty Tableau-based front end that let users track data for particular markets over time. You could see whether traffic was increasing or decreasing, and how your market stacked up against other cities. All this has simply been disappeared from the Inrix website—though you can still find it, with data through the middle of 2014, on an archived Tableau Webpage.


This year’s report is simply a snapshot of 2015 data. There’s nothing from 2014 or earlier. It chiefly covers the top ten cities, and provides a drill-down format that identifies the worst bottlenecks in cities around the nation. It provides no prior-year data that would let observers tell whether traffic levels are better or worse than the year before. In addition, the description of the methodology is sufficiently vague that it’s impossible to tell whether this year’s estimates are in fact comparable to the ones that Inrix published last year.

Others in the field of using big data do a much better job of being objective and transparent in presenting their data. Take for example real estate analytics firm Zillow (like Inrix, a Seattle-based IT firm, started by former Microsoft employees). Zillow researchers make available and regularly update a monthly archive of their price estimates for different housing types for different geographies, including cities, counties, neighborhoods and zip codes. An independent researcher can easily download and analyze this data to see what Zillow’s data and modeling show about trends among and within metropolitan areas. Zillow still retains its individual, parcel-level data and proprietary estimating models, but contributes to broader understanding by making these estimates readily available. Consistent with its practice through at least the middle of 2014, Inrix ought to do the same—if it’s really serious about leveraging its big data to help tackle the congestion problem.

A Texas divorce?

For the past couple of years, Inrix has partnered closely with the Texas Transportation Institute (TTI), the researchers who for more than three decades have produced a nearly annual Urban Mobility Report (UMR). Year in and year out, the UMR has had the same refrain: traffic is bad, and it’s getting worse. And the implication: you ought to be spending a lot more money widening roads. Partly in response to critiques about the inaccuracy of the data and methodology used in earlier UMR studies, in 2010, the Texas Transportation Institute announced that henceforth it would be using the Inrix data to calculate traffic delay costs.

But this year’s report has been prepared solely by the team at Inrix, and makes no mention of the Texas Transportation Institute or the Urban Mobility Report in its findings or methodology. Readers of the last Inrix/TTI publication—released jointly by the two institutions last August—are left simply to wonder whether the two are still working together or have gone their separate ways. It’s also impossible to tell whether the delay estimates contained in this year’s Inrix report are comparable to those in last year’s Inrix/TTI report. (If the two are comparable, then the report implies that traffic congestion dropped significantly in Washington DC, from the 81 hours reported by TTI/Inrix last August to the 75 hours reported by Inrix in this report.)

Have a cup of coffee, and call me in the morning

As we pointed out last April, the kind of insights afforded by this kind of inflated and unrealistic analysis of costs—unmoored from any serious thought about the costs of expanding capacity sufficiently to reduce the hours spent in traffic—are really of no value in informing planning efforts or public policy decisions. We showed how, using the same assumptions and similar data about delays, one could compute a cappuccino congestion index that showed Americans waste billions of dollars worth of their time each year standing in line at coffee shops.
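The cappuccino arithmetic follows the same recipe congestion scorecards apply to roads: multiply a small per-person delay by a large population and an hourly value of time. All inputs below are illustrative assumptions, not measured data:

```python
# Congestion-cost methodology applied to coffee lines.
# Every input is an assumed, illustrative figure.

daily_coffee_buyers = 65_000_000   # assumed US coffee purchases per day
wait_minutes = 3                   # assumed average time standing in line
value_of_time = 17.67              # assumed $/hour "value of travel time"

annual_hours_waiting = daily_coffee_buyers * wait_minutes / 60 * 365
annual_cost = annual_hours_waiting * value_of_time

print(f"Hours 'lost' in line per year: {annual_hours_waiting:,.0f}")
print(f"Annual 'cost' of coffee congestion: ${annual_cost / 1e9:.1f} billion")
```

The point of the exercise is that any ubiquitous, minor wait can be inflated into a multi-billion-dollar “crisis” by this method—which says more about the method than about the problem.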

Inrix data have great potential, but a mixed record, when it comes to actually informing policy decisions. On the one hand, Inrix data were helpful in tracking speeds on the Los Angeles freeway system, showing that after the region had spent $1.1 billion to widen a stretch of I-405, overall traffic speeds were no higher—seeming proof of the notion that induced demand tends to quickly erase the time-saving benefits of added capacity. In Seattle, Inrix’s claim that high occupancy toll lanes hadn’t improved freeway performance was skewered by a University of Washington report, which pointed out that the Inrix technology couldn’t distinguish between speeds on HOT lanes and regular lanes, and noted that Inrix had cherry-picked only the worst performing segments of the roadway, ignoring the road segments that saw speed gains with the HOT lane project.

This experience should serve as a reminder that by itself, data—even, or maybe especially, really big data—doesn’t easily or automatically answer questions. It’s important that data be transparent and widely accessible, so that when it is used to tackle a policy problem, everyone can see and understand its strengths and limitations. The kind of highly digested data presented in this report card falls well short of that mark.

Our report card on Inrix

Here’s the note that we would write to Inrix’s parents to explain the “D” we’ve assigned to Inrix’s Report Card.

Inrix is a bright, promising student. He shows tremendous aptitude for the subject, but isn’t applying himself. He needs to show his work, being careful and thorough, rather than excitedly jumping to conclusions. Right now he’s a little bit more interested in showing off and drawing attention to his cleverness than in working out the correct answer to complicated problems. We’re confident that when he shows a little more self-discipline, scholarship and objectivity—and learns to play well with others—he’ll be able to be a big success.

Super long commutes: a non-big, non-growing, non-problem

Last week, the Washington Post published an article repeating an old refrain in transportation journalism—the horror of long commutes.

According to the Post, more and more Americans are commuting longer and longer distances to work each day. There’s growing scientific evidence that long commutes are bad for your physical and mental health, reduce happiness, and even cut into civic participation.

But if you look closely at the data cited in the Post article, it’s pretty clear that long commutes are quite rare, and aren’t really becoming more common.

A 2013 Census study defined “mega commuters” as those traveling more than 90 minutes and more than 50 miles each way. It found that while mega commuting grew from about 1.6 percent of all commuters to 2.7 percent between 1970 and 2000, the share of such long commutes was flat to declining from 2000 to 2011.

Source: US Census

Who are these mega-commuters? The Census report says that they’re most likely to be male, with a higher than average salary, older, and married to a spouse who doesn’t work. Also, most mega-commuters are commuting from one metropolitan (or micropolitan) area to another one—not just traveling from a very far-flung suburb to a business district in their own region.

In just the last two years, stories detailing the horrors of long commutes or describing strategies for coping have appeared in:

The Atlantic: The rise of the outrageously long commute

Fortune: 6 Ways to Survive a Hellishly Long Commute

Men’s Fitness: Long commutes can kill

US News: 3 Strategies for Surviving a Long Commute

While articles about mega commuting imply that it’s a stable, externally imposed lifestyle, for many of these workers mega commuting may be temporary, may be a deliberate lifestyle choice, or may be paired with telecommuting. Census data are snapshots of a single point in time—if a person living in one metropolitan area accepts a far-away job, commutes the long distance while looking for housing, and later moves closer to work, that wouldn’t be captured in the Census data.

It’s surprising how much attention mega-commuting gets, given how uncommon it is. About eight times as many Americans have “micro-commutes”—they either work at home or have a commute of five minutes or less—as have mega commutes. The 2014 American Community Survey reports that nearly 20 million Americans, about 16 percent of all commuters, have self-reported commute times of 0 to 5 minutes. Instead of fretting about the problems of an extremely small group of commuters, maybe we should be thinking about how we build communities and arrange work so that an even larger fraction of the population can enjoy the benefits of micro-commutes. That would be the best way to reduce the “human cost” of commutes.

One of the regular findings of historical analyses of commuting times is that despite huge variations in wealth and technology, humans have generally commuted an average of about half an hour to work—an observation generally termed “Marchetti’s constant.” More formally, several scholars have modeled commuting behavior using a “travel time budget” to reflect these seemingly consistent time choices.

To be sure, some people, in some very large metropolitan areas, travel long distances to work—at least for a time. Whether these patterns are temporary or stable is another question. The author of a Grist story citing the Washington Post’s lament recorded that she, herself, once suffered a long period of driving excessive distances to work in North Carolina—before she decided to move to Seattle, where she now has a pleasant and relatively short walk to work.

Part of what this should highlight is the important role that personal choice plays in commuting. Most people consciously make choices about where they want to live, where they will look for work, and how long a commute they can endure. For some people, the appeal of a particular job, or the special amenities of a particular house or neighborhood, combined with a tolerance for hours spent in a car or bus, may make a long commute a reasonable choice. For many households, the extra time a prime breadwinner spends commuting may be the functional equivalent of “sweat equity,” because by commuting a longer distance a family can frequently afford a bigger house—a phenomenon real estate professionals call “drive ‘til you qualify.”

Or paddle till you qualify. A commuter ferry in Australia. Credit: Rae Allen, Flickr

In a sense, house prices, home sizes and commute times are like the famous shop sign: “Low Price, High Quality, Fast Delivery: Choose Any Two.” It would be great if everyone could get big houses at low prices with short commutes, but in reality, in most large metropolitan areas every household has to make its own decisions about how to trade off one or more of these characteristics to get more of the things it wants. And, as we never tire of pointing out, the demand for urban living (and shorter commutes), in the face of a relatively slowly expanding supply of great urban neighborhoods, has led to a shortage of cities. The solution to our travel problem may lie more in building cities than in building roads and transit.

While we think the Post has misstated the trend, it’s hard not to agree with the basic premise of the article: Americans waste lots of time commuting. Some of that is the product of personal choices—some of which may make sense, and others less so. But a lot of it has to do with how we build our communities, and the kinds of options we create about where people can live and how they can travel from home to work and other common destinations.

How should cities approach economic development?

Everyone interested in state or local economic development should read “Remaking Economic Development: The Markets and Civics of Continuous Growth and Prosperity.” In it, the Brookings Institution’s Amy Liu neatly synthesizes important lessons from the field about how metropolitan centered economic strategies are vitally important not just to revitalizing city economies, but to national economic progress. The report outlines a cogent list of lessons and sound advice for implementing a successful metro strategy.

There’s so much this report gets right that it’s difficult to find fault. But on a few key issues—mostly having to do with emphasis, rather than fundamentals—more could be said. Here are six further thoughts about what remaking economic development ought to include, based on my own observations and experience.

Talent is central to economic development

“Remaking Economic Development” gives a vigorous nod to talent development as an economic strategy, but in our view, it should be front and center. We know that the educational attainment level of the population is the single most important factor shaping regional economic success: we can explain fully 60 percent of the variation in per capita incomes among metropolitan areas simply by knowing what share of the adult population has a four year degree. This relationship has grown steadily stronger over the past few decades, and promises to become even more important in the decades ahead.

While the report acknowledges the importance of education and skills, talent is third on the Brookings list of action principles. It should be number one, because it is something that applies everywhere, and without it, no economic strategy is likely to succeed. Unless you have a plausible approach to bolstering talent, anything else is irrelevant.

Placemaking is a key to anchoring talent

The Brookings report speaks to connections within the community as a broad umbrella for thinking about everything from widespread inclusiveness to infrastructure. But increasingly, placemaking—especially building great urban spaces and tackling issues of livability and housing affordability—is vital to attracting and retaining talent and growing the economy.

Placemaking is important because talent is mobile. Talented workers have choices of where to live, and are increasingly exercising their choices, disproportionately choosing to live in places that build great urban communities. The number of college-educated young adults is increasing twice as fast in close-in urban neighborhoods as in the rest of metro areas. That’s driven by the growing demand for dense, diverse, interesting, transit-served, bikeable, walkable neighborhoods. Companies are increasingly moving to be close to the workers living in (or seeking) these neighborhoods. Placemaking is essential to attracting and anchoring talent in place.

Exporting goods is best viewed as an indicator of success, rather than a tactic.

Brookings has worked with a number of cities, including Portland, to promote export strategies. There’s little question that a strong and growing export base is a correlate of a healthy economy. But simply telling cities to promote exporting glosses over some important steps. In general, US-based firms and regional industry clusters aren’t successful because they export; they export because they are successful. In a high-cost location, facing global competition, US firms can generally be successful in global markets only if they have demonstrably better products, more efficient production, and more continuous innovation. Portland’s largest exporter is Intel, which exports not because Portland has a particularly good export strategy or infrastructure (full disclosure: I was a state government official charged with trade policy for a dozen years in the 1980s and 1990s), but because Intel is utterly world-class in its research and manufacturing processes—regularly getting more patents for its Oregon-based technologies than from the rest of its US operations combined. The upshot: rather than focusing on raising exports, strategies should ask what it will take for a region’s industry clusters to be world class (better skills, improved technology, more entrepreneurs and innovation); these will be the places where the region should act.

In addition, especially for smaller and medium-sized firms, exporting is neither the best nor most profitable means to exploit global markets. Exporting can be risky and uncertain: smaller firms face formidable barriers to dealing with global logistics, trade finance, currency fluctuations, product localization, and market development. In many cases they may be better off licensing intellectual property or pursuing joint ventures with international partners, rather than exporting directly themselves. Note that Nike, based in Oregon, barely registers in the state’s export totals (and is actually a big net importer): but it’s a formidable global player because Portland is the hub of its design, marketing and finance functions.

I’d edit Brookings’ third principle for economic strategies to stress working to improve the health of traded sector clusters (the traded sector consists of businesses that sell their goods or services in competition with firms from other states or nations, regardless of whether they export them from their state of origin or the nation). Expanding exports is just one measure of how clusters are performing.

It’s better to have fewer goals than too many.

Brookings calls for economic development plans to have clear, measurable goals. No doubt this is good advice. But if you have 50 goals, you really don’t have any goals. Goals ought to help decision-makers set priorities. In practice, a laundry list of goals means there is no clear basis for choosing any one alternative action over others. A few key goals suffice, such as raising per capita income and assuring that opportunities to learn and earn are widely and equally available to everyone in the community.

Strategy is about choosing what not to do.

There is a wealth of tactics, best practices, and exemplary case studies of how to do economic development. Brookings and others do a good job of cataloging such success stories, and retelling them to other cities. But while this can be informative, every city has its own distinct opportunities and liabilities, and what worked for one city, with one set of industries and resources at one time, may be simply irrelevant or unavailable to another city. As Brookings and others have documented, much economic development practice is rife with fads: witness the profusion of cities pursuing—at great expense, and with no evident results—the development of biotechnology industry clusters. It’s tempting to pursue a “one of each” approach, so that whatever set of model policies anyone has cataloged, your city can point to at least a token effort that qualifies. The essence of strategy is choosing—ruling out inappropriate or low-return efforts and focusing on the things that matter.

None of this is likely to work if federal macroeconomic policy doesn’t facilitate robust growth.

While it’s laudable, as well as necessary, that communities pursue their own economic strategies, it’s also important to recognize—especially from the perspective of those working in DC—that these are unlikely to be collectively successful unless national economic growth continues, and indeed accelerates. The backdrop to this entire policy environment is still a demonstrably weak recovery from the worst economic downturn in eight decades. The relatively small size and quick withdrawal of fiscal stimulus, and more recently, the Federal Reserve’s renewed hawkishness about non-existent inflation, signal that the macroeconomic environment in the next few years will work against many of these local economic initiatives. The increasingly metropolitan locus of competitive advantage may mean that a few places continue to prosper while many American metros languish—simply because the national economy isn’t expanding fast enough to power growth in any but the most adept and advantaged places. In addition to providing advice to mayors and metro residents, it would be helpful if Brookings also spoke truth to the powerful in the federal government: all of these local economic development efforts hinge on a more ambitious macroeconomic policy.

There’s a growing recognition that many of the most important economic opportunities and decisions will be realized at the metropolitan level. “Remaking Economic Development” explains how past practice is simply inadequate to capitalize on these opportunities and lays out the steps that cities (and metropolitan regions) will need to take. In many respects these efforts are still in their infancy, and more learning and evolution is needed (and will occur).

Additional disclosure: I’ve written three research papers published by Brookings on industry clusters and regional development, and for several years was a non-resident Senior Fellow at Brookings.

Muddling income inequality and economic segregation

The big divides between rich and poor in the US are drawing increased attention, which is a good thing. Income inequality has been steadily growing in the US, and it’s a big problem.

As we’ve pointed out, this problem has an important spatial dimension as well. The concentration of poverty, in particular, amplifies all of the negative effects of poverty—and unfortunately, over the past four decades, the number of high poverty neighborhoods has been increasing. Poor people are now considerably more likely to live in neighborhoods where a large fraction of their neighbors are also poor.

But some of what’s being written about inequality at the city level is misleading, meaningless, or simply wrong.

There’s a kind of conundrum that confronts us when we talk about income inequality. Judged at a national level, a wide diversity of income levels is a bad thing. But in any particular neighborhood, having a diversity of incomes is pretty much the opposite: an indicator of economic integration. Conversely, lower levels of variation in income at the national level could be taken as a sign of a more equal society. But if there are very low levels of variation in income in a particular neighborhood, that’s pretty much a sure sign of strong economic segregation (whether that’s a neighborhood composed largely of the well-to-do or of the poor). The key point is this: while greater equality is generally a good thing at a national level, it can be a bad thing at a highly local level.

The reason, of course, is that at the neighborhood level, the distribution of income is shaped not by the overall distribution of income in the economy, but by the price of housing and the desirability of neighborhoods.
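A toy example (hypothetical incomes, our own construction, not data from any report) makes the conundrum concrete: the same citywide income distribution looks very “equal” within each neighborhood when households are sorted by income, and quite unequal within each neighborhood when households are mixed.

```python
import statistics

# Hypothetical household incomes ($000s) for a nine-household "city."
incomes = [20, 25, 30, 60, 70, 80, 150, 160, 170]

def within_neighborhood_sd(assignment):
    """Mean standard deviation of income across three neighborhoods,
    given an assignment of each household to neighborhood 0, 1, or 2."""
    groups = [[], [], []]
    for hood, income in zip(assignment, incomes):
        groups[hood].append(income)
    return statistics.mean(statistics.stdev(g) for g in groups)

segregated = [0, 0, 0, 1, 1, 1, 2, 2, 2]  # households sorted by income
integrated = [0, 1, 2, 0, 1, 2, 0, 1, 2]  # each neighborhood gets a mix

print(within_neighborhood_sd(segregated))  # small: each hood looks "equal"
print(within_neighborhood_sd(integrated))  # large: hoods are economically mixed
```

Citywide inequality is identical in both scenarios; only the within-neighborhood variation changes, which is why low neighborhood-level variation signals segregation rather than equality.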

The confusion generated by this conundrum is very much in evidence in an article that appeared in Next City last week. Entitled “Five Charts that Detail Wealth and Inequality in U.S. Cities,” the article summarizes a new report by the Washington, DC-based Economic Innovation Group using a range of zip code level Census data to assess levels of economic distress among and within metropolitan areas.

The featured table in this report lists the “ten most prosperous cities.”

This is less a list of “most prosperous cities” than it is a list of “most exclusive suburbs.” In each case, these are suburban cities on the periphery of one of the nation’s larger and more successful metropolitan areas. The reason they score so low on the distress indicator is not because they’ve created lots of jobs, but because their land use planning systems and high priced housing effectively exclude poorer residents from locating there.

For example, consider Flower Mound, Texas. According to the Census Bureau’s “On the Map” data service, of the 32,152 workers who lived in the city in 2013, 28,482 (about 89 percent) worked outside the city limits. Flower Mound’s story is not about its localized economic success, but rather about being a bedroom community for relatively high income people who work somewhere else—and not being a place that many low income people can afford.
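The commuting share is simple arithmetic on the two figures quoted from the Census service; the snippet below just reproduces the calculation:

```python
# Figures quoted above from the Census Bureau's "On the Map" service (2013).
resident_workers = 32_152  # workers living in Flower Mound
work_outside = 28_482      # of those, employed outside the city limits

share_outside = work_outside / resident_workers
print(f"{share_outside:.1%} of resident workers commute out")  # → 88.6%
```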

Flower Mound, TX. Credit: Google Maps

Describing such places as “the country’s most prosperous cities” isn’t so much wrong as it is incomplete and misleading.

And it diverts our attention from the fact that the creation of such exclusive enclaves is one of the factors that is amplifying the spatial economic segregation of metropolitan areas. Within a single metropolitan area—Phoenix—some suburban cities are classified as being prosperous and equal (Gilbert and Scottsdale) while another suburb a few miles away (Glendale) is among the most distressed.

This is clear when you look at the data in the EIG report: the two cities with the highest levels of “equality” are Cleveland and Detroit—essentially because poverty is so severe and widespread.

Just because we can compile data on income levels and economic inequality at the city level doesn’t mean that these are the useful units to use to assess or diagnose these problems.

In an important sense, municipalities are simply the wrong units for measuring economic performance—they don’t correspond to entire functioning economies, and they vary so widely in how they’re defined from region to region that comparisons simply aren’t meaningful.

Quantifying Jane Jacobs

Our storefront index shows where there’s a density of destinations to enable walkability

As Jane Jacobs so eloquently described it in The Death and Life of Great American Cities, much of the essence of urban living is reflected in the “sidewalk ballet” of people going about their daily errands, wandering along the margins of public spaces (streets, sidewalks, parks and squares) and in and out of quasi-private spaces (stores, salons, boutiques, bars and restaurants).

Clusters of these quasi-private spaces, which are usually neighborhood businesses, activate a streetscape, both drawing life from and adding to a steady flow of people outside.

In an effort to begin to quantify this key aspect of neighborhood vitality, we’ve developed a statistical indicator—the Storefront Index (click to see the full report)—that measures the number and concentration of customer-facing businesses in the nation’s large metropolitan areas. We’ve computed the Storefront Index by mapping the locations of hundreds of thousands of everyday businesses: grocery and hardware stores, beauty salons, bookstores, bars and restaurants, movie theatres and entertainment venues, and then identifying significant clusters of these businesses—places where each storefront business is no more than 100 meters from the next storefront.
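The 100-meter chaining rule described above is, in effect, a connected-components (single-linkage) clustering. Here is a minimal sketch of one way it could be computed; the function names, the union-find approach, and the flat-earth distance approximation are our illustrative choices, not the actual methodology behind the published index:

```python
import math

def cluster_storefronts(points, max_gap_m=100.0):
    """Group (lat, lon) points into chains where each point is within
    max_gap_m meters of at least one other point in its cluster.
    Connected components via union-find; O(n^2), fine for a sketch."""
    def meters(p, q):
        # Equirectangular approximation: adequate at neighborhood scale.
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
        y = lat2 - lat1
        return 6_371_000 * math.hypot(x, y)

    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if meters(points[i], points[j]) <= max_gap_m:
                parent[find(i)] = find(j)  # merge the two chains

    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

On real data covering hundreds of thousands of businesses, a spatial index (or a density-based clusterer such as scikit-learn’s DBSCAN with a 100-meter eps) would avoid the O(n²) pairwise scan.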

The result is a series of maps, available for the nation’s 51 largest metropolitan areas, that show the location, size, and intensity of neighborhood business clusters down to the street level. Here’s an example for Washington, DC. On this map, each dot represents one storefront business. This map shows storefront businesses throughout the metropolitan area. In downtown Washington, there is a high concentration of storefronts; as one moves further out towards the suburbs, the number of storefronts diminishes, and storefronts are increasingly found arrayed only along major arterials, with a few satellite city centers (like Alexandria).

SFI_DC_zoomedout

The Storefront Index helps illuminate the differences in the vibrancy of the urban core in different metropolitan areas. Here we’ve constructed identically scaled maps of the Portland and St. Louis metropolitan areas, zoomed in on their central business districts. The light colored circle represents a three-mile buffer around the center of downtown. In Portland, there are about 1,700 storefront businesses in this three-mile buffer—with substantial concentrations downtown, and in the close-in residential neighborhoods nearby. St. Louis has only about 400 storefront businesses in a similar area, with a smaller concentration of storefront businesses in its center, and fewer and less dense commercial districts in nearby neighborhoods.
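The three-mile buffer counts above amount to counting points within a great-circle radius of a downtown center. A minimal sketch of that calculation (the helper name and the test coordinates are hypothetical, not the study’s code):

```python
import math

def within_buffer(center, points, radius_miles=3.0):
    """Count (lat, lon) points within a straight-line radius of a center,
    using the haversine great-circle distance."""
    R_MILES = 3958.8  # mean Earth radius in miles
    clat, clon = map(math.radians, center)
    def haversine(p):
        lat, lon = map(math.radians, p)
        a = (math.sin((lat - clat) / 2) ** 2
             + math.cos(clat) * math.cos(lat) * math.sin((lon - clon) / 2) ** 2)
        return 2 * R_MILES * math.asin(math.sqrt(a))
    return sum(1 for p in points if haversine(p) <= radius_miles)
```

Running this over each metro’s storefront points, centered on its downtown, yields comparable counts like the roughly 1,700 (Portland) versus 400 (St. Louis) figures cited above.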

SFI_PDX

SFI_StLouis

The Storefront Index is one indicator of the relative size and robustness of the active streetscape in and around city centers. As this table shows, there’s considerable variation among US metropolitan areas in the number of storefront businesses within three miles of the center of downtown. New York and San Francisco have the densest concentrations of storefront businesses in their urban cores.

Maps of the Storefront Index for the nation’s 51 largest metropolitan areas are available online here. You can drill down to specific neighborhoods to examine the pattern of commercial clustering at the street level.

We also use the Storefront Index to track change over time, looking at the growth of businesses and street level activity in a rebounding neighborhood in Portland. There’s also strong evidence to suggest that concentrations of storefront businesses provide a conducive environment for walking. We’ve overlaid the storefront index clusters on a heat map of Walk Scores for selected metropolitan areas to explore the relationship between these two measures. While Walk Score includes destinations like parks and schools, as well as businesses, it also measures walkability from the standpoint of home-based origins, while our Storefront Index shows the concentration of commercial destinations.

City Observatory has developed the Storefront Index as a freely available tool for urbanists and city planners to use in their communities. The index material is licensed under a Creative Commons Attribution license (as is all City Observatory material), and a shape file containing storefront index information is available here.

Where you can walk and shop locally: The Storefront Index

Where are walkable local shopping districts in your city?

There are just six shopping days left until Christmas; while much of our shopping is done on-line or at big box stores and national chains, many consumers look to support their local businesses during the holiday season.  Where, exactly, can you find the clusters of local shops that represent alternatives to getting malled?

Part of the answer comes from City Observatory’s Storefront Index–we’ve mapped the location of millions of customer-facing retail and service businesses in cities throughout the nation, with an emphasis on identifying clusters of businesses in close proximity to one another–where you could conceivably walk from one establishment to another (or several others).  One great feature of the Storefront Index is that it shows the pattern and density of retail activity throughout a metropolitan area.

Our Storefront Index includes a series of maps, available for the nation’s 51 largest metropolitan areas, that show the location, size, and intensity of neighborhood business clusters down to the street level. Here’s an example for Portland, Oregon. On this map, each dot represents one storefront business. This map shows storefront businesses throughout the metropolitan area. In downtown Portland, there is a high concentration of storefronts; as one moves further out towards the suburbs, the number of storefronts diminishes, and storefronts are increasingly found arrayed only along major arterials.

SFI_PDX

A key message of the index is that retail businesses, especially small, local ones, don’t do well in isolation.  They thrive when they’re part of an environment that combines public spaces and private establishments in a dense and mutually reinforcing array. A local book store, like Portland’s Broadway Books, is more likely to flourish if it’s located, as it is, near restaurants, coffee shops, bakeries, florists, and other establishments that generate foot traffic and create a lively and enjoyable place to walk.

Broadway Books is located in one of these thriving neighborhood business districts: Northeast Broadway.  We can use the Storefront Index map to zoom in to the local area to see the array of businesses lined up along both sides of the street, a walkable string of pearls for shoppers.

As Jane Jacobs so eloquently described it in The Death and Life of Great American Cities, much of the essence of urban living is reflected in the “sidewalk ballet” of people going about their daily errands, wandering along the margins of public spaces (streets, sidewalks, parks and squares) and in and out of quasi-private spaces (stores, salons, boutiques, bars and restaurants). These urban street spaces are especially active this time of year, when the usual flow of customers is swelled by holiday shoppers.

You can use the Storefront Index to zoom in to any neighborhood in any of the 51 largest metropolitan areas in the US; it will show you which places have a diverse array of storefronts in close proximity to one another, and which places have a paucity of such concentrations.

Maps of the Storefront Index for the nation’s 51 largest metropolitan areas are available online here. You can drill down to specific neighborhoods to examine the pattern of commercial clustering at the street level.

We also use the Storefront Index to track change over time, looking at the growth of businesses and street level activity in a rebounding neighborhood in Portland. There’s also strong evidence to suggest that concentrations of storefront businesses provide a conducive environment for walking. We’ve overlaid the storefront index clusters on a heat map of Walk Scores for selected metropolitan areas to explore the relationship between these two measures. While Walk Score includes destinations like parks and schools, as well as businesses, it also measures walkability from the standpoint of home-based origins, while our Storefront Index shows the concentration of commercial destinations.

City Observatory has developed the Storefront Index as a freely available tool for urbanists and city planners to use in their communities. The index material is licensed under a Creative Commons Attribution license (as is all City Observatory material), and a shape file containing storefront index information is available here.


CBO on highway finance: The price is wrong

A new Congressional Budget Office (CBO) report confirms what we’ve known for a long time: our nation’s system of assessing the costs of roads—and paying for their construction and maintenance—is badly broken.

Entitled “Approaches to Making Federal Highway Spending More Productive,” the new CBO report is a treasure trove of details about the recent history of transportation finance in the United States. Though couched in the careful technocratic language of the budget analyst—you’ll read about how alternative financial arrangements would enable better “performance” and create greater “efficiency”—the translation is straightforward: the big cause of our transportation problems is that we’re charging road users the wrong price.

Collectively, road users are paying too little for what they use, which is why taxpayers have had to chip in more than $140 billion over the past seven years to make up shortfalls in the Highway Trust Fund. The Trust Fund is the repository of gas taxes and other road user fees and is supposed to cover the cost of building and maintaining the nation’s roads. But the underlying problem isn’t just that there’s too little money: it’s that the way we allocate costs to users, and the way we distribute funding among alternative investments produces lousy results.

As the report puts it: “Spending on highways does not correspond very well with how the roads are used and valued.”

Translation: The price of roads is wrong. Drivers who use lots of expensive capacity (urban roads at peak travel times) don’t pay their costs, and money gets allocated to spending that produces limited value for the nation.

Under the current system of fuel taxes, all users pay basically the same amount whether they travel on highly congested roads or nearly empty ones. That means users have no incentive to adjust their travel times, routes, or modes to reduce the costs that their travel imposes on everyone else. The fact that many road users face prices that are far lower than the costs they impose on the system means that highways are over-used, and that there isn’t enough money to maintain or improve them. Getting prices right would lead to less peak demand (shifting travel to un-congested periods, when it can be accommodated with the existing infrastructure) and thus improve service for users who value travel time savings.
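The incentive gap can be illustrated with a standard textbook exercise (this is our illustration, not a calculation from the CBO report) using the widely used BPR volume-delay curve: each driver experiences the average travel time, but an additional driver raises the travel time of everyone else on the road.

```python
# Toy congestion model using the BPR curve t = t0 * (1 + 0.15 * (v/c)^4).
# Parameters (free-flow time t0, capacity c, volume v) are illustrative.

def travel_time(v, t0=10.0, c=1000.0):
    """Minutes per trip when v vehicles/hour use a road with capacity c."""
    return t0 * (1 + 0.15 * (v / c) ** 4)

def external_delay(v, t0=10.0, c=1000.0):
    """Extra minutes one more driver imposes on all *other* drivers:
    v * dt/dv, with dt/dv taken from the BPR curve."""
    dt_dv = t0 * 0.15 * 4 * v ** 3 / c ** 4
    return v * dt_dv

v = 1500  # peak volume, 50 percent over capacity
print(travel_time(v))     # time each driver experiences (~17.6 min)
print(external_delay(v))  # delay that driver imposes on others (~30.4 min)
```

In this toy example, the delay a peak-hour driver imposes on others exceeds the travel time the driver personally experiences; that external cost is exactly what an unpriced road never charges for.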

It’s also important to keep in mind that this report only addresses the direct financial costs to government for constructing and operating the highway system. There are also huge social and environmental costs—from air pollution, climate change, and injuries and deaths associated with crashes—that aren’t reflected in the prices that road users pay. In an earlier report, CBO estimated that trucks were subsidized to the tune of $57 billion to $128 billion a year because of these costs and road damage.

The CBO has three recommendations: price roads, especially to reflect congestion; allocate funds on a cost-benefit basis; and link spending to performance. The CBO points out that road pricing would not only provide badly needed funds, but would provide valuable information about which highway system improvements would generate the largest economic benefits. They report that according to FHWA, pricing might reduce the expenditure needed to achieve a given performance level by 30 percent.

And—almost in passing—the CBO report casts doubt on the accepted wisdom that highway building triggers economic growth. They say: “Research suggests that increases in economic activity from spending for new highways in the United States have generally declined over time.” Translation: highway investment experiences diminishing returns. The nation got a big gain from building the Interstate Highway System when there was none, but each successive increment to the system produces a smaller and smaller return.

Highway construction in Seattle, 1962. Credit: Seattle Municipal Archives, Flickr

We’ll grant that critics might point out that other modes, like transit or biking or walking, don’t cover their own costs with user fees, either—there’s no sidewalk maintenance toll, and nor should there be. But there’s a critical difference between car travel and these other modes. Users of those modes don’t create the same costs, either, and not just because sidewalks cost a tiny fraction of a tiny fraction of what roads cost. Each additional driver, for example, creates congestion for every other driver on the road at the same time, up to the point that travel times can be doubled or tripled at peak use. Additional riders on a subway, on the other hand, create only very modest increases in travel times because of the time it takes for them to board—and perhaps none at all, if more ridership causes the transit agency to run more trains, and their boarding time is canceled out by less waiting time. We’re not confronting the cost of multi-billion dollar sidewalk investment projects due to peak hour congestion caused by under-priced foot traffic. Just as importantly, transit riders, bikers, and walkers create few, if any, of the major social costs of driving, from deaths and injuries in crashes to pollution.

Simply pumping more money into the existing highway finance system will produce limited economic benefits. Many projects are only needed because drivers don’t confront anything close to the actual costs of the roads they drive on—and if they did, demand would be far smaller. Congestion pricing would improve the flow of traffic and enable us to meet the nation’s transportation needs at much lower costs. And investments in the highway system face real diminishing returns, so that additional money invested in highways produces less and less economic benefit.

Cities can’t solve all our problems

As our name implies, we’re very focused on cities. We think cities are the key to solving many of the nation’s most challenging problems, from economic opportunity and social justice, to environmental sustainability. And we’re not alone: more and more, activists are looking to cities to take the lead on critical policy issues.

Cities should be fearless innovators. But at the same time, we have to recognize that there are some things that cities can’t do: some problems transcend municipal boundaries in ways that defy effective or rational solution on a city-by-city basis. For some policy issues—including ones that play out in cities—there is simply no substitute for getting national policies right.

Help, please. Credit: Rob Crawley, Flickr

In a recent New York Times article, “Liberals Turn to Cities to Pass Laws and Spread Ideas,” Claire Cain Miller explained that progressives find a more receptive audience in the nation’s bluest political regions. It’s easier to try out controversial new policy ideas like paid parental leave in cities like San Francisco; and if a policy innovation can be shown to work at a smaller scale, it might lead to momentum to spread the policy nationally.

Viva la (Reagan) Revolution?

Progressives have increasingly turned to cities and states because of the seeming futility of engineering major changes in federal policy. If, prior to 1980, you had the expectation that the federal government was going to ride to the rescue in any major domestic policy arena, President Reagan sought to disabuse you of that notion. Three decades later, it’s apparent that many progressives have internalized this worldview.

With little hope or expectation that a federal government, hamstrung by chronic fiscal problems and paralyzed by deep partisan divisions, would take any action, local activists have pushed city and state governments to step forward to tackle problems, rather than waiting for a federal response that would never come. The shift to local, rather than national, policy-making is, in effect, a concession to the Reaganite view of federalism.

With one notable exception—and we’ll return to this in a moment—federal domestic policy has consisted mostly of tokenism and misdirection: token efforts that were too small to make any serious dent in a policy problem (like “Promise Zones”), or misdirection like the recently abandoned (or retooled, if you like) “No Child Left Behind” act, which essentially imposed an overlay of federal regulation on states and local schools without providing new resources for K-12 education.

The single notable exception to the atrophy of national domestic policy is the Affordable Care Act, in which the federal government has re-written the rules for the health care sector, and put in place what seems to be (sorry skeptics) an irreversible move toward nearly universal health care coverage.

But with a nearly complete policy vacuum at the federal level in other policy areas, states and especially cities have upped their policy making game, on subjects as diverse as climate change, housing affordability, economic development and innovation.

New York mayor Bill De Blasio has been expected to deal with a housing crisis in his city with little federal help. Credit: Kevin Case, Flickr

Cities are clearly attractive arenas for policy innovation for a couple of reasons. Most importantly, there’s a political consensus. Most big cities are a deep, dark blue, so there’s little question of the political viability of some ideas—few push back in San Francisco or Boston if you fret about global warming or inequality.

To be sure, there are some issues of personal and individual rights, freedoms, and local public health where cities are well-qualified and capable of implementing meaningful reforms, playing their fabled role as laboratories of democracy that test out solutions that can be emulated nationally. States and cities have taken important leadership roles in decriminalizing and legalizing marijuana, extending gay rights and marriage equality, and limiting smoking in public places; successful local policies pioneered in leading cities have led to increasing policy adoption elsewhere.

Where better national policy is desperately needed

But even with the best of intentions, there are some things that cities simply can’t do. Top of the list: the environment and the economy. Despite the noble efforts of cities and some states to push forward with greenhouse gas reduction goals and renewable portfolio standards, some form of carbon pricing (auctioning permits or taxing carbon) can only effectively be implemented at a federal level. Having Texas or Wyoming opt out of carbon limits would pretty much negate their impact, and put a real brake on policies that meaningfully push costs onto polluters. At some point, the learning and examples from local experimentation have to be rolled up to a national scale if we are to actually solve these problems.

Cities and states are increasingly concerned with promoting economic opportunity and raising wages. Many have devised useful programs to train workers, to provide better access to jobs for those cut off from them, and to legislate increases to minimum wages. But the success of all of these initiatives ultimately depends directly on the state of the national economy: the task of every city has been made harder by stagnant economic growth, a product of contractionary federal fiscal policy, and by a Federal Reserve Board that, still obsessed with fighting the imaginary inflation demons of the 1970s, threatens to push the economy into recession at any time.

In many cases, changes to federal policy are essential simply to enable local experimentation. Some policies that seem to be amenable to local action have incentives so deeply embedded in federal law and regulation that they are almost impossible to change. For example, the federal government dictates (and operates) the key features of the nation’s housing finance system, from the term of mortgages and guarantees thereon to a series of tax policies that prod families into homeownership. It also heavily subsidizes car and truck travel and highway expansion from general funds, and allocates money to local entities that are strongly oriented toward more road building.

Credit: Brian Imagawa, Flickr

Perhaps most fundamentally, local governments find it nearly impossible to directly tackle income inequality. Local governments that attempt income redistribution just impel the wealthy to secede to other jurisdictions; and our income inequality and segregation are actively worsened by exclusionary cities that use their police power to exclude the poor.

As well-intended and pragmatic as the shift to more favorable local venues seems, we think it is a mistake not to pursue strong national advocacy for specific federal policies that would help cities—even if there’s little chance of short-term adoption. What we need, in many cases, is a policy framework that considers the root causes and scale of the problem, and that contributes to a dialog about what the nation as a whole needs to do—rather than relying solely on the courageous, but inherently limited, efforts of individual cities. We’ve sketched out the case, for example, for shifting federal housing subsidies into vouchers or tax credits that would reach more of the nation’s rent-burdened households. The ambivalence or latent hostility of existing federal policies is often a major headwind to local efforts to remedy these problems.

While it seems prudent in today’s political environment to “think globally and act locally,” we are quickly approaching the limits of what cities can do on their own to tackle big problems of inequality, housing affordability, environmental sustainability, and economic progress. Successful federalism hinges on a strong partnership, with a clear division of labor between the important roles to be played by both national and local governments. Increasingly, if cities are to continue to be crucibles of change, we’ll need new national policies that provide the frameworks, incentives and in some cases resources that are needed to realize the potential of solving major national problems by building more diverse, inclusive cities. As it turns out, 2016 is a presidential election year: Maybe now we should start having this conversation?

The limits of design thinking

The most difficult design challenge is asking the right question

Not long ago, a feature article at the New York Times described how the design wizards at IDEO are helping stodgy old Ford Motor Company re-imagine how transportation might work in the future.

IDEO conceptualized the design task by sending groups of its employees to a restaurant a few miles away via different transportation modes, so they could assess the challenges each faces: the subway (too smelly), Divvy bikeshare (too dirty), and Uber (too expensive).

The hope is that this exercise will enable IDEO to brainstorm clever new ways to get from point A to point B in the future. (We’re pretty sure the solution will involve apps.)

In our view, this is an epic fail for several reasons. First, it just gets certain things wrong. For example, according to the article, one of IDEO’s big complaints about the subway was the lack of cellphone reception—a problem the CTA had already announced it would be fixing when they took their ride in October, with full coverage in all subways rolled out in December.

But there’s a more fundamental problem: Is the optimal place for a quick work lunch four miles from IDEO’s office? If it is, isn’t the real problem here that a company has located its office in a place where its employees have to travel four miles just to get lunch and have a meeting? And isn’t the restaurant’s problem that it’s located in a place where its customers are four miles away?

Google results for "restaurants" in IDEO's West Loop Chicago neighborhood.

And in fact, IDEO has already solved this particular design problem for itself. Fork and Tine, the restaurant they chose, might be four miles away, but there are dozens and dozens of restaurants within a quick walk of its downtown offices. City Observatory’s Chicago bureau chief, Daniel Hertz, spent an afternoon walking around IDEO’s West Loop neighborhood to find a bite to eat in the name of research. A five minute walk in one direction is Greektown, where you can get a quick gyro or souvlaki. A few minutes north of there is Randolph Street, one of the premier dining destinations in the city, with everything from high-concept burgers to Indian curries. Or, just about ten minutes straight north of IDEO’s offices is the French Market food court at a major commuter rail station, a hugely popular workday lunch spot. Not to mention that you pass at least a couple places to eat on every block on your way to any of these destinations.

It’s not a coincidence that there’s so much to do within a quick walk of IDEO’s offices—that’s almost certainly a major reason they chose to locate in the West Loop. Indeed, it’s a big reason that Chicago’s downtown has seen a steady stream of companies opening new or relocated offices there.

In other words, the issue here has at least as much—if not much more—to do with the design of cities as with the relatively superficial features of different transport modes.

With rows of restaurants down the street from the IDEO offices, they don’t have to travel at all, save to walk a few hundred feet, saving them bundles of valuable time. So being in a dense urban location turns out to be the optimal design solution: relying as it does on the healthiest, least expensive, lowest carbon and most fully deployed transport technology in human history: walking. IDEO already knows this: that’s why they pay premium rents for their tidy, exposed-brick office space in the West Loop.

A train leaves the Ogilvie commuter rail station in the West Loop. Credit: Seth Anderson, Flickr

One of the subsidiary tasks IDEO assigned its testers was schlepping a couple of large shopping bags—to simulate, in some way, how a busy person might have to mix some domestic errands with their business lunch. Fair enough. But if one lived in a mixed use neighborhood, where there was, say, a corner store or bodega down the block, one might easily handle all one’s shopping with more frequent but much more convenient walking trips to buy just a handful of necessities, rather than having perforce to carry a week’s worth of groceries because it was several miles to the big box store.

If we think about it correctly, dense, mixed use urban spaces are the ultimate design solution to our transportation problems. They provide low-cost, no-carbon, time-saving access to all manner of things that consumers want and need in their daily lives.

The real failure in design thinking here is IDEO viewing this task as primarily choosing between different transportation modes. Of course, they are free to frame this question however they—and their paying client—would like. But from a broader policy perspective, and from the perspective of citizens and consumers, we’d all be a lot better off if the design conversation were about how we arrange our cities. The design lens is often blinded by the look and feel of things rather than by basic, systemic issues: the quality of bus service, for example, has much more to do with schedule frequency and running times than with the features of the transit system’s arrival-time notification app. We think bus riders would be much more impressed by buses that arrived every ten minutes and made the trip faster thanks to dedicated lanes than by an app that told them the next bus was exactly 22 minutes away.

Just focusing on transportation ignores and rules out the very substantial gains that could be made by better designing our cities for living. It’s hard to get the right answers under the best of circumstances. It’s just impossible to get the right answers if you ask the wrong questions.

A version of this commentary originally appeared at City Observatory in February 2016.


The Storefront Index

As Jane Jacobs so eloquently described it in The Death and Life of Great American Cities, much of the essence of urban living is reflected in the “sidewalk ballet” of people going about their daily errands, wandering along the margins of public spaces (streets, sidewalks, parks and squares) and in and out of quasi-private spaces (stores, salons, boutiques, bars and restaurants).

Clusters of these quasi-private spaces, which are usually neighborhood businesses, activate a streetscape, both drawing life from and adding to a steady flow of people outside.

In an effort to begin to quantify this key aspect of neighborhood vitality, we’ve developed a new statistical indicator—the Storefront Index (click to see the full report)—that measures the number and concentration of customer-facing businesses in the nation’s large metropolitan areas. We’ve computed the Storefront Index by mapping the locations of hundreds of thousands of everyday businesses: grocery and hardware stores, beauty salons, bookstores, bars and restaurants, movie theatres and entertainment venues, and then identifying significant clusters of these businesses—places where each storefront business is no more than 100 meters from the next storefront.
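The report doesn’t reproduce its clustering code, but the 100-meter chaining rule described above amounts to single-linkage clustering: any chain of storefronts, each within 100 meters of the next, forms one cluster. A minimal sketch, assuming planar coordinates in meters and a simple union-find (our illustrative implementation, not the report’s):

```python
from math import hypot

def cluster_storefronts(points, max_gap=100.0):
    """Group storefront locations (x, y in meters) into clusters where
    each storefront is within max_gap of at least one other member.
    Simple O(n^2) union-find; fine for illustration, not for a metro."""
    parent = list(range(len(points)))

    def find(i):
        # Follow parent pointers to the root, halving the path as we go.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Merge every pair of storefronts closer than the chaining distance.
    for i, (xi, yi) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if hypot(xi - xj, yi - yj) <= max_gap:
                union(i, j)

    # Collect members by root.
    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(points[i])
    return list(clusters.values())

# A strip of shops 80 m apart chains into one cluster;
# an isolated shop 500 m further on forms its own.
shops = [(0, 0), (80, 0), (160, 0), (660, 0)]
print([len(c) for c in cluster_storefronts(shops)])  # → [3, 1]
```

At the scale of hundreds of thousands of business locations, a real implementation would project lat/lon to a planar coordinate system and use a spatial index rather than the pairwise loop.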

The result is a series of maps, available for the nation’s 51 largest metropolitan areas, that show the location, size, and intensity of neighborhood business clusters down to the street level. Here’s an example for Washington, DC. On this map, each dot represents one storefront business. This map shows storefront businesses throughout the metropolitan area. In downtown Washington, there is a high concentration of storefronts; as one moves further out toward the suburbs, the number of storefronts diminishes, and storefronts are increasingly found arrayed only along major arterials, with a few satellite city centers (like Alexandria).

SFI_DC_zoomedout

The Storefront Index helps illuminate the differences in the vibrancy of the urban core in different metropolitan areas. Here we’ve constructed identically scaled maps of the Portland and St. Louis metropolitan areas, zoomed in on their central business districts. The light colored circle represents a three-mile radius around the center of downtown. In Portland, there are about 1,700 storefront businesses in this three-mile circle—with substantial concentrations downtown, and in the close-in residential neighborhoods nearby. St. Louis has only about 400 storefront businesses in a similar area, with a smaller concentration of storefront businesses in its center, and fewer and less dense commercial districts in nearby neighborhoods.

SFI_PDX

SFI_StLouis

The Storefront Index is one indicator of the relative size and robustness of the active streetscape in and around city centers. As this table shows, there’s considerable variation among US metropolitan areas in the number of storefront businesses within three miles of the center of downtown. New York and San Francisco have the densest concentrations of storefront businesses in their urban cores.

 

Maps of the Storefront Index for the nation’s 51 largest metropolitan areas are available online here. You can drill down to specific neighborhoods to examine the pattern of commercial clustering at the street level.

We also use the Storefront Index to track change over time, looking at the growth of businesses and street level activity in a rebounding neighborhood in Portland. There’s also strong evidence to suggest that concentrations of storefront businesses provide a conducive environment for walking. We’ve overlaid the storefront index clusters on a heat map of Walk Scores for selected metropolitan areas to explore the relationship between these two measures. While Walk Score includes destinations like parks and schools, as well as businesses, it also measures walkability from the standpoint of home-based origins, while our Storefront Index shows the concentration of commercial destinations.

City Observatory has developed the Storefront Index as a freely available tool for urbanists and city planners to use in their communities. The index material is licensed under a Creative Commons Attribution license (as is all City Observatory material), and shapefiles containing storefront index information are available here.

Urban myth busting: Why building more high income housing helps affordability

After fourteen seasons, Discovery Channel’s always entertaining “Mythbusters” series ended last year. If you didn’t see the show (it lives on at YouTube, of course), co-hosts Adam Savage and Jamie Hyneman constructed elaborate (often explosive) experiments to test whether something you see on television or in the movies could actually happen in real life. (Sadly, it turns out that you can’t make a bullet curve no matter how fast you flick your arm.)

Adam-Savage-and-Jamie-Hyneman-in-Mythbusters

At City Observatory, we feel compelled to enter into this void, and we’ll start by doing our own urban myth-busting. First up: Does building new high-priced apartments, affordable only by middle- and upper-income families, make housing less affordable for lower income households?

We’ve heard this claim time and again in public hearings: new rental housing charges higher rents than existing apartments, and must therefore be making affordability problems worse. Business Week’s Noah Smith shared the lament of the misunderstood economist who confided in a progressive friend that he favored building new market-rate (i.e. high priced) housing in San Francisco:

I was talking to a friend the other day, a San Francisco anti-eviction activist, and said that allowing more housing construction in the city would be a great way to lower rents. She looked at me in horror, blinked and asked “Market rate?” I nodded. She was speechless.

My experience was far from unusual. To my friend and many others, it has become an article of faith that building market-rate housing raises rents, rather than lowers them.  The logic of Econ 101 — that an increase in supply lowers price — is alien to many progressives, both in the Bay Area and around the country.

Even Harvard University’s Joint Center for Housing Studies reprised this line in one of their recent reports: “50 percent of rental households make less than $34,000 per year, but only 10 percent of new multi-family units are affordable at this income.”

From this statistical observation, it’s a short leap to the conclusion that building new housing is part of the affordability problem. The Wall Street Journal reported that “much of the new supply is aimed at higher-income renters.” In May, the Journal ran a story claiming: “A focus by builders on high-end apartments helps explain why rents are soaring across the country.”

New construction in San Francisco. Credit: torbakhopper, Flickr

On its surface, this sounds terrible. But the key context missing here is that in the United States, we have almost never built new market-rate housing for low-income households. New housing—rental and owner-occupied—overwhelmingly tends to get built for middle- and upper-income households. So how do affordable market-rate housing units get created? As new housing ages, it depreciates, and prices and rents decline, relative to newer houses. (At some point, usually after half a century or more, the process reverses, as surviving houses—which are often those of the highest quality—become increasingly historic, and then appreciate.)

What really matters is not whether new housing is created at a price point that low- and moderate-income households can afford, but rather, whether the overall housing supply increases enough that the existing housing stock can “filter down” to low and moderate income households. As we’ve written, that process depends on wealthier people moving into newer, more desirable homes. Where the construction of those homes is highly constrained, those wealthier households end up bidding up the price of older housing—preventing it from filtering down to lower income households and providing for more affordability.

This isn’t theoretical: As we’ve discussed before at City Observatory, the vast majority of today’s actually existing affordable housing is not subsidized below-market housing, but market-rate housing that has depreciated, or “filtered.” Syracuse economist Stuart Rosenthal estimates that the median value of rental housing declines by about 2.2% per year. As its price falls, lower-income people move in. Rosenthal estimates that rental housing that is 20 years old is occupied, on average, by households with incomes about half the level of incomes of those who occupy new rental housing.
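To see how Rosenthal’s estimate compounds over a building’s life, here’s a quick sketch of the arithmetic; the flat 2.2 percent annual rate comes from the paragraph above, while the $2,000 starting rent is purely an illustrative assumption:

```python
# Rosenthal's estimate: median rental values decline about 2.2% per year.
ANNUAL_DECLINE = 0.022

def depreciated_value(initial_value, years, rate=ANNUAL_DECLINE):
    """Value of a rental unit after compounding annual depreciation."""
    return initial_value * (1 - rate) ** years

new_rent = 2000  # hypothetical rent for a brand-new apartment, in dollars
for age in (10, 20, 30):
    print(f"age {age}: ${depreciated_value(new_rent, age):,.0f}")
```

At that rate, a unit retains about 64 percent of its initial value after 20 years, and roughly half after 30, broadly consistent with Rosenthal’s finding that 20-year-old rentals house occupants with about half the incomes of those in new units.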

Screen Shot 2015-11-09 at 9.55.56 AM
Apartments get cheaper up until they’re about 50 years old.

 

In its 2016 report on the state’s housing crisis, the California Legislative Analyst’s Office noted that as housing ages, it becomes more affordable. Housing that was likely considered “luxury” when first built declined to the middle of the housing market within 25 years. Take the 1960s-era apartments built in Marietta, a suburb of Atlanta: when they were new, they were middle to upper income housing, occupied by single professionals. Gradually, as they aged, they slid down-market, to the point where the city passed an $85 million bond issue to acquire and demolish them as a way of reducing a concentration of low income households in the Franklin Road neighborhood.

Another critical point is that if we don’t build more housing at the high end of the market, those households don’t just disappear, they take their demand “down-market” and bid up the price of housing that would otherwise filter down to middle and lower income households. That’s exactly what the Montgomery County Maryland housing department reports is happening there:

The shortage of rental housing at the high end of the market creates downward pressure on less affluent renters, the study found, because when higher-income households rent less expensive units, lower-income renters have fewer affordable choices. Cost-burdening is linked with this unbalanced market, especially at the lower end of the income spectrum.

Ironically, this problem persists in Montgomery County in spite of its widely touted inclusionary housing requirement that forces builders of new apartments to set aside a portion of them for low and moderate income households.

New Cars are Unaffordable to Low Income Households, too

Here’s another way to look at the connection between affordability and the price of new things: cars. (After houses, cars are frequently the most expensive consumer durable that most Americans purchase.)

Exactly the same thing could be said of new car purchases: Most new cars aren’t affordable to the typical household either—the average sale price of a new car is nearly $34,000.

Credit: Brian Timmermeister, Flickr

In fact, using the same kind of approach that Harvard’s Joint Center for Housing Studies used to assess rental affordability, Interest.com reports that the median family can afford to buy the typical new car in only one large metropolitan area. Similar to the “30 percent of income” rule widely—and in our view inappropriately—used to gauge housing affordability, they assume that the typical household makes a 20 percent down payment, finances its purchase over four years, and pays no more than 10 percent of its income for a car payment. They report that in most metros the typical family falls 30 to 40 percent short of being able to afford a new car. So most households deal with car affordability pretty much like they deal with housing affordability: by buying used.
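The rule of thumb described above translates into simple loan arithmetic, sketched below; the 4 percent APR is our own illustrative assumption, not part of Interest.com’s published methodology:

```python
def max_affordable_car_price(annual_income, apr=0.04, years=4,
                             payment_share=0.10, down_share=0.20):
    """Back out the car price a household can afford under an
    Interest.com-style rule: 20% down, a 4-year loan, and a payment
    of no more than 10% of gross income. APR is an assumption."""
    monthly_payment = annual_income * payment_share / 12
    r = apr / 12          # monthly interest rate
    n = years * 12        # number of payments
    # Standard amortization: loan = payment * (1 - (1+r)^-n) / r
    max_loan = monthly_payment * (1 - (1 + r) ** -n) / r
    # The loan covers the price net of the down payment.
    return max_loan / (1 - down_share)

# A household earning $60,000 can carry a $500/month payment, which
# supports a loan of roughly $22,000 and a price near $28,000,
# well short of the ~$34,000 average new-car price cited above.
print(round(max_affordable_car_price(60000)))
```

The same structure, with a longer term and a 30 percent payment share, is essentially the standard mortgage-affordability calculation, which is why the two debates rhyme so closely.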

When it comes to anything new and long-lived, higher-income households buy most of the output. According to Bureau of Labor Statistics data, households in the two highest income quintiles accounted for about 67 percent of new car purchases in the US in 2001. New car buyers are getting progressively older and more likely to be high income. According to the National Automobile Dealers Association, the median new car buyer is 52 years old and has an income of about $80,000, compared to an average age of 37 and an income of $50,000 for the overall population.

But there’s no outcry about America’s “affordable car crisis.” The reason: high-income households buy newer cars; most of the rest of us buy used cars—which are more affordable after they’ve depreciated for a while. That’s even more true of housing, which is much longer lived. Nationally, 68 percent of the nation’s rental housing is more than 30 years old—so only about 10 percent of the nation’s renters live in apartments built in the last decade.

New houses, like new cars, are sold primarily to higher income households—and affordability comes from getting a bargain when the car (or house or apartment) has depreciated. Building more high priced new apartments, in fact, is critical to generate the filtering down of older housing that constitutes the affordable housing supply.

This myth is busted: building more high end housing doesn’t make housing less affordable.

Urban myth busting: New rental housing and median-income households

The price of new housing is a poor gauge of housing affordability

Increasing housing supply over time, coupled with individual housing units moving down-market as they age, provides affordability

New cars are unaffordable to most households; used cars are the source of affordable driving

Discovery Channel’s always entertaining “Mythbusters” series ran for fourteen seasons before ending in 2016. If you didn’t see the show, co-hosts Adam Savage and Jamie Hyneman constructed elaborate (often explosive) experiments to test whether something you see on television or in the movies could actually happen in real life. (Sadly, you can’t make a bullet curve no matter how fast you flick your arm.)  

Adam-Savage-and-Jamie-Hyneman-in-Mythbusters

At City Observatory, we feel compelled to enter into this void, and we’ll start by doing our own urban myth-busting. Today: Does building new high-priced apartments, affordable only by middle- and upper-income families, make housing less affordable for lower income households?

We’ve heard this claim time and again in public hearings: new rental housing charges higher rents than existing apartments, and must therefore be making affordability problems worse.

Even Harvard University’s Joint Center for Housing Studies has reprised this line: “50 percent of rental households make less than $34,000 per year, but only 10 percent of new multi-family units are affordable at this income.”

From this statistical observation, it’s a short leap to the conclusion that building new housing is part of the affordability problem. The Wall Street Journal reported that “much of the new supply is aimed at higher-income renters.” The Journal also claimed: “A focus by builders on high-end apartments helps explain why rents are soaring across the country.”

New construction in San Francisco. Credit: torbakhopper, Flickr

On its surface, this sounds terrible. But the key context missing here is that in the United States, we have almost never built new market-rate housing for low-income households. New housing—rental and owner-occupied—overwhelmingly tends to get built for middle- and upper-income households. So how do affordable market-rate housing units get created? As new housing ages, it depreciates, and prices and rents decline, relative to newer houses. (At some point, usually after half a century or more, the process reverses, as surviving houses—which are often those of the highest quality—become increasingly historic, and then appreciate.)

What really matters is not whether new housing is created at a price point that low- and moderate-income households can afford, but rather, whether the overall housing supply increases enough that the existing housing stock can “filter down” to low and moderate income households. As we’ve written, that process depends on wealthier people moving into newer, more desirable homes. Where the construction of those homes is highly constrained, those wealthier households end up bidding up the price of older housing—preventing it from filtering down to lower income households and providing for more affordability.

This isn’t theoretical: As we’ve discussed before at City Observatory, the vast majority of today’s actually existing affordable housing is not subsidized below-market housing, but market-rate housing that has depreciated, or “filtered.” Syracuse economist Stuart Rosenthal estimates that the median value of rental housing declines by about 2.2% per year. As its price falls, lower-income people move in. Rosenthal estimates that rental housing that is 20 years old is occupied, on average, by households with incomes about half the level of incomes of those who occupy new rental housing.

Screen Shot 2015-11-09 at 9.55.56 AM
Apartments get cheaper up until they’re about 50 years old.

In its 2014 report, the California Legislative Analyst’s Office noted that as housing ages, it becomes more affordable. Housing that likely was considered “luxury” when first built declined to the middle of the housing market within 25 years. Take the 1960s-era apartments built in Marietta, a suburb of Atlanta: When they were new, they were middle to upper income housing, occupied by single professionals, gradually, as they aged, they slid down-market, to the point where the city passed an $85 million bond issue to acquire and demolish them as a way of reducing a concentration of low income households in the Franklin Road neighborhood.

New cars are unaffordable to low income households, too

Here’s another way to look at the connection between affordability and the price of new things: cars. (After houses, cars are frequently the most expensive consumer durable that most Americans purchase.)

Exactly the same thing could be said of new car purchases: Most new cars aren’t affordable to the typical household either—the average sale price of a new car is nearly $34,000.

Credit: Brian Timmermeister, Flickr
Credit: Brian Timmermeister, Flickr

 

In fact, using the same kind of approach that Harvard’s Joint Center for Housing Studies used to assess rental affordability, Interest.com reported that the median family can afford to buy the typical new car in only one large metropolitan area. Similar to the “30 percent of income” rule widely—and in our view inappropriately—used to gauge housing affordability, they assume that the typical household makes an 20 percent down payment, finances its purchase over four years and pays no more than 10 percent of its income for a car payment. They report in most metros that the typical family falls 30 to 40 percent short of being able to afford a new car. So most households deal with car affordability pretty much like they deal with housing affordability: by buying used.

When it comes to anything new and long-lived, higher-income households buy most of the output. According to Bureau of Labor Statistics data, households in the two highest income quintiles accounted for about 67 percent of the purchase of new cars in the US in 2001. New car buyers are getting progressively older, and are more likely to be high income. According to the National Automobile Dealers Association, the median new car buyer is 52 years old and has an income of about $80,000, compared to an average age of 37 and an income of $50,000 for the overall population.

But there’s no outcry about America’s “affordable car crisis.” The reason: high-income households buy newer cars; most of the rest of us buy used cars—which are more affordable after they’ve depreciated for a while.That’s even more true of housing, which is much longer lived. Nationally, 68 percent of the nation’s rental housing is more than 30 years old—so only about 10 percent of the nation’s renters live in apartments built in the last decade.

New houses, like new cars, are sold primarily to higher income households—and affordability comes from getting a bargain when the car (or house or apartment) has depreciated. Building more high priced new apartments, in fact, is critical to generate the filtering down of older housing that constitutes the affordable housing supply.

This myth is busted: building more high end housing doesn’t make housing less affordable.

Urban myth busting: New rental housing and median-income households

After fourteen seasons, Discovery Channel’s always entertaining “Mythbusters” series is coming to an end later this year. If you haven’t seen the show, co-hosts Adam Savage and Jamie Hyneman construct elaborate (often explosive) experiments to test whether something you see on television or in the movies could actually happen in real life. (Sadly, you can’t make a bullet curve no matter how fast you flick your arm.)  


At City Observatory, we feel compelled to help fill this void, and we’ll start by doing our own urban myth-busting. First up: Does building new high-priced apartments, affordable only by middle- and upper-income families, make housing less affordable for lower income households?

We’ve heard this claim time and again in public hearings: new rental housing charges higher rents than existing apartments, and must therefore be making affordability problems worse.

Even Harvard University’s Joint Center for Housing Studies reprised this line in their recent report: “50 percent of rental households make less than $34,000 per year, but only 10 percent of new multi-family units are affordable at this income.”

From this statistical observation, it’s a short leap to the conclusion that building new housing is part of the affordability problem. The Wall Street Journal reported that “much of the new supply is aimed at higher-income renters.” In May, the Journal ran a story claiming: “A focus by builders on high-end apartments helps explain why rents are soaring across the country.”

New construction in San Francisco. Credit: torbakhopper, Flickr

On its surface, this sounds terrible. But the key context missing here is that in the United States, we have almost never built new market-rate housing for low-income households. New housing—rental and owner-occupied—overwhelmingly tends to get built for middle- and upper-income households. So how do affordable market-rate housing units get created? As new housing ages, it depreciates, and prices and rents decline, relative to newer houses. (At some point, usually after half a century or more, the process reverses, as surviving houses—which are often those of the highest quality—become increasingly historic, and then appreciate.)

What really matters is not whether new housing is created at a price point that low- and moderate-income households can afford, but rather, whether the overall housing supply increases enough that the existing housing stock can “filter down” to low and moderate income households. As we’ve written, that process depends on wealthier people moving into newer, more desirable homes. Where the construction of those homes is highly constrained, those wealthier households end up bidding up the price of older housing—preventing it from filtering down to lower income households and undermining affordability.

This isn’t theoretical: As we’ve discussed before at City Observatory, the vast majority of today’s actually existing affordable housing is not subsidized below-market housing, but market-rate housing that has depreciated, or “filtered.” Syracuse economist Stuart Rosenthal estimates that the median value of rental housing declines by about 2.2% per year. As its price falls, lower-income people move in. Rosenthal estimates that rental housing that is 20 years old is occupied, on average, by households with incomes about half the level of incomes of those who occupy new rental housing.
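Rosenthal’s 2.2 percent figure implies simple compound depreciation. Here’s a minimal sketch of that arithmetic (the starting rent is an arbitrary illustration, not a figure from his study):

```python
# Compound depreciation of rental housing value, using Rosenthal's
# estimate of roughly a 2.2% annual decline in median rental value.
def depreciated_value(initial_value, age_years, annual_decline=0.022):
    """Value of a rental unit after `age_years` of filtering."""
    return initial_value * (1 - annual_decline) ** age_years

new_rent = 1500.0  # hypothetical monthly rent for a brand-new unit
for age in (0, 20, 50):
    print(f"age {age:2d}: ${depreciated_value(new_rent, age):,.0f}")
# A 20-year-old unit is worth roughly 64% of a comparable new one;
# a 50-year-old unit, roughly a third.
```

At a steady 2.2 percent per year, the "luxury" premium erodes within a couple of decades, which is the filtering process the text describes.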

Apartments get cheaper up until they’re about 50 years old.

 

In its 2014 report, the California Legislative Analyst’s Office noted that as housing ages, it becomes more affordable. Housing that was likely considered “luxury” when first built declined to the middle of the housing market within 25 years. Take the 1960s-era apartments built in Marietta, a suburb of Atlanta: When they were new, they were middle- to upper-income housing, occupied by single professionals. Gradually, as they aged, they slid down-market, to the point where the city passed an $85 million bond issue to acquire and demolish them as a way of reducing the concentration of low-income households in the Franklin Road neighborhood.

New cars are unaffordable to low income households, too

Here’s another way to look at the connection between affordability and the price of new things: cars. (After houses, cars are frequently the most expensive consumer durable that most Americans purchase.)

Exactly the same thing could be said of new car purchases: Most new cars aren’t affordable to the typical household either—the average sale price of a new car is nearly $34,000.

Credit: Brian Timmermeister, Flickr

In fact, using the same kind of approach that Harvard’s Joint Center for Housing Studies used to assess rental affordability, Interest.com reports that the median family can afford to buy the typical new car in only one large metropolitan area. Similar to the “30 percent of income” rule widely—and in our view inappropriately—used to gauge housing affordability, they assume that the typical household makes a 20 percent down payment, finances its purchase over four years, and pays no more than 10 percent of its income for a car payment. They report that in most metros the typical family falls 30 to 40 percent short of being able to afford a new car. So most households deal with car affordability pretty much like they deal with housing affordability: by buying used.
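Interest.com’s rule can be turned into arithmetic with the standard loan-payment formula. A rough sketch follows; the income and interest rate below are illustrative assumptions, while the down payment, term, and 10-percent rule come from the article above:

```python
def max_affordable_car_price(annual_income, apr=0.04, months=48,
                             down_payment_share=0.20, income_share=0.10):
    """Largest car price whose loan fits the budget rule: payment
    <= `income_share` of income, 20% down, financed over `months`."""
    monthly_budget = income_share * annual_income / 12
    r = apr / 12
    # Present value of the affordable payment stream (ordinary annuity).
    financeable = monthly_budget * (1 - (1 + r) ** -months) / r
    return financeable / (1 - down_payment_share)

income = 53_000  # hypothetical median family income
price = max_affordable_car_price(income)
print(f"Affordable new-car price: ${price:,.0f}")  # well short of ~$34,000
```

With these assumptions the affordable price comes out roughly 30 percent below the average new-car transaction price, consistent with the 30-to-40-percent shortfall the article reports.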

When it comes to anything new and long-lived, higher-income households buy most of the output. According to Bureau of Labor Statistics data, households in the two highest income quintiles accounted for about 67 percent of the purchase of new cars in the US in 2001. New car buyers are getting progressively older, and are more likely to be high income. According to the National Automobile Dealers Association, the median new car buyer is 52 years old and has an income of about $80,000, compared to an average age of 37 and an income of $50,000 for the overall population.

But there’s no outcry about America’s “affordable car crisis.” The reason: high-income households buy newer cars; most of the rest of us buy used cars—which are more affordable after they’ve depreciated for a while. That’s even more true of housing, which is much longer lived. Nationally, 68 percent of the nation’s rental housing is more than 30 years old—so only about 10 percent of the nation’s renters live in apartments built in the last decade.

New houses, like new cars, are sold primarily to higher income households—and affordability comes from getting a bargain when the car (or house or apartment) has depreciated. Building more high priced new apartments, in fact, is critical to generate the filtering down of older housing that constitutes the affordable housing supply.

This myth is busted: building more high end housing doesn’t make housing less affordable.

Urban myth busting: Congestion, idling, and carbon emissions

Increasing road capacity to reduce greenhouse gas emissions will backfire

Widening roads to reduce idling simply induces more travel and more pollution

Cities with faster travel have higher greenhouse gas emissions

Time for another episode of City Observatory’s Urban Myth Busters, which itself is an homage to the venerable Discovery Channel series “Mythbusters” that featured co-hosts Adam Savage and Jamie Hyneman using something called “science” to test whether commonly believed tropes were really true. In each episode, they would construct elaborate (often explosive) experiments to test whether something you see on television or in the movies could actually happen in real life. (Sadly, you can’t make a bullet curve no matter how fast you flick your arm.)  


In our first installment, we took on the oft-repeated claim that somehow building more housing for middle and upper income people made housing less affordable for lower income households. (It doesn’t).

Today’s claim comes from the world of transportation. As we all know, transportation is now the single largest source of greenhouse gas emissions. When confronted with the need to do something to address climate change, the highway lobby likes to point out that cars emit carbon, and that when they’re idling or driving in stop-and-go traffic, they may emit more carbon per mile than when they travel at a nice steady speed. And of course, they have a solution for that: spend more money expanding capacity so cars don’t have to slow down so much. That’ll be great for the environment, or so the argument goes.

This claim has been invoked by highway advocates everywhere. Most recently, it’s been raised by officials speaking in favor of spending upwards of a billion dollars on three freeway widening projects in the Portland area. State Senator Lee Beyer argued that truck idling due to congestion was contributing to global warming. Here’s what Beyer told Oregon Public Broadcasting’s Think Out Loud program on April 18, 2017:

To the extent that we have congestion, in Portland for example, or anywhere else, but there particularly, if you look at the amount of exhaust those trucks are spewing into the air during that 52 hours while they sit in traffic, that may have more of a negative impact on the environment and more carbon release than we would gain solely through the low carbon fuels piece as its currently structured.

His argument was echoed by City Commissioner Amanda Fritz:

It seems likely the emissions from vehicles crawling in this section are worse than those at normal speed.

When he was appointed Director of the Oregon Department of Transportation in 2019, Kris Strickler trotted out the same tired claim:

…. it’s clear about 40% of the greenhouse gas emissions are from the transportation sector, so it’s an important aspect of the work we do.   I believe that there is no silver bullet, there is no single answer to address GHG emissions overnight.   And its something on our task list and our to-do list as a priority for us as we go forward and we need to attack it in multiple avenues.   One is, clearly, through design decisions that we can help to free up and move congested areas, because we know that cars sitting in traffic, frankly, emitting the emissions is not necessarily the best way to manage greenhouse gas reductions.

So is there any truth to the idea that reducing traffic congestion will lower vehicle emissions?

In place of the now retired duo of Adam and Jamie, we’ll turn this question over to Alex and Miguel–Alex Bigazzi and Miguel Figliozzi, two transportation researchers at Portland State University. Their research shows that savings in emissions from idling can be more than offset by increased driving prompted by lower levels of congestion.  The underlying problem is our old friend, induced demand: when you reduce congestion, people drive more, and the “more driving” more than cancels out any savings from reduced idling. As they conclude:

Induced or suppressed travel demand . . . are critical considerations when assessing the emissions effects of capacity-based congestion mitigation strategies. Capacity expansions that reduce marginal emissions rates by increasing travel speeds are likely to increase total emissions in the long run through induced demand.
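The logic of that conclusion can be sketched with back-of-the-envelope numbers. Every figure below (the per-mile emissions rates and the induced-travel response) is a hypothetical illustration, not an estimate from Bigazzi and Figliozzi:

```python
# Stylized induced-demand arithmetic: widening cuts per-mile emissions
# by smoothing traffic, but faster trips induce more driving.
miles_before = 100.0               # daily VMT per traveler (hypothetical)
grams_per_mile_congested = 400.0   # stop-and-go emissions rate (hypothetical)
grams_per_mile_freeflow = 350.0    # smoother-flow rate (hypothetical)
induced_travel_growth = 0.20       # long-run VMT response to added capacity

emissions_before = miles_before * grams_per_mile_congested
miles_after = miles_before * (1 + induced_travel_growth)
emissions_after = miles_after * grams_per_mile_freeflow

print(emissions_before, emissions_after)  # 40000.0 42000.0
# The 12.5% per-mile improvement is swamped by 20% more driving.
```

Whenever the induced-travel response outstrips the per-mile improvement, total emissions rise, which is exactly the long-run pattern the researchers describe.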

In a companion paper, they look at a variety of data, including variations among metropolitan areas, changes over time in congestion and emissions, and corridor level estimates of traffic and emissions. In each case, they find that carbon emissions are strongly correlated with the length of travel and weakly correlated (or uncorrelated) with levels of congestion.

Specifically, metropolitan areas with high levels of congestion do not have higher levels of CO2 emissions per capita than ones with low congestion. They conclude there is “no relation.” But vehicle miles traveled is a strong correlate. Here’s a chart showing daily peak period hours of vehicle travel per peak period traveler (on the horizontal axis) and CO2 emissions per peak period traveler per day. More driving is correlated with more carbon emissions.

 

Similarly, metro areas that had an increase in congestion (as measured by the Texas Transportation Institute’s Travel Time Index), didn’t see proportionate increases in CO2 emissions.  The following panel shows how the change in emissions per traveler between 2000 and 2010 for an array of 101 metropolitan areas related to changes in congestion (left hand chart), changes in hours traveled per person (center chart) and vehicle miles traveled (right hand chart). There’s essentially no relation between increases in congestion and per traveler emissions; but more hours of travel and greater distances traveled translate very directly into more carbon emissions.

There’s also another kicker to the speed/emissions relationship that you’ll never hear highway advocates mention. While it’s true that cars emit more carbon per mile while idling and in stop-and-go traffic than they do when cruising at 30 to 45 miles per hour, traveling at higher speeds is actually less fuel efficient and produces more CO2 per mile driven. Hence one of the strategies that we ought to employ is imposing stricter speed limits (say 55 miles per hour). This also means that the more we build roads that let people drive at higher speeds (60 to 70 miles per hour), the more we’re increasing global warming.
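That relationship is often modeled as a U-shaped curve: per-mile fuel use (and thus CO2) is high at crawl speeds, bottoms out around 40 to 50 miles per hour, and climbs again at highway speeds as aerodynamic drag takes over. A toy version, with made-up coefficients chosen only to produce that shape:

```python
def fuel_per_mile(speed_mph, a=2.0, b=0.02, c=0.00001):
    """Toy U-shaped fuel-consumption curve (gallons per mile).
    a/v captures idling and stop-and-go overhead; c*v**2 captures
    aerodynamic drag. Coefficients are illustrative, not measured."""
    return a / speed_mph + b + c * speed_mph ** 2

for v in (5, 45, 70):
    print(v, round(fuel_per_mile(v), 3))
# Per-mile fuel use at 5 mph and at 70 mph both exceed the ~45 mph minimum.
```

With these illustrative coefficients, the curve’s minimum falls in the mid-40s, which is why both crawling traffic and 70 mph cruising emit more per mile than a moderate steady speed.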

This myth is busted: adding more capacity might reduce idling a bit, but it will actually induce more driving, which will lead to higher, not lower carbon emissions.

And, a technological postscript: Automakers are now increasingly equipping their vehicles with stop-start technology, which automatically turns the engine off when the car stops moving, and then re-starts the engine when the driver takes her foot off the brake. This virtually eliminates idling emissions, not just in traffic, but at red lights too. Some 15 million European cars already have stop-start, and a majority of cars sold in North America are predicted to have it within the next few years. In addition, electric vehicles don’t idle when they’re stopped. So in the long run, if we want to reduce emissions from idling, a technical fix is in the works–no need to widen roads to address this source of pollution.

This commentary was originally published by City Observatory in 2017.  Sadly, six years later, people are still claiming that we can fight climate change by widening roads to reduce the amount of time people spend idling.

 


 

More driving means more dying

New data from the National Highway Traffic Safety Administration (NHTSA) shows an ominous trend: traffic related deaths are up 11.3 percent for the first nine months of 2015, as compared to the same period a year earlier.

Although the NHTSA warns that the data are subject to revision, and cautions that it’s too early to discern the causes of this change, those who have been paying attention to the longer trend know that there’s every reason to believe we already have a suspect.

As we’ve noted before at City Observatory, the decline in gas prices that started in mid-2014 has led to an increase in driving, reversing a nearly decade-long trend of Americans driving fewer miles per person per day.

What’s striking about the new NHTSA numbers is that road crash deaths are increasing much faster than total miles driven. As a result, the number of deaths per mile driven—which has been declining for decades—jumped up in the first three quarters of 2015, from 1.05 deaths per 100 million miles to 1.10 deaths per 100 million miles.
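The per-mile rate in that paragraph is just deaths divided by hundred-millions of miles driven, and the jump from 1.05 to 1.10 works out to roughly a 5 percent increase in per-mile risk. A quick sketch (the deaths and mileage in the example are hypothetical round numbers):

```python
def fatality_rate(deaths, vehicle_miles):
    """Traffic deaths per 100 million vehicle miles traveled (VMT)."""
    return deaths / (vehicle_miles / 1e8)

# e.g., 1,100 deaths over 100 billion miles -> a rate of 1.10
example = fatality_rate(1_100, 100e9)

# The reported jump from 1.05 to 1.10 deaths per 100M miles:
increase = 1.10 / 1.05 - 1
print(f"{example:.2f}, per-mile risk up {increase:.1%}")
```

Because deaths rose 11.3 percent while the per-mile rate also rose, deaths grew considerably faster than miles driven, which is the disproportion the next paragraphs explore.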

But this isn’t a new relationship: the same kind of disproportionate change occurred when gas prices increased in 2007-08. At that time, miles driven fell sharply—and traffic deaths fell even faster. In 2008, total vehicle miles traveled declined by 0.7% and in 2009, they declined a further 1.5 percent.

 

This suggests that the relationship between driving and deaths is non-linear: a one percent increase in driving produces a much larger than one percent increase in deaths—actually, something like a three percent increase in deaths, based on these very partial data.

Why might deaths increase faster than miles driven? There are several reasons. First, we know that some traffic phenomena, like traffic congestion, are very non-linear: the roughly 3 percent decline in traffic in 2008 produced a 30 percent reduction in traffic congestion. Second, it may well be that the additional (or, in economist-speak, “marginal”) miles driven, and marginal drivers driving them, are for some reason less safe than the typical mile driven. If we take longer trips, we may drive on more dangerous roads, or at more dangerous times. Third, it may be that people are driving faster—some research showed that high fuel prices induced motorists to slow down—and speed is strongly correlated with road safety. Over at Streetsblog, Angie Schmitt reports on David Levinson’s theory that price sensitive teen drivers may drive more when gasoline is cheap, and that may account for some of the hypersensitivity of crash rates to apparently small changes in the total amount of driving.

We know that for many reasons, there are structural connections between cheaper gas, more miles driven, and more traffic fatalities. But some analysts want to downplay those structural issues and place the blame on driver behavior. One of the most widely offered explanations for the increase in crashes and fatalities is “distracted driving,” and especially the rise of text messaging. While there’s little question that texting and driving is dangerous, and contributes to many crashes, the numbers simply don’t support this theory.

Available data suggests that text messaging grew much more rapidly between 2008 and 2013—when traffic deaths were declining—than in recent years. One source, citing data collected by Nielsen, reports that text messaging increased by a factor of four between 2007 and 2009, but has grown only about 12-15 percent per year since 2011. Forrester reports messaging increased about 14 percent between 2010 and 2011. While one could wish for much better data about the volume of text messaging, the growth of text messaging seems to coincide with a period of safer driving.

What’s clear though is that the minor changes in the amount of driving that we do seem to have disproportionately large effects on traffic fatalities. Lower gas prices that encourage more driving produce proportionately larger increases in fatalities. And higher gas prices that reduced driving in 2007 and 2008 produced disproportionately large reductions in fatalities.

Why the first-time homebuyer is an endangered species

First-time home buyers play a critical role in the housing market. The influx of new households into the owner-occupied market is a key source of sales, and provides impetus for existing homeowners to move, liquidate their investment, or trade up to a bigger or better house. They’re the bottom of the home-buying pyramid.

The number of first-time homebuyers has been low since the Great Recession, in spite of recent improvements in housing affordability nationally (at least according to standard metrics like low interest rates and lower housing prices). Total sales of new and existing homes are still well below levels of a decade ago (from more than 8.5 million to 5.3 million), and the National Association of Realtors reports a historic low in the fraction of buyers that are first-timers (30 percent).

These should be good times: the Millennials now just reaching prime homebuying age are the nation’s largest ever generation. But a recent presentation by Zillow’s Stan Humphries laid out a startling picture of how much tougher young adults find it today to transition from renting to buying their first home.

Credit: Stan Humphries

Compared with the 1970s, today’s first-time homebuyers are older, have rented longer, have smaller households, and—strikingly—have less income than did their predecessors. And critically, the housing they’re looking to buy is much more expensive. While average incomes are down slightly, home prices (in inflation-adjusted terms) have increased 60 percent since the 1970s (from $87,000 to $140,000).

We know these key metrics (lower incomes, longer rental tenure) reflect the economic headwinds that have plagued the Millennial generation, including higher college costs, more student debt, and a weak job market. On top of that, a much larger fraction of young adults today come from demographic groups (including Latinos and African Americans) whose families generally have less wealth—meaning less familial help to marshal a down payment.

All of these factors lend credence to projections by the Urban Institute and others that housing markets are facing a long period of gerontification. They predict that between now and 2030, all of the net increase in homeownership will be in households aged 65 and older, as Baby Boomers age.

Facing higher home prices, with less income, less accumulated wealth and greater debt—not to mention tougher credit availability—today’s young adults have unsurprisingly not been able to reverse the recent decline in homeownership rates. While the first-time homebuyer is hardly headed for extinction, all these trends taken together suggest that they’ll be a far less numerous and consequential force in housing markets than in years past.

Bursting Portland’s urban growth boundary won’t make housing more affordable

Like many cities in the US, Portland has been experiencing an affordable housing crisis as rents have risen substantially over the last several years. One proposed solution to this problem is inclusionary zoning—requiring people who build new apartments to hold some units’ rent at below-market rates.

In the coming month or so, the Oregon Legislature will consider a package of bills to address this problem. Unlike most states, Oregon law prohibits local governments from enacting inclusionary zoning laws. As a quid pro quo for agreeing to drop the ban—at least for rental housing—the development industry is suggesting it would like to see the state’s land use laws, including its signature urban growth boundary, weakened.

The anti-sprawl effect of the urban growth boundary is clear in this map. Credit: Free Association Design

But this is a losing proposition on both ends. Busting the urban growth boundary will do nothing to address housing affordability, and inclusionary zoning would likely make the city’s affordability problems worse, not better. Here’s why:

  1. Affordability is about growing up, not out. The economic literature is very clear that the problem is primarily constraints on achieving higher levels of density within existing urban areas: i.e., building more multi-family housing. Rents are rising in Portland (and Seattle and San Francisco) because of constraints on building more density in the center, not on expanding the periphery. More housing in the center makes better use of our existing, expensive infrastructure, and lowers transportation costs and pollution. Adding land at the urban edge does little to expand either the supply of housing or, more importantly, the supply of affordable housing. In the last 15 years, the Metro urban growth boundary (or UGB) has been expanded to add more than 32,000 acres of land. Since 2000, those UGB expansion areas have added only 8,500 new housing units, about 7 percent of all new dwellings built in that period.
  2. The market demand/affordability problem is in the urban core. That message is abundantly clear in the shift in home prices in Portland. All of the price appreciation in the Portland area is focused on the urban core. In 2005, homes in Portland sold at a $20,000 discount to homes in the suburban counties. Now Portland homes sell at a $27,000 premium to homes in the suburbs. Adding more land on the periphery does very little to influence supply in the center, where the demand is.
  3. Adding more supply in the core is the key to addressing affordability. The solution to rising rents is to continue to aggressively expand the supply of housing, especially in Portland’s core. Build more apartments. Demand has shifted much more quickly than supply, and the development pipeline is long and slow, but as new units come on line, they help absorb the demand that is pushing up prices. In recent weeks, Seattle rents have begun to soften; Seattle is roughly a year or so ahead of us in the up-cycle in housing construction.
  4. Inclusionary zoning increases market prices. Inclusionary zoning tends to drive up the cost of market-rate units, especially in tight housing markets like Portland’s, since developers recoup the cost of subsidized units by raising prices elsewhere. On top of that, the inherently negotiated nature of inclusionary zoning approvals adds uncertainty and delay to the development approval process—which also drives up costs. There’s only limited experience with inclusionary zoning, but evidence from Boston, where it’s been in place for some time, suggests that inclusionary zoning will cause fewer total new units to get built, and that constriction in supply will tend to drive up prices in the entire market.
  5. Inclusionary zoning creates only token numbers of affordable units. In the five boroughs of New York, in one of the hottest real estate markets in the world, that city’s inclusionary zoning program produced fewer than 3,000 units in a decade. Portland subsidized nearly that many (about 2,300) affordable units in the Pearl District with Tax Increment Financing. So inclusionary zoning creates so few units that it ends up like a lottery: if you’re lucky enough to get a subsidized unit, bully for you. But everyone else probably ends up paying more for housing as a result.
  6. Inclusionary zoning requirements would encourage further sprawl. Because inclusionary zoning is likely to apply only to housing built in Portland, but not in suburban jurisdictions, it would in effect penalize and disincentivize dense development in the city relative to housing on the periphery. It would amount to a tax on urban development, but not suburban development—unless the inclusionary zoning requirement applies regionally and exacts a payment-in-lieu from all new housing construction.
  7. If we want to make housing more affordable, let’s get rid of parking requirements. Oregon actually does allow inclusionary zoning—for cars, in the form of parking requirements. Requiring parking reduces the amount of land that can be used to house people, and directly drives up the price of new homes and apartments. These costs get passed on to homebuyers and renters. Studies show that in urban centers, parking requirements drive up rents on the order of $200 a month. If we want to increase affordability, we ought to be getting rid of this kind of hidden housing tax.

Housing affordability is a real problem, and it demands solutions that address the reality of today’s housing market. Sacrificing the state’s prudent system of planning for urban growth won’t remedy housing affordability.

More evidence on the “Dow of cities”

Last year, we described the widening gap between typical housing values in cities and suburbs as the “Dow of cities”: Just as differences in stock prices signal the performance of companies, variations in average home prices are a market signal of the performance of cities. High and rising prices, relative to the overall market, are an indicator that people value what a city offers.

The original data for this analysis came from a report prepared by investment advisory firm Fitch, which combed through 25 years of its Case-Shiller housing price indices to compare changes in the values of homes in four concentric circles in each of the nation’s largest metropolitan areas. That analysis showed that homes in the most central neighborhoods appreciated 50 percent more after 2000 than their peers in each of the more peripheral neighborhoods.

Credit: Zillow

Last week, the real estate analytics firm Zillow released its analysis of zip code-level data that tackles the same broad question. Zillow assigned each zip code in the nation’s largest metropolitan areas to one of three categories—urban, suburban, or rural—based on its analysis of survey data about consumer perceptions and their correlation to some of the key characteristics of zip codes, like population density. Then they used this classification to track the change in home values for urban, suburban, and rural areas in each of the nation’s largest metropolitan areas from 1997 through 2015.

Overall, they find that urban home values now surpass suburban home values on average. The crossover, according to Zillow’s numbers, occurred in November 2014. Today, the average urban home is worth about $269,000, compared to $264,000 for the average suburban home. On a per square foot basis, the somewhat smaller urban homes have been worth more than their suburban counterparts since the late 1990s. Zillow reports that gap has widened, and now the typical consumer pays about 25 percent more per square foot for an urban home ($198) than for a suburban one ($156).

Of course, values vary by market. Thoughtfully, Zillow has included metropolitan-level data for the nation’s largest metropolitan areas. You can drill down to individual metros and see how the pattern of price changes varies by neighborhood type over the past two decades.

The growing premium that households pay for urban locations compared to suburban ones is an indication of the growing value of cities, and also an indication that we’re facing a shortage of cities.

Don’t demonize driving—just stop subsidizing it

At City Observatory, we try to stick to a wonky, data-driven approach to all things urban. But numbers don’t mean much without a framework to explain them, and so today we want to quickly talk about one of those rhetorical frameworks: specifically, how we talk about driving.

Our wonky perspective tells us that there are lots of problems that stem from the way we use cars: We price roads wrong, so people overuse them. Cars are a major source of air pollution, including the carbon emissions that are causing climate change. Car crashes kill tens of thousands of Americans every year, injure many more, and cost us billions in medical costs and property damage. And building our cities to accommodate cars leads to sprawl that pushes us further apart from one another.

But the problem is not that cars (or the people who drive them) are evil, but that we use them too much, and in dangerous ways. And that’s because we’ve put in place incentives and infrastructure that encourage, or even require, us to do so. When we subsidize roads, socialize the costs of pollution, crashes and parking, and even legally require that our communities be built in ways that make it impossible to live without a car, we send people strong signals to buy and own cars and to drive—a lot. As a result, we drive too much, and frequently at unsafe speeds given the urban environment.

This car might be evil, though. Credit: Michael Coghlan, Flickr

Many people—transit boosters, cyclists, planners, environmentalists, safety advocates—look at the end result of all this, and understandably reach the conclusion that cars are the enemy. The overriding policy question, then, becomes: “How do we get people out of their cars?”

In this December 2015 story in The New Republic, for example, Emily Badger quotes Daniel Piatowski, a planning PhD presenting a paper on “carrots and sticks” at the Transportation Research Board conference, saying: “The crucial component that’s missing is that we’re not implementing any policies that disincentivize driving.”

“Getting people out of their cars” is a rallying cry and a mission statement that’s guaranteed to provoke a formidable opposition. That’s because most people, correctly, can’t imagine any time soon when they won’t need to use a car for most—even all—of their daily trips. As a practical matter, the fact that for seven or eight decades the entire built environment and most transportation investments have been predicated on car travel means that we can’t quickly move away from auto dependence. For most Americans, driving isn’t attributable to an irrational fondness for cars. In many places, it’s simply impossible to live and work without one.

But there’s good news. The first is that incentives matter. We learned that higher gas prices, for example, had a large and sustained impact on driving behavior. After growing steadily for decades, vehicle miles traveled per person peaked and declined after 2005 (as gas prices shot up). This produced knock-on changes in housing markets, and helped accelerate the move back to cities. And the decline in gas prices since 2014 has triggered more driving. This shows that more intentional kinds of pricing schemes, like congestion pricing or parking pricing, could have similar effects.

The second point is that small changes matter. Even slight reductions in car use and car ownership will pay big dividends. Traffic congestion is subject to non-linear effects: small reductions in traffic volumes produce big reductions in traffic congestion. Travel monitoring firm Inrix reported that in 2008, a 3 percent decline in vehicle miles traveled led to a 30 percent decline in traffic congestion. As driving declined, carbon emissions declined, and so, too, did crashes and traffic deaths.

Moralizing about mode choice is a recipe for policy gridlock

Bitter and acrimonious flamewars between people who are convinced that one side or the other is trying to run us off the road will surely be unproductive. We agree with most of the policies advocates like Piatowski want, including the “sticks” like parking and congestion fees—but not the way they’re being described.

Credit: Steve Snodgrass, Flickr

Rather than being framed as a punishment, it should be more about responsibility. Drivers should pay for the roads that they drive on. They should be regulated in a way that protects the safety of other users of the right of way. Trucks ought to pay for the damage they do to roads. Every car driver ought to pay for the parking space they use—whether it’s in the public or the private realm. All cars and trucks should be responsible for the carbon pollution they emit. We shouldn’t require third parties such as homebuilders or renters or local businesses to subsidize car travel and parking. This isn’t about creating a “disincentive for car use,” but, as a matter of fairness and practicality, dropping what have essentially been subsidies for financially and socially expensive and dangerous behavior.

Driving is a choice, and provided that drivers pay all the costs associated with making that choice, there’s little reason to object to it. After all, very few people think that a zero-car world makes a lot of sense. Low-car makes much more sense than no-car as a policy talking point. So how do we get people to make these choices? There’s an analogy here to alcohol. We tried Prohibition in the 1920s. It was moral absolutism, zero tolerance: alcohol in any amount was evil. That didn’t work.

When we experienced the epidemic of drunk driving, we didn’t go back to prohibition. Instead, we raised penalties to make drivers more responsible, set tougher limits on blood alcohol content, and put more money into enforcement. People still drink—but there’s a different level of understanding of responsibility and consequences, and fewer people drive drunk.


The market cap of cities, 2019

What are cities worth? More than big private companies, as it turns out: The value of housing in the nation’s 50 largest metropolitan areas ($25.7 trillion) is more than double the value of the stock of the nation’s 50 largest publicly listed corporations ($11 trillion).

Market capitalization is a financial analysis term used to describe the current estimated total value of a publicly traded company based on its share price. It’s a good rough measure of what a company is worth, at least in the eyes of the market and investors. The market capitalization—or “market cap,” as it is commonly called—is computed as the current share price of a corporation multiplied by the total number of shares of stock outstanding. In theory, if you were to purchase every share of the company’s stock at today’s market price, you would own the entire company.
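The arithmetic is simple enough to sketch in a couple of lines of Python; the firm and its figures below are hypothetical, not drawn from this post:

```python
def market_cap(share_price: float, shares_outstanding: int) -> float:
    """Market capitalization: current share price times shares outstanding."""
    return share_price * shares_outstanding

# A hypothetical firm trading at $50 with 2 billion shares outstanding
# has a market cap of $100 billion.
print(market_cap(50.0, 2_000_000_000))  # 100000000000.0
```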

The following chart compares the market cap of the nation’s 50 largest publicly traded corporations (on the right) with the market cap of housing in each of the nation’s 50 largest metropolitan areas (on the left). The magnitude of these numbers is a bit staggering; all values are expressed in billions. The data for housing are broken into two components: the value of single family homes (blue) and multi-family homes (orange). Sources and methodology for these estimates are described below.

The most valuable company is Amazon, with a market cap of $829 billion; the most valuable metro area is New York, where the market value of owner-occupied and rental housing is over $3.8 trillion—about four times higher. The current market value of Amazon is about the same as the current market value of housing in Seattle or San Jose, the eighth and ninth most valuable housing markets on our list.

Some modest-sized metros have housing that’s worth as much as the entire value of some very well-known corporations: IBM’s market cap ($113 billion) is about equal to New Orleans housing ($120 billion). Orlando’s housing ($255 billion) is valued at more than 50 percent above all of Disney ($166 billion). Two Seattle-based companies (Microsoft, at $827 billion; Amazon, at $829 billion) are each worth more than all the housing in Seattle (about $776 billion).

The differences are smaller at the bottom end of our two league tables. The fiftieth largest firm, United Technologies, is worth about 25 percent more than the fiftieth most valuable metro housing market, Buffalo: $98 billion versus $80 billion.

Buffalo! Credit: Zen Skillicorn, Flickr

It may seem strange to compare the market value of houses with companies, but this exercise tells us more than you might think. Just as the share price of a corporation reflects an investor’s expectations about the current health and future prospects of a company, the price of housing in a metropolitan area also reflects consumer and homeowner attitudes about the quality of life and economic prospects of that metropolitan area. So, for example, as the price of oil has fallen, weakening growth prospects in the oil patch, it’s quickly translated into less demand and weaker pricing for homes in Houston. Just as stock market investors purchase and value stocks based on the expectation of income (dividends) and capital gains from their ultimate sale, so too do homeowners (and landlords)—they count on the value of housing services provided by their home as well as possible future capital gains should it appreciate.

In fact, these two commodities—housing and stocks—are among the most commonly held sources of wealth in the United States. And while the financial characteristics of the two investments are dramatically different, the underlying principle is the same, making market cap a useful common denominator for assessing the approximate economic importance of each entity.

Each day, the financial press reports the market’s assessment of the value of individual firms, through their stock prices. But viewed through a similar lens, the housing markets of the nation’s cities are by this financial yardstick an even bigger component of the nation’s economy.

Technical Notes

Our estimates are based on the market capitalization of publicly traded U.S.-based corporations as reported on January 19, 2019. Our estimates of the value of single family housing in each metropolitan market were generously provided by real estate experts at Zillow. For more keen insights on housing markets, follow their work at Zillow’s Real Estate and Rental Trends blog.

We supplemented Zillow’s estimates of the value of the single family housing stock by computing the market value of the nation’s multi-family housing using data from the Census Bureau’s American Community Survey. In real estate, the value of rental housing is usually estimated using a “cap rate,” or capitalization rate, that approximates the rate of return on capital that real estate investors expect from leasing out apartments. To estimate the current market value of apartments, we multiplied the median rent in each metropolitan area by the number of multi-family housing units in that area. Then we deducted 35 percent to estimate “net operating income”—the amount the investor receives after paying maintenance, other operating expenses, and taxes—and then divided this number by a capitalization (cap) rate of 6 percent. Both of these figures (net operating income and capitalization rates) are rough estimates—values vary across different types of properties, across different markets, and over time with financial conditions (such as changes in market interest rates). Our estimates of the value of the housing stock in each metropolitan area differ from those we published in 2016.
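The income-capitalization step described above can be sketched as a short calculation. The rent figure below is hypothetical, and we assume the gross rent input has already been annualized (the post doesn’t specify whether its median rent figure is monthly or annual):

```python
def rental_market_value(annual_gross_rent: float,
                        expense_share: float = 0.35,
                        cap_rate: float = 0.06) -> float:
    """Value rental housing by capitalizing its income.

    Net operating income (NOI) is gross rent less an assumed 35 percent
    for maintenance, other operating expenses, and taxes; dividing NOI
    by the cap rate gives the price an investor would pay for that
    income stream.
    """
    net_operating_income = annual_gross_rent * (1.0 - expense_share)
    return net_operating_income / cap_rate

# A hypothetical metro collecting $1 billion a year in apartment rent:
# NOI = $650 million; at a 6 percent cap rate, value is about $10.8 billion.
print(round(rental_market_value(1_000_000_000)))  # 10833333333
```

Because value scales inversely with the cap rate, even a one-point change in that assumption moves the estimate substantially, which is why the post treats these figures as rough.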

The market cap of cities

What are cities worth? More than big private companies, as it turns out: The value of housing in the nation’s 50 largest metropolitan areas ($22 trillion) is more than double the value of the stock of the nation’s 50 largest corporations ($8.8 trillion).

Market capitalization is a financial analysis term used to describe the current estimated total value of a private company based on its share price. It’s a good rough measure of what a company is worth, at least in the eyes of the market and investors. The market capitalization—or “market cap,” as it is commonly called—is computed as the current share price of a corporation multiplied by the total number of shares of stock outstanding. In theory, if you were to purchase every share of the company’s stock at today’s market price, you would own the entire company.
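The formula itself is one line; here’s a minimal sketch (the figures are hypothetical, not any real company’s):

```python
def market_cap(share_price, shares_outstanding):
    """Market capitalization: current price per share times shares outstanding."""
    return share_price * shares_outstanding

# hypothetical firm: $100 per share, 2 billion shares outstanding
cap = market_cap(100.0, 2_000_000_000)
print(f"${cap / 1e9:.0f} billion")  # $200 billion
```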

Checking up on your cities. Credit: OTA Photos, Flickr

In roughly similar fashion, we can compute the market capitalization of cities—or at least of their housing stock. We start with Zillow’s estimate of the market value of owner-occupied housing in each of the nation’s largest metropolitan areas, which is computed by estimating the current market price of each house in a metropolitan area and summing those values over all of the owner-occupied houses. We also estimate the value of rental housing. For rented units we use a commonly accepted technique of estimating current values based on the income generated from rent. (Americans paid about $535 billion in rent in 2015, according to data compiled by Zillow; we can use this data and a standard financial formula to estimate the value of rental housing. Details of this calculation are explained below.) Then we add together the value of all owner-occupied housing and the value of rental housing to compute the total market cap of housing in each metropolitan area in the US.

Together, the 50 largest publicly traded private corporations in the United States had a market capitalization of $8.8 trillion at the end of 2015. The total market value of housing in 2015 in the 50 largest metropolitan areas was $22 trillion. For reference, the gross domestic product—the total value of all goods and services produced in the US in 2015—was estimated at $18 trillion. It’s hard to find things measured in trillions of dollars, so we’ve juxtaposed GDP against the market cap of housing and businesses. Keep in mind that the GDP is a flow (trillions of dollars per year) while the value of corporations and housing is a stock (trillions of dollars in value at one point in time).

The following table shows the market value of housing in each of the nation’s 50 largest metropolitan areas and the current market capitalization of the nation’s 50 largest publicly-traded private sector businesses.

For metro areas, the value of housing is divided into two components: owner-occupied housing (shaded blue) and rental housing (shaded orange).

The most valuable company is Apple, with a market cap of $541 billion; the most valuable metro area is New York, where the market value of owner-occupied and rental housing is $2.9 trillion—more than five times higher. The current market value of Apple is about the same as the current market value of housing in Seattle (the twelfth most valuable market on our list).

Some modest-sized metros have housing that’s worth as much as the entire value of some very well-known corporations: IBM’s market cap ($128 billion) is about equal to Indianapolis housing ($138 billion). Orlando’s housing ($208 billion) is worth about 25 percent more than all of Disney ($164 billion). Three Seattle-based companies (Microsoft, at $418 billion; Amazon, at $285 billion; and Starbucks, at $84 billion) are worth more combined ($787 billion) than all the housing in Seattle (about $617 billion).

The differences are smaller at the bottom end of our two league tables. The fiftieth largest firm, the oil services company Schlumberger, is worth about $15 billion more than the fiftieth most valuable metro housing market, Buffalo: $82 billion versus $67 billion.

Buffalo! Credit: Zen Skillicorn, Flickr

It may seem strange to compare the market value of houses with companies, but this exercise tells us more than you might think. Just as the share price of a corporation reflects an investor’s expectations about the current health and future prospects of a company, the price of housing in a metropolitan area also reflects consumer and homeowner attitudes about the quality of life and economic prospects of that metropolitan area. So, for example, as the price of oil has fallen, weakening growth prospects in the oil patch, it’s quickly translated into less demand and weaker pricing for homes in Houston. Just as stock market investors purchase and value stocks based on the expectation of income (dividends) and capital gains from their ultimate sale, so too do homeowners (and landlords)—they count on the value of housing services provided by their home as well as possible future capital gains should it appreciate.

In fact, these two commodities—housing and stocks—are among the most commonly held sources of wealth in the United States. And while the financial characteristics of the two investments are dramatically different, the underlying principle is the same, making market cap a useful common denominator for assessing the approximate economic importance of each entity.

Each day, the financial press reports the market’s assessment of the value of individual firms, through their stock prices. But viewed through a similar lens, the housing markets of the nation’s cities are by this financial yardstick an even bigger component of the nation’s economy.

Technical Notes

How we computed the value of rental housing. In real estate, the value of rental housing is usually estimated using a “cap rate,” or capitalization rate, that approximates the rate of return on capital that real estate investors expect from leasing out apartments. To estimate the current market value of apartments, we take Zillow’s estimate of the total amount of rent paid in each market and deduct 35% to estimate “net operating income”—the amount the investor receives after paying maintenance, other operating expenses, and taxes—and then we divide this number by a capitalization (cap) rate of 6%. Both of these figures (net operating income and capitalization rates) are rough estimates—values vary across different types of properties, different markets, and over time with financial conditions (such as changes in market interest rates).
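A minimal sketch of that arithmetic, using the 35 percent expense allowance and 6 percent cap rate described above (the $535 billion input is the national rent figure cited earlier in the post):

```python
def rental_housing_value(total_annual_rent, expense_ratio=0.35, cap_rate=0.06):
    """Capitalize rental income into an estimated market value.

    Net operating income (NOI) is rent less an allowance for maintenance,
    other operating expenses, and taxes; dividing NOI by the cap rate
    yields the price an investor would pay for that income stream.
    """
    noi = total_annual_rent * (1 - expense_ratio)
    return noi / cap_rate

# roughly $535 billion in total rent paid nationally in 2015
value = rental_housing_value(535e9)
print(f"${value / 1e12:.1f} trillion")  # about $5.8 trillion
```

Both parameters are rough averages, as the note above says; a lower cap rate or expense ratio would raise the estimated value considerably.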

Many thanks to Zillow’s Chief Economist Svenja Gudell and Aaron Terrazas for doing the hard work here of estimating property values and rental payments. For more keen insights on housing markets, follow their work at Zillow’s Real Estate and Rental Trends blog.

For highway advocates, it’s about the journey, not the destination

Last month, we called out the American Highway Users Alliance (AHUA) for trumpeting the Katy Freeway as a congestion-fighting success story. The Katy, as you will recall, is Houston’s 23-lane freeway, which was recently expanded at a cost of $2.8 billion.

Although the AHUA hailed that expansion in a report as the kind of project that cities could undertake to reduce traffic bottlenecks, after the freeway opened, congestion became even worse than before, according to traffic records compiled by Transtar. In just three years, peak hour travel times increased by more than 50 percent; a trip that had taken 42 minutes in 2011 took 64 minutes in 2014. We also pointed out that the images of the freeway produced by the Texas DOT and recycled by the US DOT, which portrayed the world’s widest freeway as a pedestrian-dominated greenspace, are a profoundly Orwellian greenwashing of auto-centric policies.

Let’s take a walk!

Predictably, freeway apologists emerged. Houston blogger Tory Gattis maintains that the expansion of the Katy freeway was “not a mistake,” arguing that congestion just means that the government investment was being fully utilized. He added: “Just imagine how much worse it would be if we hadn’t widened to 23 lanes.”

That freeway moves way, way more people than it did before as well as offering the congestion tolled lanes which didn’t exist before. Bottom line: the government invested in a piece of infrastructure that has proven extremely popular and highly utilized—isn’t that what we want from government investments of tax dollars? [emphasis in original]

In short: not in this case. Why? Because, in a context where drivers are vastly undercharged for the costs of their automobile usage, drivers will tend to use their cars such that the total social costs of car use—most of which they don’t see—exceed the benefits. Building more freeways, and inducing more car demand, means digging further into that hole, exacerbating social problems like pollution, deaths and injuries from car crashes, and, yes, more total hours waiting in traffic.

And then there’s the counterfactual: what would have happened if Houston had not expanded the Katy Freeway, and had instead spent that same $2.8 billion on other investments, like better transit or denser housing? The evidence from cities that have torn out freeways is that reduced capacity reduces demand. And ultimately, transportation investments shape urban form, which in turn profoundly affects transportation demand. Houston has built a self-perpetuating auto-dependent growth model.

Gattis has constructed a kind of rhetorical perpetual motion machine for justifying highway construction. Step 1: Our highways are full and congested therefore we need to expand them. Step 2: The expansion generated more traffic, and the highways are full, therefore the expense was justified. Step 3: Repeat, infinitely.

A chief effect of the expansion of the Katy Freeway has been additional, sprawling low density development 30 miles west of Houston’s downtown—and trips from these suburbs have flooded the expanded freeway. With this kind of logic, it’s little surprise that average commutes in Houston are 42 percent longer than in other large metro areas—ranking second only behind Atlanta, according to data compiled by the Brookings Institution.

Credit: Brookings Institution

This Sisyphean philosophy of transportation planning was perfectly—if unwittingly—captured in a recent Washington Post headline, from its traffic columnist, “Dr. Gridlock”:

[Screenshot: Washington Post “Dr. Gridlock” headline]

Maybe it’s just that highway engineers have their own perverse spin on the mantra that “It’s about the journey, not the destination”—especially when it comes to building more roads. The inevitability of induced demand in urban settings means that trying to reduce congestion by widening highways means you’ll end up chasing your tail, forever. To some, that’s a feature, not a bug—if you’re in the asphalt or concrete business, or are a highway engineer, it’s a guarantee of lifetime full employment. So it’s little wonder that the asphalt socialists are really indifferent to whether multi-billion dollar highway projects have any effect on congestion at all.

But for the rest of us this worldview is costly, unsustainable, and undermines livability. When we prioritize “getting there” over “being there” we sacrifice the quality of urban space. As our friends at Strong Towns have pointed out, optimizing urban streets for auto traffic eviscerates walkable neighborhoods and main streets. The sprawl and decentralization produced by freeways hollows out urban space—each additional highway through an urban center was associated with an 18 percent decline in city population—and it is increasingly bankrupting the public sector.

Are jobs really returning to the city?

At City Observatory, we’ve cataloged a series of indicators that point to the growing economic strength of city centers—including on the metric of job growth. But in a new blog post, Jed Kolko looks at county-level data for the past 15 years, and declares that city jobs aren’t really back, concluding: “It’s hard to make the case that economic activity has fundamentally become more urban.”

Are jobs coming back to downtowns like Austin’s? Credit: Michael, Flickr

As for evidence that job growth in urban counties has picked up since 2007, Kolko calls that a “cyclical,” rather than “structural,” trend—i.e., one that is inherently temporary.

Kolko uses county level data on employment focusing on the period from 2000 to 2015, classifying counties in large metropolitan areas as urban (if they are the most populous or central county in the metropolitan area), higher-density suburban (not the most central, but still densely settled), or lower-density suburban (in a metro area, but less densely settled). He also presents data on smaller metropolitan areas, and non-metro areas, but we’re going to focus on patterns within large metro areas.

We have a huge amount of respect for Jed Kolko and his work. But on this point, we respectfully but firmly disagree on the interpretation. We look at the same data, and draw some different conclusions—here’s why:

Central counties are accelerating; suburban counties have decelerated

Job growth in central, urban counties is accelerating. Central counties are growing faster in the 2007-15 cycle than in the 2000-2007 cycle. All other counties are growing slower than in the earlier period. So, for example, central county growth accelerated from zero in the 2000-2007 period to approximately 0.4 percent in the 2007-15 period; meanwhile growth in suburban counties decelerated—slowing from 0.8 percent in 2000-07 to 0.4 percent in 2007-15 in higher density suburbs, and slowing from 2.0 percent to 0.7 percent in lower density suburbs. Dense central counties are growing faster than they were before 2007; suburbs are growing more slowly than they were before 2007. That’s evidence that job growth is becoming “more urban.”
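The comparison above can be made mechanical. Using the approximate annual growth rates cited in that paragraph, only central counties accelerated between the two cycles:

```python
# approximate annual job growth rates (percent) cited above, by county type
growth = {
    "central":             {"2000-07": 0.0, "2007-15": 0.4},
    "high_density_suburb": {"2000-07": 0.8, "2007-15": 0.4},
    "low_density_suburb":  {"2000-07": 2.0, "2007-15": 0.7},
}

def acceleration(rates):
    """Change in growth rate between the two cycles; positive means accelerating."""
    return rates["2007-15"] - rates["2000-07"]

accelerating = [kind for kind, rates in growth.items() if acceleration(rates) > 0]
print(accelerating)  # ['central']
```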

Credit: Jed Kolko

We’ve had two economic cycles since 2000, and central counties are doing better in this second cycle

Kolko downplays the significance of these changes by saying that they are simply cyclical, and describes the entire period 2000 to 2015 as a singular cycle. But since 2000 there have been two macroeconomic cycles, not one. According to the National Bureau of Economic Research, there was a peak in 2000, a trough in 2001, another peak in 2007, and another trough in 2009 and we’re working towards another peak today. If we define a cycle as the time between two peaks, this 15-year period is very nearly two complete cycles. The key point is there have been two expansionary cycles in the last 15 years and the performance of central counties is very different in this most recent one. That suggests that this urban acceleration is actually its own phenomenon, and not simply a contractionary phase of a cycle whose growth favored the suburbs.

Even within the latest cycle, central counties are accelerating

The within-cycle pattern of change also doesn’t square with this attempt to dismiss it as a cyclical change. Kolko’s argument is, implicitly, that the strength of cities was due to some temporary, special factors in the early post-2007 period. If that were the case, then one would expect the suburbs to reassert themselves as the cycle matured (and the unchanged underlying structural advantages came into play). But Kolko’s evidence doesn’t square with that view: Central counties are performing better at the end of this cycle; if it were truly “cyclical,” then as the cycle progressed, you would expect suburbs to be erasing the difference. Instead, the pattern remains the same.

The other implication of the “no structural change” hypothesis is that Kolko expects growth trends to revert to the kind of housing bubble pattern of the 2000-2007 period. (Part of the reason suburban counties grew faster had to do with the growth of sprawling, single family subdivisions, and the attendant shopping centers and office parks.) There’s precious little evidence that those trends are re-asserting themselves.

The key point is that something very different is going on in the pattern of job growth within metropolitan areas in the period since 2007 than in the period prior to 2007. Kolko is essentially arguing that there’s no meaningful information to be extracted from subdividing that 15-year period into two segments—and that instead we need to look at the entire period as one trend. We disagree.

And finally, even if you regard 2000 to 2015 as a single cycle, there’s good reason to believe that the housing bubble was the anomalous part. If you discount recent city growth using the cyclical argument, you’re suggesting that the 2000 to 2007 period was “normal,” and everything since then has been a temporary cyclical departure from that pattern. When this cycle is complete, and things return to “normal,” then we can expect the previous pattern to reassert itself. We don’t think so: the housing market continues to be weak, especially for the kind of sprawling, single-family development that propelled growth in the bubble.

County-based measures are crude and produce inaccurate comparisons

Finally, this is wonky, but it needs to be said: Counties are the wrong units for making these comparisons. Even “central” counties vary wildly from region to region in their size and centrality—and they often contain both urban and suburban areas. When they contain suburban areas, they’re often the kind of inner-ring suburbs that are seeing the worst economic performance. Central counties in some metros (Atlanta or Miami) are tiny (less than 10 percent of the MSA), while in other metros (San Jose, Phoenix, Jacksonville, or Austin), the central county makes up a majority or near-majority of the region’s economy. In some places, the central county includes a vibrant urban core and nearby neighborhoods, as well as declining older urban neighborhoods and industrial areas. A finer geographic parsing is needed to detect whether the urban core is growing.

Both of these scenes show jobs in Cook County, IL. Credit: Google Streetview

For example, Chicago’s Loop and North Side could be gaining jobs like crazy (as they are) while suburban Cook County (especially the southern and western suburbs) could be losing jobs and population, and Kolko’s county level analysis would lump them together—missing or discounting the central job growth. Similarly, in the Seattle area, both Microsoft’s suburban Redmond campus and Amazon’s downtown South Lake Union headquarters are in the same county (King). Kolko’s county data simply doesn’t register where growth was happening in the Seattle area. The only way to determine whether centers are doing better than the rest of metro areas is to use a more geographically fine-grained measure than counties—which is something we did in our Surging City Center Jobs report.

So here are four takeaways:

  • Don’t rely only on county-based data to make comparisons or draw conclusions about the health of city center job growth.
  • Central counties are performing much more strongly in the 2007-15 cycle than in the 2000-2007 cycle. Suburbs and smaller counties are performing worse than they did in the earlier cycle.
  • The change in performance between these two cycles is indicative of structural change in economic growth patterns within urban areas: central areas are getting more growth (due to the expansion of professional and personal services and knowledge industries in the core); peripheral areas are performing worse than they did during the housing bubble, because they’re no longer buoyed by housing and sprawl.
  • The within-cycle pattern of change (with central counties performing well late in the cycle) is consistent with the hypothesis that there has been a structural change in metropolitan employment growth patterns.

In a way this is a “half full, half empty” debate. Jed’s telling you the city center jobs glass is still half empty; we’re saying it’s half full—and that it was essentially empty in the last cycle (2000 to 2007). Not all job growth is happening in city centers—that’s not our point—but it seems clear that in this economic expansion, unlike the last one, large metro economies and central cities are leading the way. That’s an important development, in our view. Whether it continues and grows remains to be seen—and we look forward to exploring this question.

Which federal agency has a big role to play in housing affordability? The answer might surprise you

The big economic news of the past month was the Federal Reserve Board’s decision to begin raising interest rates after years of leaving them at near-zero levels. The first increase in the short-term interest rate the Fed charges banks will be one-quarter of one percent, but there’s an expectation that the Fed will continue to raise rates through the remainder of the year.

Shadow boxing at the Fed

The theory behind the Fed’s policy is that rate hikes are needed to normalize financial markets—it’s unusual for interest rates to hover near zero for so long—and to fend off the prospect of inflation. The Fed raises interest rates when it fears that inflation might be getting out of hand, and it wants to tamp it back down.

As we’ve argued before, this move isn’t doing cities any favors, as it’s likely to hold back economic growth before urban areas, like the rest of the country, have returned to the growth path they were on prior to the Great Recession. The timing and wisdom of this rate increase is very much in question. Never mind that there’s virtually no evidence of inflationary pressure in the economy today, nor has there been for decades. Many of the Fed’s officials came of professional age in the seventies, when inflation was a real concern, and like so many aging generals, they are still re-fighting the last war. As The Economist’s Ryan Avent observes, the economy has changed a lot since then, what with technology, globalization, the decline of unions, and a steady attenuation of wage and price expectations, but “The Fed seems not to realize that it is risking America’s recovery out of fear of an inflationary dynamic that it ruthlessly and utterly eliminated three decades ago.”

So what are the portents of inflation that worry the Fed? For most workers, wage increases have been negligible. Health care costs are subdued. Although macroeconomists generally discount food and fuel prices on account of their short-term volatility, a sustained reduction in energy prices (i.e. oil costing something closer to $30 a barrel for a year or more, rather than the roughly $100 a barrel it has averaged for the past few years) can’t be erased from the inflation numbers.

As the Wall Street Journal’s Ben Leubsdorf points out, the only segment of the economy that’s exhibiting significant price increases is rental housing. But because shelter makes up a third of the consumer price index, the growth of housing prices is hiding weak growth in virtually all other sectors. In fact, if you exclude rents from the consumer price index, price levels are decreasing, not increasing. In other words, the Fed is taking steps to fight inflation when much of the economy is experiencing deflation.

 

The measure of rent in the consumer price index includes both the payments by renters, and an allowance for an equivalent cost for homeowners.* Over the past twelve months, rent is up 3.2 percent, faster than any other component of the consumer price index. In that same time, thanks to declining energy prices, the overall index is up only 0.5 percent.
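A back-of-the-envelope calculation shows how much the shelter component is masking. If shelter is roughly a third of the index (an approximation of its actual CPI weight), we can back out what prices are doing in the rest of the basket—this is an illustrative sketch, not the BLS’s method:

```python
def ex_shelter_inflation(overall, shelter, shelter_weight=1/3):
    """Infer inflation in the non-shelter basket from overall CPI inflation,
    shelter inflation, and shelter's (approximate) weight in the index."""
    return (overall - shelter_weight * shelter) / (1 - shelter_weight)

# the post's figures: overall CPI +0.5 percent, rent/shelter +3.2 percent
rest = ex_shelter_inflation(0.5, 3.2)
print(f"{rest:+.2f} percent")  # -0.85 percent: the rest of the basket is deflating
```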

 

Can the Fed deal with rents directly?

The fact that rising rents are the chief contributor to price increases in the macro economy implies that the Fed’s monetary policy is problematic for a couple of reasons:

First, if, outside of housing, prices are falling, that suggests that it is premature for the Fed to raise interest rates. If the economy is in a deflationary condition, raising rates is likely to cut growth and possibly even drive the economy into recession. The turmoil in global financial markets in the past few weeks is evidence that many investors are uneasy about the health of the worldwide economy.

But second, if rising rents are the underlying inflation threat, then a very different policy response is called for. As we’ve noted at City Observatory, the reason for the big run up in rents has a lot to do with the nation’s shortage of cities, and of multi-family rental housing, especially in walkable urban neighborhoods. While there’s been a resurgence of construction of this kind of housing, in general, the supply response hasn’t been big enough, or fast enough, to blunt the rise in rents.

And here’s the interesting point: the Fed’s policies play a critical role in housing finance. The principal channel through which monetary policy works is by influencing economic activity in the interest-rate-sensitive sectors of the economy. The way the Fed fought inflation in the sixties, seventies and eighties was by jacking up interest rates, choking off the flow of credit to the housing sector, and cooling off the economy. As economist Ed Leamer summarized it provocatively: “housing is the business cycle.”

Interest rates for construction loans and for commercial real estate purchases strongly influence the feasibility of new investment in rental housing. If interest rates on these loans rise, fewer new apartments will get built, demand will continue to outstrip supply, and rents will likely be pushed up further. If the Fed is worried about inflation, maybe it needs to make sure we keep building more apartments.

A Modest Proposal

All this suggests that if the Fed takes its inflation-fighting mandate seriously, it might want to consider a sectorally targeted policy for multi-family housing. For example, if the Fed were to shift some of its purchases of securities to include a greater share of construction loans and mortgages on multifamily rental property, it could hold down growth in interest rates (or even drive rates down) in this sector, which would encourage more construction—adding to supply and helping to address inflation in the one sector where it seems to be a problem. And while the Fed seems concerned that its easy money policy may be prompting bubbles in other sectors, rising rents are a sign that more investment is needed in this sector to forestall the kind of supply constraints that fuel inflation.

The Fed is generally keen to deflect responsibility to Congress and the President for addressing what are referred to as “structural” problems in the economy. In general, the Federal Reserve also eschews sectoral interventions in the economy. But it’s not unprecedented. During the financial crisis in 2008, the Fed was instrumental in creating a number of special purpose financial facilities to help assure the liquidity of the banking industry—out of a concern that a structural implosion in that sector would cause serious damage to the entire economy. To promote the economic recovery, the Fed later engaged in a program of “quantitative easing,” buying up long-term government bonds and mortgage-backed securities to help hold down long-term interest rates and buoy investment.

Of course, finance isn’t the only policy of importance here: cities have to zone land for apartments. But absent the ready availability of financing, new apartments won’t get built. And if long-term rates rise, that’s likely to reduce the number of new projects that go forward. So far, at least, mortgage rates have remained stable in spite of the first step in the Fed’s “normalization” process. But stay tuned.

Late last year, Jason Furman, the Chair of the White House Council of Economic Advisers, gave a major speech connecting the dots between local zoning and problems of housing affordability and inequality. The critical—but largely unnoticed—role that rising rents are playing in inflation is another indicator of the growing economic importance of what happens in cities. So perhaps it’s time for Janet Yellen and her colleagues at the Fed to take a closer look at what’s happening in the nation’s cities as they set monetary policy.


* Technical note: The shelter component of the consumer price index measures the increase in housing costs based on the current cost of occupancy. For renting households, this includes their rent. For homeowners, the Bureau of Labor Statistics calculates “owners equivalent rent”—the amount that homeowners would have to pay if they rented their homes, rather than owned them. Changes in the amount of this “imputed rent” are used to figure inflation in housing costs.

Pulling it all together

At City Observatory, we post several new commentaries each week on a variety of urban themes, aiming to provide discrete, coherent analyses of specific questions and to contribute to the policy dialog about cities. At the start of a new year, we’d like to pull back a bit, and reflect on what we think we’ve learned, and how these varied pieces add up to a cohesive vision.

So what follows is not a manifesto, but more an outline of the knowledge assembled in our work at City Observatory, since we started in October 2014. Here’s a list—not entirely exhaustive—of what we came up with, grouped into four big themes:

The growing economic importance of city centers

Credit: Jonathan Miske, Flickr

The demand for cities is rising.

Talented young people are increasingly choosing to live in urban centers.

People are increasingly seeking dense, diverse, interesting, transit-served, bikeable and walkable communities.

This is leading to a surge in city center jobs.

Cities are powering the nation’s economic growth in this cycle.

Cities are cleaner, greener, and safer than ever before.

The economic advantage of cities in providing convenience and experiences is growing.

The shortage of cities

We have a shortage of cities: the growing demand for city living is outstripping the supply of great urban spaces, which is producing higher housing costs.

Our land use planning systems, dominated by homevoters, make it too hard to build new housing, especially in the most desirable locations, driving up housing prices.

Hyper-local decision-making can shut out important voices and lead to more segregated cities.

We’ve effectively made the most desirable kinds of housing—dense, diverse mixed use neighborhoods with narrow streets, and a varied range of housing types—illegal.

Other prosperous countries with attractive cities have very different ways of zoning that allow more traditional urban neighborhoods.

The need to rethink transportation policy

Credit: Montgomery County Planning Commission, Flickr

The big subsidies to parking—socialism for car storage in the public right of way—undermine biking and walking and drive up the price of housing.

There’s no such thing as a free way—taxpayers subsidize car ownership significantly, causing people to drive much more than they would otherwise.

The engineering rules of thumb that are used to forecast traffic, set road widths, and require parking are pseudo-science, with perverse effects on cities and humans.

The way we design our roads costs thousands of lives a year. It’s time for another road safety revolution.

When it comes to public transit, what matters is reliability and convenience—not whether it’s rail or bus.

Land use is as important to public transit as the actual transit infrastructure. It’s especially important to have destination density—of jobs, amenities, homes—near transit stations.

The challenge of segregation, integration, and neighborhood change

Economic segregation is growing: the rich and the poor live apart from one another in our cities. This is a product both of the secession of the rich, especially to exclusive suburban enclaves, and of the concentration of poverty.

Gentrification, though rare, is actually reducing economic segregation.

Poor households living in gentrifying neighborhoods are no more likely to move away than poor households in non-gentrifying neighborhoods, and they report higher incomes and greater satisfaction with their neighborhoods.

Unless housing supply increases in high demand locations, rents and home values will rise, and the poor will be priced out of neighborhoods.

High-inequality neighborhoods actually reduce inequality at the city and metro level.

Obstructing new development, even new higher-income development, is a recipe for aggravating problems of affordability and displacement.

Policies that aim to put the burden of paying for affordable housing on developers are unlikely to work. Developers’ margins are too small, and the incentive effects will lead to less housing being built.

The scale of public investment in affordable housing is dwarfed by the housing market—but we can do better.


In the coming year, we’ll look to dig deeper into each of these propositions, and add others to our list. If you take issue with the positions we’ve staked out here, can offer relevant evidence that confirms, denies, or sharpens these propositions, or have other ideas that belong on this list, let us know. We look forward to continuing this conversation in 2016.

Pulling it all together

At City Observatory, we post several new commentaries each week on a variety of urban themes, aiming to provide discrete, coherent analyses of specific questions and to contribute to the policy dialogue about cities. At the start of a new year, we’d like to pull back a bit and reflect on what we think we’ve learned, and how these varied pieces add up to a cohesive vision.

So what follows is not a manifesto, but more an outline of the knowledge assembled in our work at City Observatory since we started in October 2014. Here’s a list—not entirely exhaustive—of what we came up with, grouped into four big themes:

The growing economic importance of city centers

Credit: Jonathan Miske, Flickr

The demand for cities is rising.

Talented young people are increasingly choosing to live in urban centers.

People are increasingly seeking dense, diverse, interesting, transit-served, bikeable and walkable communities.

This is leading to a surge in city center jobs.

Cities are powering the nation’s economic growth in this cycle.

Cities are cleaner, greener, and safer than ever before.

The economic advantage of cities in providing convenience and experiences is growing.

The shortage of cities

We have a shortage of cities: the growing demand for city living is outstripping the supply of great urban spaces, which is producing higher housing costs.

Our land use planning systems, dominated by homevoters, make it too hard to build new housing, especially in the most desirable locations, driving up housing prices.

Hyper-local decision-making can shut out important voices and lead to more segregated cities.

We’ve effectively made the most desirable kinds of housing—dense, diverse mixed use neighborhoods with narrow streets, and a varied range of housing types—illegal.

Other prosperous countries with attractive cities have very different ways of zoning that allow more traditional urban neighborhoods.

The need to rethink transportation policy

Credit: Montgomery County Planning Commission, Flickr

The big subsidies to parking—socialism for car storage in the public right of way—undermine biking and walking and drive up the price of housing.

There’s no such thing as a free way—taxpayers subsidize car ownership significantly, causing people to drive much more than they would otherwise.

The engineering rules of thumb used to forecast traffic, set road widths, and require parking are pseudoscience, with perverse effects on cities and humans.

The way we design our roads costs thousands of lives a year. It’s time for another road safety revolution.

When it comes to public transit, what matters is reliability and convenience—not whether it’s rail or bus.

Land use is as important to public transit as the actual transit infrastructure. It’s especially important to have destination density—of jobs, amenities, homes—near transit stations.

The challenge of segregation, integration, and neighborhood change

Economic segregation is growing: the rich and the poor live apart from one another in our cities. This is a product both of the secession of the rich, especially to exclusive suburban enclaves, and of the concentration of poverty.

Gentrification, though rare, is actually reducing economic segregation.

Poor households living in gentrifying neighborhoods are no more likely to move away than poor households in non-gentrifying neighborhoods, and they report higher incomes and greater satisfaction with their neighborhoods.

Unless housing supply increases in high demand locations, rents and home values will rise, and the poor will be priced out of neighborhoods.

High-inequality neighborhoods actually reduce inequality at the city and metro level.

Obstructing new development, even new higher-income development, is a recipe for aggravating problems of affordability and displacement.

Policies that aim to put the burden of paying for affordable housing on developers are unlikely to work. Developers’ margins are too small, and the incentive effects will lead to less housing being built.

The scale of public investment in affordable housing is dwarfed by the housing market—but we can do better.


In the coming year, we’ll look to dig deeper into each of these propositions, and add others to our list. If you take issue with the positions we’ve staked out here, can offer relevant evidence that confirms, denies, or sharpens these propositions, or have other ideas that belong on this list, let us know. We look forward to continuing this conversation in 2017.

Bending the carbon curve in the wrong direction

Gas prices are down, driving is up, and so, too, is carbon pollution. In a little over a year, the US has given up about one-sixth of the progress it made in reducing transportation’s carbon footprint.

For more than a decade, America was making real progress in reducing its car dependence. The growth of driving slowed at the turn of the millennium, and total driving declined from 2004 onward. The average American went from driving about 27.6 miles per day to just 25.7 miles per day—a nearly seven percent decline.

Demographic and technological factors played a role, but the big runup in gas prices—especially from 2004 to 2008, when gas broke through the $3/gallon and $4/gallon barriers—appeared to dramatically decrease the demand for driving.

But for more than a year now, gas prices have been falling. All told, they’ve dropped by nearly half, from $3.62 per gallon in early 2014 to $1.92 per gallon in most of the nation today.

And as driving has gotten cheaper, Americans have begun driving more—a simple illustration of what economists call the “price elasticity of demand.” Monthly data on driving from the US Department of Transportation shows driving is up about nine tenths of a mile per person per day over the past year.
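As a rough illustration of that elasticity, the figures cited here imply a small but real response of driving to gas prices. This is strictly a back-of-the-envelope sketch using only the numbers above (a 25.7-mile baseline, a 0.9-mile increase, and the drop from $3.62 to $1.92 per gallon); it is not an econometric estimate, since other factors changed over the same period.

```python
# Back-of-the-envelope price elasticity of demand for driving,
# using only the figures cited above. An illustrative check, not
# an econometric estimate: other factors also changed over this period.

def pct_change(new, old):
    return (new - old) / old

gas_old, gas_new = 3.62, 1.92   # dollars per gallon, early 2014 vs. today
vmt_base = 25.7                 # miles per person per day before the price drop
vmt_gain = 0.9                  # added miles per person per day over the past year

elasticity = pct_change(vmt_base + vmt_gain, vmt_base) / pct_change(gas_new, gas_old)
print(round(elasticity, 3))     # about -0.075: driving is price-inelastic,
                                # but far from unresponsive
```

An elasticity near zero means driving responds weakly, but not negligibly, to price—consistent with the gradual changes described here.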

Collectively, we’re driving more than 3.1 trillion miles per year, after holding the line just below three trillion for several years.* Those added 100 billion miles of driving per year mean more carbon pollution. At a fleet average of about 20 miles per gallon, the added driving implies about 5 billion gallons of additional gasoline and about 44 million tons of carbon emissions nationally. So far, this increased driving has erased about one-sixth of the progress the country made in reducing transportation-related carbon emissions since 2008.
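Those figures check out arithmetically. A minimal sketch, assuming the standard EPA conversion factor of roughly 8.9 kilograms of CO2 per gallon of gasoline burned:

```python
# Checking the carbon arithmetic above: 100 billion extra miles at a
# fleet average of ~20 mpg, using EPA's standard factor of ~8.9 kg of
# CO2 per gallon of gasoline burned.

extra_miles = 100e9           # added vehicle-miles per year
fleet_mpg = 20                # approximate fleet-average fuel economy
kg_co2_per_gallon = 8.887     # CO2 from burning one gallon of gasoline

extra_gallons = extra_miles / fleet_mpg                # 5 billion gallons
extra_tons = extra_gallons * kg_co2_per_gallon / 1000  # metric tons of CO2

print(f"{extra_gallons / 1e9:.1f} billion gallons")    # 5.0 billion gallons
print(f"{extra_tons / 1e6:.0f} million tons CO2")      # 44 million tons CO2
```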

And we can’t expect that all these changes will disappear if, and when, gas prices increase again. Because cheap gas has also prompted Americans to buy less fuel-efficient vehicles, today’s prices will lock in lower levels of efficiency for more than a decade.

On top of that, the congressionally approved bailout of the bankrupt Highway Trust Fund (largely paid for with a combination of raiding Federal Reserve bank balances and selling off the Strategic Petroleum Reserve, at exactly the wrong time) has the effect of insulating highway users from the true costs of building and maintaining the roads they drive on.

Taken together, cheap gas, more driving, and dirtier, less efficient vehicles make a mockery of the high-minded rhetoric coming out of last month’s Paris accords on global warming. While we’ve nominally agreed to take long term steps to reduce our carbon emissions, when it comes to the economic signals that matter—the price we charge for gas and the billions of dollars of subsidies for car travel—we’re clearly bending the curve in the wrong direction. And that makes the ultimate objectives of avoiding climate change harder to achieve and far less likely.


* These calculations are based on the numbers reported in the U.S. Department of Transportation’s Traffic Volume Trends through October 2015. As Tony Dutzik of the Frontier Group reports, initial estimates from 2014 have been revised down, so we may expect a similar downward revision of the 2015 data. Still, the upward trend is evident even with the corrections.

The economic strength of American cities in four charts

Cities are becoming more important to the economic health of the country. How do we know? We can boil the answer down to four charts, each of which plots a key indicator of urban economic strength.

1. The Dow of Cities

The market value of housing in urban centers is increasing much more rapidly than in more outlying areas, signalling the growing economic importance of central locations and urban living. Since 2000, according to calculations by Fitch Ratings, using data from the Case-Shiller Index, home prices in urban centers have increased 50 percent faster than in more peripheral locations. We call this the “Dow of Cities” because—like the Dow Jones Industrial Average—it signals the increasing value and success of cities.

2. The Rent Gradient

The way housing prices change as you get further from a city’s downtown is called the “rent gradient.” Over the past four decades, the rent gradient has become much steeper—meaning people are willing to pay more of a premium to live in more central locations. Census data compiled by economists Lena Edlund, Cecilia Machado and Michaela Sviatchi show that this trend is particularly pronounced in the nation’s largest metro areas.
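The rent gradient itself is just the slope of a regression of (log) home prices on distance from the city center. Here is a minimal sketch of that calculation; the prices and distances below are entirely made up for illustration, and Edlund, Machado, and Sviatchi’s actual estimates are far more elaborate:

```python
# A rent gradient is the slope b in log(price) = a + b * miles.
# A steeper (more negative) b means a larger premium for central locations.
# The data here are hypothetical, for illustration only.

import numpy as np

miles = np.array([1, 3, 5, 8, 12, 20], dtype=float)             # distance from downtown
price = np.array([800, 650, 560, 450, 380, 300], dtype=float)   # $1000s, made up

b, a = np.polyfit(miles, np.log(price), 1)    # slope, intercept
print(f"prices fall about {(1 - np.exp(b)) * 100:.1f}% per mile")
# roughly 4.9% per mile for these made-up numbers
```

A steepening gradient over time, as the Census data show, means the slope b has grown more negative—people pay a larger premium per mile of centrality.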

3. The Walkability Premium

A key feature of urbanism is walkability, and there’s a strong correlation between walkability—as measured by Walk Score—and increases in home values. Compelling evidence marshalled by Spencer Rascoff and Stan Humphries in their book Zillow Talk illustrates this trend. Over the past 14 years, the most walkable homes—those rated as “walker’s paradises” and “very walkable”—have consistently outperformed houses in lower-scoring, “somewhat walkable” and “car-dependent” neighborhoods. Although all types of homes saw value declines when the housing bubble burst, houses in walkable neighborhoods have recovered most, and fastest.

4. Job Growth in Large Metropolitan Areas

The success of cities is not just a local phenomenon—it has national implications. In this economic expansion—in the wake of the Great Recession—national job growth is being led by the rapid expansion of the nation’s largest metropolitan areas. According to Bureau of Labor Statistics data compiled by Oregon economist Josh Lehner, employment in the nation’s largest metro areas has increased three percent since 2007, compared to an increase of less than one percent in smaller metropolitan areas. Rural areas have still not recovered their pre-recession levels of employment.

Taken together, these trends tell a compelling story about the economic importance of cities and large metropolitan areas. There’s a growing market demand for urban living. Americans are attaching an increasing value to living in cities, and especially in walkable neighborhoods. And the American economy is increasingly propelled by the success of large metropolitan areas anchored by strong central cities.

The primary implication of this work is that we need to capitalize on the economic power of cities. In addition, while rising prices signal a resurgence of city economic strength, they also pose an important public policy challenge: how to address housing affordability. In our view, the underlying issue is what we’ve called a shortage of cities, and we can best meet this challenge by improving cities everywhere and by expanding the opportunity to live in cities—building more housing and doing a better job of helping low-income households afford it. On that front, the evidence points to a two-pronged approach. First, in regions where housing demand strongly outstrips supply, allowing more housing construction is crucial to reducing overall housing prices. Second, we need to dramatically increase direct housing assistance to low-income people who have trouble affording housing even in normal markets—for example, by making Housing Choice Vouchers an entitlement.

Our favorites from 2015, part 1

Over the last two days, we’ve given you readers’ favorite posts from 2015. Now we’re choosing our own. Here are Joe Cortright’s five favorites:

5. Want to close the black/white income gap? Work to reduce segregation

The income gap between black and white households is one of the major racial inequalities in American society. It’s also highly correlated with residential segregation.

4. The Dow of cities

Fitch, the investment rating company, released a report earlier this year showing that real estate in city centers was consistently outpacing more outlying properties in appreciating value. That’s strong evidence of the resurgence of demand for urban living.

3. The real welfare Cadillacs have 18 wheels

It’s a little-talked-about issue, but subsidies to freight trucks are a major government transportation expenditure—as much as $128 billion a year, according to a Congressional Budget Office report.

2. What does it mean to be a “smart city”?

What is the “smart city” movement? And what is a smart city? Joe argues that it has to be about more than just optimizing systems—it’s about people.

1. The Cappuccino Congestion Index

Congestion in lines to buy coffee: it’s real. Read all about it.

The Katy isn’t ready for its closeup

When it comes to selling huge new road projects to the public, the highway lobby and their allies in government have many tools. Last week, we wrote about one of them: touting initial declines in congestion as success, without bothering to follow up as induced demand eliminates those gains in just a matter of years.

But that tactic, used by the American Highway Users Alliance in touting Houston’s effort to eliminate a congestion bottleneck on the Katy Freeway, is hardly more honest than another used by the US Department of Transportation on the Katy project. The DOT features the $2.8 billion effort on the webpage of USDOT’s “Office of Innovative Program Delivery.”

Let’s take a walk!

To show what a great project this is, they offer visitors to their website this photo of a green, people-friendly highway. As you can see, it features exactly as many pedestrians as it does cars. If the image had no caption, you might be forgiven for thinking that the project in question was a park or an open space, rather than a freeway. This particular view—which is probably seen by almost none of the freeway’s regular users—is very different from what most see. After all, the Katy Freeway carries roughly 300,000 vehicles per day; we don’t have pedestrian counts for the paths alongside it, but we’re skeptical they come anywhere within several orders of magnitude of that figure.


Using distorted images to downplay the visual impact of massive freeways has a long tradition in US engineering practice. It was a tactic honed to perfection by the master builder himself, Robert Moses, in New York more than half a century ago.

Moses was trying to persuade the city to build an epic suspension bridge from Brooklyn to Battery Park, on the lower tip of Manhattan. The bridge would have alighted on a five-story-tall causeway, dominating most of the Battery. But Moses sold the project with an “artist’s conception” that made the bridge almost disappear. Let’s turn the microphone over to Robert Caro, in his epic biography, The Power Broker:

Moses’ announcement had been accompanied by an “artist’s rendering” of the bridge that created the impression that the mammoth span would have about as much impact on the lower Manhattan Landscape as an extra lamppost. This impression had been created by “rendering” the bridge from directly overhead—way overhead—as it might be seen by a high flying and myopic pigeon. From this bird’s eye view, the bridge and its approaches, their height minimized and only their flat roadways really visible, blended inconspicuously into the landscape. But in asking for Board of Estimate approval, Moses had to submit to the board the actual plans for the bridge. . . .

When the real impact of the proposed bridge became apparent, it provoked opposition among some of the most influential New Yorkers. The Brooklyn-Battery Bridge was one of the few projects that Moses failed to pull off. (He was instead forced to build a tunnel under the harbor). But while he lost this particular round, the Moses legacy lives on in the kind of visual puffery and misdirection favored by highway builders everywhere: distorted, false-perspective artist’s renderings that show projects in a way they’ll never be experienced by actual humans.

From the promise of congestion relief that is quickly erased by induced demand to deceptive imagery designed to conceal a project’s scale and impacts, there are plenty of good reasons to be skeptical of the sales pitches used to sell automobile infrastructure.

Reducing congestion: Katy didn’t

Here’s a highway success story, as told by the folks who build highways.

Several years ago, the Katy Freeway in Houston was a major traffic bottleneck. It was so bad that in 2004 the American Highway Users Alliance (AHUA) called one of its interchanges the second worst bottleneck in the nation, wasting 25 million hours a year of commuters’ time. (The Katy Freeway, Interstate 10, connects downtown Houston to the city’s growing suburbs almost 30 miles to the west.)

Obviously, when a highway is too congested, you need to add capacity: make it wider! Add more lanes! So the state of Texas pumped more than $2.8 billion into widening the Katy; by the end, it had 23 lanes, making it the widest freeway in the world.

It was a triumph of traffic engineering. In a report entitled Unclogging America’s Arteries, released last month on the eve of congressional action to pump more money into the nearly bankrupt Highway Trust Fund, the AHUA highlighted the Katy widening as one of three major “success stories,” noting that the widening “addressed” the problem and, “as a result, [it was] not included in the rankings” of the nation’s worst traffic chokepoints.

There’s just one problem: congestion on the Katy has actually gotten worse since its expansion.

Sure, right after the project opened, travel times at rush hour declined, and the AHUA cites a three-year-old article in the Houston Chronicle as evidence that the $2.8 billion investment paid off. But it hasn’t been 2012 for a while, so we were curious about what had happened since then. Why didn’t the AHUA find more recent data?

Well, because it turns out that more recent data turns their “success story” on its head.

We extracted these data from Transtar (Houston’s official traffic tracking data source) for two segments of the Katy Freeway for the years 2011 through 2014.  They show that the morning commute has increased by 25 minutes (or 30 percent) and the afternoon commute has increased by 23 minutes (or 55 percent).
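The cited percentages also let us back out the implied baseline travel times. This is purely an arithmetic check on the numbers above, not additional Transtar data:

```python
# An increase of X minutes that equals Y percent of the baseline implies
# a baseline of X / Y. A quick check on the cited Katy Freeway figures.

for label, added_min, pct_increase in [("morning", 25, 0.30), ("afternoon", 23, 0.55)]:
    before = added_min / pct_increase     # implied 2011 commute time
    after = before + added_min            # implied 2014 commute time
    print(f"{label}: ~{before:.0f} min in 2011 -> ~{after:.0f} min in 2014")
# morning: ~83 min in 2011 -> ~108 min in 2014
# afternoon: ~42 min in 2011 -> ~65 min in 2014
```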

Growing congestion and ever longer travel times are not something that the American Highway Users Alliance could have missed if they had traveled to Houston, read the local media, or even just “Googled” a typical commute trip. According to stories reported in the Houston media, travel times on the Katy have increased by 10 to 20 minutes in just two years. In a February 2014 story headlined “Houston Commute Times Quickly Increasing,” Click2Houston reported that travel times on the 29-mile commute from suburban Pin Oak to downtown Houston on the Katy Freeway had increased by 13 minutes in the morning rush hour and 19 minutes in the evening rush over just two years. Google Maps says the trip, which takes about half an hour in free-flowing traffic, can take up to an hour and 50 minutes at the peak hour. And at Houston Tomorrow, a local quality-of-life institute, researchers found that between 2011 and 2014, driving times from Houston to Pin Oak on the Katy increased by 23 minutes.

Even Tim Lomax, one of the authors of the congestion-alarmist Urban Mobility Report, has admitted the Katy expansion didn’t work:

“I’m surprised at how rapid the increase has been,” said Tim Lomax, a traffic congestion expert at the Texas A&M Transportation Institute. “Naturally, when you see increases like that, you’re going to have people make different decisions.”

Maybe commuters will be forced to make different decisions. But for the boosters at the AHUA, their prescription is still exactly the same: build more roads.

The traffic surge on the Katy Freeway may come as a surprise to highway boosters like Lomax and the American Highway Users Alliance, but will not be the least bit surprising to anyone familiar with the history of highway capacity expansion projects. It’s yet another classic example of the problem of induced demand: adding more freeway capacity in urban areas just generates additional driving, longer trips and more sprawl; and new lanes are jammed to capacity almost as soon as they’re open. Induced demand is now so well-established in the literature that economists Gilles Duranton and Matthew Turner call it “The Fundamental Law of Road Congestion.”

Claiming that the Katy Freeway widening has resolved one of the nation’s major traffic bottlenecks is more than just serious chutzpah; it shows that the nation’s highway lobby either doesn’t know, or simply doesn’t care, what “success” looks like when it comes to cities and transportation.

This commentary appeared originally in December 2015.

Don’t bank on it

Democratic presidential front-runner Hillary Clinton laid out the broad outlines of her plan for a National Infrastructure Bank, which would make low interest loans to help fund all kinds of public and private infrastructure. In an explainer for Vox, Matt Yglesias lays out the case for an infrastructure bank, and sets out some of the key assumptions behind the idea.

The infrastructure bank is not exactly a new idea: It’s been suggested in several forms by the Obama Administration, and has been repeatedly advanced by think tanks including the Brookings Institution, the Hamilton Project and the Center for American Progress. While it mostly gets support from the political left, some Republicans have supported the idea, too. As Governor, Jeb Bush authorized a modest $50 million contribution to Florida’s infrastructure bank in 1999, but the Florida Legislature raided the bank to pay for other projects in 2003.

The basic outlines are these: The federal government would endow the bank with funds and empower it to borrow from the Treasury. It would make loans on generous terms—low-interest, long-term, fixed-rate—to states and local governments, and in some cases to private firms, to build major infrastructure projects. In some cases, repayment of the loans might even be deferred for a number of years. The bank would be directed to favor projects that had important national benefits, including job growth and environmental improvement.
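To see why those terms matter, consider the standard annuity formula for annual debt service on a fixed-rate loan. The principal and interest rates below are hypothetical, chosen only to illustrate how concessionary terms lower a borrower’s annual payment relative to ordinary municipal bond rates:

```python
# Annual debt service on a fixed-rate, fully amortizing loan:
# payment = P * r / (1 - (1 + r)**-n).
# The principal and rates here are hypothetical illustrations.

def annual_payment(principal, rate, years):
    """Standard annuity formula for a level annual payment."""
    return principal * rate / (1 - (1 + rate) ** -years)

principal = 100e6   # a hypothetical $100 million project
concessionary = annual_payment(principal, 0.02, 30)  # a bank's low rate
market = annual_payment(principal, 0.04, 30)         # a typical muni-bond rate

print(f"concessionary: ${concessionary / 1e6:.2f}M per year")  # ≈ $4.46M
print(f"market:        ${market / 1e6:.2f}M per year")         # ≈ $5.78M
```

Halving the interest rate cuts the hypothetical borrower’s annual payment by roughly a quarter—which is exactly why cheap money is attractive to borrowers, and why it risks simply substituting for financing they could already obtain.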

Credit: Loco Steve, Flickr

In theory, if a national infrastructure bank wisely chose projects, and if it dispensed money efficiently, it might avoid some of the problems that plague our current system of infrastructure finance. But those are big “ifs.”

In practice, there are real reasons to believe that a national infrastructure bank won’t miraculously overcome the problems that plague American infrastructure.

A bank has to be capitalized. As the debates over the effectively bankrupt federal Highway Trust Fund have shown, there’s simply no stomach in Congress for raising revenues to pay for new infrastructure spending. Unwilling to ask users or taxpayers to pay more for roads and other infrastructure projects, Congress has resorted to increasingly desperate and gimmicky proposals, including capturing a portion of repatriated corporate profits and transferring funds from the Federal Reserve’s balance sheet. Both measures end up costing the federal government more money in the end.

Banks want to be paid back. The finance problem with infrastructure projects is not the availability of capital for projects that generate a positive cash flow; it’s the lack of cash flow: states and localities have pretty much tapped out their own revenues (like the gas tax), and are generally unwilling or unable to take on toll-financed projects. The key problem is that most infrastructure projects simply don’t generate cash flows that can be used to retire debt.

Absent some new revenue source or some pricing mechanism—higher state gas taxes, or road and bridge tolls, or a vehicle miles traveled fee—it will be difficult to do more or different projects than we’re doing right now.

A bank may mostly substitute for existing financing rather than prompting additional investment. The nature of bank financing is that lenders want borrowers to take on the project management and revenue risks. This is especially the case for low-interest financing—you only get low interest rates if you lower the bank’s risks to nearly nothing. That means the projects most appealing to a national infrastructure bank would also be the ones that are financially strongest—and the most capable of getting financing now (see below).

Making more money available on concessionary terms from a national infrastructure bank might simply lead to the substitution of cheaper infrastructure bank money for slightly more expensive municipal bond financing. One of the problems with the existing infrastructure lending program, TIFIA (the Transportation Infrastructure Finance and Innovation Act), is, according to the US DOT, that for some projects “TIFIA may displace, rather than induce participation by capital markets.”

Credit: Ken Lund, Flickr

Banks don’t design projects, DOTs do. While it’s tempting to assert that a national infrastructure bank will somehow only choose meritorious, sustainable efficient projects, the history with TIFIA—the closest thing we have to a federally sponsored infrastructure bank—is that it too is highly tilted to big highway projects. Case in point: a $412 million low interest TIFIA loan to widen a toll road in Southern California. Hardly a sustainable or innovative or particularly meritorious project—but thanks to a stream of toll revenue, it could plausibly commit to repay the federal loan. Clothing the lending function with the fancy new title of “infrastructure bank” may do nothing to change the actual process of project selection.

Cheap money creates its own incentive problems. Despite the good intentions of those who would set some broad policy oversight on the projects to be selected, preferential funding for some projects may have unintended consequences. Designating projects as being of national significance creates perverse incentives that encourage gamesmanship and gold-plating.

If there’s special bonus federal funding for special projects, look for states and localities to re-package their pre-existing project plans as ones that fit the national criteria. If you can get access to some special pot of federal funding for your bridge, then you can back out your own resources.

And once projects get cast as being of national or regional significance, local concern for efficiency or cost-effectiveness often gets tossed out the window. Big, nationally significant projects have the unfortunate tendency to contract mega-project disease and, in part because of their size and importance, to generate huge cost overruns, as in the reconstruction of the PATH train station at the World Trade Center.

And it’s not like states haven’t figured out how to borrow money. The premise of the Infrastructure Bank idea is that our problem is too little access to financing. But when they have a cash flow, municipal governments have little problem borrowing against it—and in fact, that’s part of the problem with transportation finance. The political allure of borrowing to build big capital projects is undeniable—you get the jobs and the ribbon cutting in your term, and spread the costs over the next two or three decades, when your successors will have to deal with complaints about the taxes levied to repay the debt service.

In fact, if you have revenue, it’s fairly easy to go to the capital markets: the state of California has about $87 billion in infrastructure debt, according to its Legislative Analyst’s Office. North Carolina Governor Pat McCrory has proposed borrowing $3 billion for roads and other infrastructure projects. Borrowing to pay for roads is as old as the automobile; the first road finance bonds were issued by Massachusetts in the 1890s. By 1992, states and localities had cumulative debt outstanding for road building of more than $47 billion.

For some states, arguably, the problem is not that they don’t have enough access to debt, but that they’ve relied on it far too heavily. The state of Washington, for example, was on track to spend 72 percent of its gas tax revenue on debt service—effectively short-changing basic maintenance. Earlier this year it passed a new gas tax increase to fill the gap—and, surprise, committed to borrowing against those funds for $8.8 billion in new projects—mostly freeway widening.

Climate concerns steamrolled by FAST Act and cheap gas

There’s plenty of high-minded rhetoric at the UN climate change conference in Paris about getting serious about the threat of climate change. According to the Los Angeles Times, Secretary of State John Kerry is optimistic that, “even without a specific temperature-change limit and legally binding structure, a climate change agreement that negotiators in Paris are hoping to reach this week has the potential to change the world.”

Kerry said that if the more than 190 nations at the Paris conference sign on to a plan in which they have confidence, the private sector will take the reins and innovate new sustainable-power technologies that will ease climate change.

The theory seems to be that if we convince the market that we’re really, really serious about doing something about climate change, then patterns of innovation and investment will change, and we’ll create and actually invest in the kinds of things that will lead to big emission reductions.

Credit: Elliott Brown, Flickr

But back in Washington, the new FAST Act, and our eagerness to hide from ourselves the true cost of our transportation, are making a cruel joke of that rhetoric. The signals we’re sending, in terms of policy and prices, are leading us to drive and pollute more—making it harder to do anything to solve an increasingly evident climate crisis.

As every driver knows, the price of gasoline has plunged by more than a dollar per gallon in the past year. If ever there were a time when it might be politically possible to ask drivers to pay just a slightly larger fraction of the costs of building and maintaining the roads they use—not to mention the costs of polluting the atmosphere—you’d think it would be now. But you would be wrong.

Cheaper gas is already prompting Americans to drive more—but the damage will last much longer than the low prices. That’s because cheaper gas is also prompting people to buy heavier, dirtier, less fuel-efficient new vehicles. According to the University of Michigan, the average fuel economy of the typical new car sold has declined from a high of 25.8 miles per gallon last year to about 25.0 miles per gallon today. That may not sound like a lot, but it’s a scary development for several reasons.

For one, cars are long-lived assets, so poor fuel economy today locks in a lifetime of inefficiency. The typical new vehicle lasts more than 15 years and chalks up more than 150,000 miles. Over its lifetime, a vehicle that gets 25 miles per gallon will consume about 186 more gallons of gas than a car that gets 25.8 miles per gallon.
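A quick back-of-the-envelope check of that per-vehicle figure. The 150,000-mile lifetime and the two fuel-economy numbers come from the paragraph above; the Python sketch itself is our own illustration, not part of the original analysis:

```python
# Lifetime fuel use at two fuel-economy levels, using the figures above:
# a ~150,000-mile vehicle lifetime, 25.0 vs. 25.8 miles per gallon.
LIFETIME_MILES = 150_000

def lifetime_gallons(mpg: float, miles: float = LIFETIME_MILES) -> float:
    """Gallons of gasoline burned over a vehicle's lifetime."""
    return miles / mpg

extra = lifetime_gallons(25.0) - lifetime_gallons(25.8)
print(round(extra))  # about 186 extra gallons per vehicle
```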

The agents of our destruction. Credit: Ray Forster, Flickr

To get an idea of what that means, let’s think about the number of cars that will be sold over the next five years—the lifetime of the FAST funding package. Currently, cars and light trucks are selling at an annual rate of about 17 million units per year. At that rate, Americans will buy about 85 million new cars over the next five years. If they all have a fuel economy of 25.0 miles per gallon (and they don’t become less efficient as they age) rather than the 25.8 miles per gallon of cars sold last year, they’ll consume an extra 15.8 billion gallons of gas over their lifetime.

That has at least two important impacts worth thinking about.

First, each gallon of gas burned produces about 19.64 pounds of carbon dioxide. So the additional gasoline burned by buying less efficient vehicles will lead to about 140 million more metric tons of carbon dioxide being emitted into the atmosphere over these vehicles’ lifetimes. (For reference, that’s about the same as the total CO2 emissions of the State of Georgia in 2013—136 million metric tons.) And that’s simply because, given the low, low price of driving, consumers opted for less fuel-efficient vehicles.

And second, over the life of these cars, consumers will have to pay for that much more gasoline. At the current national average price of about $2.25 a gallon, that works out to about $35 billion more over the life of the vehicles purchased in the next five years. To put that number in context, recall that the subsidy for the FAST Act comes from diverting $58 billion from the reserves of the Federal Reserve system. So new car buyers will end up spending about 65 cents more on gasoline for their less efficient, more polluting cars for every dollar shifted from the banking system to subsidize roads.
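Chaining the article’s inputs together (85 million vehicles, 150,000 miles apiece, 19.64 pounds of CO2 per gallon, $2.25 per gallon) reproduces both totals. This is our own back-of-envelope sketch, using only the figures stated above:

```python
# Fleet-wide effect of the 25.8 -> 25.0 mpg slide, per the article's inputs.
LIFETIME_MILES = 150_000
FLEET = 17_000_000 * 5             # five years of sales at 17 million/year
LBS_CO2_PER_GALLON = 19.64
LBS_PER_METRIC_TON = 2204.62
PRICE_PER_GALLON = 2.25

extra_gallons = FLEET * (LIFETIME_MILES / 25.0 - LIFETIME_MILES / 25.8)
extra_co2_tons = extra_gallons * LBS_CO2_PER_GALLON / LBS_PER_METRIC_TON
extra_cost = extra_gallons * PRICE_PER_GALLON

print(f"{extra_gallons / 1e9:.1f} billion extra gallons")        # ~15.8
print(f"{extra_co2_tons / 1e6:.0f} million metric tons of CO2")  # ~141
print(f"${extra_cost / 1e9:.1f} billion in extra fuel costs")    # ~$35.6
```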

Transportation continues to be one of the principal sources of greenhouse gases. So while the world’s leaders, including those from the US, are making serious-sounding noises in Paris about finally needing to do something about climate change, the bipartisan policy consensus in Washington is to continue a system that insulates drivers from the costs of their actions, and by doing so encourages more driving, less efficient vehicles, and more pollution.

Secretary Kerry is right: we need to send the private sector (and that includes consumers) clear signals about the seriousness of the climate change problem so that they make sound decisions about how to invest. But the short-sighted decisions we’ve made to continue to insulate car users from the costs of their decisions, coupled with the very low price of gasoline (which itself doesn’t at all reflect the damage that burning it does to the climate), are prompting Americans to drive more, and to buy dirtier, less-efficient vehicles that will only make it harder to tackle climate change.

Cities have reason to be wary of Fed moves

Later this month, the Federal Reserve Board (or “the Fed,” as it’s often referred to) will raise interest rates. After seven years of very loose monetary policy designed to facilitate economic recovery from the Great Recession, the Fed now apparently thinks that the economy is healthy enough to stand higher interest rates.

Clearly, the financial markets will be paying rapt attention; in fact, guessing the date of the rate increase has been the principal occupation of Fed watchers for two years. But beyond the headlines, should cities care?

Janet Yellen, Chair of the Federal Reserve. Credit: Day Donaldson, Flickr

Yes. To begin with, there are some direct impacts on city finances. Municipal governments are big borrowers, especially for things like public works, and if the Fed’s rate rise pushes up interest rates, then that means it will be more expensive for cities to borrow to fund those projects.

But cities should also be concerned about the effect on national economic growth. While the recovery from the Great Recession has been a long, slow slog, cities in particular have seen steady economic growth for the past several years.

Arguably, housing is the sector of the economy most sensitive to changing interest rates. Low interest rates make housing investments more attractive, and the one bright spot in a still deeply depressed housing market—multifamily construction in cities—has depended heavily on the Fed’s extended period of low rates. Multifamily starts have fully recovered to their pre-recession levels, while single family starts have languished at near historic lows. As we’ve noted, the trend toward apartment construction is attributable to a number of factors: a growing demand for urban living; poor credit availability (and creditworthiness) among young adults who have traditionally purchased entry homes; strong rent growth; plus relatively attractive interest rates for the kinds of institutional and large-scale investors who finance apartment construction.

Credit: SounderBruce, Flickr

While many of these factors aren’t going anywhere, rising interest rates may determine whether some apartment construction projects go forward. A significant rise in long-term interest rates could mean that some apartment projects don’t pencil out. Thirty-year mortgages are currently running about 3.72 percent.

The big question for the housing market is whether long-term rates increase. Fed policies chiefly affect short-term rates, and the link between long-term and short-term rates is indirect and variable. The Federal Reserve’s main policy lever is the federal funds rate, which plays a key role in determining short-term interest rates, but which affects longer-term interest rates, like mortgage rates, only, as the economists are wont to say, “with long and variable lags.” The Fed uses its financial transactions, including the rates it charges to banks, and its purchases of securities, to target a particular short-term interest rate. That, in turn, influences other interest rates in the economy. Back in 2008 and 2009, the Fed successively reduced short-term interest rates to almost zero to help support the economy during the Great Recession. A majority of the Federal Reserve Board of Governors now seems to think that the Fed should “normalize” its policy and raise rates.

This is what “highly accommodative monetary policy” looks like. It is coming to an end.

 

In June, Zillow economists projected that Fed tightening via short-term interest rates would push up mortgage rates. Assuming a constant spread between short and long term rates, they estimated that rates on 30-year fixed mortgages would rise from 3.84 percent in May 2015 to 4.63 percent by December 2015, 5.63 percent by December 2016, 6.88 percent by December 2017, and 7.75 percent by December 2018. Some economists seem to think that short-term rate increases will have little impact on long-term rates—in economists’ parlance, the yield curve will flatten. They think that economic growth is sufficiently well-established that long-term rates will rise very little.
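To see why deals might stop penciling out at those projected rates, it helps to run the standard fixed-rate amortization formula. The 3.84 and 7.75 percent rates come from the Zillow projections above; the $300,000 loan amount is a hypothetical of our own choosing:

```python
# Monthly payment on a fixed-rate loan: P * r / (1 - (1 + r)**-n),
# where r is the monthly rate and n the number of monthly payments.
def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical $300,000 loan at the low and high ends of the projection.
low = monthly_payment(300_000, 0.0384)   # roughly $1,405/month
high = monthly_payment(300_000, 0.0775)  # roughly $2,149/month
print(f"${low:,.0f} vs ${high:,.0f} per month")
```

On these assumptions, the projected rate path raises the monthly cost of the same loan by more than half, which is the sense in which marginal projects stop penciling out.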

While the immediate effect on cities may well be seen in the housing sector, the larger concern has to be the state of the macroeconomy. If overall job growth (which has been running at about two percent annually) falls off, it makes nearly all of the economic challenges facing cities worse. Falling job growth would likely lead to higher rates of unemployment, weaker wage growth, and lower tax collections. Many economists think the Fed tightening is premature, that the economy is not fully recovered, and that there are significant downside risks to raising rates now. One former Fed economist argued that the economy is still operating well below potential, and can expand further with little fear of inflation. Others note that the recent appreciation of the U.S. dollar can be expected to be a drag on U.S. economic growth, and that an interest rate hike would add to this drag.

At City Observatory, we generally focus our analysis on cities and metropolitan economies, and look at economic trends as they play out in particular geographies. But occasionally, it’s important to step back and consider the broader national macroeconomic context. City and metro economies each have their own dynamics, but ultimately find their options shaped by trends in the national economy. This is one of those times.

It’s become increasingly popular to assert that cities can replace federal policy activism to tackle many national and global problems. And while we take a backseat to no one in stressing the importance of cities, in some cases, if the national government gets key policies wrong, it can be almost impossible for cities to make progress. If timidity about potential inflation prompts the Fed to engineer a rate rise that slows economic growth—or pushes us into a recession—much of the progress that cities have made in growing jobs and expanding opportunity will be at risk. In the months ahead, keep a close eye on multifamily housing starts and the job growth rate to see whether the Fed got it right—or not.

Pulling a FAST one

Whatever remained of the fig leaf claim that the US has a “user pays” system of road finance disappeared completely with the passage of the so-called FAST Act.

It would be better to call the new transportation bill the “Free Ride” Act, because that’s exactly what it does: gives auto users something for nothing. It’s more money for road building, scraped together from arithmetically questionable raids on the Strategic Petroleum Reserve and the Federal Reserve Bank, and a series of other gimmicks.

As we pointed out earlier, this legislation represents the triumph of the asphalt socialists—in their view, our transportation problems can be best solved by more subsidies for the old policies. As Yonah Freemark observes, the biggest chunk of this subsidy comes from a transfer of $53 billion from the Federal Reserve System. In effect, bank robbery is now national policy for road finance.

Live feed from Congress. Credit: Henry Burrows, Flickr

 

As everyone now knows, the 18-cent-per-gallon federal gas tax that has been the mainstay of the Highway Trust Fund hasn’t been raised in two decades, and the combined effects of declining driving, more fuel-efficient vehicles, and inflation have pushed the fund to bankruptcy. But rather than ask users to pay higher fees—reflecting the higher costs of maintaining roads—Congress ruled out a gas tax increase, and instead did in a big way what it’s done several times in smaller ways before: bail out the Highway Trust Fund with general funds. Because of budget rules, the general fund diversions have to be “paid for”—which Congress did through a series of one-time accounting devices and asset sales.

As Delaware Senator Tom Carper argued, the way “the bill is paid for is simply irresponsible…Congress is passing the buck by using a grab bag of budget gimmicks and poaching revenues from unrelated programs for years to come in order to pay for today’s transportation needs.” Financial blogger Barry Ritholtz called it the “dumbest way” possible to pay for infrastructure.

But it’s not just the revenue side of the new transportation bill that’s rotten. On the spending side, the FAST Act leaves largely unchanged the practice of allocating most revenue to state DOTs. Those departments, in turn, disproportionately allocate money to new highway construction, rather than more sustainable transportation needs. The bill sets up a new National Highway Freight program which will allocate about $1.25 billion annually to the states by formula and another $900 million annually for “Nationally Significant Freight & Highway Projects” to be awarded on a competitive basis for “multi-modal” freight projects. But 90 percent of the funds are earmarked for highways, and since virtually all roads carry at least some truck traffic, it’s far from clear that these funds will be anything other than yet another pot of funding for subsidized road construction.

Credit: Nick Douglas, Flickr

 

As is now common with major congressional legislation, it will take further days or weeks of sleuthing by the experts to discover and understand all of the bill’s arcane provisions. This 1,300 page bill is no exception. If you’re looking to understand some of the ins and outs, a good place to start is the official U.S. Department of Transportation summary. Transportation for America’s more opinionated take on the FAST Act’s best and worst provisions is here; their verdict: the net result is to use tomorrow’s dollars to pay for yesterday’s ideas. And Deron Lovaas of the Natural Resources Defense Council aptly describes the bill as the “sum of all lobbyists”—noting a number of obscure provisions that are only distantly related to transportation policy.

There will no doubt be much back-slapping and congratulation that Congress has finally passed a long-term transportation bill after a series of short-term, patchwork fixes. But in reality, Congress has done nothing to address the underlying causes of the decline in national transportation funds—the steady erosion of the gas tax by inflation, improved fuel efficiency, and stagnating driving levels—and has utterly failed to craft a solution that asks the users and beneficiaries of the transportation system to step up and pay for its costs.

What Congress has done is completely demolish the “user pays” idea, and simply kicked the can down the road—this time by five years, rather than 90 days or six months. There’s nothing sustainable about the one-time revenue sources Congress has thrown into the pot here: once they’ve raided the Fed’s balances, for example, they can’t do it again. But in five years, when the one-time money runs out—sooner, if Congress’s creatively optimistic estimates of the revenue produced by its gimmicks aren’t realized—we’ll be facing exactly the same underlying problems.

Chief among them is that our transportation system has excess demand for service because most users aren’t confronted with a price that reflects the cost of providing roads. Subsidized auto travel has led to sprawling, car-dependent development patterns that generate additional traffic, congestion, pollution and crashes—and further burden the transportation system. The opposition to gas tax increases and tolling shows that most road users simply don’t attach any value to system expansion, or that they favor more spending on roads—but only as long as they personally don’t have to pay for it.

So maybe the FAST Act isn’t such an inappropriate name for this legislation: it will serve as a continuing reminder that instead of a serious and responsible solution, Congress simply chose to pull a fast one.

Is foreign capital destroying our cities?

Be afraid: Big foreign corporations are buying up our cities and stamping out our individuality. Or so warns Saskia Sassen in a piece ominously entitled, “Who owns our cities—and why this urban takeover should concern us all,” published in the Guardian Cities.

The harbinger of our doom, according to Sassen: large corporations are buying up our cities. Worldwide, such businesses bought something like a trillion dollars of real estate in the past year, up from about $600 billion the year before. Based on this single factoid, Sassen argues that large corporations from countries all over the world own too much urban real estate, and that this ownership threatens the democratic rights and economic opportunities of ordinary city residents.

…large-scale corporate buying of urban space in its diverse instantiations introduces a de-urbanising dynamic. It is not adding to mixity and diversity. Instead it implants a whole new formation in our cities—in the shape of a tedious multiplication of high-rise luxury buildings.

The Barclay's Center under construction, 2011. Credit: Michael Dougherty, Flickr

Case in point: Brooklyn. Ground zero for global capital’s dispossession of the locally owned city is the Forest City Ratner Pacific Park (née Atlantic Yards) development, a mixed-use residential and office project built atop an old rail switching yard, adjacent to the new Barclays Center (home of the Brooklyn Nets and New York Islanders). This $5 billion, 22-acre project includes 14 towers with more than 6,000 new homes, including 2,000 promised affordable apartments. Forest City, the New York-based firm that originally developed the project, recently sold a large stake in it to Shanghai-based Greenland Holding Group Co.

There are plenty of aesthetic and public policy reasons to dislike Pacific Park/Atlantic Yards. It is yet another public subsidy for private sports franchises, and one can argue that the city should have gotten a better deal for the sizable tax breaks it offered the developer. But as big as this project is—and it is the biggest in the borough—it’s hardly indicative of what’s driving development here.

The implication is that large-scale corporate ownership is somehow stifling the diversity and dynamism of the city. Far from being crowded out by big projects like Pacific Park, small-scale businesses and entrepreneurs are thriving, and are responsible for the real dynamism of the borough’s economy. Earlier this year, the Brooklyn Chamber of Commerce released its first economic report card. It found that between 2009 and 2014, some 9,600 net new businesses opened in Brooklyn, twice the rate of new business formation of the prior decade. In the past three years, the borough has added 5,500 net additional incorporated self-employed individuals, an increase of 19 percent, and more than in the rest of New York City combined.

Job growth is powered by small firms and large ones in Brooklyn. Credit: Brooklyn Chamber of Commerce.

A big part of this story is how creative entrepreneurs have flocked to Brooklyn. According to the Center for an Urban Future, between 2003 and 2013, the number of creative businesses in the borough more than doubled. Brooklyn has also become a hotbed for small tech firms and startup activity. Most famously, Etsy.com, perhaps the perfect techno-corporate reflection of all things Brooklyn and hipsterish, links 1.5 million sellers of handmade crafts with more than 20 million registered buyers.

We’ve heard Sassen’s lament before: back in the late 1980s, it was the influx of Japanese capital that was turning American city real estate into global corporate colonies. Mitsubishi, flush with cash from Japan’s bubble economy, famously bought New York’s iconic Rockefeller Center, and Japanese investors at one point owned 40 percent of the prime office space in downtown Los Angeles. Michael Crichton’s novel Rising Sun, made into a 1993 movie starring Sean Connery, depicted Japanese investment as part of a dark cabal undermining both American business and government. Despite fears of a Japanese takeover of the US economy, nothing of the sort happened. As it turned out, Japanese real estate investors were no more savvy than American ones: Mitsubishi walked away from its Rockefeller deal, writing off much of its $2 billion investment.

But the biggest problem with Sassen’s premise is that most corporate real estate purchases are made from other corporations—meaning there’s no net increase in the amount of corporately owned property. It’s just being transferred from one corporation to another, and sometimes from bigger global corporations to smaller, more local ones. For example, mega-investor Blackstone just sold $1 billion worth of Los Angeles office buildings to local investors. According to Cushman & Wakefield, most of the sales (and purchases) are by US-based pension funds, life insurers, real estate investment trusts and the like. It’s hard to see how exchanging one set of absentee corporate owners for another ought to matter to anyone. Every transaction has a seller as well as a buyer, so one could just as easily describe the data Sassen cites as revealing a giant sell-off of corporate-owned real estate. Effectively, the ownership of real estate is commoditized—pretty much like the money used to finance home mortgages.

Despite the impressive-sounding sums involved, this sort of churn tells us nothing about whether corporate ownership is increasing or decreasing. Despite the article’s central premise that absentee corporate ownership is large and growing, Sassen presents no data on what fraction of all urban real estate is corporate-owned, or whether it’s higher or lower than it was a decade or two decades ago.

There’s actually precious little comparable, national data on the ownership characteristics of real estate, especially commercial and multi-family. Studies of housing ownership patterns in New York and Baltimore by the Urban Institute concluded “we know surprisingly little” about ownership patterns. But from the data they were able to cobble together from local administrative records, they found no consistent relationship between type of owner and maintenance and affordability. In New York, mom-and-pop owned apartments tended to be affordable and better maintained (in part, the authors speculate, because local owners are more careful in choosing tenants), while in Baltimore, large-scale owners provided better-maintained buildings.

Credit: Urban Institute

As much as Sassen and others may dislike the profile (and symbolism) of high-rise residential towers, it’s clear that the portfolio managers buying real estate actually value functioning urbanity. The Cushman & Wakefield report from which Sassen draws her numbers is pretty adamant about the need for human-friendly cities with more public investment and better public spaces:

There are many strands to creating healthy cities but a sensible starting point is allowing—and where possible promoting—walking and cycling through both the infrastructure and public spaces but also via more mixed use facilities. A second must be in providing common space where the city’s residents can meet and relax to provide a lower stress environment, be that on the grand scale of an urban park or High Line.

It could even be the case that Sassen has it backwards: multi-national investors are a lot less interested in controlling or re-shaping the city than in figuring out what direction it’s going, and then investing there. It’s pretty clear that the market is increasingly turning its back on the traditional suburban model of development; investing in big-city real estate is the most obvious way the financial types can figure to follow the market for urban living, a market created not by the machinations of investors, but by the surging demand for urban living, and the kind of diverse, interesting neighborhoods found in big cities. Sassen worries that “large scale corporate buying of urban space…introduces a de-urbanising dynamic.” But far from looking to squelch urbanity, these investors are actually looking to invest in it, and make more of it.

Finally, it’s worth considering the missing counterfactual: What would happen if global capital weren’t flowing into major projects in large cities? It’s certainly acceptable to have objections to any project, and Sassen and others may reasonably decide that the scale of Pacific Park is out of place with its surroundings. The large new residential towers will no doubt provide housing to many upper income families. But would Brooklyn—and Brooklynites in surrounding neighborhoods—be better off if it weren’t built? Given the overwhelming demand for housing in New York, if those high-income units aren’t built at Pacific Park, their would-be occupants won’t simply evaporate: instead, they’ll likely further bid up the price of all other housing in the borough, worsening affordability for everyone.

As we have pointed out before, the growing relative value of real estate in close-in neighborhoods is a sign of the growing economic strength of, and demand for, urban living. We should hardly be surprised that capital is flowing to these areas; it signals the growing power of urbanism, not its demise.

It’s a good time for buyers to beware

It’s the hardiest perennial in the real estate business: “Now,” your realtor will tell you, “is a great time to buy a home.”

Back in 2006, just as the housing market was faltering, that’s exactly what the National Association of Realtors (NAR) was telling us. In fact, in November of that year, the NAR launched a $40 million advertising campaign claiming boldly that it was a great time to buy or sell a home.


The campaign’s central message, according to the New York Times, was that “historically low interest rates, a large supply of homes on the market and the group’s forecast of rising prices next year make now an ideal time to buy a home.” By then, the message met a highly skeptical audience. Financial commentator Barry Ritholtz of “The Big Picture” reviewed point-by-point NAR claims about the housing outlook and concluded “EVERY SINGLE STATEMENT IN THE AD IS MATERIALLY FALSE OR MISLEADING.” (Shouting caps in the original.)

Anybody who relied on the NAR’s advice then is probably regretting it. The NAR ads claimed that “homeownership is a safe, secure way to build long term wealth.” But as millions of homeowners learned to their chagrin when the housing bubble collapsed—to the tune of $7 trillion in lost value—the folklore about homes being a safe investment was just that: folklore.

And just this week, with the benefit of hindsight, the perennial cheerleaders at the National Association of Realtors now concede that the height of the housing bubble—2005 to 2007—maybe wasn’t such a good time to buy after all. According to a story reported in Marketwatch:

Those who bought their homes 8 to 10 years ago — 2005 to 2007, at the height of the real estate bubble — have gained almost no equity in that time, an average of just $3,000 or 1%, said Jessica Lautz, the NAR’s managing director of survey research and communication.

Credit: MarketWatch

If anything, estimates that 2005-07 buyers are ahead by one percent are almost certainly overly optimistic. These are NAR’s calculations, and they don’t account for the fact that many of those who bought during the 2005-07 period lost their homes to foreclosure. Since 2006, more than 16 million homes have had foreclosure actions filed against them. Excluding those who were wiped out creates an upward “survivorship” bias in the estimated returns for those who managed to hang on to their homes. And even for the survivors, the one percent return is an overstatement, for two reasons. First, it’s a total return of one percent over eight to ten years, not one percent per year. Second, it doesn’t reflect the fact that the costs of liquidating their “investment” would more than wipe out this tiny amount of appreciation. It’s also important to remember that the one percent figure is an average for all buyers. While homes in some markets, like San Francisco or New York, have fully recovered, millions of other buyers are still underwater, owing more on their mortgages than their homes are worth. These buyers have sustained a net loss on their investment.

According to Zillow, roughly 14 percent of all mortgage holders—about 7 million households—have this kind of negative equity. The largest share of these underwater homeowners bought during the height of the bubble. And when you add in the cost of selling and moving—including the commissions that go to the realtors—it’s even worse. Zillow estimates that just less than a third of mortgage holders (31.4 percent) are in a situation of effective negative equity because they don’t have enough equity to sell their home, pay closing costs, move, and make a down payment on an equivalent home.

Even as early as 2005, there were plenty of warning signs that we were in the midst of a housing bubble. Yale economist Robert Shiller—later a Nobel laureate in economics—warned for years about “irrational exuberance” in the housing market. And post-mortems of the housing crisis—including those by Atif Mian and Amir Sufi—showed how the structure of home mortgage finance encouraged buyers to take on risky, highly leveraged bets on housing that had devastating financial and economic consequences when the day of reckoning finally came.

As tragic as the repercussions of the bubble’s collapse were for homebuyers—and all of us, really, because this triggered the Great Recession—the deeper policy question here may be whether it makes sense to position home ownership as the principal means of wealth-building for American households. If housing is a volatile, risky investment, and if returns vary so much over time and across space—which is decidedly the lesson of the housing bust—should we really be encouraging people to incur debt (mortgages) and stake their financial well-being to real estate markets? This is a question we’ll investigate further in the weeks ahead at City Observatory.

So the next time you hear someone telling you it’s a good time to buy a home, you might want to remember the old Latin phrase caveat emptor—“let the buyer beware”—which today we might revise to read “Nam tempus nunc ut caveat emptor”: “Now is a good time for the buyer to beware.”


Here’s how “survivorship bias” works. For many calculations, you get very different statistics depending on when you sample a population. If you look only at the cases that were still extant at some later date, you get an upwardly biased estimate. For example, in 2009, the last living survivor of the sinking of the Titanic—Millvina Dean—was 97 years old. If you looked only at the sample of Titanic passengers who survived to 2008, you could conclude that the average life expectancy of Titanic passengers was 97 years.
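A tiny simulation makes the point concrete. The numbers here are made up for illustration (a 15 percent foreclosure rate and a roughly one percent average total return for survivors are assumptions, not estimates from the NAR data), but the mechanism is the real one: dropping the wiped-out buyers from the sample makes the average return look far better than it was.

```python
import random

random.seed(42)

# Illustrative numbers only: simulate 1,000 bubble-era buyers.
returns = []
for _ in range(1000):
    if random.random() < 0.15:
        returns.append(-1.00)  # ~15% foreclosed: total loss of equity
    else:
        # survivors: roughly 1% total return, with some spread
        returns.append(random.gauss(0.01, 0.05))

all_buyers = sum(returns) / len(returns)
survivors = [r for r in returns if r > -1.00]
survivors_only = sum(survivors) / len(survivors)

print(f"Average return, all buyers:     {all_buyers:+.1%}")
print(f"Average return, survivors only: {survivors_only:+.1%}")
# Sampling only the survivors makes the average look far rosier
# than the true average across everyone who bought.
```

The survivors-only average comes out near the assumed one percent, while the true average across all buyers is sharply negative, which is exactly the bias in returns computed only for people who still own their homes.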

Zoning and cities on the national economic stage

It’s hard to think of an issue that is more quintessentially local than zoning. It’s all about what happens on the ground on a specific piece of property in a particular neighborhood. It’s the bread and butter of local governments and neighborhood groups. Zoning and land use seem about as far removed from national economic policy as just about any issue you can imagine.

Or so you might have thought until last week.

The local becomes national. Credit: John Haslam, Flickr

On November 20, Jason Furman, Chair of the President’s Council of Economic Advisers, delivered a speech at the Urban Institute that is required reading for all city leaders. In it, he spells out why zoning—and by extension, how we build cities—matters vitally to tackling national problems ranging from accelerating economic growth to broadening opportunity to reducing inequality. The Furman speech has already drawn media attention: Matt Yglesias, who’s been on the zoning beat for a while, wrote in Vox that Furman’s speech demonstrated that “regulations mandat[ing] single-family homes” are “a disaster” for “younger people, for renters, and for the overall cause of social and geographic mobility.”

The Atlantic, in its story, emphasized that Furman “isn’t alone in his belief that the growing prevalence of economic rents are one of the root causes of inequality today.” It also related some of the history about how zoning became such a powerful force in metropolitan economies—much of which overlaps with what we published here last week.

This is a big deal. For the most part, macro-economists don’t much concern themselves with cities. Sure, they’ll focus in on housing starts—because housing construction and finance are so closely related to the economic cycle and so sensitive to changes in monetary policy. But most of urban economics is generally treated by these national economic modelers as a quiet backwater of applied microeconomics. So having the Chair of the Council of Economic Advisers weigh in on urban issues, especially zoning, is significant.

In his remarks, Furman links what’s happening in cities to two big macroeconomic problems: the slowdown in productivity growth, and the rise of income inequality.

Can't move here. Credit: (vincent desjardins), Flickr

The argument on productivity is this: By bringing people together, cities facilitate the formulation and application of new ideas that propel innovation, creating new products and lowering costs. These so-called “agglomeration economies” are a major factor in lifting productivity. For a variety of reasons, some cities are more productive than others. Historically, people have moved from less-productive regions to more-productive ones, because places with higher productivity tend to have better wages. In the process, they increased the productivity of the country as a whole.

But since the 1970s, many of the most productive cities have greatly limited the expansion of their housing supply, and thus the number of people who can move there. In other words, they hold back population growth in the very places that are the biggest contributors to economic opportunity. Fewer people end up living and working in the most productive cities, and more people end up living in somewhat less productive cities. Two Berkeley economists have estimated that the value of output lost because some highly productive cities aren’t as large as they might be exceeds a trillion dollars annually.

The constrained housing supply argument recognizes that a major source of upward economic mobility is the ability of Americans to physically relocate to places with greater economic opportunity. Furman notes that physical mobility has decreased in the US over the last several decades, and that in-migration has been suppressed in exactly those places with the highest levels of productivity. That means fewer opportunities to move up the income ladder, especially for the lowest income segments of the population, who are most sensitive to housing costs in their decisions about moving.

The argument on inequality is based partly on this observation about the constrained housing supply in highly productive cities, and partly on the work of Raj Chetty and his colleagues at the Equality of Opportunity project. Intergenerational economic mobility—measured as the likelihood that children born to families in the lowest income quintile will see a substantially different economic outcome as adults—varies substantially among metropolitan areas. Chetty found that one of the important correlates of this kind of economic mobility was a metropolitan area’s level of economic segregation: the degree to which high-income and low-income people live in different parts of the region. Again, local zoning plays a key role in determining whether housing opportunities are widely available within metropolitan areas for persons of all income levels.

Although he didn’t mention it in his speech, Furman could also have pointed to the work of Matthew Rognlie. Rognlie has shown that capital gains from housing—that is, the money homeowners earn in part by using zoning to increase the price of their property—are a major component of rising inequality between the upper end of the income distribution and everyone else. All of the net increase in capital’s share of income has been in the form of returns to housing. Given the important role of housing in driving income inequality, it’s important to pay attention to policies—like local land use restrictions—that can drive up housing costs.


To us at City Observatory, these observations show the pervasive and powerful effects of what we’ve called the nation’s shortage of cities. The high and rising demand for urban living is daily colliding with our limited ability to rapidly expand the supply of great cities, great urban neighborhoods, and housing within those cities and neighborhoods. Part of the problem is just that demand can—and has—changed much more rapidly than supply—people’s tastes change quicker than we can build new houses, much less neighborhoods and cities. And a key factor impairing our ability to meet this demand is local land use regulations.

As Furman calls out, all the things that impede additions to housing supply—density restrictions, parking requirements, prohibitions on mixing different uses in a single neighborhood—contribute to higher prices, less mobility, lower economic growth, and greater inequality. In fact, the “modern” approach to planning has made the most desirable, most valuable, most in-demand kind of neighborhood—walkable, dense, mixed-use urban development—actually illegal in most places.

While Furman’s speech is a welcome acknowledgement from the most authoritative economic voice in the federal government of the importance of cities, his suggested federal actions for dealing with the problems he identifies are pretty small bore.

Furman sketches out three federal initiatives: the administration’s new rule on Affirmatively Furthering Fair Housing, which would block local policies that locate new public housing in ways that reinforce patterns of segregation; a new $250 million loan fund for affordable multi-family housing projects; and a proposal—probably doomed—by the administration to make grants to local governments to overhaul their zoning ordinances. While these are steps in the right direction, they are just the tiniest of baby steps.

Furman does little to acknowledge the prodigious federal role in promoting and reinforcing the local status quo that he recognizes is so damaging. For decades, federal tax, expenditure, and financial policies have made home ownership the de facto preferred means for American households to build wealth. These efforts have been buttressed by everything from FHA policies that discouraged multi-family housing in established neighborhoods to highway subsidies for sprawling, low-density, single-family, owner-occupied development. Even the nationwide adoption of zoning traces its roots back to Herbert Hoover’s efforts at the U.S. Department of Commerce in the 1920s to develop and propagate model zoning codes. As long as homeownership remains effectively the only federally sanctioned vehicle for wealth accumulation for lower- and middle-income families, is it any wonder that they devote enormous energy to protecting their investments?

It’s definitely fair to say that local zoning is a major factor in shaping our shortage of cities. But it’s equally important that local zoning is underpinned by a web of federal policies that make it difficult to do anything different from what we’ve done. Now that the CEA has put its finger on this problem, we hope that it will keep working and come up with a set of policy recommendations that is fully commensurate with the scale of the issues involved. This high level federal interest is long-overdue and well-warranted, but there’s much more work to be done.

“Caveat Rentor” – beware of crazy rent statistics

The quality of rent data varies widely; beware of erratic sources

Trying to measure average housing costs for neighborhoods across an entire city—let alone the whole country—is an incredibly ambitious task. Not only does it require a massive database of real estate listings, it requires making those listings somehow representative at the level of each neighborhood and city.

For a number of reasons, just taking the average of all the listings you can find is likely to produce extremely skewed results, with numbers much higher than true average home prices. For one, many apartments, especially on the lower end of the market, aren’t necessarily listed in places that are easy to find—or at all. Instead, landlords find tenants with a sign on a fence or streetlight pole, local (and not necessarily English-language) newspapers, or just word of mouth. On top of that, if you have two homes of similar quality but even slightly different prices, you would expect the cheaper one to rent or sell more quickly. As a result, it would spend less time listed than the more expensive home; any given sample of listings, then, would tend to over-represent those more expensive, harder-to-rent homes. (If this doesn’t make sense, read the “visitors to the mall” example here, explaining a similar statistical problem with attempts to measure prison recidivism.)
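This listing-duration problem is a form of length-biased sampling, and a stylized example shows how large the distortion can be. The numbers below are invented for illustration (a two-tier market where cheap units rent in 5 days and expensive ones linger for 20), not drawn from any real listing data:

```python
# Hypothetical market: half the units rent for $1,000/month, half for $2,000.
# Cheaper units rent in ~5 days; pricier ones sit listed for ~20 days.
units = [(1000, 5), (2000, 20)] * 500  # (monthly rent, days on market)

# The true average rent across all units:
true_avg = sum(rent for rent, _ in units) / len(units)

# A snapshot of *active* listings observes each unit once per day it is
# listed, so it samples units in proportion to their days on the market --
# classic length-biased sampling.
snapshot = [rent for rent, days in units for _ in range(days)]
snapshot_avg = sum(snapshot) / len(snapshot)

print(f"True average rent:     ${true_avg:,.0f}")      # prints $1,500
print(f"Snapshot average rent: ${snapshot_avg:,.0f}")  # prints $1,800
```

Even with an evenly split market, the snapshot average comes out 20 percent too high, before accounting for the cheap units that never get listed online at all.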

So we’re sympathetic to anyone taking on this challenge. But that doesn’t mean that organizations who take it on but fall short should be given a pass.

Take, for example, Zumper. Zumper is a website that features rental listings in cities around the country. So far, so good. Zumper has also made a name for itself through its “National Rent Reports”—more or less monthly press releases that claim to track median rental prices around the country. These reports have received copious media coverage, from the Bay Area to Seattle to Nashville to Chicago to Boston to LA to Miami to Denver, and so on.

Some journalists apparently take Zumper’s reports at face value. For example, Wolf Richter of WolfStreet spotted a “bubble” in apartment markets, based on month-over-month changes in one-bedroom apartment prices starting in late 2017. According to Zumper data, one-bedroom rents surged, reaching double-digit growth rates by February 2018, before falling back to less than two percent. According to Richter:

A peculiar phenomenon cropped up last November: The median asking rent for 1-BR apartments suddenly surged by the double-digits, even as the median asking rent for 2-BR apartments was barely edging up. This phenomenon endured for four months but has now collapsed (the phenomenon remains unexplained, though some suspects have been lined up):

But the real suspect here is the quality of Zumper’s data. There’s no plausible reason why, nationally, one-bedroom rents would suddenly follow a different trajectory than two-bedroom rents. And averaged over millions of units nationally, rents shouldn’t move so abruptly if they’re correctly measured. In addition, no other source shows any corroborating increase in one-bedroom rents during this time. For reference, Zillow’s Rental Index shows a very steady rate of increase, at less than 3 percent year over year through 2017 and 2018. Yardi’s national average apartment rent didn’t change by more than $1 per month between July 2017 and March 2018. The real problem would seem to be Zumper’s methodology, which is severely affected by the composition of units listed for lease on its website. This composition effect, not any change in market conditions, appears to be driving these results.

Unfortunately, Zumper’s reports also appear to be severely affected by the problems we listed above, and possibly others. We noticed this in Chicago. Back in August 2015, Zumper’s National Rent Report declared that the median one-bedroom apartment in Chicago cost $1,920—a number that would raise eyebrows among anyone who has actually looked for a one-bedroom apartment in that city. A cursory glance at Zumper’s neighborhood-level data reveals issues that should call the entire report into question.

From Zumper's website.

“Median,” of course, means that half of Chicago’s one-bedroom apartments ought to cost more than $1,920, and half ought to cost less. But according to Zumper’s own data, just three of the city’s 77 neighborhoods had median one-bedroom rents of over $1,920. While apartments are definitely not distributed evenly over the city, so you wouldn’t necessarily expect an even split in terms of neighborhoods, it’s simply not plausible (or supported by, say, the Census) that half of the city’s apartments are in just three of its neighborhoods.

It seems more likely that half of Zumper’s listings are in just three of the city’s (wealthiest) neighborhoods. As of the writing of this article, Zumper claims to have over 4,000 apartments listed in the Near North Side—the most expensive part of the city—and just 11 in Jefferson Park, five in West Garfield Park, and zero in South Lawndale, three of the cheaper neighborhoods.

Nor does it appear that Chicago is the only city with this problem. In Los Angeles, it appears that about 25 neighborhoods have median rents above the supposed citywide median—and about 70 have ones below. In Philadelphia, Zumper’s map shows just 11 neighborhoods with median rental costs at or above the supposed citywide median, and over 40 below; the proportion is similar in San Diego. The skewed distribution of Zumper’s listings is also apparent in these cities: the relatively more expensive Philadelphia neighborhoods of Rittenhouse Square, Center City East, and University City have 99, 219, and 94 apartments listed, respectively, while the less-expensive communities of Elmwood, Kingsessing, and Mill Creek have 19, 25, and 8.

These comparisons likely understate how inaccurate Zumper’s numbers are. After all, if its listings skew towards the higher end of the market, they likely not only oversample wealthier neighborhoods, but also more expensive properties in those neighborhoods, meaning that the true median rent in each neighborhood, not just the city as a whole, is below what Zumper reports.

Comparing Zumper’s citywide medians to estimates from Zillow, which is generally regarded as one of the more accurate estimators of real estate prices, reveals a mixed bag. (We looked at numbers for September 2015.) In some cities, the two sources give roughly similar numbers: Zumper estimates the median listed one-bedroom apartment cost $2,110 in Washington, DC, versus Zillow’s estimate of $2,149; the estimates for Los Angeles are $1,830 and $1,850, respectively. But in many places, they’re quite different. In New York, it’s $3,160 versus $2,300; in Chicago, $1,920 and $1,550.

Zumper responded to our inquiries over Twitter and email. A spokesperson said that Zumper “stands firmly behind [its] rental data.” He added: “We have some of the strongest inventory from which to analyze…. We are reporting on true, asking rents seen in the market, and do not create an algorithm to estimate value.”

Of course, put another way, this is largely our point: Zumper takes the median from its listings, without compensating at all for the fact that its listings are disproportionately concentrated in higher-end neighborhoods. While it may be true that Zumper has a relatively large inventory of rental homes in its database, that’s akin to an online pollster saying that their polls must be accurate because they got so many votes. While quantity matters, at a certain point, quality—representativeness—matters much more.

On Twitter, Zumper’s CEO also told us that the National Rent Report focuses on the median apartment available for rent, and doesn’t claim to take into account apartments that are currently occupied. But as an explanation for Zumper’s concentration of listings in high-end neighborhoods, that doesn’t really pass the smell test: differences in housing turnover between wealthier and less-wealthy communities are several orders of magnitude too small to account for, say, the gap between the number of listings in the Near North Side and Jefferson Park. (Note that the Zillow estimates we described above are also for listed apartments.) Nor are claims that that gap is “in proportion to how many people move” to each of those neighborhoods plausible.

We should note that none of this is really a problem for Zumper’s main business, which is being a database for people looking for a place to rent. But it does mean that its rent reports should not be treated as a reliable source of rental data, just as journalists shouldn’t report on real estate trends by simply adding up every listing on Craigslist.

Housing affordability issues are real, as we’ve written about here extensively, and the media absolutely should be reporting on home price trends, both locally and nationally. But precisely because these issues are so important, it’s crucial that the data that gets reported is reliable. Until it addresses the problems we’ve brought up here, Zumper’s rent reports are not, and journalists should be aware of that.

 

The high price of cheap gas

At least on the surface, the big declines in gas prices we’ve seen over the past year seem like an unalloyed good: we save money at the pump, and we have more to spend on other things. But cheap gas has serious hidden costs—more pollution, more energy consumption, more crashes, and greater traffic congestion. There’s an important lesson here, if we pay attention.

US macroeconomic forecasters are usually very upbeat about any decline in gasoline prices.

Because the US is a big importer of petroleum, a decline in oil prices benefits the US economy. Lower oil prices reduce the nation’s trade deficit and effectively put more income into consumers’ pockets, which helps stimulate the domestic economy. In theory, declining gas prices should have the same stimulative effect as a tax cut. Whether that’s true in practice depends on how consumers respond to changing gas prices. Some of the positive effect of the decline has been muted by consumer disbelief that price reductions are permanent. Earlier this year, surveys by VISA showed that 70% of consumers were still wary that prices could rise.

Low gas prices: worse news than you think. Credit: Minale Tattersfield, Flickr

But cheaper gas does free up consumer budgets to spend more in other industries. Using data on households’ credit and debit card purchases—comparing households that spent a little and a lot of their income on gasoline, and observing how spending patterns changed as gas prices fluctuated—the JP Morgan Chase Institute predicted that the bulk of savings from lower gas prices goes to restaurant meals, groceries, and entertainment.

Locally, the effects can be different. In oil-producing regions, as the saying goes, your mileage may vary. Declines in oil prices have produced a sharp fall off in revenue, drilling activity, and jobs in places like Texas, Oklahoma, North Dakota, and Alaska. Recently, Shell Oil shelved its plans to drill for oil in the Arctic because it couldn’t justify the expenditure based on the current (and expected future) price of oil.

But while the macroeconomic news is mostly good, the microeconomic news is quite different. As we noted earlier, the demand for driving—and therefore gas consumption—is sensitive to the price of gasoline. Declines in gasoline prices encourage increases in driving. And more driving has all kinds of negative consequences that end up imposing costs on all of us: more traffic congestion, more injuries and deaths from crashes, and more pollution.

What’s happening now is the flip-side of the big declines in driving we experienced when gas prices went up. We tend to overlook the silver lining associated with higher gas prices. For example, the reduction in vehicle miles traveled (VMT) that followed the advent of $3 and $4 a gallon gas was far more effective in reducing congestion than any highway expansion program. In large part, that’s because traffic congestion is highly non-linear: a small drop in the number of vehicles on the road can produce a proportionately much larger drop in congestion. According to travel tracking firm Inrix, in 2008 a 3 percent decline in VMT led to a 30 percent decline in traffic congestion. As gas prices fall and driving increases, those gains may disappear.

Much more serious is the toll of deaths and injuries from crashes. Traffic fatalities, which had steadily decreased as driving ebbed, have recently been on the uptick as well. In Oregon, traffic fatalities have jumped to levels not seen in seven years—before the big run up in gas prices in 2008. In the first seven months of 2015, Oregon traffic fatalities were up 44 percent over the first seven months of 2014 (the period immediately prior to the decline in gas prices). Nationally, a 14 percent rise in crash-related fatalities has surprised insurers and pushed up car insurance premiums. A detailed study of gas prices and crashes in Mississippi found that a 10 percent increase in gasoline prices was associated with a 1.5 percent decrease in crashes per capita, after a lag of about 9-10 months.

Lower gasoline prices also mean we’re burning more gas and creating more pollution. Overall US gasoline consumption, which had been trending down for years, bounced back up in 2014 just as gas prices collapsed.

[Chart: US gasoline consumption]

Gasoline sales increased by about 10 million gallons per day over the past year. Since each gallon burned produces about 19.6 pounds of carbon dioxide, that increased driving translates into roughly 35 million additional tons of CO2 emitted into the atmosphere each year. This surge in driving has contributed to a reversal of the steady declines in total CO2 emissions the nation recorded in the years after 2008.
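The arithmetic behind that figure is straightforward to check, using the two numbers given in the text (10 million extra gallons per day and 19.6 pounds of CO2 per gallon) and assuming short tons of 2,000 pounds:

```python
# Back-of-the-envelope check of the CO2 arithmetic in the text.
EXTRA_GALLONS_PER_DAY = 10_000_000  # increase in daily gasoline sales
LBS_CO2_PER_GALLON = 19.6           # CO2 produced by burning one gallon
LBS_PER_TON = 2000                  # short ton

extra_tons_per_year = (
    EXTRA_GALLONS_PER_DAY * LBS_CO2_PER_GALLON / LBS_PER_TON
) * 365

print(f"Additional CO2: {extra_tons_per_year / 1e6:.1f} million tons per year")
# prints "Additional CO2: 35.8 million tons per year"
```

That works out to about 35.8 million tons annually, consistent with the roughly 35 million tons cited above.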

On top of it all, cheaper gas is prompting drivers to buy less fuel-efficient vehicles. Sales of light-duty trucks are up sharply, and average fuel economy of new cars, which had been steadily improving, has fallen noticeably in the past year. According to researchers at the University of Michigan, the average new car today is rated at 25.0 miles per gallon, down from a peak of 25.8 miles per gallon in August 2014. Cheap gas today gets “locked in” to higher fuel consumption over the 15- or 20-year life of these less efficient vehicles.

There’s plenty of downside here, but if we pay attention, there’s also something we can learn: gas price fluctuations represent a terrific natural experiment in the efficacy of using pricing to manage traffic and its negative effects.

Of course gas prices are a fairly crude way of reflecting back to drivers the costs of their behavior. Gas prices don’t reflect the time of day traveled or whether the road is congested, and have far less impact on the behavior of owners of high-efficiency vehicles. But as blunt as the incentives are, they show that discouraging just a small amount of travel at the peak hour can result in big reductions in time lost to congestion and in lives lost to crashes.

The uptick in driving—and all its associated costs—resulting from the decline in fuel prices is powerful evidence of the effectiveness of pricing and demand management strategies in addressing the nation’s transportation problems. Our conventional approach to transportation consists almost entirely of “supply-side” measures: we build more roads, expand transit, and so on. But there’s another way to bring supply and demand into balance: reduce demand.

A sign announcing congestion charges in London. Credit: mariordo59, Flickr

TDM—travel demand management—is the neglected stepchild of US transportation policy. We have a few fragmentary efforts, honored mostly in the breach: HOV (high occupancy vehicle) and HOT (high occupancy toll) lanes on a handful of congested urban freeways. In practice, they’re overwhelmed by cheap gasoline—and by similar policies, like parking subsidies, that encourage more driving and actually make congestion—and pollution and crashes—worse.

Ultimately, there’s an important lesson here: prices matter. We neglect the most powerful and direct way of managing demand—raising the price of driving, particularly on congested roadways and at the peak hour. Our recent experience with $3 and $4 a gallon gas shows that we can reduce the demand for travel in ways that reduce traffic congestion, decrease the number of crashes, and improve the air. Maybe it’s time to make that a conscious aim of transportation policy, rather than a by-product of oscillations in the global oil market.

In the meantime, enjoy your cheap gas: you’ll be paying for it in the form of more clogged roads and more crashes and deaths, less efficient cars and more pollution.

A “helicopter drop” for the asphalt socialists

The House of Representatives has hit on a clever new strategy for funding the bankrupt Highway Trust Fund: raid the Federal Reserve. Their plan calls for transferring nearly $60 billion from the profits earned on the Federal Reserve’s operations—basically fees paid by member banks—to bail out the Highway Trust Fund.

For years, many macroeconomists have urged the Federal Reserve to stimulate the economy by using its power to effectively print money in the form of a “helicopter drop”—simply crediting every American with a certain amount of extra dollars in their bank accounts. The idea has been suggested as a way to jump-start consumer spending in a moribund or deflationary economy by economists of some stature, including Ben Bernanke and Milton Friedman, and was advanced in an article in Foreign Affairs as a way of accelerating the sluggish growth we’re currently experiencing. But while it might make theoretical sense to economists, it has been politically impossible: as The Economist intoned, the idea of a helicopter drop would be anathema to Republicans.

Credit: Joe Shlabotnik, Flickr

But when it comes to a helicopter drop for highways, there’s no such problem. Remarkably, the proposal to tap the Federal Reserve’s funds comes not from radical Keynesians, but from the Republicans in the very conservative House of Representatives. And apparently, the same people who preach personal responsibility in almost every other field of endeavor want to insulate automobile drivers from paying the costs of the roads they drive on. While they may espouse the virtues of the free market in almost everything else, this position makes them “asphalt socialists” when it comes to transportation.

The best estimates are that drivers now pay only a tiny fraction of the direct costs of building and operating roads, not to mention causing huge externalities in the form of crash-related injuries and deaths and pollution. As we’ve noted before, the heaviest road users are the ones who get the biggest subsidies: The Congressional Budget Office estimates that trucks already cost the public as much as $129 billion annually more than they pay in road user fees. And a report from TransitCenter and the Frontier Group recently detailed the $7.3 billion in parking tax subsidies drivers get every year as well.

(Even with these subsidies, however, increasing fuel efficiency and the decline in per capita driving have pushed down revenues for the Highway Trust Fund, and contributed to the current crisis.)

While this latest chapter of dysfunctional public finance and ideological hypocrisy is playing out at the federal level, it’s equally prevalent in the way states and localities treat driving, too. Local governments have parking requirements that drive up the cost and drive down the supply of housing to subsidize car ownership. In Seattle, parking requirements add something on the order of $250 a month to the price of a typical apartment.

The new transportation bill will favor cars in other ways, too. Local highway projects will get an 80 percent federal match, but transit projects will get only 50 percent. Meanwhile, important sources of funds for transit, pedestrian, and bicycle programs, including TIGER grants and the Transportation Alternatives Program, were cut or imperiled.

While advocates of the road system regularly cloak their arguments in the rhetoric of choice and the free market, our transportation system is actually characterized by heavy government intervention on behalf of private vehicles. Massive, taxpayer-supported subsidies effectively bribe people to drive, and insulate them from the financial consequences their choices impose on others.

Drivers want more roads—as long as they don't actually have to pay for them. The fact that there's no stomach for increasing the gas tax—even though gasoline prices have fallen by more than a dollar a gallon in the past year—shows that when put to the test of the marketplace, there's actually little demand for more road capacity.

The irony, of course, is that transportation is clearly one policy area where traditional free market principles would put a serious dent in the problems of traffic congestion, air pollution, and safety. If car users faced anything close to the actual costs of building and operating roads (and mitigating or preventing the injuries and pollution effects), we’d see much less driving, and much less demand for additional capacity.

Truthiness in gentrification reporting

Recently, we’ve received three new pieces of evidence on how gentrification affects the lives of poor people in changing neighborhoods. First, a study from NYU’s Furman Center suggests that residents of public housing in wealthier and gentrifying neighborhoods make more money, suffer from less violence, and have better educational options for their children, despite also facing some challenges. Then another study from the Philadelphia Federal Reserve Bank finds that there has been much less displacement of existing residents from gentrifying neighborhoods than is commonly feared—and that those who do leave aren’t necessarily more likely to go to lower-income neighborhoods. And finally, a Columbia University study on gentrification in London also failed to find evidence of widespread out-migration in neighborhoods with rising average incomes.

Together, these stories suggest that while gentrification can be disruptive, and makes residents anxious about the future, it neither produces measurably higher levels of movement from the affected neighborhoods, nor does it usually make residents economically worse off. If anything, residents of improving neighborhoods see greater wealth (as measured by credit scores) and higher incomes ($3,000 to $4,500 higher for residents of public housing in New York City).


So why is it that when media outlets report on neighborhood change, so many continue to ignore the abundance of evidence that relatively few low-income neighborhoods gentrify, and that when they do there is much less displacement than is commonly believed?

With the growth of research demonstrating the benefits of living in more economically integrated neighborhoods for low-income families, you'd expect to see more news articles about the positive aspects of neighborhood change. This is especially true in light of the widespread reporting of Raj Chetty et al.'s findings about the connection between integration and improved economic mobility for children growing up in poor households.

Unfortunately, most media coverage seems to ignore these consistent research findings. The result is a classic case of what Stephen Colbert famously called “truthiness”—the quality of seeming to be true according to one’s intuition, opinion, or perception without regard to factual evidence. The truthiness here is that we “know” that gentrification is an intrinsically malignant process, and so we deeply discount or simply ignore evidence to the contrary, even as that body of evidence is piling up. The studies have to be fit into our prior understanding of the issue, rather than adapting our understanding to the new facts.

The New York and Philadelphia studies both confirmed earlier research that gentrification is seldom associated with outmigration, and that it is frequently associated with higher incomes and better economic results for the longtime residents of gentrifying neighborhoods. But no reader of the media coverage would ever get that impression from a quick glance at the headlines or even the thrust of the stories’ narratives. Consider these three examples.

The New York Times headlined its story "In Chelsea, a Great Wealth Divide," and began by describing the plight of a retired resident of public housing who had to travel to New Jersey to find bargain shopping opportunities. Not until paragraph 14 did the story acknowledge the positive findings from a New York University study that public housing residents in high-income or gentrifying neighborhoods enjoyed higher incomes, lower crime, better schools and higher test scores. And not until the final paragraph did the story report the resident's firm opinion that despite the disorientation of change and the challenge of shopping, her neighborhood was unambiguously a better place to live post-gentrification.

Chelsea. Credit: Boss Tweed, Flickr

Or take the series on gentrification that Governing ran earlier this year. While the magazine acknowledged that gentrification (as defined by rising rents and educational levels) and displacement of the poor are not the same thing, it proceeded as if the link between the two were strong and well-established. In fact, there were more low-income people living in the neighborhoods that Governing identified as “gentrifying” in 2013 than in 2000.

There’s a similar issue in a more recent Next City story about the Philadelphia Federal Reserve study on gentrification, displacement, and credit scores. Although the piece leads by revealing that “gentrification hasn’t forced out as many residents as one might think,” and that those who do leave gentrifying neighborhoods aren’t necessarily more likely to move to more disadvantaged communities, it quickly pivots, announcing that the “findings didn’t leave much to celebrate.”

While the study in question was far from uniformly sunny, it’s odd that a report concluding that one of the most widely-feared aspects of gentrification is relatively rare would so quickly be dismissed. It’s true that on the whole, the news on housing affordability and economic segregation is bad. But reports like this one at least open the door to the possibility that when low-income neighborhoods begin to see renewed attention from people with incomes in the middle class or above, the effects need not be as exclusionary as we fear—and may even, with smart management, lay the groundwork for the kind of integration and reinvestment that has been a major goal of housing policy for decades.

There’s a man-bites-dog quality to the way we talk about poverty. While the gentrification narrative (having rich neighbors makes life harder for poor people) is common, you seldom read stories about the narrative of concentrated poverty (having mostly poor neighbors makes life harder for the poor), which is both more prevalent and demonstrably more harmful. More strikingly, we often turn a blind eye to more straightforward examples of displacement—such as suburban Marietta, Georgia’s $65 million bond issue to acquire and demolish about 10 percent of all its multi-family housing in a pretty transparent effort to move poor households to other cities.

The Marietta apartments in question, before and after being shuttered by the city.

Implicit in all these narratives is a strong crypto-segregationist impulse: Rich people ought to live with rich people, poor people ought to live with other poor people. Anything that changes this status quo is suspect: If rich people move into poor neighborhoods, we call it gentrification. If poor people move into rich neighborhoods, we call it social engineering. It's difficult to see how this framing ever leads to a world in which there is less economic segregation.

We now have abundant evidence that promoting economic integration positively improves the lives of the poor. But to make progress in reducing concentrated poverty, we need to reframe the conversation and stop demonizing the very changes that are, however slowly and awkwardly, moving us in the right direction.

More doubt cast on food deserts

It’s a plausible and widely-believed hypothesis: Poor people in the United States suffer from measurably worse nutrition because they have such limited access to good food. Confronted with a high concentration of poor diet choices (like fast food, and processed food in convenience stores) and with few markets offering fresh fruit and vegetables, the poor end up eating a less healthy diet. In this view, bad diets are a problem of the urban environment—the lack of good food in poor neighborhoods.

But while there are certainly urban neighborhoods that lack good grocery options, is there any evidence that close physical access to food—as opposed to other factors like income or education—is a big determinant of healthy eating? We've been skeptical of that view for some time.

Credit: Open Grid Scheduler, Flickr

A new study by researchers at the University of Pennsylvania, Princeton and the US Department of Agriculture summarized in the Chicago Policy Review concludes that after controlling for differences in educational attainment and income, variations in physical access to food explain less than ten percent of the variation in consumption of healthy foods. They also find that the opening of new, healthier supermarkets in neighborhoods has very little effect on food consumption patterns of local residents.

This new study confirms earlier research that questioned whether the physical proximity to healthier eating choices is the big driver of our hunger and nutrition problems.

Studies show that there is no apparent relationship between a store's mix of products and its customers' body mass index (BMI) (Lear, Gasevic, and Schuurman, 2013). Limited experimental evidence suggests that improving the supply of fresh foods has only limited impacts on food consumption patterns. Preliminary results of a study of consumers in a Philadelphia neighborhood that got better supermarket access showed no improvement in fruit and vegetable consumption or body mass index, even for those who patronized the new store.

In January, we observed that physical proximity alone is not likely to be a strong explanation of variations in diet. Judged by proximity to grocery stores, nearly all of rural America is a food desert. Nathan Yau at FlowingData uses Google maps data to construct a compelling map of how far it is to the nearest grocery store across the entire nation. The bleakest food deserts are the actual deserts of the American West, in Nevada and Wyoming.

City dwellers, particularly those in the biggest, most dense cities tend to live closest to supermarkets and have the best food access. WalkScore used their data and modeling prowess to develop some clear, objective images of who does (and doesn’t) have a good grocery store nearby. They estimate that 72 percent of New York City residents live within a five-minute walk of a grocery store. At the other end of the spectrum, only about five percent of residents of Indianapolis and Oklahoma City are so close. If you want to walk to the store, this data shows the real food deserts are in the suburbs.

There are other ways of measuring food access and mapping food deserts. The U.S. Department of Agriculture and PolicyMap have both worked to generate their own maps of the nation’s food deserts. They use a combination of physical proximity (how far it is to the nearest grocery store) and measurements of neighborhood income levels.

While it’s clear that income plays a big role in food access, it’s far from clear how to combine income and proximity to define food deserts. The USDA uses an overlay which identifies low-income neighborhoods with limited food access. PolicyMap has a complicated multi-step process that compares how far low-income residents have to travel to stores compared to higher income residents living in similarly dense neighborhoods.

In practice, combining neighborhood income and physical proximity actually muddles the definition of food access. First, and most important, it implicitly concedes that income, not physical distance, is the big factor in nutrition. Both of these methods imply that having wealthy neighbors or living in the countryside means that physical access to food is not a barrier. Second, it is your household's income, not your neighbors' income, that determines whether you can buy food. Third, these methods implicitly treat low-income families differently depending on where they live. For example, PolicyMap excludes middle-income and higher-income neighborhoods from its definition of "limited supermarket access" areas—and therefore doesn't count lower-income families living in these areas as having poor food access.

The fact that both of these systems use a different yardstick for measuring accessibility in rural areas suggests that proximity isn't really the issue. Rural residents are considered by USDA to have adequate food access if they live within ten miles of a grocery store, whereas otherwise identical urban residents are considered to have adequate access only if they live within a mile or half-mile of a store.

If we're concerned about food access, we probably ought to focus our attention on poverty and a lack of income, not grocery store location. The argument here parallels that of Nobel Prize-winning economist Amartya Sen, who pointed out that the cause of starvation and death in famines is seldom the physical lack of sufficient food, but is instead the collapse of the incomes of the poor. Sen's conclusion was that governments should focus on raising incomes if they wanted to stave off hunger, rather than stockpiling or distributing foodstuffs.

It’s tempting to blame poor nutrition and obesity on a lack of convenient access to healthier choices, but the problem is more difficult and complex than that. Poverty and poor education are strong correlates of poor nutrition and obesity.

Of course, we have good reasons to believe that the built environment does play an important role in obesity—but as the Surgeon General's report implies, that may have more to do with how easy it is to walk to all our daily destinations, and not just the distance to the fresh food aisle.

(Portions of this post appeared originally on City Observatory in a January commentary “Where are the food deserts.”)

Beyond gas: The price (of driving) is wrong

Our recent conversation about the future of American driving habits, and the role of the price of gas in changing them, is a good reminder of a broader truth about transportation policy: prices are important, and getting prices right (or wrong) is crucial. And when it comes to driving, prices are frequently wrong.

That’s because driving is extremely costly: It uses vast amounts of valuable urban land and requires the construction and maintenance of public infrastructure. But it also has massive negative spillover effects, from forcing cities to sprawl more than they would otherwise (because of all that land used for parking and driving compared to more space-efficient transportation modes), to pollution and environmental damage, to the public health crisis of deaths and injuries caused by car crashes.

But drivers see very little of this in the price of using their vehicles. A study released earlier this year busted open the myth that drivers “pay their way”—even only counting the direct costs of maintaining their transportation infrastructure—through the gas tax. In fact, those revenues cover less than half the cost of building and maintaining road networks. Below, we’ve republished our writeup of that report (originally entitled “There’s no such thing as a free way”), which remains a hugely important piece of context for any debate over the future of transportation policy. A big takeaway: While gas prices are an important part of the cost of driving, they’re only a part. We ought to use other levers, from taxes to congestion and parking charges, to get the price of driving right.



A new report from Tony Dutzik, Gideon Weissman and Phineas Baxandall confirms, in tremendous detail, a very basic fact of transportation finance that's widely disbelieved or ignored: drivers don't come close to paying the costs of the roads they use. Published jointly by the Frontier Group and U.S. PIRG Education Fund, Who Pays for Roads exposes the "user pays" myth.


The report documents that the amount that road users pay through gas taxes now accounts for less than half of what we spend to maintain and expand the road system. The shortfall is made up from other sources of tax revenue at the state and local level. This subsidization of car users costs the typical household about $1,100 per year – over and above what they pay in gas taxes, tolls and other user fees.

While recent congressional bailouts of the Highway Trust Fund have made the subsidy more apparent, it has actually never been the case that road users paid their own way. Not only that, but the amount of their subsidy has steadily increased in recent years. The share of the costs paid from road user fees has dropped from about 70 percent in the 1960s to less than half today, according to the study.

There are good reasons to believe that the methodology of Who Pays for Roads, if anything, considerably understates the subsidies to private vehicle operation. It doesn't examine the hidden subsidies associated with the free public provision of on-street parking, or the costs imposed by nearly universal off-street parking requirements, which drive up the cost of commercial and residential development. It also ignores the indirect costs that come to auto and non-auto users alike from the increased travel times and travel distances that result from subsidized auto-oriented sprawl. And it doesn't look at how the subsidies to new capacity in some places undermine the viability of older communities (a point explored by Chuck Marohn at length in his Strong Towns initiative).

These facts put the widely agreed proposition that increasing the gas tax is politically impossible in a new light: What it really signals is that car users don't value the road system highly enough to pay for the cost of operating and maintaining it. Road users will make use of roads, especially new ones, but only if their cost of construction is subsidized by others.

The conventional wisdom of road finance is that we have a shortfall of revenue: we “need” more money to pay for maintenance and repair and for new construction. But the huge subsidy to car use has another equally important implication: because user fees are set too low, and because, in essence, we are paying people to drive more, we have excess demand for the road system. If we priced the use of our roads to recover even the cost of maintenance, driving would be noticeably more expensive, and people would have much stronger incentives to drive less, and to use other forms of transportation, like transit and cycling. The fact that user fees are too low not only means that there isn’t enough revenue, but that there is too much demand. One value of user fees would be that they would discourage excessive use of the roads, lessen wear and tear, and in many cases obviate the need for costly new capacity.

And these subsidies to car travel have important spillovers that affect other aspects of the transportation system. There’s a good argument to be made that part of the reason that subsidies to transit are as large as they are is that motorists are being paid not to use the transit system in the form of artificially low prices for road use and (thank you Don Shoup) parking.

Credit: David Gallagher

There’s another layer to this point about roads not paying for themselves: Most of these calculations are done on a highly aggregated basis, and look at the total revenue for the road system, and the total cost of maintaining the road system. What the study doesn’t explore is whether particular elements of the road system pay for themselves or not.

Think about air travel for a moment. Airlines don’t simply look at whether their total revenue from passengers (fares and all those annoying fees) covers the total cost of jets, crews, and fuel (although the stock market pays attention to this). Airlines look at each individual flight and each route, and examine whether the number of travelers and the amount of fares that will be paid cover the cost of providing that service—when not enough passengers use a route, they discontinue air service (as many small market cities know too well). While this calculus is routine and well-accepted in air travel and the private market, it’s unknown for public roads.

The Frontier Group/US PIRG study also significantly understates the economic cost of the transportation system. Their analysis looks only at how much we are actually spending to maintain and expand the current system. This is problematic for two reasons. First, there's abundant evidence that we're not spending enough to keep the system in repair, and there's a growing hidden cost in higher future repair bills from the added deterioration of the system. These hidden costs are accumulating and not reflected in what users pay now. Second, we're doing nothing to recognize the economic value of the existing road system: the replacement cost of the current road system—what it would take to rebuild the existing asset—is likely on the order of tens of trillions of dollars. Current road users get free use of that inherited, paid-for (but depreciating) asset. Again, this is unlike other forms of transportation: just because United Airlines may have long since paid off the purchase price of the 737 you are riding in doesn't mean that they don't charge you for the capital value of using that asset.

The real question for transportation public finance is whether new roads—additional capacity—pay for themselves. Does the traffic using a new bridge or additional lanes of freeway generate enough in road taxes to cover the cost of that capacity? New projects are so expensive—$100 million or more for a mile of urban freeway—that road users, who pay the equivalent of 2-3 cents per mile of travel in gas taxes (depending on the tax rate and vehicle fuel efficiency), never contribute enough money to recoup the costs of the new capacity.
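The arithmetic behind that 2-3 cents figure is simple enough to check. Here's a minimal sketch; the $0.50-per-gallon combined gas tax and 25 mpg fuel economy are our illustrative assumptions, not figures from the report:

```python
# Back-of-the-envelope check of gas tax revenue per vehicle-mile.
# Gas tax paid per mile = tax rate ($/gallon) / fuel economy (miles/gallon).

def gas_tax_per_mile(tax_per_gallon: float, miles_per_gallon: float) -> float:
    """Gas tax paid per vehicle-mile of travel, in dollars."""
    return tax_per_gallon / miles_per_gallon

# Assume a combined federal + state gas tax of $0.50/gallon and a 25 mpg vehicle:
per_mile = gas_tax_per_mile(0.50, 25)  # $0.02, i.e., 2 cents per mile

# Vehicle-miles of travel needed to recoup $100 million of new urban freeway
# at that rate of contribution:
miles_needed = 100_000_000 / per_mile  # 5 billion vehicle-miles

print(f"{per_mile * 100:.1f} cents per mile")
print(f"{miles_needed / 1e9:.0f} billion vehicle-miles to recoup $100 million")
```

At two cents a mile, a single mile of $100 million urban freeway would need to carry five billion vehicle-miles of traffic before gas taxes recouped its construction cost, which illustrates why new capacity rarely pays for itself.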

Credit: Richard Masoner, cyclelicio.us

The surprising evidence from road pricing demonstrations (tolled HOT lanes) is that the revenue gathered from tolling often fails to cover the costs of collecting the tolls and operating the toll collection system: they never come close to paying for the roadway. (To be sure, tolling improves the efficiency of use of the freeway—traffic flows more smoothly, capacity is increased—but the tolls don’t pay for constructing, or even maintaining the pavement).

But again, the highly visible toll collection mechanism, like the very visible gas tax, creates the illusion that user fees are paying the cost of the system.

As the Transit Center demonstrated in its recent report, Subsidizing Congestion, the $7.3 billion federal tax break for commuter parking encourages additional peak-hour car commuting, which has the effect of causing greater congestion. The systematic under-pricing of roads has the same effect, with the result that taxpayers subsidize car use through higher taxes, and also face greater congestion than they would if road users paid their way.

To be sure, these same questions can, and should, be raised about transit, biking and walking projects. And for transit projects, close financial scrutiny is far more common than for roads. A key difference with these other forms of transportation is that they arguably have big net social benefits—lower congestion, less pollution, greater safety—and they support important equity objectives by making transportation available to those who don't own or can't operate a motor vehicle. The problem with hidden subsidies is precisely that they're hidden: if we made them explicit and considered our alternatives, we would likely choose differently and more wisely.

The problem of pricing roads correctly is one that will grow in importance in the years ahead. It's now widely understood that improvements in vehicle fuel efficiency and the advent of electric vehicles are eroding the already inadequate contribution of the gas tax to covering road costs. The business model of companies like Uber and Lyft likewise hinges on paying much less for the use of the road system than it costs to operate. The problem is likely to be even larger if autonomous self-driving vehicles ever become widespread—in larger cities it may be much more economical for them to simply cruise "free" public streets than to stop and have to pay for parking.

As we’ve pointed out before, the root of many of our transportation problems is that the price is wrong. Puncturing the widely held myth that cars pay their own way makes this report required reading for those thinking about transportation finance reform.

Now we are three!

Three years ago—on October 15th, 2014—we launched City Observatory, a data-driven voice on what makes for successful cities. Since then, we've weighed in daily on a whole series of issues set in and around urban spaces. So today, we're taking a few moments to celebrate our birthday, reflect back on the past year, and plot a course forward.

Flickr User: Blint.

Our fair city.

Once upon a time, it was all about "The City Beautiful." But today, our focus should turn, rightly, to "The City Just." Can we build communities that are not merely economically prosperous, but which are also diverse and inclusive, and which foster widely shared opportunity?

We firmly believe that cities are the places of opportunity. Cities, when they work well, reduce the distance between people, and foster all kinds of interactions. Sometimes these interactions lead to friction and conflict, but many, if not most of these interactions are beneficial and serendipitous, with demonstrable social and economic benefits.

While many of the ills of the modern economy are most apparent in cities—the rich and poor live closer to one another in cities than anywhere else—it's also the case that the cure for these ills lies in strengthening the function of cities. Cities are full of contrasts and paradoxes. Even when some measures, like income inequality, signal the local manifestation of a national problem, what's happening on the ground in cities, where people from different strata of society are living and working and playing closer to one another than they are in less dense and more economically segregated suburbs, is a cause for optimism.

Riven as our nation currently is by all manner of social, economic and political divides, cities are the place where we can invest in the civic commons—the kinds of public and quasi-public spaces that bring many different kinds of people together and foster the kind of "bridging" social capital that can knit our country back together.

For these reasons, we're fundamentally optimistic about cities, and we'll be exploring these themes in the coming months. We've long recognized, for example, that racial and economic segregation are serious detriments to cities realizing their potential. We'll flip this perspective on its head by looking at the urban neighborhoods around the country, and in virtually every metro area, that exhibit high levels of economic and racial/ethnic integration. They're models of what we can do to build more inclusive places.

In a parallel vein, we'll explore the strong common interests that cities, their businesses and their citizens have in pursuing mutually beneficial efforts to strengthen the civic realm. It's a tremendously encouraging time to be working in cities. We hope you share that view and will continue to follow our work at City Observatory in the coming year.


Happy birthday to us!

A year ago today, October 15th, 2014, we launched City Observatory, a data-driven voice on what makes for successful cities.

The Plaza in Kansas City. Credit: chocolatsombre, Flickr

The past year has been a whirlwind: We've released four major reports—Young and Restless, Lost in Place, Surging City Center Job Growth, and Less in Common—each of which uses data to examine some of the most important trends in American cities today: both the remarkable growth and transformation of central cities, as well as the persistent concentration of poverty and decline of "civic commons" spaces that pose some of our greatest challenges.

With each passing day, the evidence that cities are leading our nation’s economy becomes more compelling: City home values are rising much faster than in their surrounding suburbs, an indicator we call the “Dow of Cities.” City center job growth is outpacing that in the suburbs for the first time in decades, and the expansion of large metro economies is driving the national economic expansion.

Young people—especially young people with college degrees—are increasingly moving back to city centers. And the growing pool of talent in cities is drawing jobs and economic growth back to those centers as well.

We believe all of these things should encourage you, even if you don’t consider yourself an urbanist. The return to urban living has the potential to help create a healthier planet, as people drive less and use less energy to maintain relatively smaller urban homes. It can help make people healthier, too, by allowing them to walk more, and reducing their exposure to one of the greatest causes of death and serious injury in the country: car crashes. Finally, overwhelming evidence shows that bringing people with social capital and middle-class incomes back into urban neighborhoods that had lacked those things for years creates more social equity and economic mobility.

But the shift to cities comes with challenges as well as opportunities. The urban renaissance has been so robust that the demand for city living is growing at a much faster pace than the supply of great urban homes and neighborhoods. As a result, we have a “shortage of cities.” Many of our greatest urban challenges—declining housing affordability, traumatic neighborhood change, conflict over growth—trace their roots to this worsening shortage.

We’ve also pushed back on myths and miscalculations. We’ve challenged the still-too-common narrative that cities are dirty, dangerous and congested, and shown that we’ve made progress in making cities safer, cleaner and more convenient. We helped lead the charge against the incredibly misleading “Urban Mobility Report,” which uses bogus figures about traffic congestion to convince the public and policymakers that our cities need even more highways, instead of investing in more affordable, sustainable, and efficient ways to get people where they need to go. And we’re working to encourage greater honesty and reflection in how we talk about change in cities—acknowledging that the cities we see today are not the product of some process of immaculate conception, but have evolved as a result of market forces and policy, including policies with the inherent contradiction that housing should be both affordable and a reliable source of wealth creation.

In our second year, we’ll turn even more of our attention to analyzing what we’re calling our “shortage of cities.” That’s the gap between the new increased demand for urban neighborhoods and the amount of housing actually available in them, which is held down both by zoning that prevents the construction of more housing in urban areas, and regulations that require newer neighborhoods to be ultra-low-density and car-dependent. We believe that understanding the causes, consequences, and potential cures for our national shortage of cities is key to many of our most important urban challenges, from making housing affordable to maximizing economic opportunity and creating a more sustainable transportation system.

And while we bill City Observatory as a “virtual” think tank, early next year, we’ll be engaging policymakers and thinkers in person, with a live City Observatory salon event to be held in Washington DC. Stay tuned for details.

We also want to thank everyone on our mailing list or Twitter who took the time to fill out our birthday survey, and we’re extremely gratified with the response we’ve received there and elsewhere. Though, as numbers people, we know these are hardly scientific stats, over three-quarters of survey respondents said they had spread the word about City Observatory by recommending our work to friends or colleagues, and many of you had kind words about our commentaries, research, and weekly email roundup, The Week Observed—with one calling it “absolutely my favorite thing I’m subscribing to right now.” (And for those of you who aren’t subscribed, you can do so at the top of the page!) We also heard your concerns about some issues with our website, from the way images load to the color scheme, and want to let you know that we’re working on making improvements.

Finally, we want to acknowledge the generous support of Knight Foundation in starting and sustaining City Observatory. Without Knight’s support and vision, none of the last year would have been possible.

Our birthday wish: Cities for everyone

Two years and two days ago–on October 15th, 2014–we launched City Observatory, a data-driven voice on what makes for successful cities. Since then, we’ve weighed in daily on a whole series of policy issues set in and around urban spaces. So today, we’re taking a few moments to celebrate our birthday, reflect back on the past year, and plot a course forward.

City Observatory turns 2. Many happy returns! (Flickr: Daniel Nelson)

It’s a tremendously encouraging time to be working in cities. After decades of disinvestment and the out-migration of people and jobs, cities, particularly city centers, are on the comeback. With each passing day, the evidence that cities are leading our nation’s economy becomes more compelling: City home values are rising much faster than in their surrounding suburbs, an indicator we call the “Dow of Cities.” City center job growth is outpacing that in the suburbs for the first time in decades, and the expansion of large metro economies is driving the national economic expansion. More investment is flowing into downtown areas. As we’ve chronicled, more people, particularly well-educated young adults, are increasingly choosing to live in close-in urban neighborhoods.

The return to urban living has the potential to help create a healthier planet, as people drive less and use less energy to maintain relatively smaller urban homes. It can help make people healthier, too, by allowing them to walk more, and reducing their exposure to one of the greatest causes of death and serious injury in the country: car crashes. Finally, overwhelming evidence shows that bringing people with social capital and middle-class incomes back into urban neighborhoods that had lacked those things for years creates more social equity and economic mobility.

While we’re fundamentally optimistic about cities, and we see them as essential to tackling many of the nation’s most pressing problems, we also recognize that cities are the epicenter of some serious challenges.

The immediate effect of this recent surge of interest, investment and migration is a shortage of cities. More people now want to live in great urban neighborhoods than ever before. And the demand for urban living has grown far more rapidly than the supply of great urban places. Unfortunately, too many policies have made it difficult to build additional housing in the most desirable neighborhoods. This mismatch is accentuated by the temporal imbalance between fast-changing demand and slow-changing supply, and has manifested itself in the form of higher rents and real estate values in urban centers. While higher rents are an important indicator of a turnaround–and the market signal that will help alleviate this shortage–higher rents pose major problems for many urban residents, particularly low income households.

In our view, the solution to this affordability problem will come from increasing housing supply and from building more great urban neighborhoods. This is a matter of supply and demand: as long as demand exceeds supply, prices (and rents) will go up. But it’s also a matter of arithmetic: if more people want to live in a neighborhood than there are houses to hold them, then some people who would like to live there will end up living somewhere else. And given the penurious nature of our housing support for the poor, low income households will be those disproportionately disadvantaged.

The movement back to the cities is an unparalleled opportunity to tackle one of the most persistent and destructive problems confronting our nation: the growth of economic segregation. We know that as bad as it is to be poor, it’s worse to have to live in a neighborhood where a large fraction of your neighbors are also poor: concentrated poverty amplifies all of the negative effects of poverty, and it results in permanently limited lifetime opportunities. Our work, and that of our colleagues at the Brookings Institution, shows that despite all of the focus on an urban renaissance, neighborhoods of concentrated poverty are actually growing, and they are still disproportionately in urban centers.

Too often, unfortunately, discussions of cities get framed as a zero-sum game:  If we make the city or neighborhood better for some group or person, we’re somehow making it worse for everyone else. Many resist any change, for fear that they will end up worse off.

The challenge in our view is to look for ways to turn the revitalization of our cities into a win-win experience for all. How do we leverage the growth and investment in cities in a way that promotes and expands their cultural, economic, and racial/ethnic diversity? How do we build cities for everyone? There are promising efforts in many cities, as exemplified by the growing YIMBY (“Yes in my back yard”) movement, which now has friends in very high places. In Seattle, the city’s HALA (Housing Affordability and Livability Agenda) has inspired some provocative conversations that are reshaping the contours of the city’s political scene. Environmental, social justice, and housing affordability advocates in Portland have started a “Portland for Everyone” organization to advocate for more supply.

These efforts are hopeful signs that we can take the energy and momentum that is building in favor of urban living, and use that force to help propel efforts to build more diverse and inclusive communities. In our third year, City Observatory will focus on the challenge of building cities for everyone. We hope you’ll join us.

 

Mystery in the Bookstore

Signs of a rebound in independent bookstores, but not in the statistics

Lately, there’s been a spate of stories pointing to a minor renaissance of the independent American bookstore. After decades of glum news and closings, there are more and more instances of independent bookstores opening or expanding. The American Booksellers Association points with pride to a seven-year string of increases in its dues-paying members. Articles in the New York Times (“Indie Bookstores are back with a passion”) and US News (“Indie Bookstores remain on a roll”) recount firsthand accounts of successful firms.

The independent bookstore is an American icon. It’s hard to picture a city–the classic Main Street–without a local bookstore. Bookstores are one of the categories of customer-facing retail and service businesses we’ve used at City Observatory to create our “storefront index,” which measures urban walkability. Founding father Ben Franklin was famously a self-taught intellectual who ran a book shop in Philadelphia. The indie bookseller figures prominently in pop culture, from Meg Ryan’s Shop Around the Corner bookstore owner in You’ve Got Mail, to a host of other films and television. In The Big Sleep, Humphrey Bogart’s Philip Marlowe takes refuge in Dorothy Malone’s Acme Bookshop while staking out a suspect.

More recently, Portlandia has featured ardently feminist booksellers Candace and Toni, the proprietors of Women and Women First Bookstore.

 

For a long time, what with the growth of on-line retailer Amazon (which built its business model selling books at a discount) and the advent of big-box retailing, it seemed like the small independent bookseller was a doomed anachronism. But in the past few years, there’s been a surprising rebound in local bookselling. It turns out that many readers still prefer the printed page, and gladly patronize a knowledgeable and attentive local business. And the surviving and thriving local booksellers have changed their business models to emphasize personal service, community, and on-site experiences that larger and virtual competitors have a hard time matching. But while some stores are flourishing, others are floundering: in Memphis, the Booksellers at Laurelwood, one of the city’s three remaining bookstores, is closing this month. In Detroit, the city’s oldest–Big Bookstore in Midtown–is closing after eight decades. In St. Louis, it’s the half-century-old Webster Groves bookshop that’s closing.

One final sign that a shift back to bricks and mortar bookselling is in the cards: even Amazon is opening its own physical stores.

Government data tell a different story

With such upbeat stories in the popular press, we decided to take a quick look at Census data on the number and geography of bookstores, to see if we could corroborate and quantify these trends. We looked to two key data series: the annual County Business Patterns series, tabulated by the Census Bureau using payroll tax records, and the once-every-five-years Economic Census, which surveys the nation’s businesses about sales, wages, and business operations. We focus on the government definition of bookstores, NAICS 451211. This statistical category includes all kinds of bookstores, from the large national chains to small, independent businesses, as well as college bookstores and those that are adjuncts to museums.

According to the Economic Census, the number of bookstores in the US has fallen from 12,363 in 1997 to 7,176 in 2012–a loss of more than 5,000 establishments.  That pattern is also reflected more recently in the data reported as part of the County Business Patterns series. These data show the number of bookstores declining by about 30 percent since 2008, from 9,700 to about 6,900.
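As a quick arithmetic sanity check (a sketch using only the figures cited above, not additional data), the two data series work out like this:

```python
# Quick check of the bookstore decline figures cited in the text:
# Economic Census: 12,363 stores (1997) -> 7,176 (2012)
# County Business Patterns: 9,700 (2008) -> about 6,900

census_loss = 12_363 - 7_176           # establishments lost, 1997-2012
cbp_decline = (9_700 - 6_900) / 9_700  # fractional decline since 2008

print(f"Stores lost, 1997-2012: {census_loss:,}")    # 5,187
print(f"CBP decline since 2008: {cbp_decline:.0%}")  # 29%
```

Both numbers match the text: a loss of more than 5,000 establishments since 1997, and roughly a 30 percent decline since 2008.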

 

So here’s our mystery: While there’s been a visible resurgence in bookstores in some locations, the bigger pattern of change remains downward.

We’re not sure what the answer is to this mystery. There are some of the usual suspects to consider. First, it’s likely that many of the bookstores that are closing are the big national chains, like Borders, Barnes and Noble, and Waldenbooks. In markets where these larger national stores are closing, that may be creating more market space for independent operators to thrive and even occasionally expand. A second factor is that much of the decline in the number of establishments may be among very small bookstores in small towns and rural areas. These are the kinds of places where the threat from Amazon (lower prices, wider selection, and convenience) would be most acute.


The danger of taking policy lessons from extreme cases

Two recent press features have suggested that one Utah city has worked out the recipe for equitable development. The cover story from Newsweek’s October 2 issue offers “Lessons from America’s most egalitarian zip code.” It proposes that Ogden, Utah is a model for how the US can address income inequality.

The Newsweek cover

 

The article is at least the second in this vein. Writing in the Los Angeles Times in July, Don Lee said that quiet Ogden offers a surprising glimpse of income equality.

The premise of both of these stories is that the Ogden-Clearfield metropolitan statistical area has the lowest reported Gini coefficient (.3949) of any U.S. metropolitan area, an indication that income inequality is lower there than in any other metro.
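For readers unfamiliar with the measure, the Gini coefficient ranges from 0 (everyone has the same income) to 1 (one person has everything). Here is a minimal sketch of how it’s computed, using made-up illustrative incomes rather than Ogden data:

```python
# Minimal Gini coefficient sketch, via the mean-absolute-difference
# formulation. Incomes below are invented for illustration only.

def gini(incomes):
    """Gini = half the mean absolute difference between all pairs, divided by the mean."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_abs_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_abs_diff / (2 * n * n * mean)

# Perfect equality yields 0; concentration pushes the value toward 1.
print(gini([50_000, 50_000, 50_000, 50_000]))            # 0.0
print(round(gini([10_000, 20_000, 40_000, 80_000]), 3))  # 0.383
```

Note that Ogden’s reported .3949 sits close to the second example: meaningful dispersion, but low by the standards of U.S. metros.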

The fact that one metropolitan area should have a measured rate of inequality that is lower than other metropolitan areas is hardly a surprise: somewhere, obviously, has to rank first, and somewhere else has to rank last. From a policy perspective, however, the question has to be: Is there anything we can learn from Ogden that we can apply to other metropolitan areas, or the nation as a whole?

Ogden has several unique characteristics. First, it’s very much in the economic orbit of the considerably larger Salt Lake City metropolitan area. While it is now classified as a separate metropolitan area, until 2000 Ogden and Salt Lake City were combined for federal statistical purposes. In some respects, Ogden resembles a large, somewhat distant suburb of Salt Lake. As in many metropolitan areas, very high income and very low income households are somewhat more likely to be found in the urban center. Just as many of the suburbs of metropolitan areas have measurably less inequality than large central cities, it shouldn’t be any surprise that Ogden has lower measured inequality than Salt Lake City.

Ogden. Credit: Christopher Koppes, Flickr

 

As we pointed out in our post, high marks for equality may actually be a symptom of exclusiveness rather than equality: cities that exclude poor people frequently have higher measured equality than do more inclusive cities.

It’s also worth considering the structure of the Ogden economy. Two of the largest employers are arms of the federal government: Hill Air Force Base and the Internal Revenue Service. While neither pays particularly high wages, federal jobs are a disproportionately large source of jobs and income in Ogden. According to the Bureau of Economic Analysis, federal payrolls account for more than 15 percent of all non-farm income in Ogden: more than $2 billion annually in a local economy of about $13.7 billion. And at more than 12 percent of the local economy, the share of Ogden’s metro economy made up of federal civilian payrolls is the third-highest in the country among large and mid-sized metropolitan areas. Federal paychecks are a bigger share of the local economy only in Huntsville, Alabama, and metropolitan Washington, D.C. In the typical large metropolitan area, federal civilian payrolls make up about 2.2 percent of the local economy; in Ogden, they are more than five times larger.

One thing is clear about the Utah experience: Cities in Utah (and many other Mountain West states) have higher rates of measured intergenerational economic mobility than the U.S. as a whole. Relying on statistics from the landmark Equality of Opportunity study conducted by Raj Chetty and his colleagues, both articles report that kids growing up in poor households have measurably better chances of moving up in the income distribution as adults than the typical American.

The Newsweek article attributes the city’s low rates of inequality to its economic and community development efforts. Apparently the city has revived its downtown and added to employment in the past decade or so. But there’s little evidence that higher measured equality in Ogden is the product of any recent policies or programs. The Chetty study measured the intergenerational mobility of kids who grew up in Ogden in the 1980s and ‘90s—at a time when Newsweek says the economy and community were struggling. And even in 1999, Ogden—then combined with Salt Lake City for federal statistical purposes—had one of the nation’s lowest levels of measured inequality. And as we’ve noted, the same pattern holds for many other communities in the Inter-Mountain West.

The statistics in the Chetty study show that the strong correlates of high intergenerational mobility across all communities are intact families, strong schools, limited sprawl, and low levels of racial and ethnic and income segregation.  These factors tend to be more deep-seated and slow changing than economic development programs.

So what are the real lessons Ogden offers for those looking to reverse the growing tide of inequality in the US?

First, it really helps to have a strong source of good, or at least middle-wage, jobs. While those have been increasingly hard to find in the private sector, if you’re lucky enough to have a substantial government employment presence—like a big military base or extensive administrative offices—you’ll probably have a bit more income equality. Whether that’s a recipe for other communities or the nation isn’t clear: quintupling the number of federal jobs in every metropolitan area doesn’t seem to be on anyone’s political agenda. Being one of the fortunate few cities with a large concentration of government employment isn’t a scalable solution to inequality.

Second, while some smaller metros on the periphery of a big city do well on the inequality statistics, many of the problems of inequality end up being outsourced to cities. Suburbs, especially exclusionary ones, tend to ban the kinds of affordable housing construction that enable poor people to live in a community. Cities offer a bigger base of job opportunities, more affordable transportation (workable transit systems) and often have better social support networks. Cities also attract high income households. Big city centers have more inequality, but it’s because they facilitate diversity and inclusion—not because they generate inequities.

While it may show up differently in different locales, the roots of the nation’s inequality problem are national and not merely the sum of local inequalities. The diminished value of the minimum wage, the falling clout of unions to raise worker wages, the growth of global competition, a tax code that favors the accumulation of wealth and the rise of superstar returns in a range of industries have all caused inequality to increase nationally. These challenges are largely beyond the power of cities to address.

Ironically, as we’ve pointed out at City Observatory, those places with a low measured level of equality may be the ones that are the most diverse and inclusive, providing both attractive places for talented higher income workers to live and invest, and also providing access to housing and job opportunities for those at the bottom of the income scale.

So the story of Ogden is reminiscent of H. I. McDonough’s dream at the end of “Raising Arizona”:

But still I hadn’t dreamt nothin’ about me ‘n Ed, until the end. And this was cloudier, ’cause it was years, years away. But I saw an old couple bein’ visited by their children, and all their grandchildren too. The old couple wasn’t screwed up, and neither were their kids or their grandkids… And I don’t know. You tell me. This whole dream, was it wishful thinkin’? Was I just fleeing reality like I know I’m liable to do? But me and Ed, we can be good, too. And it seemed real. It seemed like us, and it seemed like, well, our home. If not Arizona, then a land not too far away, where all parents are strong and wise and capable, and all children are happy and beloved. I don’t know. Maybe it was Utah.

The end of peak driving?

A little over a year ago, a gallon of regular gasoline cost $3.70. Since then, that price has plummeted, and remains more than a dollar cheaper than it was through most of 2014.

Over the same period, there’s been a small but noticeable uptick in driving in the US. After nearly a decade of steady declines in vehicle miles traveled per person, car use has suddenly pushed upwards. Average miles traveled per person, which were 25.7 a year ago, have jumped up to 26.4 in July—the first sustained increase in driving in more than a decade.

Some in the highway community have heralded the growth in driving in recent months as a sign that we need to invest much more in road construction.

The increase isn’t very big, however. In historic terms, Americans are now driving at about the same rate as they were in 2000. It would take nearly a decade of growth at the current rate of expansion just to get back to the level of driving of 2004. But there’s little reason to believe anything like that is in the cards.

During the long period of driving declines, many were tempted to dismiss gas prices as a factor in shaping driving behavior, arguing instead that the decline was solely due to demographics, changing tastes, improved communication technology, and Americans falling out of love with cars. All of these trends were in play, but it’s clear now, as it should have been then, that price matters.

Nevertheless, highway advocates have predictably seized on the uptick in driving to claim that we need to throw a lot more money at road widening projects. Does the upsurge in driving really signal an end to the millennial abandonment of motoring? Is there a renewed “love affair” with the automobile?

We would argue no: The cultural explanations of the driving trends have to be read in the context of prices. Millennials coming of age in the era of $4.00 a gallon gas behave very differently than baby boomers who paid 29 cents a gallon.

For some people, however, this is not obvious: they look at the trends in gas prices and vehicle miles traveled (VMT) and conclude that there’s little correlation. The very sharp Doug Short at Advisor Perspectives argues, for example, that “the correlation is fairly weak over the entire timeframe.” Similarly, analysts at the State Smart Transportation Institute argued that “there is no clear evidence that fuel prices have distinctly influenced driver behavior during the past decade.”

But “price elasticity”—the way that people change their consumption behavior in response to how much something costs—still matters. The fact that driving has risen as gas prices fall is far from a coincidence. In fact, evidence shows that Americans react to higher gas prices by driving less—and to lower gas prices by driving more.

Credit: Advisor Perspectives

 

It’s important to remember that there’s no reason to expect the reaction to changing gas prices to be instantaneous. An axiom of economics is that elasticities are larger in the long run than the short run. People make so many decisions (where to live and work, whether to own a car, etc) that can’t be changed immediately. Likewise, perceptions and expectations about future price changes make a big difference: Few people anticipated the advent of $4.00 per gallon gasoline in the early 2000s. The technical challenge with estimating price elasticities is that analysts have a hard time sorting out short term and long term effects (prices change rapidly, behavior much more slowly), and many of the changes in prices are either too small to be noticed by consumers or short-lived. As a result, the reaction to price changes plays out slowly, over time—the decline in VMT per capita after 2008 continued right through 2013, even though prices were not increasing above their previous peaks.

But while it may be attenuated, and play out gradually over time, there is still a behavioral response to changing prices. Already, there’s evidence that the decline in gas prices has produced a change in the kind of vehicles we’re buying. The sales-weighted average fuel economy of new vehicles, which had been steadily rising and reached a high of 25.8 miles per gallon in the summer of 2014, has fallen by 2.3 percent to 25.2 miles per gallon today, according to researchers at the University of Michigan. Because of the long life of vehicles, lower fuel economy gets “baked into the cake”: lower fuel prices today produce greater fuel consumption and more emissions for years to come.

And while there may be a break in the longer-term decline in VMT per capita, there’s little reason to believe that we’ll see a return to the long term growth path that prevailed in the 1980s and 1990s—growth that many transportation departments’ forecasts still cling to.

It’s certainly true that demographics (an aging population), changes in tastes (the growing preference for urban living, biking and walking), and technology (the ability to use telecommunications to reduce trip taking) will all continue to contribute to a diminished demand for driving. But prices still matter. What we’re embarking on, courtesy of a highly volatile and unpredictable global market for petroleum, is a very interesting experiment to find out exactly how much—in economics terms, to discover the price elasticity of demand for driving. We’ll be watching to see the results.
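To make the elasticity idea concrete, here is a back-of-the-envelope arc (midpoint) calculation. The VMT figures are those cited above (25.7 rising to 26.4 miles per person); the price drop from $3.70 to $2.70 per gallon is an assumed stand-in for “more than a dollar cheaper,” so treat the result as a sketch, not an estimate:

```python
# Back-of-the-envelope arc (midpoint) price elasticity of driving demand.
# Quantities: VMT per person cited in the post (25.7 -> 26.4).
# Prices: $3.70 -> $2.70/gallon is an ASSUMED illustration of the cited drop.

def arc_elasticity(q0, q1, p0, p1):
    """Midpoint-formula elasticity: % change in quantity / % change in price."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2)
    pct_p = (p1 - p0) / ((p0 + p1) / 2)
    return pct_q / pct_p

e = arc_elasticity(q0=25.7, q1=26.4, p0=3.70, p1=2.70)
print(round(e, 3))  # -0.086: small in magnitude, i.e. quite inelastic
```

A magnitude well below 1 is consistent with the post’s point: the short-run response to gas prices is real but muted, and the long-run response is expected to be larger.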

One of the biggest myths about cities: Crime is rising

There’s a lot happening in American cities these days, which means that there’s a lot to read about! Even for those of us at City Observatory, sometimes good, important articles slip through the cracks. In recognition of that, periodically, we’ll dig back into our archives to republish a piece that we think deserves another go-around.

This time, it’s a post from last October about the myth of rising urban crime rates. Since then, there’s been even more talk about this, fueled in part by fear of Black Lives Matter-related protests.

This persistent alarmist meme about “rising urban crime” got a big boost two weeks ago with an article in the New York Times pointing to a number of examples of higher murder rates in some US cities compared to a year ago. While the Times analysis was thoroughly debunked by FiveThirtyEight (absolute must-read article here), the more widely read Times piece no doubt gave new life to this discredited old saw about cities—which is why we thought it was timely to recall our earlier analysis of crime rate trends. (Also see this piece from CityLab on the pernicious effects of high-crime myths.)


Credit: Danni Naeil, Flickr

The Myth: Crime in cities is on the rise

The Reality: Cities are getting safer

For decades, the common perception about cities has been that they are dangerous, dirty, and crowded. A look at the facts tells a different story: our cities are cleaner, safer, quicker, and healthier than ever. Today I’ll take a look at how urban neighborhoods have become safer despite public attitudes to the contrary.

On the whole, violent crime is declining in the United States. The overall murder rate has dropped by more than half since 1991, and property crimes like burglary have been on the decline. As a result, American concern about crime has ebbed: in 1994 a majority of Americans told Gallup crime was the nation’s most pressing issue; only 1 percent gave that answer in 2011. Even though we individually regard crime as less of a problem, people still tend to think of big cities as somehow dangerous. Consider the New York paradox: According to YouGov, Americans who have never been to the Big Apple are evenly divided on whether it’s safe or not, while those who have traveled there regard it as safe by a two-to-one margin.

This drop in crime has been greatest in the nation’s largest cities. Violent crimes of all kinds declined 29 percent in the central cities of the nation’s 100 largest metropolitan areas — a significantly steeper decline than in the nation’s suburbs (down 7 percent). Property crimes in central cities fell even more — down 46 percent, compared to a 31 percent decrease in suburbs.

Survey evidence demonstrates that the drop in crime is not widely understood by the general public. A September 2014 survey by YouGov found that most Americans believe crime rates have increased over the past two decades. Their data show that 50 percent of Americans think crime rates are up; 22 percent think they are down, 15 percent think crime rates are unchanged, and 13 percent don’t know.

Hollywood continues to peddle the storyline of cities of the future as savage, crime-ridden dystopias (see for example this year’s remake of Robocop). Meanwhile, the good news about safer cities goes almost unnoticed. A 2011 study by the Brookings Institution pointing to significant declines in 80 of the nation’s 100 largest cities has gone practically unnoticed, garnering just seven citations in other work (Google Scholar, August 19, 2014).

While crime has dropped, it’s not the only factor making cities better places to live. Wednesday, I’ll conclude the series by showing how traffic jams aren’t actually as bad as they used to be.

Big city metros are driving the national economy

The nation’s largest city-centered metro areas are powering national economic growth.

2017 will mark a decade since the peak of the last economic cycle (which, according to the National Bureau of Economic Research, was December 2007). Since then, we’ve experienced the Great Recession (the biggest economic downturn in eight decades), and a long and arduous recovery.

We’ve always maintained that the word “recovery” is a misleading term, because it seems to imply that we get back exactly the same economy, industries and jobs that we lost to the recession. In fact, that’s not true:  the jobs created since the bottom of the recession in 2009 are in different firms, in different industries, and importantly, in different places than the jobs we lost to the recession.

It’s illuminating to look at where the jobs created in this recovery are actually located. There’s no question that large metros are important to the economy. The 51 metros with a million or more population are home to 168 million Americans, and account for about 65 percent of the nation’s gross domestic product. But the big question is, how important are they to national growth? It turns out that in this particular recovery, big city metros—those with a population of 1 million or more—have dramatically outperformed the rest of the nation’s economy.

Today we’re revisiting a data series that has been compiled and tracked by our friend Josh Lehner, an economist in the Oregon Office of Economic Analysis. Josh uses Bureau of Labor Statistics employment data to track employment growth by size of metropolitan area. His analysis divides the nation into four groups: the 51 metropolitan areas of 1 million population or more, two groups of mid-sized and smaller metropolitan areas, and nonmetropolitan America.

The latest data show that, as a group, large metropolitan areas have dramatically out-performed the rest of the country in the last economic cycle (dating from the peak of the economy in December 2007). In the aggregate, metros with 1 million or more population have fully recovered the employment lost in the Great Recession, and grown to 6.6 percent above their pre-recession peak. As of September 2016, smaller and mid-sized areas collectively were about 2-3 percent above their 2007 peak level of employment. And non-metropolitan America is still 2.3 percent below where it was in 2007.

Here’s another way to think about this same data. As of September 2016, total U.S. employment was up about 5.3 million jobs from the previous peak recorded in December 2007 (133.03 million jobs in 2007; 138.37 million jobs in 2016). The 51 largest metropolitan areas recorded an increase of 4.66 million jobs between 2007 and 2016. Collectively, these big city metros accounted for about 87 percent of the net job growth nationally.
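The 87 percent figure follows directly from the totals quoted above. As a quick sanity check, this sketch just reproduces the arithmetic (the inputs are the figures cited in the text, not freshly pulled from BLS):

```python
# Back-of-the-envelope check of the big-metro share of net job growth.
# All figures are the ones cited above, in millions of jobs.
us_jobs_2007 = 133.03    # total U.S. employment at the Dec. 2007 peak
us_jobs_2016 = 138.37    # total U.S. employment, Sept. 2016
big_metro_gain = 4.66    # net gain in the 51 metros of 1 million+, 2007-2016

net_us_gain = us_jobs_2016 - us_jobs_2007   # about 5.34 million jobs
share = big_metro_gain / net_us_gain
print(f"Big-metro share of net job growth: {share:.0%}")  # about 87%
```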

This time is different

What’s new and different here is that big city metros haven’t been the ones driving US economic growth in previous cycles. It’s usually been the case that small and mid-sized metros, as a group, have grown faster than big city metros. Using Josh’s data, we prepared a second chart, showing the growth in employment for large, middle-sized and small metros and non-metros, for the period 2002 through 2007. During that growth period, small and mid-sized metros decidedly out-performed larger metros in job growth. The smallest metros grew their employment by 7.5 percent, mid-sized metros grew about 6.5 percent, and large metros grew about 5.5 percent.

Jobs_MetSize_0207

As we’ve pointed out in our report, Surging City Center Job Growth, the last few years have witnessed a historic reversal in the patterns of job growth within large metropolitan areas. After decades of steady decentralization, employment growth in urban centers substantially outpaced that in more peripheral locations from 2007 through 2011. We think there’s strong evidence that this process is driven by employers looking to tap the growing labor market in city centers, which itself is a product of the movement of well-educated young adults back to cities (as we documented in Young and Restless).

All this evidence points to one thing: City centers are the big drivers of national economic growth. Big metros are significantly out-performing smaller metros, which in turn are out-performing rural areas. Within large metros, the decades-long pattern of job decentralization has reversed—and jobs are growing faster in city centers than in the metropolitan periphery. In this economic expansion, the nation’s economic growth is tied to the performance of its large metros and their robust city centers.

Many thanks to Josh Lehner for compiling this data and sharing it. Be sure to visit the Office of Economic Analysis blog for more detail, including a national map showing patterns of county-level job growth since 2007.

Cities’ role in growing our nation’s economy

Cities have always played a vital role in the national economy, but in the past few years their importance has increased.

Last month, we highlighted the “Dow of Cities”—how the rising value of housing in the most central portions of the nation’s metropolitan areas signals the market’s verdict about the growing demand for urban living.

Another indicator comes courtesy of our friend Josh Lehner, an economist in the Oregon Office of Economic Analysis. Josh uses Bureau of Labor Statistics employment data to track employment growth by size of metropolitan area. His analysis divides the nation into four groups: the 51 metropolitan areas of 1 million population or more, two groups of mid-sized and smaller metropolitan areas, and nonmetropolitan America.

There’s no question that large metros are important to the economy. These 51 metros with a million or more population are home to 168 million Americans, and account for about 65 percent of the nation’s gross domestic product. But the big question is, how important are they to national growth?

The latest data show that, as a group, large metropolitan areas have dramatically out-performed the rest of the country in the last economic cycle (dating from the peak of the economy in December 2007). In the aggregate, metros with 1 million or more population have fully recovered the employment lost in the Great Recession, and grown to 3 percent above their pre-recession peak. As of March 2015, smaller and mid-sized areas had barely made it back to the peak. And non-metropolitan America is still 2.1 percent below where it was in 2007.

Jobs_MetSize_0715

What makes this finding even more striking if one cares about cities is that it represents a dramatic shift from the pattern of the last economic expansion. Using Josh’s data, we prepared a second chart, showing the growth in employment for large, middle-sized and small metros and non-metros, for the period 2002 through 2007. During that growth period, small and mid-sized metros decidedly out-performed larger metros in job growth.  The smallest metros grew their employment by 7.5 percent, mid-sized metros grew about 6.5 percent, and large metros grew about 5.5 percent.

Jobs_MetSize_0207

As we’ve pointed out in our report, Surging City Center Job Growth, the last few years have witnessed a historic reversal in the patterns of job growth within large metropolitan areas. After decades of steady decentralization, employment growth in urban centers substantially outpaced that in more peripheral locations from 2007 through 2011. We think there’s strong evidence that this process is driven by employers looking to tap the growing labor market in city centers, which itself is a product of the movement of well-educated young adults back to cities (as we documented in Young and Restless).

All this evidence points to one thing: City centers are the big drivers of national economic growth. Big metros are significantly out-performing smaller metros, which in turn are out-performing rural areas. Within large metros, the decades-long pattern of job decentralization has reversed—and jobs are growing faster in city centers than in the metropolitan periphery. In this economic expansion, the nation’s economic growth is tied to the performance of its large metros and their robust city centers.

Many thanks to Josh Lehner for compiling this data and sharing it. Be sure to visit the Office of Economic Analysis blog for more detail, including a national map showing patterns of county-level job growth since 2007.

The Week Observed, April 28, 2017

What City Observatory did this week

1. The latest from the Louisville travel behavior experiment. Just before the New Year, Louisville started charging tolls to cross its newly-widened I-65 bridge. When it did, traffic across the bridge fell by almost half. Part of the reason was that motorists could take a very short detour and cross the Ohio River on an un-tolled Second Street Bridge. But that bridge was recently closed temporarily as part of the city’s annual Thunder over Louisville celebration. So, did the absence of a free crossing boost traffic on the tolled bridge? Traffic cam photos suggest that the toll bridge is still lightly used at peak hours. But photos are far from the best evidence, which would be actual traffic data. Unfortunately, Riverlink, the I-65 toll collectors, haven’t released any traffic data since the first month of tolling, so we don’t know how things are going. In related news, however, Kentucky did announce that it had signed a $300,000 contract for a consultant to determine whether toll revenues would be sufficient to repay the bonds issued to finance bridge construction, which may be a sign that there’s some trouble in this river city.

Rush hour on Louisville’s I-65 bridges (April 20, 2017)

2. The 0.1 Percent Solution: Inclusionary Zoning’s fatal scale problem. A new study from noted urban scholars Lance Freeman and Jenny Schuetz looks at the effectiveness of local policies to promote housing affordability. They take a close look at more than 50 inclusionary zoning programs in five different states and conclude that they’ve had a minuscule effect on housing supply, contributing less than 0.1 percent to the building stock, with an average of fewer than ten affordable units per year. Freeman and Schuetz argue that to address affordability, local governments need to upzone widely to allow greater density and cut back on regulations to lower development costs and encourage more supply.

3. Word of the day: Hagiometry. You no doubt already know about “hagiography”–works of art or literature that glorify a particular person (usually the person who commissioned the work). While old-school hagiography may no longer be in fashion, in our data-driven world, a new form of fawning portraiture has emerged, with all the flourishes and exaggerations that have been traditional in the genre. The key difference is that it’s flattery with numbers, rather than paint or words. We take a close look at some of the common flaws that underlie the economic impact studies that gin up impressive-sounding numbers to help sell everything from ballparks to big box retail stores.

4. What does it mean to be a Smart City? The term “smart city” is all the rage, and generally focuses heavily on technology, and how we might better exert control over systems from streetlights to water systems to traffic by instrumenting everything in sight. But in our view this highly centralized, engineering view of cities smacks of the kind of heavy-handed approach exemplified by the master builder Robert Moses. In contrast, the real smarts, or intelligence, of cities stems from their ability to bring people together, in a fashion better described by the work of Jane Jacobs. We shouldn’t be so enamored of technology that we forget that cities succeed largely because they enable their residents to easily connect to one another in the urban environment.

Must read

1. How affordable housing lotteries work in practice. Inclusionary zoning programs require developers of new apartment buildings to rent out a portion of their units at below market rates. To qualify, households have to have income in some target range (generally less than 80 percent or less than 60 percent of area median income), with rents pegged to their income level. Because rents are much cheaper than market, there are many more applicants for such housing than available units, so cities conduct lotteries to allocate the units. City Living describes how the process works in practice. They emphasize that there’s much more demand among the lowest income groups than moderate income groups (there are more than 1,000 applicants for each unit in two of the three lowest income categories, and as few as 15 applicants for each unit in the “highest” but still moderate income category). That’s hardly surprising: the higher your income, the less attractive the affordable unit is compared to market prices.

2. The high cost of starchitecture–London’s Garden Bridge edition. For the past several years, a new signature bridge has been in the offing for central London. Called the Garden Bridge, this pedestrian-only structure would provide a landscaped path over the Thames. The design looks a bit like a freeway overpass with an overgrown hedge. The Happy Pontist, a bridge aficionado and critic writing in the UK, chronicles the megaproject bloat that has engulfed the project. Originally proposed at a cost of about 60 million pounds in 2013, the project’s pricetag is now likely to exceed 200 million pounds–if the project goes forward. One of its leading proponents, former London Mayor Boris Johnson, is out of office, and his successor is likely to be willing to write off sunk costs of 46 million pounds. And as the Happy Pontist notes, the so-called “professionals” in architecture, transport and engineering who might have foreseen the likely results of this folly did almost nothing to warn their clients or the public of the risks involved.

3. Why it’s important to tell the truth about economic change. Writing at Vox, Matt Yglesias challenges the notion that public discussions ought not to question the economic worldview of Trump voters. Denying the reality of fundamental economic change and offering up nostalgia as an economic strategy isn’t going to help us move forward. Neither the Appalachian coal industry, nor shuttered manufacturing plants like Lewiston Maine’s paper mill, will be revived. Economies don’t stand still, and they don’t go backwards. Ironically, the historical process of economic development in the US has always been about our willingness to embrace and thrive on change, building new industries, making large scale investments, migrating to opportunity, creating new homes and communities. Nostalgia isn’t an economic strategy, and indulging it is an obstacle to actually solving our economic problems.

New ideas

The economic disadvantages of being a suburban state. A new study published by the Philadelphia branch of the Federal Reserve Bank offers up an in-depth analysis of employment change in New Jersey over the past decade. By many traditional measures, including its high level of educational attainment, the Garden State should be well-positioned to compete in a knowledge-driven economy. But for the past 15 years, its economic performance has trailed that of the nation. The Fed study, “Is Urban Cool Cooling New Jersey’s Economy?,” notes that New Jersey’s economy (which still has employment 2.2 percent below its pre-recession peak) has dramatically underperformed the urban centers of Philadelphia and New York. Essentially all of New Jersey is a suburb of one or the other of these two metropolitan areas, so the culprit seems to be the relative decline in the attractiveness of suburban locations. The Fed study includes a detailed shift share analysis which shows that New Jersey recorded much less job growth in urban-centered industries like finance and professional services that drove national growth in the latest recovery. This is more evidence that national job growth is shifting away from suburbs and toward cities.


How smart are “smart” cities, really?

Being a smart city should mean something different than a technology fetish

The growing appreciation of the importance of cities, especially by leaders in business and science, is welcome and long overdue. Many have embraced the “smart city” banner. But what does that mean?

People tend to see cities through the lens of their own profession. CEOs of IT firms say that cities are “a system of systems” and visualize the city as a flow of information to be optimized. Physicists have modeled cities and observed relationships between city scale and activity, treating city residents as atoms and describing cities as conforming to natural “laws.”

In part, these metaphors reflect reality. Information flows and physical systems are an important part of what makes cities work. But cities are also something more—and their residents need to be viewed as something other than mindless atoms to be optimized.

The prescriptions that flow from partial and incomplete metaphors for understanding cities can lead us in the wrong direction if we’re not careful. The painful lessons of seven decades of highway building in U.S. cities are a case in point. Led by people like New York’s master builder, Robert Moses, we took an engineering view of cities, one in which we needed to optimize our transportation infrastructure to facilitate the flow of automobiles. The massive investments in freeways (and the rewriting of laws and culture on the use of the right of way) made cities safer for long-distance, high-speed travel—but at the same time produced massive sprawl, decentralization, and longer journeys, and eviscerated many previously robust city neighborhoods.

Robert Moses, the great optimizer. Credit: Metropolitan Transportation Authority, Flickr


If we’re really to understand and appreciate cities, especially smart cities, our focus has to be elsewhere: it has to be on people. Cities are about people, and particularly about bringing people together. We are a social species, and cities serve to create the physical venues for interaction that generate innovation, art, culture, and economic activity.

So what does it mean for a city to be smart?

Fundamentally, smart cities have highly skilled, well-educated residents. We know that this matters decisively for city success. We can explain fully 60 percent of the variation in economic performance across large U.S. metropolitan areas by knowing what fraction of the adult population has attained a four-year college degree. There’s strong evidence that the positive effects of greater education are social—it spills over to all residents, regardless of their individual education.

Educational attainment is a powerful proxy measure of city economic success because having a smart population and workforce is essential to generating the new ideas that cause people and businesses to prosper.

So building a smart city isn’t really about using technology to optimize the efficiency of the city’s physical sub-systems. There’s no evidence that the relative efficiency of water delivery, power supply, or transportation across cities has anywhere near as strong an effect on their success over time as does education.

It is in this process of creating new ideas that cities excel. They are R&D facilities and incubators, not just of new businesses, but of art, music, culture, fashion trends, and all manner of social activity. In the process Jane Jacobs so compellingly described, by juxtaposing diverse people in close proximity, cities produce the serendipitous interactions that generate what she called “new work.”

Downtown Miami. Credit: Phillip Pessar, Flickr


We don’t have an exacting recipe for how this happens. But we do know some of the elements that are essential. They include density, diversity, design, discovery and democracy.

Density. The concentration of people in a particular place. Cities, as Ed Glaeser puts it, are the absence of space between people. The less space, the more people, and the greater the opportunities for interaction. Cities are not formless blobs; what happens in the center—the nucleus—matters, because it is the place that provides key elements of identity and structure and connection for the remainder of the metropolitan area it anchors.

Diversity. We have abundant evidence that a more diverse population—by age, race, national origin, political outlook, and other qualities—helps provide a fertile ground for combining and recombining ideas in novel ways.

Design. We are becoming increasingly aware that how we populate and arrange the physical character of cities matters greatly. The arrangement of buildings, public plazas, streetscapes, and neighborhoods matters profoundly for whether people embrace urban spaces or abandon them. We have a growing appreciation for places that provide interesting variety and are oriented to walking and “hanging out.”

Discovery. Cities are not machines; citizens are not atoms. The city is an evolving organism that is at once host to, and is constantly being reinvented by, its citizen inhabitants. Part of the attraction of cities is their ability to inspire, incubate, and adapt to change. Cities that work well stimulate the creativity of their inhabitants, and also present them with new opportunities to learn, discover, and improve.

Democracy. The “mayor as CEO” is a tantalizing analogy for both mayors and CEOs: CEOs are used to wielding unitary, executive authority over their organizations, and many mayors wish they could do the same. But cities are ultimately very decentralized, small “d” democratic entities. Decision-making is highly devolved, and the opportunities for top-down implementation are typically limited. Citizens have voice (through voting) and the opportunity to “exit” by moving, appropriately limiting unilateral edicts from City Hall. Cities also give rise to new ideas, and when they work well, city political systems are permeable to the changing needs and values of their citizens—this is when many important changes bubble up.

All of these attributes of cities are susceptible, at least in part, to analysis as “information flows” or “systems of systems.” They may be augmented and improved by better or more widespread information technology. But it would be a mistake to assume that any of them are capable of being fully captured in these terms, no matter how tempting or familiar the analogy.

Ultimately, when we talk about smart cities, we should keep firmly in mind that they are fundamentally about people; they are about smart people, and creating the opportunity for people to interact. If we continuously validate our plans against this key observation, we can do much to make cities smarter, and help them address important national and global challenges.

What does it mean to be a “smart city”?

In light of Smart Cities Week, we’re updating this post from March about the role of smart technology, people, and successful cities.


The growing appreciation of the importance of cities, especially by leaders in business and science, is welcome and long overdue. Many have embraced the “smart city” banner. But what does that mean?

People tend to see cities through the lens of their own profession. CEOs of IT firms say that cities are “a system of systems” and visualize the city as a flow of information to be optimized. Physicists have modeled cities and observed relationships between city scale and activity, treating city residents as atoms and describing cities as conforming to natural “laws.”

In part, these metaphors reflect reality. Information flows and physical systems are an important part of what makes cities work. But cities are also something more—and their residents need to be viewed as something other than mindless atoms to be optimized.

The prescriptions that flow from partial and incomplete metaphors for understanding cities can lead us in the wrong direction if we’re not careful. The painful lessons of seven decades of highway building in U.S. cities are a case in point. Led by people like New York’s master builder, Robert Moses, we took an engineering view of cities, one in which we needed to optimize our transportation infrastructure to facilitate the flow of automobiles. The massive investments in freeways (and the rewriting of laws and culture on the use of the right of way) made cities safer for long-distance, high-speed travel—but at the same time produced massive sprawl, decentralization, and longer journeys, and eviscerated many previously robust city neighborhoods.

Robert Moses, the great optimizer. Credit: Metropolitan Transportation Authority, Flickr


If we’re really to understand and appreciate cities, especially smart cities, our focus has to be elsewhere: it has to be on people. Cities are about people, and particularly about bringing people together. We are a social species, and cities serve to create the physical venues for interaction that generate innovation, art, culture, and economic activity.

So what does it mean for a city to be smart?

Fundamentally, smart cities have highly skilled, well-educated residents. We know that this matters decisively for city success. We can explain fully 60 percent of the variation in economic performance across large U.S. metropolitan areas by knowing what fraction of the adult population has attained a four-year college degree. There’s strong evidence that the positive effects of greater education are social—it spills over to all residents, regardless of their individual education.

Educational attainment is a powerful proxy measure of city economic success because having a smart population and workforce is essential to generating the new ideas that cause people and businesses to prosper.

So building a smart city isn’t really about using technology to optimize the efficiency of the city’s physical sub-systems. There’s no evidence that the relative efficiency of water delivery, power supply, or transportation across cities has anywhere near as strong an effect on their success over time as does education.

It is in this process of creating new ideas that cities excel. They are R&D facilities and incubators, not just of new businesses, but of art, music, culture, fashion trends, and all manner of social activity. In the process Jane Jacobs so compellingly described, by juxtaposing diverse people in close proximity, cities produce the serendipitous interactions that generate what she called “new work.”

Downtown Miami. Credit: Phillip Pessar, Flickr


We don’t have an exacting recipe for how this happens. But we do know some of the elements that are essential. They include density, diversity, design, discovery and democracy.

Density. The concentration of people in a particular place. Cities, as Ed Glaeser puts it, are the absence of space between people. The less space, the more people, and the greater the opportunities for interaction. Cities are not formless blobs; what happens in the center—the nucleus—matters, because it is the place that provides key elements of identity and structure and connection for the remainder of the metropolitan area it anchors.

Diversity. We have abundant evidence that a more diverse population—by age, race, national origin, political outlook, and other qualities—helps provide a fertile ground for combining and recombining ideas in novel ways.

Design. We are becoming increasingly aware that how we populate and arrange the physical character of cities matters greatly. The arrangement of buildings, public plazas, streetscapes, and neighborhoods matters profoundly for whether people embrace urban spaces or abandon them. We have a growing appreciation for places that provide interesting variety and are oriented to walking and “hanging out.”

Discovery. Cities are not machines; citizens are not atoms. The city is an evolving organism that is at once host to, and is constantly being reinvented by, its citizen inhabitants. Part of the attraction of cities is their ability to inspire, incubate, and adapt to change. Cities that work well stimulate the creativity of their inhabitants, and also present them with new opportunities to learn, discover, and improve.

Democracy. The “mayor as CEO” is a tantalizing analogy for both mayors and CEOs: CEOs are used to wielding unitary, executive authority over their organizations, and many mayors wish they could do the same. But cities are ultimately very decentralized, small “d” democratic entities. Decision-making is highly devolved, and the opportunities for top-down implementation are typically limited. Citizens have voice (through voting) and the opportunity to “exit” by moving, appropriately limiting unilateral edicts from City Hall. Cities also give rise to new ideas, and when they work well, city political systems are permeable to the changing needs and values of their citizens—this is when many important changes bubble up.

All of these attributes of cities are susceptible, at least in part, to analysis as “information flows” or “systems of systems.” They may be augmented and improved by better or more widespread information technology. But it would be a mistake to assume that any of them are capable of being fully captured in these terms, no matter how tempting or familiar the analogy.

Ultimately, when we talk about smart cities, we should keep firmly in mind that they are fundamentally about people; they are about smart people, and creating the opportunity for people to interact. If we continuously validate our plans against this key observation, we can do much to make cities smarter, and help them address important national and global challenges.

Caught in the prisoner’s dilemma of local-only planning

The fundamental conundrum underlying many of our key urban problems is the conflict between broadly shared regional interests and impacts on local communities. We may all share an interest in housing affordability, and therefore in expanding the housing supply in our region, but it becomes a different matter entirely when that means more housing in our own neighborhood.

That’s exactly the issue plaguing the implementation of New York’s new mandatory inclusionary zoning law. The program–a cornerstone of Mayor Bill de Blasio’s housing affordability policy–requires developers who receive up-zoning permission to set aside a portion of units in newly constructed buildings for low and moderate income families. While the city-wide policy easily gained a majority of the City Council, the individual up-zoning approvals that would activate the “mandatory” portions of the law have run into difficulties. In the first two projects advanced under the law–one in Manhattan and one in Queens–strong neighborhood opposition prompted the local city council member to withdraw support for the needed zone change, effectively torpedoing each project.

This is a classic example of the prisoner’s dilemma. Everyone would be better off if each neighborhood allowed some new development (the added supply citywide would dampen rent increases), but individually the neighbors of new projects would rather that new buildings go up elsewhere. As long as development approval is highly localized, this will be a persistent problem.

One of the most broadly popular ideas about urban planning today is that decisions should be made locally. After all, who knows better what a neighborhood needs than the people who live there? And what better way to squash any would-be Robert Moses than by empowering the people whose homes he would claim for some new megaproject?

The move to greater local democracy since the disastrously inhumane urban renewal period of the 20th century was undoubtedly necessary. But it has also created new problems that some officials, activists, and residents have been slow to acknowledge.

To begin with, it’s worth noting that “local” is a concept without a solid definition. When people object to policies coming out of Washington, DC, they often say that power needs to be brought back to the states. When they disagree with state policy, they’ll often discover a strong attachment to their region—say, downstate Illinois rather than Chicago. When they dislike something happening in their region, they reinforce the importance of their own particular municipality. And when their city government makes a decision they don’t like, they’ll appeal to the power of their neighborhood—which itself may expand or shrink its boundaries based on the issue.

Image courtesy of the American Independent Business Alliance.


In other words, there’s no given geographic level at which people magically all agree with each other about what the “right” thing is. Instead, “local” politics tends to be about strategically choosing an arena in which there’s a strong enough coalition in favor of whatever policy you want.

Which, to be clear, is a totally legitimate way to go about democracy. But while the popular image of local power might be a diverse and representative group of families planning a new school or beating back an invasive and unwanted project from City Hall, localism also has a darker side. Much of the move to local planning since World War Two has taken the form of suburban municipalities created largely as a way to segregate their residents from “undesirable” people—generally blacks and the low-income. In fact, places with more fragmented governments (that is, places that are governed more “locally”) also tend to be more segregated. Partly as a consequence, they also have worse outcomes for people at the losing end of that segregation—so that, for example, health disparities between whites and blacks are significantly worse in metropolitan areas with more local governments per capita.

One of the greatest victories of this kind of exclusionary localism came in 1974, when a federal judge ruled that white parents who had moved beyond the Detroit city limits were exempt from any mandatory school desegregation programs, because to bus children across school district lines would be an affront to local control of education. But anyone who listened to the white parents in Nikole Hannah-Jones’ This American Life documentary excoriate their suburban St. Louis government for allowing non-local—and, of course, black—students to attend “their” schools just two years ago knows that this justification for local control is alive and well.

But localism doesn’t just give outlet to some of America’s less savory impulses. It can also pit municipalities or neighborhoods against each other, encouraging people who might be okay with, say, a moderate amount of affordable housing to advocate for having none at all.

To understand why, it’s useful to think of “the prisoner’s dilemma,” a kind of thought experiment that explains why people behave in ways that don’t lead to their own ideal outcome. (We should note that we didn’t come up with the idea to liken neighborhood development to a prisoner’s dilemma. Here’s an excellent interview with David Schleicher, a professor at Yale Law, making a very similar point.)

Imagine two middle-class neighborhoods, A and B. Each can choose a bundle of policies that are either “inclusionary” (allowing multi-family and subsidized housing, providing services attractive to the low-income, and so on) or “exclusionary” (allowing only large, expensive single family homes, eliminating social services, and so on). The residents of both places would like to be inclusive, as long as their neighborhood will remain predominantly middle class.

If both neighborhoods choose “inclusionary” policies, they’ll each become mixed-income, but mostly middle-class, communities. But if only one chooses “inclusionary” policies and the other chooses “exclusionary,” the “inclusionary” community will become disproportionately low-income, because it’s the only attractive, welcoming place for people who need affordable housing and social services.

In this situation, residents of both neighborhoods will be extremely wary of being the first to choose “inclusionary” policies, because unless the other neighborhood also chooses “inclusionary” policies very soon afterwards, their community will become disproportionately low-income. Any doubt about the other neighborhood’s commitment to choosing “inclusionary” policies—doubt that is more than justified, given the current state of American urban policy—will push them to choose “exclusionary” ones for their own community.
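The logic of that dilemma can be sketched as a tiny payoff matrix. The numbers below are purely illustrative assumptions (not drawn from any study); what matters is their ordering, which captures the incentives described above:

```python
# Illustrative prisoner's-dilemma payoffs for two neighborhoods, A and B.
# Each entry maps (A's choice, B's choice) -> (payoff to A, payoff to B).
# Higher is better for that neighborhood; the values are hypothetical.
payoffs = {
    ("inclusionary", "inclusionary"): (3, 3),  # both mixed-income, mostly middle class
    ("inclusionary", "exclusionary"): (0, 4),  # A alone absorbs the low-income demand
    ("exclusionary", "inclusionary"): (4, 0),  # mirror image
    ("exclusionary", "exclusionary"): (1, 1),  # region-wide exclusion: poor collective outcome
}

def best_response(my_options, other_choice):
    """Return A's payoff-maximizing choice, taking B's choice as fixed."""
    return max(my_options, key=lambda mine: payoffs[(mine, other_choice)][0])

options = ["inclusionary", "exclusionary"]
# Whatever B does, A's best response is "exclusionary" -- a dominant strategy --
# even though mutual inclusion (3, 3) beats mutual exclusion (1, 1).
print(best_response(options, "inclusionary"))   # exclusionary
print(best_response(options, "exclusionary"))   # exclusionary
```

This is exactly why the argument turns to higher levels of government: no move available to either neighborhood alone escapes the dominant-strategy trap.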

The fundamental problem here is that local communities don’t have the power to get other communities to commit to doing the “right thing”: only higher levels of government can do that. Importantly, though, creating this commitment doesn’t have to be any less democratic than smaller-scale decision-making: elected officials in a city hall or state capitol can work with their constituents to craft a policy that ensures all communities reflect the best values of their residents. This might look like a statewide law that requires every municipality to have a certain percentage of its housing designated as “affordable,” or a citywide plan that allows people from different neighborhoods to commit together to certain distributions of accessible housing and social services. Or, taking an international view, it might look like a state- or provincial-level government setting limits on the kind of zoning that municipalities can choose, restricting their ability to outlaw working-class and low-income housing types.

Unfortunately, there are precious few examples of higher units of government imposing rules that break this prisoner’s dilemma for cities. Massachusetts’ “anti-snob zoning” law, which allows affordable housing developers to ignore local exclusionary zoning in places where more than 90 percent of existing housing is considered unaffordable, is one. Another is in Oregon, where communities are required to zone land for a range of housing types, including apartments.

Research suggests that these kinds of laws can significantly improve the housing affordability landscape. Given the growing economic divides between our communities, we would do well to give them a second look.

Editor’s note: We’ve updated this post from a version we first published in September 2015.

Great neighborhoods don’t have to be illegal—they’re not elsewhere

Ah, Paris! Perhaps one of the world’s most beautiful cities, a capital of European culture, and a prosperous economic hub. What’s its secret? Zoning, of course!

RM-6? Maybe DX-2? Credit: Peter McConnochie, Flickr

Just kidding. Actually, Paris went for the better part of a millennium (until 1967) with nothing that an American might recognize as district-based zoning, a prospect that would surely horrify the planners who have been one-upping not just Paris, but pre-zoning American neighborhoods from brownstone Brooklyn to Midtown Memphis, with postwar sprawl for the last several generations.

Midtown Memphis: Despite the shops on an otherwise residential street, people seem to like it. Credit: Google Street View

Today, we live in cities and neighborhoods where zoning has made the kinds of places we used to take for granted, and that still make up some of our most prized communities, illegal to build. These laws are so pervasive that even relatively small changes are considered radical, experimental, and potentially dangerous methods of social engineering. Partly, that’s because we ignore the lessons in our own cities about what works in neighborhoods built before modern zoning. But it also may have something to do with the fact that our conversations about urban planning tend to be hyper-local: we might try to take cues from other neighborhoods, or another city in our region—or maybe another American city several states away. But even many big-time urban wonks would be hard pressed to tell you much about how cities in other countries do it.

Which is just one reason why Zoned in the USA, a book published by Virginia Tech’s Sonia Hirt last year, is so valuable. An entire section is dedicated to describing the zoning and planning processes of countries from Sweden to Australia, and contrasting them to the American system. It’s absolutely worth reading the entire book—including for her arguments about the origins of American zoning, which seek to modify the property-value-based ideas put forward by writers like William Fischel in The Homevoter Hypothesis—but in the meantime, here are some big conclusions about how foreign zoning is different from our own.

1. The single-family-only district is not king

Hirt’s major claim is that what really sets American zoning apart is its orientation, explicit or implicit, to putting the single-family residential zone at the top of the hierarchy of urban land uses. Not only are single-family zones listed first in many zoning codes, but they make up significant pluralities, or even majorities, of total land area in most American cities. Interestingly, Hirt points out that this wasn’t necessarily true when zoning was first introduced: New York’s famous first zoning law didn’t even have a single-family zone at all.

A zoning map of Marietta, GA. Yellow areas are zoned for single-family homes only; brown areas are set aside for apartments. The large brown area in the southeast corner contains apartments to be razed. Pink is commercial. Source: Marietta, GA website

Cities in other countries remain closer to our origins, then. In Great Britain, for example, local development plans generally set limits on residential density by the number of housing units per given land area, rather than dictate the form that those housing units must take. In the Paris area, too, land use intensity is determined by something like FAR, or the ratio of total floor area to lot area, rather than prescribing apartments or detached homes. The German zoning system, which in some ways appears very similar to ours, does not even have a single-family category.
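For readers unfamiliar with the term, FAR is simple arithmetic. This sketch (with hypothetical numbers) shows how a FAR limit caps development intensity without prescribing what form the buildings take:

```python
def far(total_floor_area, lot_area):
    """Floor area ratio: total built floor area divided by lot area."""
    return total_floor_area / lot_area

# A hypothetical 10,000 sq ft lot with a FAR limit of 2.0 allows
# 20,000 sq ft of floor area. That could be a two-story building
# covering the whole lot, a four-story building on half of it,
# apartments, or a large detached house -- the rule is agnostic.
lot_area = 10_000
far_limit = 2.0
max_floor_area = far_limit * lot_area
print(max_floor_area)  # 20000.0
```

Contrast this with an American single-family zone, where the same lot could legally hold only one detached house regardless of how little floor area it uses.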

As Hirt points out, Americans appear to be unique in believing that there is something so special about single-family homes that they must be protected from all other kinds of buildings and uses—even other homes, if those homes happen to share a wall. The recent revolt in Seattle over a proposal to soften that city’s single-family districts, in other words, would not be possible anywhere else in the world, not least because very few people live in single-family districts to begin with.

2. “Residential” doesn’t mean what we think it means

Not only do other countries lack single-family zones, or at least use them much, much more rarely, but the separation of residential from other uses is much softer. Whereas placing a shop in the middle of a residential block would be counter to the very purpose of an American “residential” zone, it’s considered an essential part of neighborhood planning elsewhere.

The “residential protection sector” zone in Paris, for example, allows a certain percentage of buildings to be used for non-residential uses, as long as they don’t create “nuisance.” In Germany, the “small-scale residential” and “exclusively residential” zones—the lowest-density residential designations possible—actually allow a range of retail and other uses as of right. The general principle is that businesses that serve everyday neighborhood needs, like small bank branches, corner stores, or medical offices, ought to be allowed within walking distance of people’s homes. Bigger uses that might attract more regional traffic are separated out. And in Sweden, although amendments (or what we might call “variances”) are needed to place a shop or other commercial use in a residential district, such amendments are regularly granted.

Is this really an “incompatible use” in a residential neighborhood? Credit: Google Street View

3. State and national law is as important as local

In the U.S., states generally have something called a zoning enabling act, which gives municipalities the legal authority to regulate land use through zoning. But these enabling acts tend to be extremely loose in terms of dictating what that land use should look like: the principle here is that planning ought to be done as locally as possible.

In many other rich countries, however, national and state or provincial governments play a much stronger role in urban planning. In Great Britain, the 1947 Town and Country Planning Act sets land use guidelines nationally; local communities are required to create their own land use plans, but using a pre-determined set of categories. Unlike American cities, then, British ones cannot invent their own ultra-exclusionary zones. Even next door, Canada gives its provinces significant power to set the framework of local zoning, resulting in very different systems from one province to the next.

That may offend American ideals about local autonomy. But it also helps prevent some of the problems with hyper-local decision-making, including the tendency for neighbors to use zoning to exclude the kinds of people they don’t want: most often the low-income, or people of color. How to combat that tendency has become one of the central challenges for American housing policy, as evidenced by the recent high-profile Supreme Court case on the Fair Housing Act and the Obama Administration’s new, more aggressive rules around affirmatively furthering fair housing.

4. There are other ways to plan

So zoning laws abroad tend to be more flexible and permissive than their American counterparts, which means that many of the things we’ve outlawed here—like a mix of single-family homes and apartments in the same neighborhood, creating homes that are sized and priced appropriately for a diverse range of people; or having local businesses integrated within walking distance of where people live—are still legal in the rest of the industrialized world.

But Hirt points out that this doesn’t mean that other countries necessarily have a more laissez-faire approach to urban planning, letting private actors run wild over the visions that residents have for their communities and cities. Rather, other countries have a broader range of tools at their disposal to shape how their cities work and feel. In the Netherlands, for example, the government purchases land at the edge of a metropolitan area and then uses that ownership to manage the growth of the region. That kind of public ownership, along with more aggressive tax and subsidy policies, allows many other wealthy countries to direct their cities’ growth and meet planning goals without the kind of limiting prescriptions on land use that prevent our postwar neighborhoods from recreating what works about pre-zoning neighborhoods, while fixing what doesn’t.

Of course, the U.S. can’t, and shouldn’t, just ape other countries’ systems; there are very different urban histories and values here that ought to be respected. But it’s worth pointing out that one of the biggest ones—the ideal of homeownership—is actually less uniquely American than we often suppose. In fact, many of the European countries mentioned in this article have higher levels of homeownership than the U.S., even though their land use systems don’t prioritize the homogeneous single-family neighborhood to nearly the extent that we do, or at all.

Source: Eye on Housing

As we recognize the ways that our neighborhoods and cities could be improved, then—by encouraging less segregation by income and race; reducing travel times by bringing destinations closer together; encouraging more walking and physical exercise; and creating more public places where people can gather and build community—it seems only reasonable to look at what’s different in the places that may do some of those things better.

The top twenty reasons to ignore TTI’s latest Urban Mobility Report

It’s hard to find a more biased and misleading example of pseudo-science than the Texas Transportation Institute’s Urban Mobility Report. Here are our top 20 reasons why you should ignore the latest version.

Since the 1980s, Texas A&M University’s Texas Transportation Institute has periodically trotted out various versions of its “Urban Mobility Report,” which purports to estimate the dollar cost of urban traffic congestion. The report has been repeatedly debunked (by us, and others), but after a four-year hiatus, it’s back, and it’s making the same false claims based on the same discredited premises. There’s so much wrong with this report that it’s hard to fit all of the criticisms into a single blog post.

If you’re walking, cycling or taking the bus, you’re “inappropriate data” and are not included in the “Urban Mobility Report.”

But to quickly summarize the lengthy case against the UMR, we’ve pulled together a sort of Cliffs Notes version of the problems with the UMR, skimmable by people who just want a quick overview of the issues, without all the background. So without any further ado, the top twenty reasons to be skeptical of the “Urban Mobility Report”:

  1. TTI claims that the UMR proves traffic is worse than at any time since 1982—but major methodological changes over that period make these comparisons invalid.
  2. The UMR, in the words of Victoria Transportation Policy Institute executive director Todd Litman, “ignores basic research principles,” including failing to allow other experts to review its data and findings—the sort of “peer review” that is foundational to social science investigations. The 2019 iteration of the report simply ignores the multiple, widely published critiques of its methodology. 
  3. As the Eno Foundation’s Rob Puentes notes, the UMR measures mobility, not access. That is, cities get high scores if you can drive really fast—not if you can actually reach jobs or amenities in less time.
  4. In many cases, the UMR counts an inability to drive faster than the speed limit as “congestion.” The 2019 report counts any reduction of freeway speeds from 65 miles per hour as “time lost to congestion” even if the posted speed limit is 50 or 55 miles per hour. It’s indefensible that this report treats the inability to break the law as a “cost,” especially when speeding is proven to be a major contributor to road fatalities.
  5. The report fails to acknowledge that there’s simply no feasible way to build enough road capacity that all vehicles could travel at free flow speeds (often in excess of speed limits) every hour of the day. The cost of building such capacity would be vastly greater than the supposed “cost” of congestion, meaning that in reality the net cost of congestion is zero (or less).
  6. The UMR completely ignores the effects of induced demand: building more road capacity stimulates more sprawling development patterns, and more driving, which actually aggravate congestion and lead to more pollution and higher costs for the public and commuters. The phenomenon of induced demand is now so well-established that it’s referred to as “the fundamental law of road congestion.”
  7. Practical experience with capacity expansion has shown that wider roads don’t reduce congestion. In TTI’s own backyard, the multiple widenings of Houston’s massive 23-lane Katy Freeway have only produced more traffic and even slower travel times. There’s simply no evidence that more capacity reduces congestion.
  8. It’s remarkable that, as traffic engineers, the authors of the UMR ignore the fact that freeways carry the most traffic at speeds of about 45 miles per hour. (At higher speeds, car spacing increases and roads can carry fewer cars.) Building enough capacity to allow 55 or 65 mile per hour speeds is vastly more expensive and less efficient than designing to accommodate a speed that maximizes throughput. This is advocacy for waste.
  9. The UMR has no solution for congestion because it lacks a credible explanation as to why congestion exists in the first place. While the report acknowledges (p. 12) there’s an “imbalance” between demand and supply, it fails to consider that because we charge a zero price for road use, demand inevitably overwhelms supply in urban areas. As we’ve illustrated with the Ben and Jerry’s seminar on transportation economics, a zero price for a valuable commodity is the cause of congestion and queueing.
  10. The UMR claims that congestion got worse every year between 2009 and 2014, but contemporaneously published data from Inrix (supposedly the source of the UMR estimates) show that nationally congestion declined by about 30 percent during these years, due largely to a decline in vehicle miles traveled. Inrix has since removed this data from its servers, and the Texas Transportation Institute has never explained the discrepancy.
  11. Despite claims that they favor a mix of measures including transit and more density, the TTI’s travel time index actually penalizes compact cities and places that undertake measures that shorten work trips. The TTI scorecard perversely incentivizes sprawl and ignores the costs associated with longer trips.
  12. The UMR purports to show that commuters in some areas are much worse off due to congestion, but survey data show that there’s simply no correlation (if not a reverse correlation) between the UMR’s key metric (the travel time index) and self-reported satisfaction with the urban transportation system.  People who live in more congested places are not less happy with their urban transport systems than people who live in less congested places.
  13. The UMR’s prediction that traffic will get much worse in the coming years is based on a model that simply pretends the last decade didn’t happen.
  14. The report greatly exaggerates the value commuters attach to travel time savings. The report calculates the value of travel time at $15 per hour, but actual experience with High Occupancy Toll lanes, where people have the option in real time to buy a faster trip, shows that the typical traveler values travel time at just $3 per hour; this single adjustment would lower the supposed value of time “lost” to congestion by 80 percent.
  15. The UMR says that adding road capacity will reduce congestion—but in the previous UMR, 92 of the top 99 places where congestion increased had also increased their roadway miles per capita. Building more roads won’t reduce congestion or improve access without land use changes and other transportation investments.
  16. The “Urban Mobility Report” is in fact a report just about driving. Public transit, walking, and biking—essential parts of the transportation network in any city—are almost entirely left out. The UMR actually notes (p. 15) that it excludes data on people walking: “The proprietary process filters inappropriate data (e.g., pedestrians walking next to a street).” (This is the sole reference to “pedestrians” in the entire UMR; the word “bike” also appears just once.) If you walk, bike or take transit you simply don’t exist in the eyes of TTI.
  17. The UMR has just a single reference to safety, in a section dealing with autonomous vehicles. There’s no acknowledgement that higher road speeds and more VMT are strongly correlated with increased crashes and traffic deaths. More driving means more dying.
  18. Here’s a list of words you won’t find in the Urban Mobility Report: “sprawl,” “pollution,” “emissions,” “injuries,” “deaths,” “carbon,” “climate,” “VMT,” “induced demand,” “pricing,” “tolling.” Trying to talk about urban transportation systems without considering their effects on these other pressing problems is a measure of how detached the UMR is from the reality of the 21st century.
  19. The 2019 UMR has carefully avoided any references to tolling or road pricing, the one approach to congestion management that’s been proven to work in cities around the world. The 2015 UMR specifically recommended high occupancy toll lanes as a possible solution–the new report (p. 13) omits that recommendation.
  20. The UMR creates a fictitious time trend by ignoring changes in its methodology. To support its claim that congestion is increasing, the UMR reports data going back to 1982, even though TTI’s methodology has changed several times since then. The model used through 2007 didn’t actually measure congestion: it simply assumed that increased vehicle volumes automatically produce slower speeds, which is not necessarily accurate. The report’s data from 2007 and earlier isn’t comparable to the data that comes afterwards, and can’t legitimately be used to make claims about whether traffic is better or worse than in earlier periods.
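The arithmetic behind the value-of-time objection (point 14) is worth making explicit. This sketch uses a hypothetical annual delay figure to show why swapping the UMR’s $15-per-hour value of time for the roughly $3-per-hour figure revealed by HOT-lane behavior cuts the estimated congestion cost by 80 percent, regardless of how many delay hours are assumed:

```python
def congestion_cost(delay_hours, value_per_hour):
    """Dollar cost of congestion: hours of delay times the assumed value of time."""
    return delay_hours * value_per_hour

delay = 54  # hypothetical annual delay hours per commuter (illustrative only)

umr_estimate = congestion_cost(delay, 15.0)      # UMR's $15/hour assumption
hot_lane_estimate = congestion_cost(delay, 3.0)  # revealed-preference ~$3/hour

# The delay hours cancel out, so the reduction is always 1 - 3/15 = 80%.
reduction = 1 - hot_lane_estimate / umr_estimate
print(f"{reduction:.0%}")  # 80%
```

Because the cost is a simple product, every dollar estimate in the report scales down proportionally with the value-of-time assumption.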

The high, high price of affordable housing

Why is affordable housing so expensive?

In many cities, affordable housing has a problem: it’s not affordable. California Governor Jerry Brown made that point again, emphatically, with his new state budget. He’s said that he’s not putting any new state resources into subsidizing affordable housing until state and local governments figure out ways to bring the costs down. Last year, opposition from labor and environmental groups blocked the governor’s proposal to exempt affordable housing from some key regulatory requirements. Brown had offered $400 million in additional state funds for affordable housing if that proposal was adopted. Now that money is off the table.

“We’ve got to bring down the cost structure of housing and not just find ways to subsidize it,” Brown said in his budget speech.

And the costs are substantial. In San Francisco, one of the largest all-affordable housing projects, 1950 Mission Street, clocks in at more than $600,000 per unit. That number isn’t getting any lower: new units in that city’s Candlestick Point development will cost nearly $825,000 each, according to recent press reports. Brown’s point is that at that cost per unit, it’s simply beyond the fiscal reach of California or any state to build housing for all of the rent-burdened households.

San Francisco’s 1950 Mission Affordable Housing (Bridge Housing)

And while the problem is extreme in San Francisco, it crops up elsewhere. In St. Paul, affordable housing–mostly one-bedroom units–in a renovated downtown building cost $665,000 per unit.

In Portland, newly elected Mayor Ted Wheeler has temporarily embargoed any further spending of the city’s recently approved $258 million affordable housing bond issue. Shortly before he took office, the Portland Housing Bureau committed to spending nearly 15 percent of the levy’s proceeds to acquire an existing 263-unit housing complex. The city will pay $51 million in total, about $193,000 per unit for the building. The cost of new construction tends to be even higher. Public projects often involve more elaborate design, LEED certification, additional public spaces and higher overhead costs.

More broadly, the case has been made that much publicly subsidized affordable housing costs much more to build than market-rate housing. Private developers are able to build new multi-family housing at far lower cost. One local builder has constructed new one-bedroom apartments in Portland at a cost of less than $100,000 a unit, albeit with fewer amenities and in less central locations than most publicly supported projects. In Portland, local private developer Rob Justus has proposed to build 300 apartments and sell them to the city for $100,000 each on a turn-key basis to be operated as affordable housing. Another possible cost-savings measure: off-site construction. The University of California, Berkeley’s Terner Center has a new report that explores the potential for pre-fabricated, off-site construction to reduce construction costs.

Portland Mayor Wheeler voices the same concerns as California Governor Brown:

“We’ve added a lot of programs to affordable housing that may be socially desirable. But when the goal is to create the maximum number of new doors, we have to reduce costs and get more supply on the market as quickly as possible.”

In the Twin Cities, Myron Orfield has pointed out that the allocation of tax credits and the concentration of community development corporations in urban neighborhoods has tended to produce more housing in costly urban locations. Orfield also blames the high overhead costs of CDCs.

. . . central city development programs are inefficient, spending much more per unit of new affordable housing in the central cities than comparable housing costs in more affluent, opportunity-rich suburbs. Many of the leading developers working in the poorest parts of the region also pay their managers very high salaries. As a result, the funding system incentivizes higher cost projects in segregated neighborhoods over lower cost projects in integrated neighborhoods.

Perhaps the central problem of housing affordability is one of scale: the number of units that we’re able to provide is too small. That’s true whether we’re talking about Section 8 vouchers (which go to only about 1 in 5 eligible households) or inclusionary zoning requirements (which provide only handfuls of units in most cities). The very high per-unit construction costs of affordable housing only make the problem more vexing: the pressure to make any project that gets constructed as distinctive, amenity-rich and environmentally friendly as possible means that limited public dollars end up building fewer units. And too few units–scale–is the real problem here.

The combination of very limited public funds for affordable housing, even in the most prosperous and liberal cities, and the tendency for publicly subsidized housing to be nearly as costly as new, market rate housing, is a recipe for failure.


Why is “affordable” housing so expensive to build?

The high price of affordable housing

It’s a problem that isn’t going away: the so-called “affordable” housing we’re building in many cities–by which we mean publicly subsidized housing that’s dedicated to low and moderate income households–is so expensive to build that we’ll never be able to build enough of it to make a dent in the housing affordability problem. The latest case in point is a new affordable housing development called Estrella Vista in Emeryville, California (abutting Oakland and just across the bay from San Francisco). A non-profit housing developer just broke ground on a new mixed-use building, about three-quarters of a mile from a local BART transit station, which will include 84 new apartments. The project also houses about 7,000 square feet of retail space. The total cost: $64 million. Assuming that 90 percent of the building is residential, that means the cost per apartment approaches $700,000. While the complex provides many amenities for its residents (proximity to the BART station, a Zen garden and sky deck), it’s inconceivable that the public sector has enough resources to build many such units.
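The per-apartment figure follows directly from the numbers above; a quick back-of-the-envelope check (the 90 percent residential share is the assumption stated in the paragraph):

```python
total_cost = 64_000_000    # total project cost for Estrella Vista
residential_share = 0.90   # assumed residential fraction of the building
units = 84                 # number of apartments

# Allocate the residential share of the cost across the apartments.
per_unit = total_cost * residential_share / units
print(round(per_unit))  # 685714
```

That’s roughly $686,000 per unit, consistent with “approaching $700,000,” and before any allowance for land or soft costs that might push the effective figure higher.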

Estrella Vista (EAH Housing)

Policy makers are beginning to realize this problem. As we wrote earlier this year, California Governor Jerry Brown made that point in his state budget. He’s said that he’s not putting any new state resources into subsidizing affordable housing until state and local governments figure out ways to bring the costs down. Last year, opposition from labor and environmental groups blocked the governor’s proposal to exempt affordable housing from some key regulatory requirements. Brown had offered $400 million in additional state funds for affordable housing if that proposal was adopted. Brown has since taken that money off the table.

“We’ve got to bring down the cost structure of housing and not just find ways to subsidize it,” Brown said in his budget speech.

And the costs are substantial. In San Francisco, one of the largest all-affordable housing projects, 1950 Mission Street, clocks in at more than $600,000 per unit. That number isn’t getting any lower: new units in that city’s Candlestick Point development will cost nearly $825,000 each, according to recent press reports. Brown’s point is that at that cost per unit, it’s simply beyond the fiscal reach of California or any state to build housing for all of the rent-burdened households. And while the problem is extreme in San Francisco, it crops up elsewhere. In St. Paul, affordable housing–mostly one-bedroom units–in a renovated downtown building cost $665,000 per unit.

More broadly, the case has been made that much publicly subsidized affordable housing costs much more to build than market-rate housing. Private developers are able to build new multi-family housing at far lower cost. One local builder has constructed new one-bedroom apartments in Portland at a cost of less than $100,000 a unit, albeit with fewer amenities and in less central locations than most publicly supported projects. In Portland, local private developer Rob Justus has proposed to build 300 apartments and sell them to the city for $100,000 each on a turn-key basis to be operated as affordable housing. Another possible cost-savings measure: off-site construction. The University of California, Berkeley’s Terner Center has a report that explores the potential for pre-fabricated, off-site construction to reduce construction costs.

Portland Mayor Wheeler voices the same concerns as California Governor Brown:

“We’ve added a lot of programs to affordable housing that may be socially desirable. But when the goal is to create the maximum number of new doors, we have to reduce costs and get more supply on the market as quickly as possible.”

In the Twin Cities, Myron Orfield has pointed out that the allocation of tax credits and the concentration of community development corporations in urban neighborhoods has tended to produce more housing in costly urban locations. Orfield also blames the high overhead costs of CDCs.

. . . central city development programs are inefficient, spending much more per unit of new affordable housing in the central cities than comparable housing costs in more affluent, opportunity-rich suburbs. Many of the leading developers working in the poorest parts of the region also pay their managers very high salaries. As a result, the funding system incentivizes higher cost projects in segregated neighborhoods over lower cost projects in integrated neighborhoods.

Perhaps the central problem of housing affordability is one of scale: the number of units that we're able to provide is too small.  That's true whether we're talking about Section 8 vouchers (which go to only about 1 in 5 eligible households) or inclusionary zoning requirements (which provide only handfuls of units in most cities). The very high per-unit construction costs of affordable housing only make the problem more vexing: the pressure to make any project that gets constructed as distinctive, amenity-rich and environmentally friendly as possible means that limited public dollars end up building fewer units. And too few units (in other words, scale) is the real problem here.

The combination of very limited public funds for affordable housing, even in the most prosperous and liberal cities, and the tendency for publicly subsidized housing to be nearly as costly as new, market rate housing, is a recipe for failure. Ultimately, we’ve got to find ways to make housing (whether built by the public sector or the private sector) less expensive.

My illegal neighborhood

Editor's note:  City Observatory is pleased to provide this guest commentary by our friend Robert Liberty, a keen observer of and advocate for cities.

 

by Robert Liberty

For many years I lived in Northwest Portland, Oregon.

It was a part of the city first settled by white pioneers in the 1860s, but development really took off when the streetcar arrived in the first half of the 1900s. (A century later, the old streetcar tracks had to be dug up so they could put down the new streetcar tracks.)

I first moved there in the 1980s by renting a part of a house. Then I moved a few blocks away into a courtyard apartment building of a type built all over the city in the 1940s. There were a dozen one and two-bedroom apartments on two floors around a small courtyard, built on a 15,000 square foot lot (about one-third of an acre, roughly the size of many suburban house lots). There were storage areas and a laundry room in the basement.

Next door to the west was a large single family house, built around World War I. To the south was a one-story three-plex: three tiny apartments slotted into a narrow strip between our building and a large old home.

Kitty-corner across the street was a small restaurant that served breakfast at a few booths and a counter. For a few years, every Saturday, a long black limousine with tinted windows would park near the restaurant and the chauffeur would deliver a hot breakfast to the occupant and then take away the dirty dishes. I never found out who was in the limousine.

Diagonally across the street to the northeast was a warehouse that processed large volumes of “direct mail”—i.e., spam.

Across the street to the north sat another Edwardian house used for offices, a bland three-plex built in the 1970s, and a four-plex that looked like a large single-family home in Dutch Colonial style. I lived in that four-plex happily for many years.

The rest of the street was a mix of large older homes on small lots and small apartment buildings. Both young families and older couples lived in the houses and apartments.

The street was shaded by big trees and it was usually very quiet. The street was so narrow that bigger cars had to queue to pass each other, partly because so many people parked their cars on the street since the apartment buildings provided few or no parking garage spaces.

At the other end of the block was a park that also served as part of an elementary school's grounds. The school was built of blond-colored brick and rose three stories. It's locally famous for being on the migration path of Vaux's Swifts. Early each fall thousands of the birds would swarm and then spiral down into the decommissioned smokestack of the school incinerator and boiler. Beside the school were some community tennis courts.

Not far from the school was a senior center and some subsidized housing for families of modest means. Scattered here and there in the nearby blocks were grand old houses—some beautifully maintained and very expensive, others cut up into legal and illegal apartments.

Three blocks away was an arterial street, but it wasn’t too much wider than the street in front of my apartment building. I often walked there to buy groceries from a small grocery store and drop off my dry cleaning. Another block or two farther along the arterial was a branch library. Across the street from the grocery store was a small sheet metal fabrication business.

Once, when I was explaining to a reporter how our neighborhood had every possible kind of use and service, I gestured to the sheet metal company to illustrate the presence of light industrial uses. It was then that I realized it was called Schmeer Sheet Metal Works and Fabrication. "See," I said, "we have the whole schmeer."

That neighborhood is typical of many older neighborhoods in American cities. And in almost all of American cities and suburbs, that neighborhood would be illegal.

It is illegal to build an apartment building in a district of single family homes. Residential zoning was adopted in order to prevent single family neighborhood property values and families from being degraded by the presence of apartments where immigrants and low-class people lived. (If you think this is an exaggeration, read the early history of zoning, including the various state and federal supreme court decisions upholding residential zoning against constitutional challenges.)

Residential zoning today has carried class separation to great extremes, which you can see if you travel by air: Over here, big single-family homes on big lots. Over there a mobile home park. In another direction, a pod of apartment buildings. A place of every income, and every income in its (separate) place.

Some affluent cities use their power to regulate development to exclude entire categories of housing, like apartments and mobile homes, from within their borders.

Typical city zoning makes it illegal to build or operate a warehouse or a light industrial use next to homes and a grocery store. The separation of industrial and commercial uses from residential uses was the very foundation of zoning a century ago.

It is illegal in most cities to build apartment buildings without providing one or more parking spaces for every apartment. The same would be true of grocery stores or office buildings. The neighborhood’s grocery store has fewer than 20 parking spaces.

The street in my old neighborhood does not meet current design requirements, because it is considered inappropriate to design a street so that cars cannot pass each other at any time or location. The street is 27 feet wide, curb to curb. That includes parallel parking on both sides, leaving a travel lane about 12 feet wide. That violates the standards for a local road recommended by the American Association of State Highway and Transportation Officials.

In most cities, you cannot operate a business out of your home if you have employees or customers arriving from other locations.

In too many places, it is effectively illegal to build subsidized housing for families of modest means. Even when it might be legal, local officials can interpret nebulous phrases like "preserve neighborhood character" or complex regulations in ways that ensure such housing is never approved.

A senior center, even though it is not a business, would be treated like a commercial use that cannot be allowed next to single family homes.

The elementary school would probably be illegal too, because the school property would be too small to meet many states' standards. The school is located on about 9.8 acres, but many of those acres are occupied by a park open to the public at all times. The school, which has 685 students, would require a site of 11.85 acres in California, Texas and Connecticut, 15 acres in New Mexico and 18-20 acres in suburban Pennsylvania.

And then there is the absence of parking places; according to Virginia’s 2010 school design standards, the school should provide parking for all the staff, visitors and about a third of the students. (Apparently the legal driving age in Virginia is much younger than in Oregon.)

Of course, a jumbled neighborhood like mine would probably be regarded by many residential realtors, local officials, and even prospective home purchasers as a bad investment. After all, it's about as far from the suburban residential model as possible. But in fact, this neighborhood, while providing many apartments (formerly) affordable to lower-income renters, was and is highly sought after.  According to Zillow, homebuyers in this neighborhood pay more than twice as much per square foot to live here as they would in the region's suburbs.

One reason the prices are so high is because the supply of this kind of neighborhood has been limited by zoning, parking regulations, street design standards, school design standards, and building codes. We need many more neighborhoods like this all across America, so that all of the increasing numbers of people who want to live in places like this can afford to live in them.

Does that mean do away with all regulations? No. But it does mean that we need to stop assuming that everyone wants to, or can afford to, live in a big house on a big lot in a residential-only neighborhood.  We shouldn't be making it illegal to build the kind of neighborhood, like mine, that is increasingly popular and in short supply.

An illegal neighborhood in NW Portland.

Robert Liberty has worked over the last 34 years as an attorney, elected official and university program administrator to help implement plans to create livable, sustainable and equitable cities and to conserve the rural lands and resources we need for food, fiber and wildlife.  He has called Portland home for almost half of a century.

Why do we make it illegal to build the neighborhoods Americans love most?

Narrow streets, a mix of large houses and tiny apartments, interspersed with shops and businesses in close walking distance. It’s the most desirable neighborhood in the city, and we’ve made it illegal to build any more like it.

Editor's note:  City Observatory originally published this commentary in 2015. Our friend Robert Liberty is a keen observer of and advocate for cities, and the questions he posed then are still salient for urbanists.

 


Updated: Is traffic worse now? The “congestion report” can’t tell us

Part 1: Resurrecting discredited data to paint a false history

The Texas Transportation Institute claims that traffic congestion is steadily getting worse.  But its claims are based on resurrecting and repeating traffic congestion estimates from 1982 through 2009 that were based on a deeply flawed and biased model.  Since 2009, TTI has used different data and a different estimation approach, which means it can’t make any accurate or reliable statements about whether today’s congestion is better or worse than a decade ago—or two or three decades ago.

Earlier, we went over some of the big problems with the Texas Transportation Institute’s (TTI) new “Urban Mobility Report.” Today, we want to focus on one claim in particular that’s been repeated by many media outlets: that traffic is worse now than before.

An Associated Press article, for example, highlights the Urban Mobility Report’s (UMR) claim that traffic is now worse than in 2007:

Overall, American motorists are stuck in traffic about 5 percent more than they were in 2007, the pre-recession peak, says the report from the Texas A&M Transportation Institute and INRIX Inc., which analyzes traffic data.

Four out of five cities have now surpassed their 2007 congestion.

And here's the Wall Street Journal claiming that the UMR proves that traffic is worse now than at any time since at least 1982:

In a study set for release Wednesday, the university’s Texas Transportation Institute and Inrix, a data analysis firm, found traffic congestion was worse in 2014 than in any year since at least 1982.

The basis for this claim is that TTI’s 2014 measured level of congestion was higher than what TTI now reports for the entire period since 1982.

Traffic in New Orleans. Clearly, this highway isn't big enough. Credit: Bart Everson, Flickr

 

The problem is that the TTI has changed its methodology many times—fourteen, according to the authors’ own estimates. In 2009, it totally abandoned its two-decades-old approach, and began using traffic speed data gathered by the traffic monitoring firm Inrix. As a result, the post-2009 data simply aren’t comparable to the pre-2009 data, which means it’s not possible to truthfully claim that traffic is worse (or better) than it was before the recession or in 1982.

Prior to 2010, TTI used an entirely different—and now discredited—methodology to estimate the travel time index. Before the advent of real time speed monitoring, TTI could not directly measure the congestion it reported on. Instead, it built a mathematical model that predicted what the speed would be based on the volume of traffic on a road. It turns out that the model predicts that roads will automatically slow down as more traffic is added, an assumption that is not always correct. As total traffic volumes increased in the 1980s and 1990s, the TTI model mechanically converted higher volumes into lower speeds at the peak hour, and automatically generated a steadily increasing rate of congestion.
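To make the mechanics concrete, here is a minimal sketch of a generic volume-delay curve of the kind described above. It uses the standard BPR (Bureau of Public Roads) formula with its textbook default parameters (alpha=0.15, beta=4); this illustrates how such models work, and is not TTI's actual model.

```python
def congested_speed(free_flow_mph, volume, capacity, alpha=0.15, beta=4):
    """Speed predicted by a BPR-style volume-delay curve.

    Travel time is inflated by 1 + alpha * (volume/capacity)**beta, so
    predicted speed falls automatically as assumed volume rises, whether
    or not observed speeds actually declined.
    """
    travel_time_factor = 1 + alpha * (volume / capacity) ** beta
    return free_flow_mph / travel_time_factor

# As assumed volumes grow, the model mechanically generates more congestion.
for volume in (10_000, 15_000, 20_000, 25_000):
    print(volume, round(congested_speed(60, volume, capacity=20_000), 1))
```

Plugging steadily rising traffic counts into any curve like this will always produce falling speeds and rising delay, which is exactly the critique: the "congestion trend" becomes an artifact of the model rather than a measurement.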

In 2010, we published “Measuring Urban Transportation Performance,” which showed that the TTI model lacked statistical validity. The modelers essentially ignored the underlying data and fitted their own relationship by eye. We also showed that the supposed speed declines generated by the model were inconsistent with real world data from the Census and transportation surveys which showed increased speeds. After 2009, UMR simply dropped this methodology. But they didn’t stop reporting the flawed pre-2009 data.

For the technically inclined, here are the details of the critique of the 1982-2009 TTI methodologies. Many transportation experts have noted that TTI’s extrapolation of speeds from volume data was questionable. Dr. Rob Bertini—who was for several years Deputy Administrator of US DOT’s Research and Innovative Technology Administration—warned that the lack of actual speed data undercut the reliability of the TTI claims:

No actual traffic speeds or measures extracted from real transportation system users are included, and it should be apparent that any results from these very limited inputs should be used with extreme caution.

Bertini, R. (2005). Congestion and its Extent. In D. Levinson & K. Krizek (Eds.), Access to Destinations: Rethinking the Transportation Future of our Region: Elsevier.

The report’s authors conceded that they “eyeballed” the data to choose their volume-speed relationship:

…when trying to determine if detailed traffic data resembles the accepted speed-flow model, interpretations by the researcher were made based on visual inspection of the data instead of a mathematical model.

Schrank, D., & Lomax, T. (2006). Improving Freeway Speed Estimation Procedures (NCHRP 20-24(35)). College Station: Texas Transportation Institute. (emphasis added)

The data they used (blue diamonds) and the relationship that they guesstimated (the red line) are shown in the following chart:

[Chart: observed speed-volume data points with the TTI eyeballed line and a fitted quadratic curve]

The red line that the researchers drew by eyeballing the data not only has no mathematical basis, but predicts speeds that are slower 80 percent of the time than a simple quadratic curve (the downward-sloping black curve) fitted to the data. For example, the TTI model predicts that a road carrying 30,000 vehicles per lane per day will have an average speed of 40 miles per hour; the real world data show that actual speeds on roads with this volume average more than 50 miles per hour. This has the effect of biasing upward the estimates of delay associated with additional travel volumes.
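For readers who want to see what a data-driven fit looks like, here is a small sketch of fitting a quadratic speed-volume curve by least squares. The sample points are invented for illustration (they are not the chart's actual data), but they mimic its pattern: speeds still above 50 mph at 30,000 vehicles per lane per day.

```python
import numpy as np

# Hypothetical (volume, speed) observations, invented for illustration;
# these are not the actual data behind the TTI chart.
volumes = np.array([5, 10, 15, 20, 25, 30])               # thousand vehicles/lane/day
speeds = np.array([62.0, 60.5, 58.0, 56.0, 53.5, 51.0])   # mph

# Least-squares quadratic: speed = a*v**2 + b*v + c
a, b, c = np.polyfit(volumes, speeds, deg=2)

def predicted_speed(thousand_adt):
    """Speed (mph) predicted by the fitted quadratic curve."""
    return a * thousand_adt**2 + b * thousand_adt + c

# The fitted curve tracks the data, predicting roughly 51 mph at
# 30,000 vehicles/lane/day, versus the eyeballed model's 40 mph.
print(round(predicted_speed(30), 1))
```

The point of the exercise: a standard least-squares fit like this is reproducible and testable against the data; a line drawn by eye is neither.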

The UMR authors can't show that the congestion numbers that they estimated based on their flawed, pre-2009 model are in any way comparable to the post-2009 Inrix data. And in any event, the pre-2009 data are statistically unreliable. The data presented here can't be squared with the fact that the average American is driving fewer miles today than at any time since the late 1990s, and that travel surveys show we're also spending fewer hours traveling. The bottom line is that there's simply no basis for the claim that congestion is worse today than at any time since 1982.

The fact that TTI makes this claim, and continues to publish its pre-2009 data even after the errors in its methodology have been documented, should lead policy makers—and journalists—to be extremely skeptical of its “Urban Mobility Report.” George Orwell famously observed in 1984 that “He who controls the past controls the future. He who controls the present controls the past.” A fictitious and incorrect history of transportation system performance will be a poor guide to future transportation policy.

In theory, TTI claims to have overcome these problems by switching to actual traffic speed data gathered by Inrix.  Tomorrow, in Part 2 of this post, we’ll examine TTI’s analysis of the Inrix data to see what the real trend in traffic congestion has actually been in the era of big data.

Note: This post has been revised and expanded from a version published earlier on September 1.

Contradictory conclusions and disappearing data

Part 2: A curious discrepancy between two major congestion reports using the same data

Yesterday, we explained why one of the most common takes on the Texas Transportation Institute's "Urban Mobility Report" is actually totally unjustified: Though many media outlets repeat the UMR's claim that traffic delays are worse today than they have been since 1982, TTI completely changed the way it measured congestion in 2009, making comparisons before and after that date impossible. Moreover, its pre-2009 data are based on estimates that have been shown to be biased in favor of showing more congestion than actually existed.

Since then, TTI has walked away from its pre-2009 approach, when it couldn’t measure speeds directly and so used traffic volume data to estimate them instead. Since 2009, TTI has used data from Inrix, a company that uses data from vehicles connected to electronic networks to measure travel speeds in real time. In theory, at least, the switch to Inrix data should be more reliable.

But there’s a problem—a very big problem.

There’s a profound and unexplained discrepancy between the travel trends in TTI’s latest report and those reported by Inrix. For the period 2010 to 2014, Inrix says that traffic congestion is down by 29 percent—while TTI says it’s up by 4.7 percent.

The TTI report neither acknowledges nor explains the discrepancy between its tabulation of the Inrix data and that reported by Inrix.

For years, Inrix has regularly published monthly metro- and national-level data on its National Traffic Scorecard. We've been following these data for years, as they provide a unique perspective on shifting travel patterns. The Inrix website reported these data monthly from January 2010 through July 2014. Like the Texas Transportation Institute, Inrix reported both a "travel time index" (the ratio of travel times in peak to off-peak hours) and the additional amount of time, in hours, that trips took due to traffic congestion.

Inrix's National Traffic Scorecard data show that traffic congestion peaked in 2010, declined through 2011 and 2012, and rose slightly in 2013 and 2014. In a May 2012 press release entitled "Traffic Congestion Plummets Worldwide: INRIX Traffic Scorecard Reports 30 Percent Drop in Traffic Across the U.S.," Inrix said its Annual Traffic Scorecard revealed "a startling 30 percent drop in traffic congestion in 2011." According to data on the Inrix website, average congestion levels, as measured by the travel time index, from August 2013 to July 2014 (the latest 12-month period for which Inrix disclosed this data on its website) were 29 percent lower than recorded in calendar year 2010 (the earliest 12 months reported on the website). A direct reading of the Inrix data suggests that time lost to traffic congestion in 2014 was lower, by nearly a third, than in 2010.

Moreover, it's interesting to note that Inrix reported widespread declines in traffic congestion in almost all major metropolitan areas between 2010 and 2012. A majority of large metropolitan areas saw congestion decline by more than a third; only one metropolitan area, Austin, recorded an increase in congestion.

(Monthly Inrix data compiled from the Inrix National Traffic Scorecard; see postscript for further details)

In contrast, the Texas Transportation Institute’s 2015 Urban Mobility Report claims that the average peak hour trip took 21 percent longer than a non-peak trip in 2010, and 22 percent longer in 2014—an increase of 4.7 percent over four years. Their annual figures suggest that congestion was flat through 2010 and 2011, and increased in 2014.

To be clear, we don’t doubt that TTI’s numbers are in fact based on the underlying Inrix data. But the fact that they come to such different conclusions from Inrix’s publicly available numbers suggests that something else is going on—some notable set of assumptions, for example. Unfortunately, TTI’s report does not itself explain how it reached these figures, even though this is not the first time we (or others) have pointed out this discrepancy.

On top of this, there's another issue: in addition to finding different trends, Inrix and TTI suggest very different current levels of congestion, with Inrix's estimate being much lower than TTI's. Inrix says that in 2014, congestion caused the average trip taken in the peak hour to be 7.9 percent longer than the same trip taken at another time. TTI says that in 2014, congestion caused the average peak-hour trip to be 22 percent longer than off-peak. As a result, TTI claims the average traveler experiences 42 hours of delay per year, while Inrix estimates that number at just 13.7 hours.
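The gap between those annual-delay figures follows directly from the index values and each report's assumptions about trip times and trip counts. As a back-of-the-envelope sketch, here is how a travel time index converts into annual delay hours; the 30-minute free-flow trip and 500 peak trips per year used below are illustrative assumptions, not figures from either report.

```python
def annual_delay_hours(travel_time_index, freeflow_minutes=30,
                       peak_trips_per_year=500):
    """Annual hours of delay implied by a peak/off-peak travel time ratio.

    The free-flow trip time and number of peak trips are hypothetical
    assumptions, chosen only to illustrate the arithmetic.
    """
    extra_minutes_per_trip = freeflow_minutes * (travel_time_index - 1)
    return extra_minutes_per_trip * peak_trips_per_year / 60

# TTI-style index (peak trips 22% longer) vs. Inrix-style index (7.9% longer):
print(round(annual_delay_hours(1.22), 2))   # ~55 hours/year under these assumptions
print(round(annual_delay_hours(1.079), 2))  # ~19.75 hours/year under these assumptions
```

Even holding the assumptions identical, the two indices imply annual delay totals that differ by almost a factor of three, which is why the unexplained index discrepancy matters so much.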

The fact that two different summaries of the same underlying data produce such remarkably different results demands an explanation. There is probably some clear, explicable reason why the two studies produce different results. Again, it’s likely that the two studies used different assumptions. But the fact that different assumptions can produce such wildly different—and in this case, conflicting—results tells us that the conclusions presented here are highly sensitive to the assumptions used. And in neither case are the assumptions or the calculations sufficiently transparent that any independent third party can verify these claims.

And that’s important because the researchers at the Texas Transportation Institute, despite being housed at an academic institution (Texas A&M University), have repeatedly declined to submit their work for peer review. Given the wide variation in the results, the absence of clarity in the methodology and assumptions, and the lack of peer review, no one should put any weight on the claims of the Texas Transportation Institute that it can accurately measure or faithfully report the level or trend of traffic congestion in the nation.

A Post-Script: Disappearing Data

Unfortunately, the Inrix website no longer displays monthly data showing the Inrix calculation of the travel time index and hours of time lost for the US and metropolitan markets. It appears that the link to this data was removed from the Inrix website on August 27, 2015.

At City Observatory, we’ve bookmarked and regularly visited the Inrix National Traffic Scorecard page. For several years, the page has featured a Tableau data presentation which allows users to view current and historic traffic congestion data for entire nations and for selected metropolitan areas. The Tableau page is interactive and shows line charts indicating the travel time index for a selected geography for several months and years. Inrix stopped updating the monthly market-level travel time index data after July 2014. (These data are the source of the monthly national Inrix congestion numbers reported in the first chart in this commentary.)

On August 26th the Scorecard page included data from 2010 through 2014, including an interactive chart that looked like this:

[Chart: Inrix travel time index, 2010-2014]

Following the release of the Texas Transportation Institute’s report, the link to the Tableau data was removed from this page and replaced with a page of text with links to the Texas Transportation Institute’s urban mobility scorecard page.

Inrix screenshot as of September 2, 2015.

Relying on our cache of the original website, and links identified by the Wayback Machine, we identified the address of the Tableau files containing the National Traffic Scorecard. They are located here.

UPDATED (again): Another tall tale from the Texas Transportation Institute

UPDATE: A chorus of congestion cost critiques

By this point, researchers and practitioners from around the country (and beyond!) have laid out their problems with TTI’s congestion reports. Here’s a roundup of some of the best:

UPDATE #2: Robert Puentes of the Brookings Institution expresses his own reservations about what “congestion” means, and the goals of access versus mobility.

Transportation For America has also weighed in.


Everything is bigger in Texas—which must be why, for the past 30 years, the Texas Transportation Institute (TTI) has basically cornered the market for telling whoppers about the supposed toll that traffic congestion takes on the nation’s economy. Today, they’re back with a new report, “The Urban Mobility Scorecard,” which purports to measure congestion and its costs in US cities.

The numbers (and from time to time, the methodology) change, but the story remains the same. Traffic is bad, traffic is costing Americans lots of money, and traffic is getting worse. Here’s the press release: “Traffic Gridlock Sets New Records for Traveler Misery: Action Needed to Reduce Traffic Congestion’s Impact on Drivers, Businesses and Local Economies.”

The trouble with TTI’s work is that, to put it bluntly, it’s simply wrong. For one, their core measure of congestion costs—the “travel time index”—only looks at how fast people can travel, and completely ignores how far they have to go. As a result, it makes sprawling cities with fast roads between far-flung destinations look good, while penalizing more compact cities where people actually spend less time—and money—traveling from place to place. These and other problems, discussed below, mean that the TTI report is not a useful guide to policy.

Moreover, its authors have been consistently unresponsive to expert criticism, and the report has never been subjected to peer review. The authors continue to report data for 1982 through 2007, even though TTI’s model for those years doesn’t actually measure congestion: it simply assumes that increased vehicle volumes automatically produce slower speeds, which is not necessarily accurate. The report’s data from 2007 and earlier isn’t comparable to the data that comes afterwards, and can’t legitimately be used to make claims about whether traffic is better or worse than in earlier periods. And for decades, TTI used a fuel consumption model to estimate gas savings that was calibrated based on 1970s-era cars, and which assumed that fuel economy improved with higher speeds—forever.

At City Observatory, we’ve spent a lot of time digging through TTI’s work and similar congestion cost reports. A summary of our work is in the City Subjects card deck “Questioning Congestion Costs.” Here’s what we’ve found:

  • The travel time index used to compute costs treats the inability to drive faster than the speed limit due to congestion as a “cost” to commuters.
  • The predicted increase in congestion between now and 2030 will likely be trivial: an increase in delay in the average daily commute of about 25 seconds.
  • Predictions of increases in driving and congestion have repeatedly been proven wrong.
  • Driving is down: the US experienced “peak car” in 2005, and the average number of miles driven per American has fallen seven percent since then from 27.6 miles to 25.6 miles per day.
  • The national Inrix travel data shows that time lost to traffic congestion in the United States has fallen 40 percent since 2010.
  • Time lost to traffic congestion is so small for most travelers that it isn’t noticed and has little economic value.
  • Building enough capacity to eliminate rush hour congestion would be virtually impossible and cost many times more than the supposed value of time lost to congestion.

As we pointed out with our “Cappuccino Congestion Index,” published in April, the very premise of the index is silly: it creates an exaggerated perspective of the costs of congestion and totally ignores the costs—and ultimately the futility—of adding additional road capacity.

We’re in the process of analyzing the details of the latest TTI report. We’ll post additional information here as our analysis proceeds. This post will be updated.

Stay tuned!

 

New Orleans’ missing black middle class

Washed away?  Or moved to the suburbs?

At FiveThirtyEight, Ben Casselman writes: “Katrina Washed Away New Orleans’s Black Middle Class.” It’s a provocative piece showing the sharp decline in the black population of the city of New Orleans, particularly the city’s black middle class. While the city has rebounded in many ways since Katrina, the city’s black population has recovered more slowly, and middle-income blacks especially so. While the white, non-Hispanic population of the city is still below pre-Katrina levels, it has rebounded faster than the black population.

Casselman alludes to the diaspora of the city’s African American population, which is down by nearly 100,000 from pre-Katrina levels. His analysis shows that the black middle class has recovered far more slowly than other demographic groups, but doesn’t say where they might have moved to. And the analysis conspicuously omits one major factor shaping population trends in New Orleans (and for that matter, other U.S. cities): the suburbanization of the black middle class. The word “suburb” doesn’t appear in Casselman’s piece.

Casselman is clear that his analysis covers just the city limits of New Orleans, or Orleans Parish. But there’s much more to metro New Orleans than Orleans Parish. Like most US metros, a majority of the region’s population—and most of its population growth—has been in its suburbs. Suburbanization has accelerated post-Katrina. The city’s population is only about 30 percent of the New Orleans metropolitan area, down from about 36 percent of the metro total in 2000.

The suburbanization of blacks in New Orleans

Those who follow New Orleans closely know that the area’s black population has grown increasingly suburban. The Data Center, a New Orleans-based independent research organization, has tracked the region’s changes before and after Katrina on its website and in a recent report “Who Lives in New Orleans and Metro Parishes Now?” Their data shows the makeup of the metropolitan area according to its constituent parishes for the years 2000 and 2013. Their findings show a stark gap between demographic trends in the city and surrounding suburbs:

While the city has 97,395 fewer black residents, the metro area as a whole has only 66,752 fewer black residents, revealing that the suburban parishes have gained more than 30,000 blacks. Moreover, the metro area as whole has had a net loss of 75,228 white residents. In short, the metro area as a whole is increasingly diverse with gains in blacks, Hispanics, and Asians and losses of white residents in nearly every parish.

While the black population is increasing in suburban parishes, the reverse is true for the white population, according to The Data Center’s report. The white, non-Hispanic population of the suburban parishes has decreased 11 percent (slightly faster than in Orleans Parish), while the black population of the suburban parishes has increased 17 percent. (We’ve reproduced the data from the center’s 2014 report below in the Appendix.)

The movement of black Americans to the suburbs is a widespread trend. According to the Brookings Institution’s Bill Frey, the black population of many central cities is decreasing (including in nine of the ten largest cities), and the black population of the suburbs is increasing almost everywhere, with 96 of the 100 largest metropolitan areas recording increases in their suburban black populations. This movement is propelled by the black middle class; Frey notes that the black movement to the suburbs is led by the young, those with higher education, and married couples with children. As Pete Saunders has written, suburban living is still aspirational for many blacks.

Black middle class growing in New Orleans suburbs

But the growing black population in New Orleans’ suburbs is not a representative sample of the region’s African American residents. Rather, Census data suggest that it largely reflects an increase in middle- and upper-income black households. Data tabulated by the Census Bureau’s American Community Survey show the relative income levels of black households living in Orleans Parish compared to suburban areas. We’ve extracted data from American FactFinder for 2005 and 2012. (The 2013 data are available, but reflect a change in metro area boundaries, so we use the earlier data for comparability.) While the Census Bureau reports data in the same income ranges in each year, the dollar figures are not directly comparable between 2005 and 2012 due to inflation over that period.

In metropolitan New Orleans, higher-income black households are more likely to live in the suburban parishes than are low-income black households. As of 2012, about 72 percent of black households with incomes under $15,000 lived in Orleans Parish, while about 53 percent of those with incomes over $35,000 lived in one of the surrounding suburban parishes.

Comparing the income distribution data for black households in Orleans Parish with those for suburban parishes in 2005 and 2012 shows that while lower-income black households have become more heavily concentrated in the city, middle- and upper-income black households have become more likely to live in the suburbs. Here we’ve divided all black households in metro New Orleans into three roughly equal groups based on household income, and reported the share of the metro total in each income group that resided in Orleans Parish in 2005 and 2012. In 2005, about 63.4 percent of the region’s black middle-income households lived in Orleans Parish; this declined to 54.4 percent in 2012. The share of the region’s poorest black households living in Orleans Parish actually increased, from about 68 percent to 72 percent. The location of the highest-earning third (those with incomes of $40,000 or more) shifted from a majority in Orleans Parish (54 percent in 2005) to a majority living in the suburban parishes in 2012.
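The tercile calculation described above is simple to reproduce. The household counts below are hypothetical round numbers chosen only to roughly echo the reported 2012 shares (the real figures come from American Community Survey tables); the function just computes each income group’s Orleans Parish share of the metro total:

```python
# Hypothetical household counts by income tercile for black households
# in metro New Orleans (illustration only, not the actual ACS figures).
counts = {
    "low":    {"orleans": 34_000, "suburbs": 13_000},
    "middle": {"orleans": 25_000, "suburbs": 21_000},
    "high":   {"orleans": 20_000, "suburbs": 23_000},
}

def orleans_share(group):
    """Share of the metro-wide income group living in Orleans Parish."""
    row = counts[group]
    return row["orleans"] / (row["orleans"] + row["suburbs"])
```

With these toy counts, the low-income share living in the city exceeds the middle-income share, which in turn exceeds the high-income share, mirroring the pattern in the ACS data.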

 

A more integrated New Orleans

Overall, the metropolitan New Orleans region has become more integrated. During the decade of the 1990s, black/white segregation in metropolitan New Orleans was actually increasing—a pattern that ran contrary to the national trend. But between 2000 and 2010, the New Orleans metropolitan area recorded a sharp decrease in segregation as measured by the black/white dissimilarity index. According to William H. Frey at the University of Michigan Population Studies Center, the black-white dissimilarity index for metropolitan New Orleans rose from 68.3 in 1990 to 69.2 in 2000, then fell to 63.9 in 2010.
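The dissimilarity index Frey reports has a standard definition, which we can sketch in Python (our illustration, using toy tract data; published values like 63.9 are this index multiplied by 100):

```python
def dissimilarity(tracts):
    """Black/white dissimilarity index for a list of (black, white)
    tract populations: the share of either group that would have to
    move for every tract to match the metro-wide racial mix."""
    total_black = sum(b for b, _ in tracts)
    total_white = sum(w for _, w in tracts)
    return 0.5 * sum(abs(b / total_black - w / total_white)
                     for b, w in tracts)
```

A fully segregated two-tract metro scores 1.0, and a metro where every tract matches the overall mix scores 0.0, which is why declines in the index signal integration.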

A racial dot map of metropolitan New Orleans, showing the state of segregation in 2010. Credit: Cooper Center Dot Map

One of the keys to addressing the black-white earnings disparity is reducing segregation. As we wrote earlier this year, metropolitan areas with higher levels of segregation have, on average, much higher black-white earnings gaps. Similarly, as the work of Raj Chetty and his colleagues has shown, income and racial segregation is a powerful correlate of impaired economic mobility. The problem is exceptionally acute in New Orleans, which ranks 99th of the 100 largest metropolitan areas on Chetty’s index of intergenerational economic mobility.

As New Orleans rebuilds, it has an opportunity to address the historic patterns of segregation that have aggravated the economic plight of the area’s African-American population. It appears that it is making some progress on this front.

It’s fair, as FiveThirtyEight has done, to acknowledge the significant demographic changes that have taken place in New Orleans. Unquestionably, Katrina has had an enormous impact. But the decline of the black middle class in New Orleans also reflects two well-established trends: the national decline of segregation in housing and the movement of higher income blacks to the nation’s suburbs. Fewer blacks live in New Orleans, and more live in its suburbs. While the white non-Hispanic share of the city’s population has increased—at xx percent it’s still a minority—the white, non-Hispanic population of the area’s suburbs has decreased even faster than the city’s. In the wake of Katrina, metro New Orleans is gradually becoming a more integrated region.

Appendix:

Population Change, by Race & Ethnicity,

Metropolitan New Orleans, 2000 to 2013

Orleans Parish
2000 2013 Chg. %Chg
White, Non-Hispanic 128,871 117,377 -11,494 -9%
Black 323,392 223,742 -99,650 -31%
Hispanic 14,826 20,849 6,023 41%
Asian 11,007 11,356 349 3%
Other 6,578 5,391 -1,187 -18%
Total 484,674 378,715 -105,959 -22%
Balance of MSA
2000 2013 Chg. %Chg
White, Non-Hispanic 602,643 536,753 -65,890 -11%
Black 175,177 204,179 29,002 17%
Hispanic 43,719 82,212 38,493 88%
Asian 17,621 23,882 6,261 36%
Other 13,892 15,236 1,344 10%
Total 853,052 862,262 9,210 1%
New Orleans MSA
2000 2013 Chg. %Chg
White, Non-Hispanic 731,514 654,130 -77,384 -11%
Black 498,569 427,921 -70,648 -14%
Hispanic 58,545 103,061 44,516 76%
Asian 28,628 35,238 6,610 23%
Other 20,470 20,627 157 1%
Total 1,337,726 1,240,977 -96,749 -7%

 

Source: http://www.datacenterresearch.org/data-resources/who-lives-in-new-orleans-now/

Why Cyber-Monday doesn’t mean delivery gridlock Tuesday

Far from increasing traffic congestion, more on-line shopping reduces it, by reducing personal shopping trips

Delivery trucks generate 30 times less travel than people traveling to stores to make the same purchases

The more deliveries they make, the more efficient delivery services become

December first is famously “Cyber-Monday,” the day on which the nation’s consumers take to their web browsers and start clicking for holiday shopping in earnest. Last year, it is estimated that online shoppers ordered more than $3 billion worth of merchandise on this single day, and the expectation is that this will grow even further this year.

The steady growth of e-commerce has many people worrying that urban streets will be overwhelmed by UPS and Fedex delivery trucks ferrying cardboard boxes from warehouses to homes.  One of these jeremiads was published by Quartz:  “Our Amazon addiction is clogging up our cities—and bikes might be the best solution.”  Benjamin Reider notes–correctly–that UPS and others are delivering an increasing volume of packages, and asserts–without any actual data–that truck deliveries are responsible for growing urban traffic congestion.

While there’s no question that it’s really irritating when there’s a UPS truck double-parked in front of you, it’s actually the case that, on balance, online shopping reduces traffic congestion. The simple reason: online shopping reduces the number of car trips to stores. Shoppers who buy online aren’t driving to stores, so more packages delivered by UPS, Fedex, and the USPS mean fewer cars on the road to the mall and local stores. And here’s the bonus: this trend benefits from increased scale. The more packages these companies deliver, the greater their delivery density–meaning that they travel fewer miles per package. So if we look at the whole picture, shifting to e-commerce actually reduces congestion.


Delivering packages and reducing urban traffic congestion! Credit: Jason Lawrence, Flickr

The rise of e-commerce and attendant residential deliveries has led to predictions that urban streets will be choked to gridlock by delivery trucks. A recent article in Forbes predicted that package deliveries would triple in a few years, adding to growing traffic congestion in cities around the world.

In our view, such fears are wildly overblown.  If anything they have the relationship between urban traffic patterns and e-commerce exactly backwards.  The evidence to date suggests that not only has the growth of e-commerce done nothing to fuel more urban truck trips, but on net, e-commerce coupled with package delivery is actually reducing total urban VMT and traffic congestion, as it cuts into the number and length of shopping trips that people take in urban areas. The first point is that, despite the rapid growth of e-commerce, truck traffic has been essentially flat.

Shopping on line substitutes for personal shopping trips and actually reduces traffic congestion

It actually seems likely that increased deliveries will reduce urban traffic congestion, for two reasons. First, in many cases, ordering on line substitutes for shopping trips. Customers who get goods delivered at home forego personal car shopping trips. And because the typical UPS delivery truck makes 120 or so deliveries a day, each delivery truck may be responsible for dozens of fewer car-based shopping trips. At least one study suggests that the shift to e-commerce may reduce total VMT and carbon emissions. And transportation scholars have noted a significant decrease in shopping trips and time spent shopping.

There are already signs that e-commerce is reducing the amount of travel associated with shopping.  The National Household Travel Survey, conducted in 2009 and 2017, shows a decrease in travel-related shopping.  The US Department of Transportation concludes:

In 2017 people made fewer everyday trips than previously. The decline in travel for shopping and running errands was primarily due to the increase in online shopping and home deliveries.

The decline in vehicle miles traveled per person per day was greatest for younger adults–the group that reports the most frequent use of on-line shopping. On-line shopping creates some travel for delivery, but reduces the number of consumer shopping trips. And there are vastly more consumers than delivery trucks, and each delivery truck makes many deliveries. Professor William Wheaton of MIT estimates that $100 spent on line generates about eight-tenths of a mile of vehicle travel for UPS delivery trucks, while the same amount of consumer spending at brick-and-mortar retailers generates about 28 miles of vehicle travel. This means that on-line shopping produces more than 30 times less vehicle travel than personal shopping.

Source: William Wheaton, MIT Center for Real Estate, 2019.
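Taking Wheaton’s two figures at face value, the ratio is easy to compute (the variable names are ours):

```python
# Wheaton's estimates of vehicle miles generated per $100 of spending:
delivery_miles_per_100 = 0.8     # goods delivered by UPS-style truck
store_trip_miles_per_100 = 28.0  # driving to brick-and-mortar stores

ratio = store_trip_miles_per_100 / delivery_miles_per_100
# roughly 35: each dollar spent in person generates about 35 times
# the vehicle travel of the same dollar spent on line
```

The exact ratio works out to about 35, which the text rounds down to the more conservative “more than 30 times” claim.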

The more deliveries, the more efficient they become

But there’s a second reason to welcome–and not fear–an expansion of e-commerce from a transportation perspective. The efficiency of urban trucks is driven by “delivery density”–basically, how closely spaced a truck’s stops are. One of the industry’s key efficiency metrics is “stops per mile.” The more stops per mile, according to the Institute for Supply Management, the greater the efficiency and the lower the cost of delivery. As delivery volumes increase, delivery becomes progressively more efficient. In the last several years, thanks to increased volumes — coupled with computerized routing algorithms — UPS has increased its number of stops per mile: stops increased by 3.6 percent, but miles traveled increased by only about half as much, 1.9 percent. UPS estimates that higher stops per mile saved 20 million vehicle miles of travel. Or consider the experience of the U.S. Postal Service: since 2008, it has increased the number of packages it delivers by 700 million per year (up 21 percent) while its delivery fleet has decreased by 10,000 vehicles (about 5 percent).
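A quick sketch shows how those two growth rates translate into delivery density, and what base fleet mileage UPS’s 20-million-mile savings figure would imply under a simple proportionality assumption (the inference is ours, not UPS’s published arithmetic):

```python
stops_growth = 0.036  # UPS stops grew 3.6 percent over the period
miles_growth = 0.019  # miles traveled grew only 1.9 percent

# Stops per mile is stops divided by miles, so its growth is the
# ratio of the two growth factors:
stops_per_mile_growth = (1 + stops_growth) / (1 + miles_growth) - 1
# roughly 1.7 percent more stops per mile

# Back-of-envelope: if the miles saved equal base fleet miles times
# the gap between the two growth rates, UPS's reported 20 million
# saved miles implies a base of a bit over a billion fleet miles:
implied_base_miles = 20_000_000 / (stops_growth - miles_growth)
```

The direction of the arithmetic is the point: as long as stops grow faster than miles, each additional package delivered raises density and lowers the marginal miles per package.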

As e-commerce and delivery volumes grow, stop density will increase and freight transport will become more efficient.  Because Jet.com is a rival internet shopping site to Amazon.com, and not a trucking company, its growth means more packages and greater delivery density for UPS and Fedex, not another rival delivery service putting trucks on the street.

So, far from being a putative cause of worry about transportation system capacity–and, inevitably, a stalking horse for highway expansion projects in urban areas–the growth of e-commerce should be seen as another force that is likely to reduce total vehicle miles of travel, both by households (as they substitute on-line shopping for car travel) and as greater delivery density improves the efficiency of urban freight delivery. A study of shopping and travel habits in the United Kingdom showed that those who used on-line shopping reduced the total number of shopping trips they took, suggesting that package delivery stops substitute for personal shopping trips. The study concludes:

Crucially, having shopped online since the last shopping trip significantly reduces the likelihood of a physical shopping trip.

As David Levinson reports, data from detailed metropolitan-level travel surveys and the national American Time Use Survey show that time spent shopping has declined by about a third in the past decade. As Levinson concludes, “. . . our 20th century retail infrastructure and supporting transportation system of roads and parking is overbuilt for the 21st century last-mile delivery problems in an era with growing internet shopping.”

So the next time you see one of those white or brown package delivery trucks, think about how many car-based shopping trips it’s taking off the road.

 

Source:

William Wheaton, The IT-Energy Transportation Revolution: Implications for Urban Form

Department of Economics, Center for Real Estate, MIT May, 2019

 

Black Friday, Cyber-Monday and the myth of gridlock Tuesday

Far from increasing traffic congestion, more on-line shopping reduces it, by reducing personal shopping trips

Delivery trucks generate 30 times less travel than people traveling to stores to make the same purchases

The more deliveries they make, the more efficient delivery services become

The day after the nation celebrates its socially distanced “Zoom Thanksgiving,” we’ll look to see how the pandemic affects the traditional “Black Friday” shopping spree. Last year, it is estimated that online shoppers ordered billions of dollars’ worth of merchandise on this single day, and the expectation is that this will grow even further this year.

The steady growth of e-commerce has many people worrying that urban streets will be overwhelmed by Amazon, UPS and Fedex delivery trucks ferrying cardboard boxes from warehouses to homes.  One of these jeremiads was published by Quartz:  “Our Amazon addiction is clogging up our cities—and bikes might be the best solution.”  Benjamin Reider notes—correctly—that UPS and others are delivering an increasing volume of packages, and asserts—without any actual data—that truck deliveries are responsible for growing urban traffic congestion.

While there’s no question that it’s really irritating when there’s a UPS truck double-parked in front of you, it’s actually the case that, on balance, online shopping reduces traffic congestion. The simple reason: online shopping reduces the number of car trips to stores. Shoppers who buy online aren’t driving to stores, so more packages delivered by UPS, Fedex, and the USPS mean fewer cars on the road to the mall and local stores. And here’s the bonus: this trend benefits from increased scale. The more packages these companies deliver, the greater their delivery density–meaning that they travel fewer miles per package. So if we look at the whole picture, shifting to e-commerce actually reduces congestion.


                                               Delivering packages and reducing urban traffic congestion!

To the fleets of UPS, USPS and Fedex delivery trucks, we can now add tens of thousands of Amazon trucks. The e-commerce giant has even contracted for 100,000 electric vans from startup Rivian. The rise of e-commerce and attendant residential deliveries has led to predictions that urban streets will be choked to gridlock by delivery trucks. A recent article in Forbes predicted that package deliveries would triple in a few years, adding to growing traffic congestion in cities around the world. Added to this, the flood of on-line shopping triggered by the pandemic—there was a 30 percent increase in e-commerce sales in the second quarter—seems to mean we’re approaching delivery truck gridlock on city streets.

In our view, such fears are wildly overblown.  If anything they have the relationship between urban traffic patterns and e-commerce exactly backwards.  The evidence to date suggests that not only has the growth of e-commerce done nothing to fuel more urban truck trips, but on net, e-commerce coupled with package delivery is actually reducing total urban VMT and traffic congestion, as it cuts into the number and length of shopping trips that people take in urban areas. The first point is that, despite the rapid growth of e-commerce, truck traffic has been essentially flat.

Shopping on line substitutes for personal shopping trips and actually reduces traffic congestion

It actually seems likely that increased deliveries will reduce urban traffic congestion, for two reasons. First, in many cases, ordering on line substitutes for shopping trips. Customers who get goods delivered at home forego personal car shopping trips. And because the typical UPS delivery truck makes 120 or so deliveries a day, each delivery truck may be responsible for dozens of fewer car-based shopping trips. At least one study suggests that the shift to e-commerce may reduce total VMT and carbon emissions. And transportation scholars have noted a significant decrease in shopping trips and time spent shopping.

There are already signs that e-commerce is reducing the amount of travel associated with shopping.  The National Household Travel Survey, conducted in 2009 and 2017, shows a decrease in travel-related shopping.  The US Department of Transportation concludes:

In 2017 people made fewer everyday trips than previously. The decline in travel for shopping and running errands was primarily due to the increase in online shopping and home deliveries.

The decline in vehicle miles traveled per person per day was greatest for younger adults–the group that reports the most frequent use of on-line shopping. On-line shopping creates some travel for delivery, but reduces the number of consumer shopping trips. And there are vastly more consumers than delivery trucks, and each delivery truck makes many deliveries. Professor William Wheaton of MIT estimates that $100 spent on line generates about eight-tenths of a mile of vehicle travel for UPS delivery trucks, while the same amount of consumer spending at brick-and-mortar retailers generates about 28 miles of vehicle travel. This means that on-line shopping produces more than 30 times less vehicle travel than personal shopping.

Source: William Wheaton, MIT Center for Real Estate, 2019.

It’s likely that, just as more fuel efficient cars generated a “rebound” effect of more driving, that convenient and seemingly “free” delivery will prompt more frequent shopping.  But given the scale of the differences in VMT generated by e-commerce and auto shopping trips, there’s still plenty of room for more frequent shopping and still having vastly less traffic.

The more deliveries, the more efficient they become

But there’s a second reason to welcome–and not fear–an expansion of e-commerce from a transportation perspective. The efficiency of urban trucks is driven by “delivery density”–basically, how closely spaced a truck’s stops are. One of the industry’s key efficiency metrics is “stops per mile.” The more stops per mile, according to the Institute for Supply Management, the greater the efficiency and the lower the cost of delivery. As delivery volumes increase, delivery becomes progressively more efficient. In the last several years, thanks to increased volumes — coupled with computerized routing algorithms — UPS has increased its number of stops per mile: stops increased by 3.6 percent, but miles traveled increased by only about half as much, 1.9 percent. UPS estimates that higher stops per mile saved 20 million vehicle miles of travel. Or consider the experience of the U.S. Postal Service: since 2008, it has increased the number of packages it delivers by 700 million per year (up 21 percent) while its delivery fleet has decreased by 10,000 vehicles (about 5 percent). As e-commerce and delivery volumes grow, stop density will increase and freight transport will become more efficient.

So, far from being a putative cause of worry about transportation system capacity—and, inevitably, a stalking horse for highway expansion projects in urban areas—the growth of e-commerce should be seen as another force that is likely to reduce total vehicle miles of travel, both by households (as they substitute on-line shopping for car travel) and as greater delivery density improves the efficiency of urban freight delivery. If you don’t need a car for so many shopping trips, then owning just one car, rather than two, or going carless, becomes that much more attractive. A study of shopping and travel habits in the United Kingdom showed that those who used on-line shopping reduced the total number of shopping trips they took, suggesting that package delivery stops substitute for personal shopping trips. The study concludes:

Crucially, having shopped online since the last shopping trip significantly reduces the likelihood of a physical shopping trip.

As David Levinson reports, data from detailed metropolitan-level travel surveys and the national American Time Use Survey show that time spent shopping has declined by about a third in the past decade. As Levinson concludes, “. . . our 20th century retail infrastructure and supporting transportation system of roads and parking is overbuilt for the 21st century last-mile delivery problems in an era with growing internet shopping.”

So the next time you see one of those white or brown package delivery trucks, think about how many car-based shopping trips it’s taking off the road.

 

Source:

William Wheaton, The IT-Energy Transportation Revolution: Implications for Urban Form

Department of Economics, Center for Real Estate, MIT, May 2019

 

Does Cyber-Monday mean delivery gridlock Tuesday?

Today is, famously, “Cyber-Monday,” the day on which the nation’s consumers take to their web browsers and start clicking for holiday shopping in earnest. Last year, it is estimated that online shoppers ordered more than $3 billion worth of merchandise on this single day, and the expectation is that this will grow even further this year.

The steady growth of e-commerce has many people worrying that urban streets will be overwhelmed by UPS and Fedex delivery trucks ferrying cardboard boxes from warehouses to homes.  One of these jeremiads was published by Quartz: “Our Amazon addiction is clogging up our cities—and bikes might be the best solution.”  Benjamin Reider notes–correctly–that UPS and others are delivering an increasing volume of packages, and asserts–without any actual data–that truck deliveries are responsible for growing urban traffic congestion.

While there’s no question that it’s really irritating when there’s a UPS truck double-parked in front of you, it’s actually the case that, on balance, online shopping reduces traffic congestion.  The simple reason: online shopping reduces the number of car trips to stores.  Shoppers who buy online aren’t driving to stores, so more packages delivered by UPS, Fedex, and the USPS mean fewer cars on the road to the mall and local stores. And here’s the bonus: this trend benefits from increased scale.  The more packages these companies deliver, the greater their delivery density–meaning that they travel fewer miles per package. So if we look at the whole picture, shifting to e-commerce actually reduces congestion.

We’ve sifted through the data on urban truck transport and package delivery economics, and here are four key takeaways:

  • Urban truck traffic is flat to declining, even as Internet commerce has exploded.
  • More e-commerce will result in greater efficiency and less urban traffic as delivery density increases.
  • We likely are overbuilt for freight infrastructure in an e-commerce era.
  • Time-series data on urban freight movements suffer from series breaks that make long term trend comparisons unreliable.

Delivering packages and reducing urban traffic congestion! Credit: Jason Lawrence, Flickr

The rise of e-commerce and attendant residential deliveries has led to predictions that urban streets will be choked to gridlock by delivery trucks. A recent article in Forbes predicted that package deliveries would triple in a few years, adding to growing traffic congestion in cities around the world.

In our view, such fears are wildly overblown.  If anything, they have the relationship between urban traffic patterns and e-commerce exactly backwards.  The evidence to date suggests that not only has the growth of e-commerce done nothing to fuel more urban truck trips, but on net, e-commerce coupled with package delivery is actually reducing total urban VMT and traffic congestion, as it cuts into the number and length of shopping trips that people take in urban areas. The first point is that, despite the rapid growth of e-commerce, truck traffic has been essentially flat.

E-Commerce is increasing rapidly; Urban truck travel is flat

 

The period since 2007 coincides with the big increase in e-commerce in the U.S.  From 2007 to 2017, Amazon’s North American sales increased by a factor of 13, from $8 billion to $106 billion. Between 2007 and 2017, total e-commerce revenues in the United States tripled, from about $137 billion to about $448 billion, according to the U.S. Department of Commerce.  But over this same time period, according to US DOT data as tabulated by Brookings, truck traffic in urban areas increased only about 3 percent.  All this increase in e-commerce appears to have had very little net effect on urban truck traffic.
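To see how lopsided these growth rates are, one can annualize them. A quick sketch using just the figures quoted above:

```python
# Compare the growth rates cited in the text: U.S. e-commerce revenue vs.
# urban truck VMT, 2007-2017. All inputs are the figures quoted above.

ecommerce_2007, ecommerce_2017 = 137e9, 448e9   # e-commerce revenue, dollars
truck_vmt_growth = 0.03                         # urban truck VMT, total 10-yr growth

ecommerce_growth = ecommerce_2017 / ecommerce_2007 - 1
# Compound annual growth rate over the 10-year span
ecommerce_cagr = (ecommerce_2017 / ecommerce_2007) ** (1 / 10) - 1
truck_cagr = (1 + truck_vmt_growth) ** (1 / 10) - 1

print(f"E-commerce: {ecommerce_growth:.0%} total, {ecommerce_cagr:.1%}/yr")
print(f"Urban truck VMT: {truck_vmt_growth:.0%} total, {truck_cagr:.2%}/yr")
```

E-commerce grew more than 12 percent a year over the decade, while urban truck VMT grew only about a third of a percent a year.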

Does an increase in package deliveries mean increased urban traffic?

It actually seems likely that increased deliveries will reduce urban traffic congestion, for two reasons.  First, in many cases, ordering online substitutes for shopping trips.  Customers who get goods delivered at home forgo personal car shopping trips.  And because the typical UPS delivery truck makes 120 or so deliveries a day, each delivery truck may take dozens of car-based shopping trips off the road.  At least one study suggests that the shift to e-commerce may reduce total VMT and carbon emissions.  And transportation scholars have noted a significant decrease in shopping trips and time spent shopping.
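As a purely hypothetical back-of-envelope: only the 120-deliveries-per-day figure comes from the text; the substitution rate and trip length below are illustrative assumptions, not measured values.

```python
# Hypothetical trip-substitution arithmetic. Only deliveries_per_truck_day
# is from the text; the other parameters are assumed for illustration.

deliveries_per_truck_day = 120   # typical UPS route, per the text
substitution_rate = 0.25         # ASSUMED share of deliveries replacing a car trip
shopping_trip_miles = 7.0        # ASSUMED round-trip length of an avoided car trip

car_trips_avoided = deliveries_per_truck_day * substitution_rate
car_vmt_avoided = car_trips_avoided * shopping_trip_miles
print(f"One truck-day: ~{car_trips_avoided:.0f} car trips, "
      f"~{car_vmt_avoided:.0f} car miles avoided")
```

Even at a modest assumed substitution rate, a single truck-day plausibly offsets dozens of car trips; the point is the order of magnitude, not the precise numbers.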

But there’s a second reason to welcome–and not fear–an expansion of e-commerce from a transportation perspective.  The efficiency of urban trucks is driven by “delivery density”–basically, how closely spaced a truck’s stops are.  One of the industry’s key efficiency metrics is “stops per mile.”  The more stops per mile, according to the Institute for Supply Management, the greater the efficiency and the lower the cost of delivery.  As delivery volumes increase, delivery becomes progressively more efficient.  In the last several years, thanks to increased volumes coupled with computerized routing algorithms, UPS has increased its number of stops per mile–stops increased by 3.6 percent, but miles traveled increased by only about half as much, 1.9 percent.  UPS estimates that higher stops per mile saved 20 million vehicle miles of travel.  Or consider the experience of the U.S. Postal Service: since 2008, it has increased the number of packages it delivers by 700 million per year (up 21 percent) while its delivery fleet has decreased by 10,000 vehicles (about 5 percent).

As e-commerce and delivery volumes grow, stop density will increase and freight transport will become more efficient.  Because Jet.com is a rival internet shopping site to Amazon.com, and not a trucking company, its growth means more packages and greater delivery density for UPS and Fedex, not another rival delivery service putting trucks on the street.

So, far from being a cause for worry about transportation system capacity–and, inevitably, a stalking horse for highway expansion projects in urban areas–the growth of e-commerce should be seen as another force that is likely to reduce total vehicle miles of travel, both by households (as they substitute online shopping for car travel) and as greater delivery density improves the efficiency of urban freight delivery. A study of shopping and travel habits in the United Kingdom showed that those who shopped online reduced the total number of shopping trips they took, suggesting that package delivery stops substitute for personal shopping trips. The study concludes:

Crucially, having shopped online since the last shopping trip significantly reduces the likelihood of a physical shopping trip.

As David Levinson reports, data from detailed metropolitan-level travel surveys and the national American Time Use Survey show that time spent shopping has declined by about a third in the past decade.  As Levinson concludes, “. . . our 20th century retail infrastructure and supporting transportation system of roads and parking is overbuilt for the 21st century last-mile delivery problems in an era with growing internet shopping.”

So the next time you see one of those white or brown package delivery trucks, think about how many car-based shopping trips it’s taking off the road.

 

Does Cyber-Monday mean delivery gridlock Tuesday?

Yesterday was, famously, Cyber-Monday, the day on which the nation’s consumers took to their web browsers and started clicking for holiday shopping in earnest. Tech Crunch reports that e-commerce sales yesterday were estimated at $3.36 billion, coming on top of almost $5 billion in on-line sales on Thanksgiving and Black Friday.

The steady growth of e-commerce has many people worrying that urban streets will be overwhelmed by UPS and Fedex delivery trucks ferrying cardboard boxes from warehouses to homes.  The latest of these is an article published at Quartz:  “Our Amazon addiction is clogging up our cities—and bikes might be the best solution.”  Benjamin Reider notes–correctly–that UPS delivered some 14 billion packages this year, and asserts–without any actual data–that truck deliveries are responsible for growing urban traffic congestion.

While there’s no question that it’s really irritating when there’s a UPS truck double-parked in front of you, it’s actually the case that, on balance, on-line shopping reduces traffic congestion.  The simple reason: on-line shopping reduces the number of car trips to stores.  Shoppers who buy on-line aren’t driving to stores, so more packages delivered by UPS, Fedex, and the USPS mean fewer cars on the road to the mall and local stores. And here’s the bonus: this trend benefits from increased scale.  The more packages these companies deliver, the greater their delivery density–meaning that they travel fewer miles per package. So if we look at the whole picture, shifting to e-commerce actually reduces congestion.

We’ve sifted through the data on urban truck transport and package delivery economics, and here are four key takeaways:

  • Urban truck traffic is flat to declining, even as Internet commerce has exploded.
  • More e-commerce will result in greater efficiency and less urban traffic as delivery density increases.
  • We likely are overbuilt for freight infrastructure in an e-commerce era.
  • Time-series data on urban freight movements suffer from series breaks that make long term trend comparisons unreliable.

Delivering packages and reducing urban traffic congestion! Credit: Jason Lawrence, Flickr

Despite the rapid growth of e-commerce, has truck traffic actually been increasing?  Last year, the Brookings Institution’s Adie Tomer performed a significant public service by assembling several decades of US DOT data on vehicle miles traveled.  A significant weakness of US DOT’s website is that it mostly presents data for a single year at a time, which makes it really difficult to observe and analyze trends in the data.

Tomer’s work plots the US DOT data on urban travel by passenger cars, unit-trucks, and combination trucks.  He points to the growth of e-commerce, and the recent entry of Jet.com–which aims to be a challenger to Amazon’s dominance of web-based retailing.  Tomer speculates that growing e-commerce will lead to more and more delivery trucks crowding urban streets.

He marshals several decades of data on urban truck VMT to claim that urban truck traffic is up an eye-popping 800 percent since 1966.

Another way to see trucking’s urban trajectory is to view aggregate growth since the 1960s. While urban vehicle miles traveled for both passenger cars and trucks grew steadily between 1966 and 1990—in both cases, far surpassing urban population growth—urban trucking absolutely exploded thereafter, reaching almost 800 percent growth until the Great Recession led to reduced demand. That pattern coincided almost perfectly with the rise of e-commerce and the use of digital communications to manage shipping for logistics firms like UPS and FedEx and major private shippers like Walmart.

The post concludes by warning us that we need to provide for additional infrastructure for urban freight movement.

With new companies like Jet and continued growth in stalwarts like Amazon, we should expect e-commerce and urban trucking to keep growing. Those patterns bring some significant implications at all levels of government.

On the transportation side, freight investment will need to be targeted at pinch points and bottlenecks. Those specific sites of congestion deliver disproportionate costs to shippers, which get passed along to consumers, and create supply chain uncertainty

But the problems of doing time-series analysis with DOT’s VM-1 (vehicle miles traveled) data are not limited to the largely cosmetic problem of website layout.  The more serious problem is the significant series breaks that underlie the published data.  Over time, US DOT has had to make important changes to the way it defines urban and rural areas (as urban development has occurred) and has had to cope with changing data sources.  And, to be sure, DOT has tried to improve the accuracy of its estimates over time.  The cumulative result of these changes is that it is very difficult to make statistically valid statements about the change in truck traffic in cities.  (We’ve spelled out our concerns about the series break in the freight data in a technical appendix, below.)

Urban truck travel actually peaked in 2008, and has mostly been declining, except for the past year.

[Chart: Urban truck vehicle miles traveled]

In our view, we ought to heavily discount the published data, and not make comparisons that assume the pre-2006 data are comparable to the post-2006 data.  If we look only at the post-2006 data, a very different picture emerges. For the past six years–a period for which we have apparently comparable estimates, which appear not to be significantly affected by re-definitions of urban and rural areas–there is little evidence that urban truck traffic is increasing.  If anything, the data suggest that it is flat to decreasing.

The alarmist implication of the “800% growth” statistic is that urban traffic will be significantly worsened by growing e-commerce sales.  For example, the Brookings data prompted bloggers at SSTI to write “Urban truck traffic has boomed alongside the rise in e-commerce,” and to fret that “If the rapid growth in urban truck VMT is a result of increasing e-commerce deliveries, we are a long way from peak urban truck traffic.”

In our view, such fears are wildly overblown.  If anything, they have the relationship between urban traffic patterns and e-commerce exactly backwards.  The evidence to date suggests that not only has the growth of e-commerce done nothing to fuel more urban truck trips, but on net, e-commerce coupled with package delivery is actually reducing total urban VMT, as it cuts into the number and length of shopping trips that people take in urban areas.

E-Commerce is increasing rapidly; Urban truck travel is flat

[Chart: E-commerce revenues vs. urban truck VMT]

The period since 2007 coincides with the big increase in e-commerce in the U.S.  From 2007 to 2013, Amazon’s North American sales increased by a factor of 5, from $8 billion to $44 billion. Between 2007 and 2013, total e-commerce revenues in the United States nearly doubled, from about $137 billion to about $261 billion, according to the U.S. Department of Commerce.  But over this same time period, according to US DOT data as tabulated by Brookings, truck traffic in urban areas actually declined.  All this increase in e-commerce appears to have had no net effect on urban truck traffic.

Does an increase in package deliveries mean increased urban traffic?

It actually seems likely that increased deliveries will reduce urban traffic congestion, for two reasons.  First, in many cases, ordering online substitutes for shopping trips.  Customers who get goods delivered at home forgo personal car shopping trips.  And because the typical UPS delivery truck makes 120 or so deliveries a day, each delivery truck may take dozens of car-based shopping trips off the road.  At least one study suggests that the shift to e-commerce may reduce total VMT and carbon emissions.  And transportation scholars have noted a significant decrease in shopping trips and time spent shopping.

But there’s a second reason to welcome–and not fear–an expansion of e-commerce from a transportation perspective.  The efficiency of urban trucks is driven by “delivery density”–basically, how closely spaced a truck’s stops are.  One of the industry’s key efficiency metrics is “stops per mile.”  The more stops per mile, according to the Institute for Supply Management, the greater the efficiency and the lower the cost of delivery.  As delivery volumes increase, delivery becomes progressively more efficient.  In the last several years, thanks to increased volumes coupled with computerized routing algorithms, UPS has increased its number of stops per mile–stops increased by 3.6 percent, but miles traveled increased by only about half as much, 1.9 percent.  UPS estimates that higher stops per mile saved 20 million vehicle miles of travel.  Or consider the experience of the U.S. Postal Service: since 2008, it has increased the number of packages it delivers by 700 million per year (up 21 percent) while its delivery fleet has decreased by 10,000 vehicles (about 5 percent).

As e-commerce and delivery volumes grow, stop density will increase and freight transport will become more efficient.  Because Jet.com is a rival internet shopping site to Amazon.com, and not a trucking company, its growth means more packages and greater delivery density for UPS and Fedex, not another rival delivery service putting trucks on the street.

So, far from being a cause for worry about transportation system capacity–and, inevitably, a stalking horse for highway expansion projects in urban areas–the growth of e-commerce should be seen as another force that is likely to reduce total vehicle miles of travel, both by households (as they substitute on-line shopping for car travel) and as greater delivery density improves the efficiency of urban freight delivery.

As David Levinson reports, data from detailed metropolitan-level travel surveys and the national American Time Use Survey show that time spent shopping has declined by about a third in the past decade.  As Levinson concludes, “. . . our 20th century retail infrastructure and supporting transportation system of roads and parking is overbuilt for the 21st century last-mile delivery problems in an era with growing internet shopping.”

So the next time you see one of those white or brown package delivery trucks, think about how many car-based shopping trips it’s taking off the road.

Technical Appendix:  Urban Truck Data

We’re strongly of the opinion that it’s not appropriate to treat the pre-2006 and post-2007 truck freight data as a single series that represents the actual year-to-year growth in urban freight mileage.  There are good reasons to treat this as a “series break” and look separately at the two series.  The technical reasons behind this judgment are twofold.

Series Break 1:  Urbanized area boundaries

Tony Dutzik explored this issue last year in a post for the Frontier Group.  Briefly, a number of rural roads were re-classified as urban roads (reflecting changes in development patterns over time).  This has the effect of biasing upward later-year estimates of urban VMT when compared with previous years.  Some part of the apparent increase in “urban” VMT over the past decade has been a result of reclassifying formerly “rural” traffic as “urban”–not more urban traffic.

Series Break 2:  Vehicle classifications

US DOT has used different data and different definitions to classify vehicles pre- and post-2007.  Methodologically, USDOT migrated its vehicle classification system away from the one used in the now-discontinued Vehicle Inventory and Use Survey and substituted RL Polk data.  As a result of this shift in methodology, the number of truck miles on urban roads jumped almost 50 percent in one year, from about 102 billion miles in 2006 to about 150 billion miles in 2007.  In 2009, USDOT explained how it had changed its estimating procedures:

The data now on the website for 2000-2006 were estimated using a methodology developed in the late 1990s. FHWA recently developed a new methodology and used it for this year’s Highway Statistics. This methodology takes advantage of additional and improved information available beginning in 2007 when states were first required to report motorcycle data – before that time, the reporting was not mandatory and the data were missing for a few states. Also, the new methodology does not rely on data from the national vehicle inventory and use survey which provided critical data for the original methodology but was not collected in 2007 as planned.

In April 2011, FHWA recalculated the 2000-2008 data along with the 2009 data to estimate trends. However, after further review and consideration, the agency determined that it is more reliable to retain the original 2000-2006 estimates because the information available for those years does not fully meet the requirements of the new methodology. Thus, the original 2000-2006 estimates are now used, whereas the 2007-2009 data are still based on the new methodology.
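The size of the artifact this splice creates is easy to check, using the 102 billion (2006, old method) and 150 billion (2007, new method) figures cited above:

```python
# Why splicing two incompatible series overstates growth: the entire jump
# below occurs across the 2006/2007 methodology change, not on the road.
# The two VMT figures are the ones quoted in the text.

vmt_2006_old_method = 102e9   # urban truck miles, old classification method
vmt_2007_new_method = 150e9   # urban truck miles, new (RL Polk-based) method

apparent_jump = vmt_2007_new_method / vmt_2006_old_method - 1
print(f"Apparent one-year growth across the break: {apparent_jump:.0%}")
```

A 47 percent “one-year increase” is pure methodology; any long-run trend computed across the break inherits that jump as if it were real traffic growth.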

The author gratefully acknowledges Adie Tomer’s willingness to share the Excel spreadsheets upon which his analysis was based.

Growing e-commerce means less urban traffic

The takeaway:

  • Urban truck traffic is flat to declining, even as Internet commerce has exploded.
  • More e-commerce will result in greater efficiency and less urban traffic as delivery density increases.
  • We likely are overbuilt for freight infrastructure in an e-commerce era.
  • Time-series data on urban freight movements suffer from series breaks that make long term trend comparisons unreliable.

Delivering packages and reducing urban traffic congestion! Credit: Jason Lawrence, Flickr

Over at the Brookings Institution, Adie Tomer has performed a significant public service by assembling several decades of US DOT data on vehicle miles traveled.  A significant weakness of US DOT’s website is that it mostly presents data for a single year at a time, which makes it really difficult to observe and analyze trends in the data.

Tomer’s post plots the US DOT data on urban travel by passenger cars, unit-trucks, and combination trucks.  He points to the growth of e-commerce, and the recent entry of Jet.com–which aims to be a challenger to Amazon’s dominance of web-based retailing.  Tomer speculates that growing e-commerce will lead to more and more delivery trucks crowding urban streets.

He marshals several decades of data on urban truck VMT to claim that urban truck traffic is up an eye-popping 800 percent since 1966.

Another way to see trucking’s urban trajectory is to view aggregate growth since the 1960s. While urban vehicle miles traveled for both passenger cars and trucks grew steadily between 1966 and 1990—in both cases, far surpassing urban population growth—urban trucking absolutely exploded thereafter, reaching almost 800 percent growth until the Great Recession led to reduced demand. That pattern coincided almost perfectly with the rise of e-commerce and the use of digital communications to manage shipping for logistics firms like UPS and FedEx and major private shippers like Walmart.

The post concludes by warning us that we need to provide for additional infrastructure for urban freight movement.

With new companies like Jet and continued growth in stalwarts like Amazon, we should expect e-commerce and urban trucking to keep growing. Those patterns bring some significant implications at all levels of government.

On the transportation side, freight investment will need to be targeted at pinch points and bottlenecks. Those specific sites of congestion deliver disproportionate costs to shippers, which get passed along to consumers, and create supply chain uncertainty

But the problems of doing time-series analysis with DOT’s VM-1 (vehicle miles traveled) data are not limited to the largely cosmetic problem of website layout.  The more serious problem is the significant series breaks that underlie the published data.  Over time, US DOT has had to make important changes to the way it defines urban and rural areas (as urban development has occurred) and has had to cope with changing data sources.  And, to be sure, DOT has tried to improve the accuracy of its estimates over time.  The cumulative result of these changes is that it is very difficult to make statistically valid statements about the change in truck traffic in cities.  (We’ve spelled out our concerns about the series break in the freight data in a technical appendix, below.)

Urban truck travel actually peaked in 2008, and has mostly been declining, except for the past year.

[Chart: Urban truck vehicle miles traveled]

In our view, we ought to heavily discount the published data, and not make comparisons that assume the pre-2006 data are comparable to the post-2006 data.  If we look only at the post-2006 data, a very different picture emerges. For the past six years–a period for which we have apparently comparable estimates, which appear not to be significantly affected by re-definitions of urban and rural areas–there is little evidence that urban truck traffic is increasing.  If anything, the data suggest that it is flat to decreasing.

The alarmist implication of the “800% growth” statistic is that urban traffic will be significantly worsened by growing e-commerce sales.  For example, the Brookings data prompted bloggers at SSTI to write “Urban truck traffic has boomed alongside the rise in e-commerce,” and to fret that “If the rapid growth in urban truck VMT is a result of increasing e-commerce deliveries, we are a long way from peak urban truck traffic.”

In our view, such fears are wildly overblown.  If anything, they have the relationship between urban traffic patterns and e-commerce exactly backwards.  The evidence to date suggests that not only has the growth of e-commerce done nothing to fuel more urban truck trips, but on net, e-commerce coupled with package delivery is actually reducing total urban VMT, as it cuts into the number and length of shopping trips that people take in urban areas.

E-Commerce is increasing rapidly; Urban truck travel is flat

[Chart: E-commerce revenues vs. urban truck VMT]

The period since 2007 coincides with the big increase in e-commerce in the U.S.  From 2007 to 2013, Amazon’s North American sales increased by a factor of 5, from $8 billion to $44 billion. Between 2007 and 2013, total e-commerce revenues in the United States nearly doubled, from about $137 billion to about $261 billion, according to the U.S. Department of Commerce.  But over this same time period, according to US DOT data as tabulated by Brookings, truck traffic in urban areas actually declined.  All this increase in e-commerce appears to have had no net effect on urban truck traffic.

Does an increase in package deliveries mean increased urban traffic?

It actually seems likely that increased deliveries will reduce urban traffic congestion, for two reasons.  First, in many cases, ordering online substitutes for shopping trips.  Customers who get goods delivered at home forgo personal car shopping trips.  And because the typical UPS delivery truck makes 120 or so deliveries a day, each delivery truck may take dozens of car-based shopping trips off the road.  At least one study suggests that the shift to e-commerce may reduce total VMT and carbon emissions.  And transportation scholars have noted a significant decrease in shopping trips and time spent shopping.

But there’s a second reason to welcome–and not fear–an expansion of e-commerce from a transportation perspective.  The efficiency of urban trucks is driven by “delivery density”–basically, how closely spaced a truck’s stops are.  One of the industry’s key efficiency metrics is “stops per mile.”  The more stops per mile, according to the Institute for Supply Management, the greater the efficiency and the lower the cost of delivery.  As delivery volumes increase, delivery becomes progressively more efficient.  In the last several years, thanks to increased volumes coupled with computerized routing algorithms, UPS has increased its number of stops per mile–stops increased by 3.6 percent, but miles traveled increased by only about half as much, 1.9 percent.  UPS estimates that higher stops per mile saved 20 million vehicle miles of travel.  Or consider the experience of the U.S. Postal Service: since 2008, it has increased the number of packages it delivers by 700 million per year (up 21 percent) while its delivery fleet has decreased by 10,000 vehicles (about 5 percent).

As e-commerce and delivery volumes grow, stop density will increase and freight transport will become more efficient.  Because Jet.com is a rival internet shopping site to Amazon.com, and not a trucking company, its growth means more packages and greater delivery density for UPS and Fedex, not another rival delivery service putting trucks on the street.

So, far from being a cause for worry about transportation system capacity–and, inevitably, a stalking horse for highway expansion projects in urban areas–the growth of e-commerce should be seen as another force that is likely to reduce total vehicle miles of travel, both by households (as they substitute on-line shopping for car travel) and as greater delivery density improves the efficiency of urban freight delivery.

As David Levinson reports, data from detailed metropolitan-level travel surveys and the national American Time Use Survey show that time spent shopping has declined by about a third in the past decade.  As Levinson concludes, “. . . our 20th century retail infrastructure and supporting transportation system of roads and parking is overbuilt for the 21st century last-mile delivery problems in an era with growing internet shopping.”

So the next time you see one of those white or brown package delivery trucks, think about how many car-based shopping trips it’s taking off the road.

Technical Appendix:  Urban Truck Data

We’re strongly of the opinion that it’s not appropriate to treat the pre-2006 and post-2007 truck freight data as a single series that represents the actual year-to-year growth in urban freight mileage.  There are good reasons to treat this as a “series break” and look separately at the two series.  The technical reasons behind this judgment are twofold.

Series Break 1:  Urbanized area boundaries

Tony Dutzik explored this issue last year in a post for the Frontier Group.  Briefly, a number of rural roads were re-classified as urban roads (reflecting changes in development patterns over time).  This has the effect of biasing upward later-year estimates of urban VMT when compared with previous years.  Some part of the apparent increase in “urban” VMT over the past decade has been a result of reclassifying formerly “rural” traffic as “urban”–not more urban traffic.

Series Break 2:  Vehicle classifications

US DOT has used different data and different definitions to classify vehicles pre- and post-2007.  Methodologically, USDOT migrated its vehicle classification system away from the one used in the now-discontinued Vehicle Inventory and Use Survey and substituted RL Polk data.  As a result of this shift in methodology, the number of truck miles on urban roads jumped almost 50 percent in one year, from about 102 billion miles in 2006 to about 150 billion miles in 2007.  In 2009, USDOT explained how it had changed its estimating procedures:

The data now on the website for 2000-2006 were estimated using a methodology developed in the late 1990s. FHWA recently developed a new methodology and used it for this year’s Highway Statistics. This methodology takes advantage of additional and improved information available beginning in 2007 when states were first required to report motorcycle data – before that time, the reporting was not mandatory and the data were missing for a few states. Also, the new methodology does not rely on data from the national vehicle inventory and use survey which provided critical data for the original methodology but was not collected in 2007 as planned.

In April 2011, FHWA recalculated the 2000-2008 data along with the 2009 data to estimate trends. However, after further review and consideration, the agency determined that it is more reliable to retain the original 2000-2006 estimates because the information available for those years does not fully meet the requirements of the new methodology. Thus, the original 2000-2006 estimates are now used, whereas the 2007-2009 data are still based on the new methodology.
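To make the series-break arithmetic concrete, here’s a minimal sketch of how stitching the two series together overstates growth. Only the 2006 and 2007 figures come from the text above; the other years are invented for illustration.

```python
# Only the 2006 and 2007 figures come from the text; the rest are
# hypothetical urban truck VMT values (billions of miles) for illustration.
urban_truck_vmt = {
    2004: 98.0, 2005: 100.0, 2006: 102.0,   # old (VIUS-based) methodology
    2007: 150.0, 2008: 152.0, 2009: 153.0,  # new (R.L. Polk-based) methodology
}

def growth(series, y0, y1):
    """Percent change in a series from year y0 to year y1."""
    return 100.0 * (series[y1] - series[y0]) / series[y0]

# Treated as a single series, the 2006->2007 "growth" is a methodological
# artifact, not real traffic: roughly a 47 percent one-year jump.
apparent_jump = growth(urban_truck_vmt, 2006, 2007)

# Respecting the series break means computing growth only within
# each methodology's own span of years.
old_series_growth = growth(urban_truck_vmt, 2004, 2006)  # pre-break trend
new_series_growth = growth(urban_truck_vmt, 2007, 2009)  # post-break trend
```

Any apples-to-apples trend statement should cite the within-series growth rates; the 47 percent jump belongs to neither series.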

The author gratefully acknowledges Adie Tomer’s willingness to share the Excel spreadsheets upon which his analysis was based.

The Dow of Cities

The Dow Jones Industrial may be down, but the Dow of Cities is rising

The daily business news is obsessed with the price of stocks. Widely reported indicators like the Dow Jones Industrial Average gauge the overall health of the US economy by how much, on any given day (or hour, or minute), investors are willing to pay for a bundle of stocks that represent the ownership of some of the nation’s biggest businesses. After peaking in January, investors have become decidedly skittish and pessimistic about the US economy, as evidenced by wild daily gyrations and an overall fall of almost 10 percent in the Dow Jones Industrials (DJI).

At City Observatory, we’ve applied the same idea–a broad market index of prices–to America’s cities. We’ve developed an indicator we call “The Dow of Cities.”  Like the DJI, we look at the performance of a bundle of asset prices, in this case, the market values of homes in the nation’s densest urban neighborhoods.  And because we’re focused on cities, we compare how prices for houses in cities compare with the price of houses in the more outlying portions of metro areas.

Here’s the simple number:  since 2000, home prices in city centers have outperformed those in suburbs by 50 percent. In graphic terms, it looks like this:

[Chart: Fitch home price indexes for city centers and successively more suburban rings of housing]

The data were compiled by Fitch–the investment rating agency–in a report released with the announcement that “U.S. Demand Pendulum Swinging Back to City Centers.”  What the data show is that the dark blue line–which represents housing in city centers–consistently outpaces the other lines, which represent increasingly suburban rings of housing. The premium that the most urban houses command over the rest of the metro housing stock reflects the growing market value Americans attach to urban living.

If you care about cities, and you’re looking for definitive evidence of the market’s verdict on urbanism—this is it.  But we are also resigned to the fact that we are geeks, and stuff that gets our blood racing leaves most people cold.  So we’re groping for an analogy: the most convenient one is the stock market.

Imagine a CNN business reporter saying:

“In the market today, city centers were up strongly to a new high”

Or a Wall Street Journal headline:

“A bull market for city centers”

That’s the news here. Just as with private companies, this price index is a great indicator of market performance. Imagine for a moment that you were the CEO of Widgets, Inc., a publicly traded company. Every day, you’d be getting feedback from the market on how well you were doing, and on investors’ expectations for your company’s future.  If your stock price went up, it would be a good indication that you were doing better, and that expectations were rising for future performance. Especially if you had a sustained rise in your stock price, and if your company were regularly outperforming both other companies in the widget industry and the overall stock market. The reason the investment world is gaga over Warren Buffett is pretty much that he’s been able to do just that with the portfolio of companies he’s assembled under the Berkshire Hathaway banner.

Wouldn’t it be great if we had the same kind of clear cut financial market style indicator on the health and prospects of our nation’s center cities? Wouldn’t it be useful if we could show in a stark and quantitative way how city centers are performing relative to suburbs? That, in essence, is what the Fitch data shows. Fitch’s analysts looked at 25 years worth of zip code level home price data in 50 of the nation’s largest metropolitan areas to track how well city centers performed compared to surrounding neighborhoods and suburbs. They divided zip codes within metropolitan areas into four groups based on their proximity to the city center.
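Though Fitch’s exact methodology isn’t public, the basic mechanics of this kind of index can be sketched in a few lines. Everything below (the zip codes, distances, and prices) is hypothetical, invented purely to illustrate the computation: bucket zip codes by distance to the center, index each bucket’s median price to a base year, and compare the buckets.

```python
from statistics import median

# Hypothetical data: (zip_code, miles_to_center, {year: median_home_price})
zips = [
    ("00001", 1.0,  {2000: 200_000, 2015: 390_000}),
    ("00002", 2.5,  {2000: 180_000, 2015: 340_000}),
    ("00003", 12.0, {2000: 220_000, 2015: 290_000}),
    ("00004", 18.0, {2000: 210_000, 2015: 270_000}),
]

def price_index(bucket, year, base_year=2000):
    """Median price across a bucket of zips, indexed to 100 in the base year."""
    current = median(prices[year] for _, _, prices in bucket)
    base = median(prices[base_year] for _, _, prices in bucket)
    return 100.0 * current / base

# Fitch used four proximity rings; collapsed here to "center" and "outer".
center = [z for z in zips if z[1] <= 5.0]
outer = [z for z in zips if z[1] > 5.0]

# The "Dow of Cities" style signal: how much the center outperforms the edge.
center_premium = price_index(center, 2015) / price_index(outer, 2015)
```

With these made-up numbers, the 2015 center index is roughly 192 versus 130 for the outer ring, a premium ratio of about 1.5; the real signal is the direction and persistence of that ratio over time.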

You can’t literally buy stock in a city, but buying a house is the closest thing imaginable.  The price a buyer is willing to pay for a home in a particular city or neighborhood is a reflection both of the current value of that location, and the buyer’s expectations of the future character and performance  of the neighborhood and city.  Add up all the home values in the city, and you’ve got an indicator of the market for the city as a whole.

This Fitch chart is, in effect, a kind of Dow Jones Index for the performance of the nation’s center cities.  It clearly shows an ever-wider edge for city center housing prices over more suburban, outlying locations across the course of the last housing cycle—emerging before the big run-up in housing prices, continuing through the housing bubble, and growing during the bust and recovery.  And this isn’t a short-term aberration or a recession artifact: the Fitch data show the trend emerging in the late 1990s and growing steadily over time.

While there’s a growing recognition that cities are back, in some quarters there’s denial.  The truly great thing about this measure is that it definitively puts the lie to the claims of perennial city nay-sayers like Joel Kotkin that the overall growth or size of suburbs is somehow a manifestation of their revealed economic superiority.  In economic terms, bigger doesn’t necessarily mean better.  In the economic world, market prices, and particularly changes in relative market prices, are the best indicator of what’s hot and what’s not.  The new Fitch analysis makes it abundantly clear that cities are hot, and suburbs are not.

The reason, of course, is that housing demand can change (and is changing) much faster than supply—which is why prices are rising so much. Rising prices are both a positive indicator of the value consumers place on city center living, and a reminder that, as we’ve said many times at City Observatory, we’re experiencing a shortage of cities. And the rising relative prices for city locations are the market’s way of saying “we want more housing in cities” and “we want more cities.” While the strength in the housing sector has been an urban-focused boom in new rental apartments, the fact is that supply isn’t growing rapidly enough.  We aren’t creating new San Franciscos, and new dense, walkable, transit-served neighborhoods in other cities, as fast as the demand for urban living is increasing—and that means that prices are continuing to rise.

 


The war of words: rhetoric and the city

Over at Belt Magazine, editor Anne Trubek is fed up with the overuse of planning cliches in writing about cities.  She’s asking, nay demanding, that everyone stop using ten words:

walkability
liveability
placemaking
civic engagement
sustainability
smart growth
mixed-use
accessibility
adaptive reuse
gentrification

She’s put her finger on something.  These words are used, and over-used, and sometimes abused.  We share Anne’s pain–especially when it comes to the vague and elastic way in which the term “gentrification” is invoked in all kinds of different contexts.  It’s natural to be frustrated that a complex, multi-faceted concept can’t be boiled down to a single word that can be used universally, and with precision, to mean the same thing to everyone who encounters it.  And it’s inevitable that some words will become, in sequence, fashionable and popular, and then trite and shop-worn.  To be sure, when we turn dynamic verbs into prosaic nouns (walk to walkability), we almost automatically make ourselves more pedantic and boring.

Even so, we must politely, but firmly disagree that we ought to stop using these words.

Why?  In a world where 140-character-at-a-time communication is increasingly the norm, having a simple, evocative way of stating your case is imperative–even if it isn’t as precise and nuanced as it might be, and even if some people will occasionally or often misuse the word.

Words have weight and meaning:  We need to use them well and wisely, and back them up with powerful illustrations and compelling stories, and where possible, data.

Let’s take walkability.  It’s a passive, noun-ified mouthful.  But it captures a key notion in 11 precious characters.  It’s intuitive and resonates with most people.  We can illustrate it with images and stories, and thanks to our friends at Walk Score, we can measure it (roughly and imperfectly).  By giving it a name, illustrating it with pictures, and measuring it with data, we can raise its profile in the debates and discussions about cities.

We’re fully aware of the consequences of the sloppy use of terms.  Take gentrification: critics of the process rightly assail instances in which the entire population of a neighborhood is dislocated by the in-migration of the wealthy (like, say, SoHo in Manhattan).  But then people apply that same term any time new development occurs in a previously high-poverty neighborhood.  As we pointed out, Governing employed a definition of gentrification so sweeping that it classified neighborhoods with increasing poverty rates as “gentrifying” because of small increases in property values and educational attainment rates.

But the problem is not the words: it’s how they’re defined and used.  Banning the words serves no useful purpose, and perhaps sets back the conversation, by eliminating some signposts (and, frankly, bumper stickers) that can succinctly raise attention and start conversations.  In many urban policy debates, banning these words wouldn’t so much enrich the conversation as cede the rhetorical advantage to those who use other words that are anything but benign and helpful.

The trouble is that urbanists (or, as Anne might have us say, “advocates for building great cities and interesting diverse neighborhoods where people can easily bike and walk to schools, parks, and shops”) are engaged in extended conversations and frequently adversarial debates with others–road builders, neighborhood groups, developers, mayors, city councils and legislators, and the general public–about hard policy decisions.  In many cases, it’s a war of words.  And banning just these ten words would be the equivalent of unilateral rhetorical disarmament.

As we pointed out in our essay on “Rules of Thumb” in transportation planning, there is a conventional wisdom about how we design roads that has subtle and profound biases.  Words like “level of service” and “functionally obsolete” have embedded, but largely hidden value judgments that greatly influence policy decisions.

Consider how the media has come to use the term “accidents” rather than “crashes” to describe the death and carnage that cars inflict.

We call subsidized, socialized car storage in the public right of way “free parking,” ignoring or minimizing the substantial costs it imposes on everyone.

Highway planners talk about enhancing “mobility”–by which they mean making cars go faster–which paradoxically, thanks to induced demand and sprawl, often causes a decline in accessibility: the ability to reach destinations, especially by means other than a private automobile.

And occasionally, we cripple our own advocacy by using jargony, non-euphonious terms that obscure our message.  For example, we use the ugly, unwieldy term “transit-oriented development” to describe neighborhoods and housing where people can easily walk and take transit to many common destinations.  (Please, somebody, come up with a better moniker for this!)  MobilityLab tells us of efforts to re-brand the mouthful “transportation demand management” as “transportation options” in hopes of getting greater acceptance and more funding.

So we should continue to use all ten words on Anne’s proposed banned word list.  But we should take pains to explain them, illustrate them, and where possible measure them with data, so that they have a depth of meaning that will add to the discussion.  And in addition to quantifying, we can use pictures to show people what some of these seemingly lifeless, technical terms mean in the real world.  Take GranolaShotgun’s compelling photo essay on infill development, showing how denser, but still small-scale, new residential development in cities can enliven streetscapes pockmarked with vacant lots, parking, and auto-oriented commercial uses.

But imperfect as they may be, words have value and convey meaning.  So rather than ban them, we should nurture and polish them, using them with due care; there are important conversations ahead.

 

The edifice complex and our infrastructure problems

As Robert Caro chronicled in his riveting biography “The Power Broker,” the great builder Robert Moses had a foolproof strategy for getting new highways approved.  He’d take a little bit of money and get the project started, driving stakes in the ground and manufacturing expectations about future development opportunities.  Then he’d dare the Legislature not to give him the money to finish the project.  They invariably did.

The political allure of building big projects–especially bridges and highways–continues to this day.  Call it the “Edifice Complex.”  With a bonanza of jobs and contract dollars, there is a wealth of constituents who support more spending, plus the prospect of a prominent ribbon cutting and tangible evidence of your efforts to reduce congestion and speed traffic. But as Bent Flyvbjerg has documented, the construction of megaprojects–ranging from Boston’s Big Dig, to Seattle’s Bertha boring machine, to San Francisco’s expensive new Bay Bridge–is a story of repeated and substantial cost overruns.  Megaprojects have their own pathology: they are the product of excessive optimism, over-predicted revenues and benefits, and consistently under-estimated costs and risks.  Public officials routinely engage in “strategic misrepresentation”–Flyvbjerg’s polite academic term for “lying.”  And costs predictably average 30 percent above budget.

Build now, figure out how to pay later

In many respects, one current project, the replacement of New York’s Tappan Zee Bridge, symbolizes much of what’s wrong about the way we try to tackle infrastructure problems in the US.

The Tappan Zee construction site afforded a great backdrop for a photo op with the President. (WNYC).

 

The Tappan Zee Bridge spans the Hudson about 25 miles north of New York City, connecting suburban Westchester County with Rockland County.  The original bridge, built in 1955, carries about 138,000 vehicles per day.  A $3.9 billion replacement project is now underway, which will build a new and larger freeway bridge with eight lanes (and space for more).  According to the New York Thruway Authority, the state agency charged with building the bridge, the Tappan Zee is the largest infrastructure project underway in the country right now.

But even though construction on the replacement bridge is now 25 percent complete, the agency started work without a complete financial plan explaining how it will be paid for.  The Thruway Authority has blocked requests from local newspapers to release financial data and documents.  From what is known, it’s likely that the new bridge, once completed, will require a doubling of one-way tolls from the current level of $5 per vehicle to $10 or more.  Like many highways around the country, traffic growth on the Tappan Zee Bridge is stagnant, having peaked in 2004.  The danger is that high tolls could further depress traffic across the bridge and produce a financial death spiral, as ever-higher tolls are needed to make up for lower traffic.  Since no financial plans or toll revenue forecasts have been released to the public, it’s impossible to say how likely this is.  Nonetheless, the Thruway Authority has issued upwards of a billion dollars in debt and negotiated a TIFIA loan from the US Department of Transportation.  And because the Tappan Zee is the largest source of toll revenue for the Thruway Authority, shortfalls in expected revenue from this project are likely to affect the financial condition of other state toll roads.

Project development processes almost invariably focus on the most expensive build alternatives.  Early on, engineering estimates suggested that the existing bridge could be renovated at a cost about 40 percent less than building a new highway bridge (Philip Mark Plotch:  Politics Across the Hudson:  The Tappan-Zee Megaproject, Rutgers University Press, 2015).  Nonetheless the state moved ahead with a more expensive full replacement alternative (albeit after stripping out a long-sought transit component to save costs).

The way we design, select, and pay for big highway projects tends automatically to generate bloated projects. The reason: the cost to beneficiaries of pursuing big projects is low or even zero.  Big projects funded by state or federal gas taxes (or any statewide or regional funding source) generally don’t require any special contribution from the local community or the properties that will benefit.  With essentially no cost and big local returns, it’s no surprise that local officials clamor for big highway projects.  In short, our method for selecting and funding highway projects encourages communities to pursue the biggest, most expensive project they can get, because other people will be paying for it.  “I’ll have the biggest solution someone else will pay for.”

Questionable finances, but great politics?

Despite the project’s shaky finances, demonstrable risks, and questionable utility, it is widely seen as a substantial political achievement for Governor Andrew Cuomo.  The New York Times described the Tappan Zee project, along with a proposed renovation of LaGuardia Airport’s terminals as the signature accomplishments of Cuomo’s administration.  In the eyes of the political observers and academics the Times quoted, the Governor would get credit for building things.  One lauded Cuomo for “taking a page from the Robert Moses playbook” and another said “You get credit for things you build, not things you maintain.”

But is it either good government or smart transportation policy?  As WNYC’s Andrea Bernstein pointed out, Governor Cuomo unilaterally decided to exclude transit options from the project, and has been anything but transparent about how the bridge will be financed.  The state Thruway Authority blocked release of a federal report on bridge financing in which the US Department of Transportation described the state’s financial plan as “hypothetical, misleading and inaccurate.”

And there are good reasons to question the utility of the proposed project.  As Streetsblog’s Stephen Miller has said, Governor Cuomo is building a legacy for the 1950s.  A new eight-lane freeway bridge–with no provisions for dedicated transit service, and room for future widening–is a project from another era.

The allure of megaprojects vs. the tedium of maintenance

The Tappan Zee Bridge is just the biggest example of the edifice complex at work.  The temptation to spend money on shiny new projects and to defer or under-fund maintenance has an irresistible political logic.  New projects provide showy ribbon-cutting opportunities that make great campaign fodder, while the costs of debt service and deferred maintenance are largely invisible and get passed on to your political successors.

Short-changing maintenance, in turn, becomes fodder for a kind of bait-and-switch tactic in which politicians point to rough roads, sub-standard bridges, and maintenance backlogs as a justification for new funding–and then, once new revenues are approved, shift the money to pay for new projects.  Examples of this kind of politically expedient thinking abound: Louisiana, for one, decided to raid its maintenance funds for $21.6 million per year, for the next 28 years, to fund new highway construction.

Megaprojects suck the life out of state transportation budgets.  In Connecticut, a single megaproject is consuming more than 90 percent of available state highway funds, even as the state faces a significant backlog of maintenance on overburdened highways and rail lines.

In “Overpasses:  A Love Story,” Politico’s Michael Grunwald chronicles Wisconsin DOT’s continuing plans to build new and wider highways while deferring maintenance.  Despite the fact that 70 percent of the state’s highways are in fair or poor condition, Wisconsin spends more money on new projects than on maintenance.  And in turn, the chronic fiscal crisis of highways becomes an excuse to underfund transit spending.

Growing reliance on debt financing for new capital projects, coupled with stagnant growth in gas tax revenues (largely a product of absolute declines and slower growth in driving), means that maintenance budgets are already being steadily squeezed.  Prior to adopting its latest transportation spending package (funded by a 7-cent-per-gallon increase in the state gas tax), Washington State was on track to spend nearly 70 percent of its state gas tax revenue retiring debt for earlier projects.

The “Git ‘er done” mentality impresses people in some quarters, particularly when it’s their projects that are advanced.  But in an era of scarce resources, diminished driving, and deferred maintenance, the reckless borrowing and optimistic projections that fuel big spending on these megaprojects come with huge opportunity costs.  Money spent on over-built and under-used highways and bridges means other needed projects, including transit, don’t get built. And while politicians today are still executing the classic tactics from the Robert Moses playbook for pushing megaprojects, the old master-builder left no notes on winning strategies for paying ongoing maintenance costs.  The money spent on a few prominent megaprojects isn’t available to pay for maintenance, and the debt service incurred for new construction today will burden transportation finance for decades to come.  That’s frequently the real legacy of the edifice complex.

 

The Week Observed, August 21, 2015

What City Observatory did this week

1.  The suburbs: where the rich ride transit.  In many cities, transit ridership is dominated by a transit-dependent population: people who can afford to own private cars don’t use the transit system.  But in some places, transit is a mode of choice for higher-income commuters. Daniel Kay Hertz mines Census data to identify and map high-income neighborhoods where transit use rates exceed the regional average.

2. The Edifice Complex & Our Infrastructure Problems.  Joe Cortright looks at the political incentives that lead to the pursuit of megaprojects and their implications for transportation finance.  Around the country, leaders are dusting off the Robert Moses playbook for build-now, pay-later highway projects that shortchange other priorities, including transit and maintenance.

3.  A War of Words. Belt Magazine editor Anne Trubek is tired of a number of words in the common vocabulary of urbanism, and has called for banning, among others, “walkability,” “livability,” and “placemaking.”  Joe Cortright disagrees, pointing out that these terms have a powerful impact and can be illustrated and measured in ways that help change the conversation about cities.

4.  The Dow of Cities. If there were a financial-market style indicator of the health of cities, it would be something very much like the ratio of city to suburban house prices that’s been constructed by Fitch, the investment rating firm.  Joe Cortright examines why the big run-up in city home values compared to the suburbs since 2000 is the most powerful evidence yet that the market is turning decisively to city centers.  The high and rising price of city centers clearly signals the “shortage of cities” that needs a new policy response.

The week’s must reads

1. Race Wealth Gap not solved by education–at least not when you’re late to the housing market and there’s an epic bust.  Monday’s New York Times describes the results of a new St. Louis Federal Reserve Bank study of wealth disparities among racial and ethnic groups.    Black and Hispanic college degree holders saw their wealth decline between 1992 and 2013, according to the study, while white and Asian college graduates saw increases.  A key factor seems to be the housing market, which accounts for a bigger share of wealth for Hispanics and blacks; these two groups also saw proportionately larger declines in their housing wealth.

2.  Housing Vouchers are the subject of two great posts.  At the Brookings Institution, Elizabeth Kneebone and Natalie Holmes look at the neighborhood patterns of housing voucher use.  They find that while voucher users tend to live in relatively poor neighborhoods, they are less likely than residents of public housing to live in neighborhoods of concentrated poverty.  The Urban Institute has constructed a terrific visualization of the distribution of housing vouchers by income level, and compares it to the distribution of benefits from the home mortgage interest and property tax deductions.  Voucher benefits go disproportionately to the lowest 15 percent of the income distribution, and have a measurable, if quite modest, effect in ameliorating income inequality.

New knowledge

1.  In a study of migration patterns in the  UK published in Urban Studies, Lance Freeman and his colleagues find little  evidence that gentrification is associated with increased rates of out-migration from poor neighborhoods.  In our opinion, in describing this piece CityLab downplayed the import of this research:  they emphasized the difficulty of measuring gentrification and said that the new Freeman study provided “mixed” evidence.  What this study shows is that–as in other published research–there’s actually no data to support the commonly held belief that increased displacement is a regular occurrence in gentrifying neighborhoods.  Freeman and his co-authors conclude “The results presented here are for the most part inconsistent with the notion that gentrification leads to widespread direct displacement that manifests itself in higher mobility rates among residents of gentrifying neighborhoods.”

2.  A new study published in the journal Nature shows that an abundance of street trees has strong positive effects on measures of self-reported health and well-being. Survey evidence from Toronto indicates that having 10 more trees on a city block, on average, improves health perception in ways comparable to an increase in annual personal income of $10,000, moving to a neighborhood with $10,000 higher median income, or being 7 years younger. Bonus kudos to Nature for publishing the article under a Creative Commons license.

3.  The Census Bureau has released detailed geographic data for 2013 on the location of jobs and workers.  Its Longitudinal Employer-Household Dynamics (LEHD) data are available through its “OnTheMap” mapping application.

The Week Observed: August 14, 2015

What City Observatory did this week

1. City home prices outpacing suburbs by 50 percent.  Joe Cortright examines a new study prepared by investment rating agency Fitch looking at the growing value premium in central cities.  Since 2000, home prices have grown 50 percent faster in urban centers than in their surrounding metro areas.  For hard-headed Wall Street types, the analysts sound surprisingly like new urbanists, citing walkability and a “paradigm shift” in attitudes about cities.  This is strong evidence of the growing demand for urban living, which Fitch expects to continue.  Their outlook for homeownership and suburban growth is decidedly bearish.

2. The next road safety revolution.  Daniel Kay Hertz looks at the mayhem that cars cause in urban areas.  While we’ve made significant advances in protecting vehicle occupants from the effects of crashes, the way we’ve designed our cities around car travel has created big risks for people who walk or bicycle.  The victims are disproportionately young: car crashes are the leading cause of death for those under 25, and more than 3,000 Americans age 19 and under died in car crashes in 2013. Many of them weren’t even in cars: almost 500 were simply walking down a sidewalk or crossing the street.

3.  Between high rises and single family homes.  The housing affordability and availability problems that plague many of our cities may be aggravated by the missing middle of housing types–smaller-scale, lower-impact duplexes, triplexes and apartment courts that have all but disappeared with the advent of stringent single-family zoning.  Daniel Kay Hertz reviews metro level data on the dominance of single-family housing, and the general paucity of these smaller, 2-4 unit structures.  As we look to accommodate the growing demand for urban living, filling the missing middle void is one way to affordably provide a range of housing options in existing neighborhoods.

4.  StrongTowns published Joe Cortright’s critical review of Rosabeth Moss Kanter’s recent book on infrastructure, “Move.”  Those hoping that a Harvard Business School professor would take a rigorous, hard-headed look at the deep-seated business model flaws in our current transportation system will be greatly disappointed.  Cortright argues that “Move” overlooks important trends–notably the declining demand for car travel–and fails to diagnose the chief underlying problem with transportation: we get the prices wrong.


The week’s must reads

1. In a new report for the Century Foundation —The Architecture of Segregation — Paul Jargowsky maps the continuing increase in the number of neighborhoods of concentrated poverty in the nation’s urban areas.   His data show that the number of people living in high-poverty ghettos, barrios, and slums has nearly doubled since 2000, rising from 7.2 million to 13.8 million.   Blacks, Latinos and children are disproportionately likely to live in these high poverty neighborhoods.  These data show that the trends we outlined in our 2014 report Lost in Place have continued, and if anything appear to be accelerating.

2. It’s one of the most common refrains in almost every land use controversy everywhere:  there isn’t enough parking.  But is it true?  Writing in the Minneapolis Star Tribune, Nathaniel Hood shows that it’s easy to put that glib assertion to a straightforward quantitative test.  His article, “How to respond when someone complains there’s no parking,” outlines four simple steps that just about anyone can take to measure parking availability.  He shows how you can use Google Maps to flag parking structures and surface parking lots, photograph under-utilized parking, and even conduct your own parking availability experiments.


New knowledge

1. New evidence for the power of industry clusters. Alexander Klein and Nicholas Crafts of the University of Kent look at the growth of cities and manufacturing productivity over the period 1880 to 1930.  They explore the relative contributions of industry specialization (so-called Marshallian externalities) and industry diversity (so-called Jacobs externalities) to the rate of employment growth and productivity in US cities.  They conclude that during this time period, growth was strongly influenced by industry specialization (cities getting better at doing the things they were strong at doing already), and that only larger cities exhibited significant gains from having a variety or diversity of industry sectors.

2. Uber and auto crashes.  By giving people who drink a ready alternative to driving their own vehicles, ride-sharing services like Uber may reduce the number of alcohol-related fatalities. A study by two Temple University researchers looks at the correlation between car crash rates and the growth of Uber use in different cities.  The researchers exploit a natural experiment–the gradual roll-out of UberX service in different California communities.  They find that the introduction of the lower-cost ride dispatch service is associated with a 3.6 percent to 5.6 percent reduction in “motor vehicle homicides.”  The study suggests that if UberX availability were extended nationally, it might save as many as 500 lives annually.


The Week Observed is City Observatory’s weekly newsletter. Every Friday, we give you a quick review of the most important articles, blog posts, and scholarly research on American cities. You can sign up to get it in your inbox by clicking “Subscribe” at the top of the page!

Our goal is to help you keep up with – and participate in – the ongoing debate about how to create prosperous, equitable, and livable cities, without having to wade through the hundreds of thousands of words produced on the subject every week by yourself.

If you have ideas for making The Week Observed better, we’d love to hear them! Let us know at jcortright@cityobservatory.org, dkhertz@cityobservatory.org, or on Twitter at @cityobs.

The McMansion mirage reappears

OK, we admit we might be a bit obsessed with this story. But if you can, bear with us one more time.

Here’s the most basic fact: The number of newly-built McMansions—single family homes of 4,000 square feet or larger—is down 43 percent since 2007. By any standard that’s a stunning decline. But because the market for smaller homes has declined even more, a commonly used but in this case misleading statistic—median home size—has floated upwards. That’s led to some fundamentally flawed claims about the U.S. housing market.

Claims that the McMansion is back

Earlier this year, and again over the last week or two, several journalistic outlets—CityLab, Wonkblog, and the Minneapolis Star Tribune, among others—have written stories with the theme “McMansions are back.” Americans want, and are buying, bigger and bigger houses.

CityLab tells us that “the economic recovery is super-sizing houses,” and that “the housing crisis may have wiped out Lehman Brothers, Iceland, and the credit of home buyers across the nation. But it didn’t put a dent in the McMansion.”

The Boston Globe repeated CityLab’s analysis, saying: “the McMansion trend is thriving.”

The Minneapolis Star Tribune chimed in with: “After years of downsizing, big houses make a comeback.”

These stories are prompted by the Census Bureau’s publication of its annual report, “Characteristics of New Single-Family Houses Completed,” which dutifully reports the number of new single family homes built, by size and by region of the country. The report focuses heavily on the median size, in square feet, of new single family homes.

These outlets focus on the median home size and the share of newly-built homes that are McMansion-sized, finding, correctly, that this percentage is increasing. The problem is that there are two ways to look at this, and in this case the median is misleading.

The number of McMansions is down substantially

From our point of view, what’s really important to the “McMansions are back” thesis is the actual number of these extra-large houses being built. And when you look at it that way, it turns out that they’re down 43 percent from the peak in 2006.

Single-family construction is still at a half-century low

The only reason that it looks like McMansions are “growing” is that the market for smaller single family homes remains deeply depressed. It’s important to keep in mind that the US housing market is still in worse shape today than at basically any other time in the previous half-century. The seven years since 2007 represent the seven worst years for housing production in the period since 1959. Excluding recession years, for the past 50 years, the US has built a minimum of about 4,000 new single family homes per million people; for the past seven years we’ve averaged about half that rate of construction. It’s really hard to overstate just how awful the single-family housing market still is in the United States. On a population-adjusted basis, we’re still building fewer single family houses now, several years out of the recession, than at the bottom of the recessions of 1970, 1980-82, 1990, and 2001.

The middle class has been pushed out of the single family home market

Just looking at the median size number misses this much larger point about housing markets.  The trouble with this misleading median, as we explained in our March post on this subject, is that it is highly influenced by compositional effects. When the bottom drops out of the housing market, and Americans of typical means can no longer afford homes or qualify for mortgages, the construction of smaller, low-cost homes evaporates. Basically, in this very depressed housing market, the only people who can afford new homes (and qualify for mortgages, given much, much tighter underwriting standards) are high-income, high-wealth households. That is why the market for big homes of 4,000 square feet and larger is down a mere 43 percent from the peak, compared to a 60 percent decline for homes under 1,800 square feet.
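The compositional effect is easy to demonstrate with a toy calculation. The counts below are hypothetical (purely illustrative, not actual Census figures), but they apply the declines cited above: small homes down 60 percent, McMansions down 43 percent, and an assumed 40 percent decline for mid-size homes.

```python
# Hypothetical counts of new single-family homes, by size class.
# These numbers are illustrative only (not actual Census figures).
peak = {"under_1800": 1000, "mid_size": 500, "mcmansion": 100}

# Apply the declines cited in the text: -60% for small homes, -43% for
# McMansions (the -40% for mid-size homes is also an illustrative assumption).
now = {"under_1800": 400, "mid_size": 300, "mcmansion": 57}

def mcmansion_share(counts):
    """Fraction of all new homes that are McMansion-sized."""
    return counts["mcmansion"] / sum(counts.values())

# The *count* of McMansions fell 43%, but their *share* of the market rose.
print(round(mcmansion_share(peak), 4))  # 0.0625
print(round(mcmansion_share(now), 4))   # 0.0753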

The only way the “growth of McMansions” story holds up is if you believe that the composition of homebuyers in today’s very distressed market is representative of what Americans would buy if incomes and lending standards were at “normal” levels. If we were ever to get back to the 1.5 million annual housing start figure that we took for granted prior to 2007 (which seems an increasingly doubtful proposition), the growth in home buyers would come from those with lower and moderate incomes who would be far less likely to buy McMansions.

From our perspective, the big story about American single family homes is that, from micro-homes to McMansions, they’re all still in dire straits. For one of the country’s major industries to be at half its historic rate of production is a big deal, both for housing and the broader economy.

There’s no question that the median size of new single family homes is larger today than it was a few years ago. But as we’ve explained here, that fact signifies something very different about the housing market, and about Americans’ demand for housing, than is conveyed by a quick look at the misleading median.

City home prices outpacing metro by 50%

Since 2000, home prices have grown 50 percent faster in urban centers than in their surrounding metro areas. If you are an urban data geek, like we are, this is big news.  A dramatic shift in city-suburb price differentials strongly signals a deep and enduring market demand for cities.

A new research report from investment rating agency Fitch provides strong confirmation for the case we’ve been making at City Observatory:  the demand for urban living is strong and increasing.  Fitch calls the trend “striking” and summarizes:

Since the mid-1990s, demand has skyrocketed in many urban centers, with home price growth in the closest distance tiers growing at significantly higher rates than the MSAs in which they are contained.

Compiling Case-Shiller home price data for 4,600 zip codes in 50 large metropolitan areas since the 1970s, Fitch shows that home prices in urban centers have significantly outperformed home prices in the balance of their respective metropolitan areas throughout the country.  The pattern is widespread:  it holds for large and high value coastal markets like San Francisco and Boston, but also for middle sized metros like Nashville, Denver and Portland.

The Dow of Cities

Fitch summarizes its national findings in a single chart, showing how home values in the most central urban neighborhoods performed compared to successively more distant tiers of housing.


If you were to construct a “Dow-Jones” index for cities–something that would be a comprehensive, consolidated, all-purpose summary measure of the relative economic strength of cities compared to their suburbs–it would look very much like the chart prepared by Fitch.  The steady divergence between the typical home values for city centers (shown in blue; the densest and most central one-fifth of census tracts) relative to the rest of their respective metropolitan areas is striking.  The chart, which shows home values indexed to a 1975 base year, clearly illustrates the housing bubble that peaked about 2006, and shows that while city centers also experienced absolute price declines (as did the rest of the market), they fared far better than prices in each of the four more peripheral tiers.

The Fitch analysis of relative city-suburb price trends confirms what we showed was happening in the Portland metropolitan area in a commentary we wrote last fall:  in Portland, since 2005, the traditional relationship between city and suburban home values has reversed.  A decade ago, city center homes sold at a 9 percent discount to those in the suburbs; now they sell at a 7 percent premium.  Fitch shows this same pattern holds nationally.

Behind the price trend

The reasons behind this shift are familiar to City Observatory readers, and are laid out in several of our reports.  In Young and Restless, we showed that well educated young adults (25 to 34 year-olds with at least a four year degree) are increasingly choosing to live in close-in urban neighborhoods in the nation’s largest metropolitan areas.  In Surging City Center Job Growth, we showed how, for the first time in decades, employment growth in city centers was outpacing that in suburbs, thanks at least in part to the growing desire of firms to locate their operations closer to the preferred residential locations of young workers.

When it comes to explaining these trends, Fitch’s investment analysts sound like dyed-in-the-organic-merino new urbanists:

Updated urban planning that focuses on walkable cities, improved transportation networks and green space has improved the urban quality of life, drawing in multitudes of residents, who, in prior generations, aspired to stretches of lawn in the less dense suburban and exurban rings.

It’s notable that Fitch mentions “walkable cities.”  A key factor in the attractiveness of city living is walkability, and various studies have shown that increases in Walk Score are associated with higher home values. The Fitch charts of city vs. suburban home price disparities are strikingly similar to those generated by Zillow economists showing the relationship between walkability and home price performance.  As we highlighted at City Observatory last month, within metropolitan areas, home values in more walkable neighborhoods have dramatically outpaced home prices in car dependent locations.

A big deal for the future of housing — and the suburbs

Fitch’s analysis suggests the move back to the city has important implications for the housing market.  Most importantly, they don’t expect a resurgence in the falling homeownership rate:

With a trend toward increasing populations in many cities, where a far higher portion of units are rentals than in suburban areas, a return to the lofty home ownership rates seen before the housing crisis of the 2000s is unlikely . . . .

And Fitch is glum about the future of sprawling suburbs and exurbs:  the shift of demand to cities implies “long run risks of declining property values in the urban periphery.”

While the strong–and in Fitch’s view, accelerating–price premium for city center living is a harbinger of robust growth in the urban core for years to come, it also signals what we think is a huge challenge for the nation.  Rising prices are a clear sign that we have a shortage of cities.  Many of our urban problems–especially those related to housing affordability–are directly tied to Americans’ growing demand for city living, and the relative paucity of places with great urban character.  Higher prices are a market signal that we need more and better cities.

The full report–“U.S. RMBS Sustainable Home Price Report: Second-Quarter 2015 Update” (Special Report, August 12, 2015)–is available at the Fitch website.

 

The difficulty of applying inequality measurements to cities

Earlier this year, our friends at the Brookings Institution released a new tabulation of Census data on levels of inequality in the nation’s largest cities. Inequality, in this case, is measured by dividing the income of a household at the 95th percentile of the population by the income of a household at the 20th percentile. The higher the ratio, the greater the degree of income inequality.
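In code, the 95/20 measure is a two-line computation. The sketch below uses synthetic lognormal incomes (an assumption; real tabulations use Census household income data) and a simple nearest-rank percentile, which is only one of several percentile conventions:

```python
import random

# Synthetic household incomes (illustrative only; real analyses use Census data).
# A lognormal distribution is a common rough model for incomes.
random.seed(0)
incomes = sorted(random.lognormvariate(11, 0.8) for _ in range(10_000))

def percentile(sorted_vals, p):
    """Nearest-rank percentile on a pre-sorted list (one of several conventions)."""
    k = round(p / 100 * (len(sorted_vals) - 1))
    return sorted_vals[k]

# The 95/20 ratio: income at the 95th percentile divided by income at the 20th.
ratio_95_20 = percentile(incomes, 95) / percentile(incomes, 20)
print(f"95/20 ratio: {ratio_95_20:.1f}")
```

A low ratio can signal exclusion just as easily as equity: a uniformly poor city and a uniformly rich enclave both score as highly “equal.”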

The post has generated a lot of interest in the urban policy world: Some cities, it appears, have a lot more inequality than others.

But a closer look at this data suggests that it paints a misleading picture of the nature of inequality, and in some important respects gets the role of cities in fighting inequality—and, importantly, in reducing concentrated poverty—exactly backwards.

Inequality is a big, national problem

First let’s stipulate a central point: inequality is a big and growing problem in the United States. By virtually any measure, income inequality is as high as it’s been at any time since 1929—the high water mark following the last Gilded Age. The chief aspect of the growth of inequality has been the prodigious gains realized by the top one percent, and among their number, the top tenth, and even one hundredth of one percent. We should further stipulate that Brookings has accurately reported the data that have been tabulated by the Census Bureau. There’s nothing wrong with the math here.

 

But does computing an income disparity ratio for every city in the United States add anything to our understanding of the extent, geography, or underlying causes of inequality? Is our national inequality problem merely the sum of a vast series of local inequality problems? If a city has a large number of people at the high end and at the low end of the income distribution, does that mean that the city is contributing to the nation’s inequality problem?

In some respects, these data lead us to exactly the wrong conclusions about the nature and geography of inequality. They obscure the true geographic aspect of inequality: income segregation.

Cities don’t cause poverty

As Ed Glaeser has pointed out, poor people concentrate in cities precisely because cities have good transit systems and plenty of jobs. Even if their current income is lower (not necessarily in absolute terms, but relative to the rich people who skew the income distribution) than it would be in smaller places, the lower cost of transportation coupled with job opportunities means that they have a better chance to improve their economic condition over time:

It is critical to recognize that cities rarely make people poor. Rather, cities attract poor people, with economic opportunity, a better social safety net, and the ability to get around, usually without owning cars.

But concentrated poverty does create real problems. Most recently, the major studies released by Raj Chetty and his colleagues have shown how poor neighborhoods reduce the likelihood of economic mobility for their residents. Our own work, including Lost in Place, has shown how durable these neighborhoods can be.

A major contributor to this kind of economic segregation is driven by the secession of the rich. The latest research from Stanford’s Sean Reardon and his colleagues shows income segregation is being driven by the decisions of higher-income families to increasingly isolate themselves in higher-income enclaves, often in exclusive suburbs and established high-income areas.

National inequality is not the sum of local inequality

Moreover, the commonly-cited reasons for growing income inequality have little to do with local policies: the falling real value of the minimum wage, r > g (in Piketty’s formulation), falling effective tax rates on the highest income households, skill-biased technological change, superstar pay, crony compensation-setting, and the financialization of the economy. Aside from subsidies for sports franchises owned by billionaires, and restrictive zoning that tends to drive up housing prices, there’s precious little cities have to do with generating income inequality per se.

What cities do influence, however, is who lives within their boundaries.

The way this measure is constructed, however, describes places where people have very similar incomes as having lower rates of inequality: if everyone in your community is very low income (e.g., Gary, Indiana), you have income equality. Likewise, if everyone in your community is very high income (e.g., Beverly Hills), you have income equality. The cities in the United States with the highest levels of income equality are exclusive, high income enclaves, and cities of unrelenting poverty.

But if your community contains a mix of high income and low income people, your community will be scored by the 95/20 ratio as having a high level of income inequality. Another word for this might be “diverse and inclusive.”

Are localized inequality statistics a good guide to policy?

From a policy standpoint, the question ultimately has to be whether the measured inequality in cities is susceptible to any meaningful policy solution at the city level. Here it’s helpful to remember that one can attack income inequality at either end of the economic spectrum. City policies that raise the incomes and wages of lower income households (or which lower their cost of living) could clearly ameliorate at least some of the inequality in a city. But it’s a dubious proposition to suggest that cities can (or should) look to address income inequality by reducing the incomes of the well-off. The primary problem is practical: the rich are generally under no obligation to live in a given community, and so easily have the option of simply moving away if faced with effective re-distribution.

The irony here is that policies that encourage the rich to leave your city (or to not live there in the first place) invariably reduce measured inequality. It’s worth noting that Detroit has one of the lowest levels of measured inequality of any large city in the United States.

The lesson here is that for cities, a focus on inequality, while distinctly in rhythm with a serious and growing national malaise, is a poor guide to municipal policy. On the other hand, cities ought to have a laser-like focus on poverty, especially concentrated poverty.

Is there a “right” geography for localized inequality measures?

In our view, cities are plainly the wrong geography for thinking about inequality. Municipal boundaries of the nation’s largest cities are widely variable; sometimes they cover a majority of a metropolitan area (Jacksonville, San Antonio) and in other cases they cover barely 10 percent of it (Atlanta, Miami). Comparing different sized fragments of metro areas can lead to misleading conclusions.

So while Atlanta has the highest 95/20 ratio of any US city, the Atlanta metropolitan area has a level of inequality that is actually below the national average. Atlanta’s high score is influenced by the fact that its municipal boundaries include only about eight percent of the metropolitan area population, and that the city has a bigger share of both high-income and low-income households than the metropolitan area as a whole.

But more fundamentally, the problem is not simply the choice of the optimal geographic units for analysis.

Even at the metropolitan level, the perverse implication of these equality measures is that one ought to have a very, very expensive metro housing market. The metropolitan areas with the most expensive housing in the country (San Jose, Washington, Boston, San Francisco) have some of the lowest levels of inequality. Why? Because poor people can’t afford to live there. At the metro level—as at the municipal level—one way to improve one’s measured equality is simply to exclude the poor. If anything, when measured at the local level, income equality is an indication of income segregation.

And that’s bad on both ends. Having more high income people in your city may increase measured inequality, but it doesn’t make the poor people who live in your city poorer. In fact, at the extremes, having some higher-income people is important to having a tax base that can support the kinds of services that low-income residents rely on. As Alan Berube has acknowledged: “Detroit does not have an income inequality problem—it has a poverty problem. It’s hard to imagine that the city will do better over time without more high-income individuals.”

On the other end, when the rich secede to their own gated suburbs and cities, whether Grosse Pointe or, increasingly, San Francisco, they’re creating relatively “equal” municipalities—but the low-income people left behind are hardly benefiting.

Local inequality measures may be a classic case of “drunk under the streetlight”—we’re looking at the problem here because the light shed by the data is good, but it turns out that this is not where the problem, and more importantly the solution, can be found.

Revisiting Marietta

Last month, we questioned why people weren’t paying more attention to Marietta, the Atlanta suburb that is tearing down 1,300 apartments and permanently displacing their low-income residents. We wondered why this large-scale displacement of poor households—most of whom are black or Latino—didn’t generate the same kind of outcry as much more ambiguous situations in urban settings that are criticized as heartless examples of gentrification.

Our commentary generated press coverage in the local Marietta Daily Journal. Reporters for the Daily Journal put our critique to local Councilman Grif Chalfant. Chalfant defended the plan, saying that by demolishing the Franklin Road apartments, the city was dispersing a high-poverty neighborhood. He compared Marietta’s demolition of those homes to the razing of public housing in Atlanta, Chicago, and elsewhere. Of course, the Franklin Road apartments weren’t public housing—they were private homes purchased by the city. And there was still no indication of whether or how their residents would be given relocation assistance, or if they would simply end up in another low-income neighborhood further away.

The main square in Marietta. Credit: Ken Cook, Flickr.

 

In our original commentary, we conjectured that there’s been a widespread internalization of the notion that suburbs are white and wealthy, and city centers are for people of color and the poor—and that anything that changes this arrangement is a violation of the natural order of things. While that is certainly true, in a sense it’s unfair to single out Marietta. Ironically, this city found itself with a concentration of affordable housing—and lower income families—largely because it failed to be as effective at exclusionary zoning as most suburbs are. When suburbs ban or greatly restrict apartment construction, there’s little chance of becoming another Marietta.  The more common and more daunting example of suburban exclusivity may be the places that never have any affordable housing.

Consider Marin County, a supposedly liberal part of Northern California, where well-to-do local residents have reacted with great alarm to Star Wars director George Lucas’s plan to build 224 units of affordable housing on land he owns. (And keep in mind that in this upscale Bay Area suburb, “affordable” means housing for households with annual incomes between $65,000 and $100,000.) Thanks to exclusionary zoning and other measures, Marin County has very little affordable housing of any kind, and is now enthusiastically opposing the small amount it might get. Something similar happened in Darien, Connecticut, when a local couple tried to develop affordable housing in that wealthy suburb.

Neither Marin County nor Darien are planning to tear down any low-income apartments like Marietta is—but perhaps only because they’ve been so effective at keeping out the poor that there aren’t any low-income apartments to tear down.

It may be that Marietta is the victim of two things. The first is large-scale Euclidean zoning, which separated multi-family housing from other sections of the community, causing it to decline in value while surrounding single family neighborhoods remained desirable to more solidly middle-class and upper-middle-class residents. If, instead of being concentrated in just one part of the community, these multi-family units had been more dispersed in mixed-use, mixed-income developments, they might have avoided the “concentrated poverty” that prompted the city to acquire and demolish them.

A zoning map of Marietta. Yellow areas are zoned for single-family homes only; brown areas are set aside for apartments. The large brown area in the southeast corner contains the homes to be razed. Pink is commercial. Source: Marietta, GA website

 

The second problem may be the high elasticity of the housing supply in Atlanta, and the region’s widespread sprawl. (Elasticity is the economist’s term for the ease with which new units can be added to a region’s housing stock. “High elasticity” means you can add more housing very easily.)  In another metropolitan area, like San Francisco or Washington, DC, 1960s-era apartments might still command a high price, because of the very constrained housing supply. In Atlanta, where it’s relatively easy to build more housing, older apartments can “filter” down market faster—leading to the kind of rapid increase in poverty seen in Marietta. It may be that displacement is less an issue in the Atlanta area than in other metros because new housing is so readily constructed.

Still left unanswered are the fiscal “beggar thy neighbor” qualities of this policy decision. Marietta officials are quite candid that lowering the city’s costs for schools and public safety is a key motivation for undertaking the demolitions. And while Marietta may shed these costs, it is certain that at least the cost of educating the children displaced from these apartments will be shifted to other school districts.

But for every Marietta that takes the visible step of demolishing some aging apartments that have moved down market, there are dozens (or perhaps hundreds) of other suburbs that never allowed multi-family housing to be built in significant numbers in the first place, and so precluded this question from ever being raised. And while it may be far less visible and obvious, this use of local land use controls to engineer the social and class structure of a community is no less profound in restricting the opportunities of lower income Americans.

Our old planning rules of thumb are “all thumbs”

We all know and use rules of thumb. They’re handy for simplifying otherwise difficult problems and quickly making reasonably prudent decisions. We know that we should measure twice and cut once, that a stitch in time saves nine, and that we should allow a little extra following distance when the roads are slick.

What purport to be “standards” in the worlds of transportation and land use are in many cases just elaborate rules of thumb. And while they might have made sense in some limited or original context, the cumulative effect of these rules is that we have a transportation system which is by regulation, practice, and received wisdom, “all thumbs.”

How we feel about bad rules of thumb. Get it? Credit: Jesper Ronn-Jensen, Flickr.

 

One of the problems with rules of thumb (or the more academic term, “heuristics”) is that while they may work well in many cases, they may work very poorly in others – and they may be subject to important cognitive biases that lead us to make bad decisions.

Here are five rules of thumb that have led to a distorted view of our transportation problems and their appropriate solutions.

Old rule of thumb #1: We should have a high “level of service” on our streets. Around the country, traffic engineers have long assigned one of six letter grades A through F to describe traffic flow on streets. (A is free-flowing traffic, F is highly congested.) Many planning decisions emphasize the need to maintain high levels of service, which means that roads are designed to be much bigger (and more expensive) than they need to be most of the time. And level of service only measures car travel time on a particular road, ignoring non-car travelers, and – importantly – the effect of more roads on sprawl and overall trip lengths. These flaws have led California to eliminate level of service as a factor in environmental analyses of traffic impacts.

Old rule of thumb #2: Wider streets are safer streets. It’s long been an engineering axiom that wider roads are safer, because they give cars and others more space to avoid collisions. But the behavioral effects of wider roads overwhelm the supposed safety advantages. Wider lanes encourage vehicles to drive faster, and higher speeds produce deadlier consequences—especially for cyclists and pedestrians. New research shows that the optimal lane width for minimizing crashes and injuries is something like 10 or 11 feet, not the 12-14 feet of many travel lanes in streets around the country.

The wider the lanes, the easier it is to speed. Credit: Pier-Luc Bergeron, Flickr.

 

Old rule of thumb #3: We should require “enough” off-street parking for every use. As Donald Shoup has shown, parking requirements spelled out in zoning codes—often based on wildly inaccurate estimates prepared by the ITE (Institute of Transportation Engineers)—lead to a situation where every business’s parking lot is sized for the peak hour of the peak day of the year (the holiday shopping season at the mall, for example). Not only does this produce more parking than is needed the rest of the year, it turns out that parking “requirements” grossly overstate demand even in peak periods, especially for urban uses where more people arrive by other means and park for shorter periods of time. As Smart Growth America’s report “Empty Spaces” shows, when developments have density, transit access and mixed uses, spaces mandated by parking requirements simply sit unused. The product of this rule of thumb is that parking is over-supplied, destinations are further apart than they would otherwise be, and walking, transit and cycling are non-functional.

Old rule of thumb #4: We should plan for a certain number of car trips to be generated by every land use, no matter where it is. Another rule of thumb for planning is that every land use “creates” or generates a certain number of trips. But it isn’t necessarily so: the studies used to make these estimates are drawn from large-scale suburban development where proportionately more trips are by auto. A careful analysis of the data shows that trip generation estimates for most uses are overstated by a factor of 2, leading local governments to require even more transportation capacity than is needed—driving up development costs, and inducing additional travel.

Old rule of thumb #5: We should have a hierarchy of streets. The street hierarchy makes an explicit analogy to the human circulatory system. Just as we have an increasingly fine array of arteries, veins and capillaries, so too does the transportation system have freeways, arterials, collectors and local streets. And we’ve abandoned the traditional street grid for a dendritic pattern. It turns out that these hierarchical street systems are less resilient to disruption and have less capacity than the old-fashioned grids they replace, and are especially hostile to non-automotive modes of travel (pedestrians and bikes are forced to take circuitous routes, and are hard to accommodate at the intersections of major arterials that have limited “green” time for cross-traffic and turning movements). The hierarchical system of arterials, collectors, and local roads that we’ve adopted in place of the traditional street grid has had the effect of making the average trip between any two points longer. Over the past two decades the “circuity” of trips has increased by 3.7 percent in the nation’s 50 largest metropolitan areas. This increase is on top of the increase in trip distance due to sprawl and decentralization.
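The “circuity” figure cited above is just the ratio of over-the-road distance to straight-line distance. A minimal sketch, using hypothetical route lengths, shows how a dendritic network inflates that ratio relative to a grid:

```python
import math

def circuity(network_dist, ax, ay, bx, by):
    """Ratio of driving distance to straight-line distance between two points."""
    straight = math.hypot(bx - ax, by - ay)
    return network_dist / straight

# A trip between points one mile apart east-west and one mile north-south.
# On a grid, the shortest drive is the Manhattan distance: 2 miles.
grid = circuity(2.0, 0, 0, 1, 1)
# On a dendritic network, the same trip might require backtracking to an
# arterial: say 2.6 miles of driving (a hypothetical figure).
dendritic = circuity(2.6, 0, 0, 1, 1)
print(round(grid, 2), round(dendritic, 2))  # 1.41 1.84
```

Even small percentage increases in this ratio, applied to every trip in a metro area, add up to a lot of extra driving.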

 

Our “all thumbs” approach to transportation planning leads to a specific pattern of development that is as inefficient for cars as it is hostile for people traveling on foot, by bicycle, or by transit.

What is needed is a new set of rules of thumb. Like all heuristics, these aren’t meant to be taken as a final set of “standards” to fit every situation – but there are some emerging ideas about what we might emphasize.

New rule of thumb #1: Closer is better. Having more different destinations close at hand facilitates a wide range of mode choices, especially walking and cycling. Mixing uses, which is often anathema under traditional zoning codes, turns out to be desirable for consumers and expeditious for transportation.

New rule of thumb #2: Slower is safer. When cars and people on foot and on bikes interact, safety comes from slow speeds even more than separation. Local streets that move traffic slowly are friendlier—and safer—for non-auto modes of transportation.

Source: NYC DOT, Flickr.

 

New rule of thumb #3: Sharing is efficient. Rather than requiring every use to provide parking for the peak hour of the year, arranging uses so people can park once and walk leads to less traffic, greater safety and more congenial, fine-grained development patterns.

New rule of thumb #4: Our objective should be accessibility, not mobility. Many transportation heuristics emphasize speed: how do we make things move faster? But what we really care about is getting to (or being at) our destinations, not rapidly traveling among them. There’s great new work being done on how to measure accessibility, and use it as a guide to policy. Speed should be secondary to choice.

Of course, these new “rules of thumb” are just a beginning. There’s a lot of work to be done to un-learn and re-think the unfortunate heuristics we’ve employed in thinking about transportation planning and land use. But as these examples illustrate, re-thinking these issues isn’t a purely technical matter: it depends critically on re-imagining the way we visualize and tell stories about how our transportation system works.

How cutting back on driving helps the economy

There are two kinds of economics: macroeconomics, which deals in big national and global quantities, like gross domestic product, and microeconomics, which focuses on a smaller scale, like how the prices of specific products change. Macroeconomics gets all the attention in the news cycle, as people talk about the unemployment rate, the money supply, inflation, and the monthly payroll reports. Microeconomists usually labor in obscure corners, studying things like commodity prices, wage rates and industry trends.

The President’s Council of Economic Advisers (CEA) is the nation’s leading group of economists, focused heavily on understanding–and explaining–big macroeconomic trends.

A new CEA report, “The surprising decline in US petroleum consumption,” highlights an important decades-in-the-making trend in the US economy: we’re consuming a lot less oil than everyone thought we would. Obviously, oil consumption is a big deal in the macro economy. Oil imports are the biggest factor in the nation’s long running balance of trade deficit (we imported 2.7 billion barrels of oil in 2014, at an average cost of $91 per barrel), and from the first energy crisis of the early 1970s onward, there’s been a strong recognition of the critical role that oil supplies and oil prices played in shaping global and national economic conditions.
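The size of that import bill is easy to verify with back-of-the-envelope arithmetic from the two figures just cited:

```python
# Rough check of the 2014 oil import bill from the figures in the text.
barrels_imported = 2.7e9       # barrels of oil imported in 2014
avg_price_per_barrel = 91      # average cost per barrel, in dollars
import_bill = barrels_imported * avg_price_per_barrel
print(f"${import_bill / 1e9:.1f} billion")  # $245.7 billion
```

Roughly a quarter of a trillion dollars a year, which gives a sense of why flat consumption matters so much for the trade balance.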

While all of the models constructed by the experts, including the Energy Information Administration at the Department of Energy, predicted that US petroleum consumption would grow from 18 to 30 million barrels per day between 1970 and 2030, something very different is happening: US oil consumption has leveled off at about 21 million barrels per day. Even though population is increasing, and the economy is still growing, petroleum consumption has been essentially flat.

What’s keeping consumption down? According to the CEA analysis, transportation explains 80-90 percent of the trend. While industrial, commercial, and residential energy use have generally followed predictions, energy use for transportation is far below where it was predicted.

From the CEA’s report.

And within transportation, the big savings have come from a surprising source. While many people focus on improved fuel efficiency of cars, that actually turns out to be a negligible factor in cutting energy use. Better gas mileage accounts for only about 15% of the difference in 2014. The big factor is that Americans are driving less: vehicle miles traveled are far below projected levels.
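The decomposition rests on a simple identity: fuel consumed equals miles driven divided by fuel economy. The numbers below are hypothetical, chosen only to illustrate how a modest mileage gain combined with a large drop in driving can produce a roughly 15/85 split like the one CEA reports (the fuel-economy effect is evaluated at projected mileage, a standard but arbitrary ordering choice):

```python
# Identity: gallons of fuel = vehicle miles traveled / miles per gallon.
def gallons(vmt, mpg):
    return vmt / mpg

# Hypothetical projected vs. actual figures (illustrative only).
projected = gallons(4.0e12, 22)   # projected miles at projected fuel economy
actual = gallons(3.0e12, 23)      # far fewer miles, slightly better mileage
saved = projected - actual

# Attribute the saving: mpg improvement first (at projected VMT), VMT second.
from_mpg = gallons(4.0e12, 22) - gallons(4.0e12, 23)
from_vmt = saved - from_mpg
print(round(from_mpg / saved, 2), round(from_vmt / saved, 2))  # 0.15 0.85
```

With these illustrative inputs, better gas mileage accounts for about 15 percent of the fuel saving and reduced driving for the rest, matching the pattern described above.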

Clearly, a combination of demographic, technological, social and price factors is at work. The big run up in gas prices after 2004 has played a role in reducing driving (and prompting people to buy more fuel-efficient vehicles).

This highlights a couple of things. For one, simple-minded projections based on past relationships are likely to be wrong. Big demographic changes, and shifts in tastes (toward urban living, and away from time spent driving), can dramatically change these long-established relationships.

And, as the CEA report signals, these have big macroeconomic effects. The decline in petroleum consumption dramatically improves our international trade position compared to what was projected, and means US consumers have about $150 billion more annually to spend in the American economy (and their local economies) than if they drove more.

Although CEA characterizes the decline in petroleum consumption as surprising, for those of us who have been following the microeconomics of demand for transportation closely for the past decade, this is old news. But it’s also big news that bodes so well for the macroeconomy and the environment.

The value of walkability across the US

One of the factors that seems to be propelling the resurgence of cities around the nation is the growing demand for housing in walkable locations. One of the best sources of evidence of the value of walkability is home values, and some new evidence confirms that walkability adds to home values, and also shows that walkable homes have held and even increased their value in turbulent real estate markets.

The latest insight on this question comes from a new book, Zillow Talk, in which Zillow CEO Spencer Rascoff and Chief Economist Stan Humphries explain what they call “The New Rules of Real Estate.” Zillow has emerged as one of the leading web-based real estate information companies, tracking the sales of millions of housing units around the country, and building sophisticated econometric models to provide regularly updated “Zestimates” of the likely sales price of almost all of the nation’s housing.

We’re big fans of Zillow’s work – and its data, which they’ve made freely available on their website – it’s a terrific resource for tracking important trends in local housing markets. We used it, for example, to look at the growth of prices in close-in urban neighborhoods in Portland relative to houses in surrounding suburban counties.

There’s a lot to read about in Zillow Talk – you can learn about the best time to sell your home, clever pricing strategies, and which descriptors tend to drive consumer interest. But our attention was drawn to Chapter 23, which explores the question: “What’s Walkability Worth?” The authors use Zillow’s copious data about home prices coupled with Walk Score’s measure of walkability to estimate how improving access to walkable destinations affects a home’s market price. (Walk Score is an innovative web-based tool that measures the walkability of homes and apartments, assigning a score from zero to 100 representing the proximity to common destinations like stores, parks, schools and restaurants.)

Zillow Talk estimated the effect of a 15 point improvement in walkability on Walk Score’s hundred-point scale across a number of metropolitan markets. They found that a 15 point increase in walkability would increase home values by an average of about 12 percent, with the actual values ranging between 4 and 24 percent depending on the metropolitan area. Chicago showed the greatest effect of increases in walkability, and New York the least. The authors also found that the positive effects of an increased Walk Score weren’t felt in car-dependent neighborhoods.
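To make the average effect concrete, here is the reported 12 percent premium applied to a hypothetical home. The starting value and the linear scaling of the premium are illustrative assumptions, not figures from the book:

```python
# Applying the reported average walkability premium to a hypothetical home.
premium_per_15_points = 0.12   # ~12% average across metros (range: 4-24%)
home_value = 300_000           # hypothetical starting value, dollars
walkscore_gain = 15            # improvement on Walk Score's 0-100 scale

# Assume the premium scales linearly with the score gain (an assumption).
boosted = home_value * (1 + premium_per_15_points * (walkscore_gain / 15))
print(f"${boosted:,.0f}")  # $336,000
```

On these assumptions, a 15 point Walk Score gain is worth about $36,000 on a $300,000 home, though the metro-to-metro range reported above implies anywhere from roughly $12,000 to $72,000.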

Zillow Talk tracked home values in several major markets from 2000 through 2014, and reported average sales values for the most walkable neighborhoods (“Walker’s Paradise” and “Very Walkable”) and the less walkable places (“Somewhat Walkable” and “Car Dependent”). In every market they examined, home values in more walkable neighborhoods outperformed those in less walkable neighborhoods in the same market – particularly in recent years. In New York and Chicago, for example, homes in the most walkable neighborhoods increased in value 160 percent more than homes in car-dependent neighborhoods. Even though all these neighborhoods and markets showed the effects of the housing market cycle (with declines after 2006), Rascoff and Humphries report that more walkable areas are more resilient: they recovered their values faster after the collapse of the housing bubble.

The findings presented in Zillow Talk confirm my own earlier work examining the connection between walkability and market values. In 2009, I published a study for CEOs for Cities – Walking the Walk – that used home sales data from 15 markets to assess the impact of walkability on home values. I found that, after controlling for the effect of home size, age, number of bedrooms and bathrooms, the overall income of the neighborhood, and its proximity to the region’s urban center and to employment opportunities, walkability had a significant impact on home values in 13 of the 15 markets we studied. On average, in the markets we examined, going from an average level of walkability to an above average level (from the market median to the 75th percentile) was associated with a $10,000 to $30,000 increase in home values.

For those of us interested in urban places and walkability, Zillow’s data shows that walkability is a major contributor to housing values in urban locations and that houses with high levels of walkability, as measured by Walk Score, have maintained and increased their value relative to housing in car-dependent locations. This is clear evidence that consumers attach major economic value to walkability.

The devilish details of getting a VMT fee right

At City Observatory, we’re big believers that many of our transportation problems come from the fact that our prices are wrong – and solving those problems will require us to get prices right. While we desperately need a way to pay for roads that better reflects the value of the space we use, just moving to a new model isn’t enough. If we don’t get the new pricing system right, it could make many of our transportation problems worse. As the old adage goes, the devil is in the details.

In short, American cities have too much traffic today for the same reason that the old Soviet Union always had bread lines: we charge too little for a scarce and valuable commodity. As a result, people consume too much, and we end up rationing access by making people wait.

Similar ideas. Top: Oran Viriyincy, Flickr. Bottom: Jake, Flickr.

The main way we price road travel today is the gas tax, but it doesn’t send the right signals to travelers about how much different kinds of travel, in different places, at different times, actually cost. In contrast, the proposal to replace the gas tax with a vehicle miles traveled (VMT) tax – directly charging people by how much they drive – is clearly a step in the right direction.

With great fanfare, the State of Oregon announced its road pricing demonstration program, OReGO, on July 1. Under this voluntary program, up to 5,000 motorists will sign up to pay a per-mile fee of 1.5 cents rather than the state’s current 30-cents-per-gallon gas tax. Motorists will have two different options for monitoring their mileage: one that periodically reads the vehicle’s diagnostic data port, and another that uses GPS technology.

Over at CityLab, our friend Eric Jaffe is enthusiastic about this kind of mileage-based road fee, listing 18 reasons why it’s a good idea. While the concept of charging more for the roads, and charging in a way that reflects the cost of use – including congestion, road damage, and pollution – is essential, Oregon’s proposed road use system does exactly none of these things. The crux of the problem is that 1) it raises no more money than the current gas tax, and 2) it ends up subsidizing heavier, more polluting vehicles while actually punishing lighter, more fuel-efficient ones.

For many, the primary reason to favor a VMT tax is, as Eric Jaffe puts it, to “raise a gargantuan amount of money.” That would replace the gas tax, which, according to accepted political wisdom, is a dying revenue source. But VMT may not be a revenue panacea.

For one thing, total driving in the US is declining. Moreover, if we tied the tax to VMT—and set the tax at a high enough level to produce the “buckets” of revenue that proponents want–we’d expect that people would do what they normally do when something gets more expensive: do less of it. From a transportation perspective, this is a feature, not a bug. In Oregon, for example, even with increasing population, people are driving less now than they did a decade ago. As a result, a VMT tax would also have stagnant revenue growth–one of the same problems that plagues the gas tax. The data come from the Oregon Department of Transportation:

Credit: Oregon DOT

It also turns out that the gas tax is more proportional to the physical damage vehicles cause to the roadway and to the environment. A gas tax functions very much as a carbon tax (albeit a very low one): the more you pollute, the more you pay. But shifting to a tax based solely on mileage, without regard to how much pollution a vehicle creates, would essentially tax hybrids to subsidize hummers.

The flat, undifferentiated VMT fee would be like a butcher who charged a single price per pound for every cut of meat in the shop. You’d quickly find long lines of people waiting to buy steak, and you’d end up throwing out over-priced hamburger that no one would buy. A key part of a VMT fee should be its ability to signal to users how much travel costs society as a whole, depending on when, where and how they do it. A flat fee per mile, whether it’s in Manhattan, New York or Manhattan, Kansas, or at 5am or 5pm, will do nothing to encourage people to use cars at more efficient times or places, or to choose to take transit, bike, or walk instead.

Getting a VMT fee right is going to become increasingly important, because the problem of mis-pricing and under-pricing road use is going to become much worse in the years ahead. The business models of Uber and other “ride-sharing” services are predicated on very low-cost access to the public right of way.

Already, there is evidence that the growth of Uber and other for-hire vehicles is putting further strains on the very limited street capacity of New York. The number of for-hire vehicles in the city has grown 63 percent since 2011, and traffic speeds on Manhattan streets have fallen 9 percent since 2010. Slower traffic has resulted in slower rush hour bus service, and contributed to declining Manhattan bus ridership, which fell 5.8 percent last year.

The problem will mushroom if, as many expect, someone figures out how to build and deploy fully autonomous vehicles. If they aren’t charged for their use of the public right of way – both when carrying paying passengers, or when hovering in high volume locations – we’ll likely see even greater congestion of the roadway. Under-priced roads signal to road users–and innovative transportation companies–to over-use them, with potentially negative effects for the entire system.

We had a preview of this problem with the short-lived parking app MonkeyParking, which set up a way for people to auction off public, on-street parking spaces as they drove away. Unlike with Uber and Airbnb, San Francisco successfully imposed a cease-and-desist order, based on the premise that it’s illegal to sell public space. MonkeyParking’s business model was predicated on extracting profit from an under-priced public resource—which is exactly what is at work with Uber and other businesses putting traffic on public streets.

It’s tempting to treat road pricing as just a way to raise more money for construction and maintenance. But that would be a huge mistake. If we get the prices right, we can make a significant dent in congestion by signalling to travelers how to make more efficient use of the roads we have, avoiding the need for expensive new capacity.

Already, we have good models of how this works with congestion pricing systems that vary by place and time of day in London, Stockholm and Singapore. San Francisco has implemented variable pricing for parking and has explored proposals to charge for vehicle miles traveled based on time of day. And the evidence from earlier experiments in Oregon is clear: while a flat VMT fee has very little impact on peak hour travel, a fee that ranges from 0.4 cents a mile (off peak) to 10 cents per mile during peak hours in the central city would reduce vehicle miles traveled by more than 20 percent.

Our transportation problems are–at their root–a product of getting the prices wrong. If we adopt some kind of VMT fee, we have a once-in-several-generations chance to get the prices right. Let’s not blow it by failing to make sure that the way we price roads and travel sends the right signals to everyone about how and when to use the roads. The devil is in the details here.

An idea whose time has passed: The VMT Fee

Obsolete before it’s even tried: A simple mileage fee is a bad way to pay for roads

  • It’s being touted as a replacement for the gas tax, but the VMT fee is a flawed way to pay for roads.
  • We should adopt a pricing system that reflects impacts on the environment, wear and tear on the roadway, and the costs of congestion, not just how far a vehicle is driven.
  • Chances to change the way we pay for our transportation system come along about once a century; it would be a shame to get locked into a flawed, second-rate system.

At City Observatory, we’re big believers that many of our transportation problems come from the fact that our prices are wrong – and solving those problems will require us to get prices right. While we desperately need a way to pay for roads that better reflects the value of the space we use, just moving to a new model isn’t enough. If we don’t get the new pricing system right, it could make many of our transportation problems worse. As the old adage goes, the devil is in the details. There’s growing interest in, and even some state experiments with, a vehicle miles traveled (VMT) fee. This would basically tote up the number of miles a car is driven each year, and charge a price per mile (a few cents). Interest in the VMT fee has been prompted by the advent of fuel-efficient vehicles and electric cars, which have cut into gas tax receipts.

While a VMT fee has some advantages over a gas tax, it’s far from an optimal way to pay for roads and to send the right signals to road users about how their choices affect the transportation system and society.

Technology and consumer acceptance have blown past the simple-minded idea of a VMT fee. Nearly all American adults already have smartphones and other GPS-enabled devices that track their locations in real time. Uber and Lyft have conditioned urban travelers to pay a la carte for time and distance (and to pay a surcharge when demand is particularly heavy). Insurance companies and telecommunication firms have developed mileage-tracking dongles that plug into cars’ data ports. Shortly, new cars will have sophisticated vehicle-to-vehicle communication built in.

So as we think about how to design a road finance and pricing system to replace the gas tax (and other taxes), we ought to have a system that accounts for all the cost drivers associated with travel: heavier vehicles that cause more road wear should pay higher fees, as should vehicles that pollute more. How much you pay to drive on a road should be related to how much that road costs to build and maintain. Use a congested urban highway at the peak hour, and you’ll pay a higher fee than if you use a rural road at 2 am.
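
The pricing logic described here can be sketched as a simple per-mile fee function. Every rate below is hypothetical, chosen only to show how weight, emissions, and peak urban use might each enter the charge:

```python
# Sketch of a differentiated road fee (all rates are hypothetical).
# Unlike a flat VMT fee, the charge varies with vehicle weight,
# emissions, and when/where the road is used.

def per_mile_fee(weight_lbs, grams_co2_per_mile, peak, urban):
    fee = 0.5                                 # base rate, cents per mile
    fee += 0.3 * (weight_lbs / 4000)          # road wear rises with weight
    fee += 0.2 * (grams_co2_per_mile / 400)   # pollution surcharge
    if peak and urban:
        fee += 9.0                            # congestion charge, busy urban roads
    return fee                                # cents per mile

# A heavy, polluting SUV at the urban peak pays far more per mile
# than a light electric car on an empty road at 2 am:
suv_peak = per_mile_fee(5500, 500, peak=True, urban=True)
ev_night = per_mile_fee(3500, 0, peak=False, urban=False)
```

The point of the sketch is structural, not the particular numbers: each cost driver gets its own term, so the fee can send a different signal to each kind of trip.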

In short, American cities have too much traffic today for the same reason that the old Soviet Union always had bread lines: we charge too little for a scarce and valuable commodity. As a result, people consume too much, and we end up rationing access by making people wait.

Similar ideas. Top: Oran Viriyincy, Flickr. Bottom: Jake, Flickr.

The main way we price road travel today is the gas tax, but it doesn’t send the right signals to travelers about how much different kinds of travel, in different places, at different times, actually cost. In contrast, the proposal to replace the gas tax with a vehicle miles traveled (VMT) tax – directly charging people by how much they drive – is clearly a step in the right direction.

With great fanfare, the State of Oregon launched its road pricing demonstration program, OReGO, three years ago. Under this voluntary program, up to 5,000 motorists were supposed to sign up to pay a per-mile fee of 1.5 cents in place of the state’s 30-cents-per-gallon gas tax. Motorists had two options for monitoring their mileage: one that periodically read the vehicle’s diagnostic data port, and another that used GPS technology.

For those hoping for a sensible alternative to the gas tax, the VMT fee seems like a big improvement. CityLab’s Eric Jaffe was enthusiastic about the mileage-fee concept, listing 18 reasons why it’s a good idea. While the concept of charging more for the roads, and charging in a way that reflects the cost of use – including congestion, road damage, and pollution – is essential, Oregon’s VMT fee does exactly none of these things. The crux of the problem is that 1) it raises no more money than the current gas tax, and 2) it ends up subsidizing heavier, more polluting vehicles while actually punishing lighter, more fuel-efficient ones.

For many, the primary reason to favor a VMT tax is, as Eric Jaffe puts it, to “raise a gargantuan amount of money.” That would replace the gas tax, which, according to accepted political wisdom, is a dying revenue source. But VMT may not be a revenue panacea. For one thing, total driving in the US has declined in the past, and is likely to decline in the future. Moreover, if we tied the tax to VMT—and set the tax at a high enough level to produce the “buckets” of revenue that proponents want–we’d expect that people would do what they normally do when something gets more expensive: do less of it.

The gas tax functions more like a carbon tax than does the VMT fee

It also turns out that the gas tax is more proportional to the physical damage vehicles cause to the roadway and to the environment. A gas tax functions very much as a carbon tax (albeit a very low one): the more you pollute, the more you pay. But shifting to a tax based solely on mileage, without regard to how much pollution a vehicle creates, would essentially tax hybrids to subsidize hummers.

The flat, undifferentiated VMT fee would be like a butcher who charged a single price per pound for every cut of meat in the shop. You’d quickly find long lines of people waiting to buy steak, and you’d end up throwing out over-priced hamburger that no one would buy. A key part of a VMT fee should be its ability to signal to users how much travel costs society as a whole, depending on when, where and how they do it. A flat fee per mile, whether it’s in Manhattan, New York or Manhattan, Kansas, or at 5am or 5pm, will do nothing to encourage people to use cars at more efficient times or places, or to choose to take transit, bike, or walk instead.

Getting a VMT fee right is going to become increasingly important, because the problem of mis-pricing and under-pricing road use is going to become much worse in the years ahead. The business models of Uber and other “ride-sharing” services are predicated on very low-cost access to the public right of way.

Already, there is evidence that the growth of Uber and other for-hire vehicles is putting further strains on very limited street capacity. In New York, the number of for-hire vehicles in the city has grown 63 percent since 2011, and traffic speeds on Manhattan streets have fallen 9 percent since 2010. Slower traffic has resulted in slower rush hour bus service, and contributed to declining Manhattan bus ridership, which fell 5.8 percent last year.

It’s tempting to treat road pricing as just a way to raise more money for construction and maintenance. But that would be a huge mistake. If we get the prices right, we can make a significant dent in congestion by signalling to travelers how to make more efficient use of the roads we have, avoiding the need for expensive new capacity.

Already, we have good models of how this works with congestion pricing systems that vary by place and time of day in London, Stockholm and Singapore. San Francisco has implemented variable pricing for parking. New York, Los Angeles and Chicago are all actively exploring proposals to implement various forms of road pricing based on time of day. And the evidence from earlier experiments in Oregon is clear: while a flat VMT fee has very little impact on peak hour travel, a fee that ranges from 0.4 cents a mile (off peak) to 10 cents per mile during peak hours in the central city would reduce vehicle miles traveled by more than 20 percent.
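
To see the size of the price signal involved, compare a year of driving under a flat OReGO-style rate with the variable rates from the Oregon experiments. The 12,000-mile total and the 30 percent peak share are assumptions made up for illustration:

```python
# Flat OReGO-style fee vs. the variable fee from the Oregon experiments
# (0.4 cents off-peak, 10 cents peak). The mileage total and the
# 30 percent peak share are invented for illustration.
miles = 12_000                          # miles driven per year (assumed)
flat_cents = miles * 1.5                # flat fee: 1.5 cents on every mile
peak_miles = miles * 0.30               # assumed share driven at the peak
variable_cents = peak_miles * 10 + (miles - peak_miles) * 0.4
# The wide gap between the peak and off-peak rates is the point: a
# driver who shifts trips out of the peak cuts the bill dramatically,
# which a flat per-mile fee can never encourage.
```

Under these assumptions the variable schedule charges this peak-heavy driver about twice what the flat fee does, while an all-off-peak driver would pay a small fraction of it; that asymmetry is what moves trips out of the peak.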

Oregon now seems set to leapfrog the VMT fee. The 2017 state legislature directed the Oregon Department of Transportation to seek federal permission to implement value pricing on Interstate 5, Interstate 205 and other freeways in the Portland metropolitan area. The state’s proposal is due to the US DOT by year end. Based on the latest iteration of its long-awaited infrastructure proposal, the Trump Administration is favorably disposed to letting the states make the call on tolling existing Interstates.

Our transportation problems are–at their root–a product of getting the prices wrong. If we adopt a new way of charging for public roads, we have a once-in-several-generations chance to get the prices right. Let’s not blow it by failing to make sure that the way we price roads and travel sends the right signals to everyone about how and when to use the roads.

Why aren’t we talking about Marietta, Georgia?

Imagine this: A city government takes $65 million in public money and buys up more than 1,300 units of aging but affordable housing, home mainly to low income and minority residents. It demolishes the housing, and plans to sell the land to private developers for office and retail development.

A pretty cut-and-dried case of gentrification and displacement, wouldn’t you say?

Or maybe it’s a tale from the bad old days of “urban renewal” when cities fought poverty by bull-dozing “blighted” neighborhoods?

Actually this story is unfolding now, in one of the nation’s largest metro areas.

But while it seems that every move in the gentrification battles in Brooklyn and San Francisco is broadcast nationally, this egregious case of direct government displacement is being ignored. Maybe if it happens in the suburbs and doesn’t involve hipsters, it isn’t worthy of media attention.

Here are the details: Last month Mayor Steve Tumlin of Marietta, Georgia sat at the controls of an excavator and took the first swipe at knocking down the Woodlands Park Apartments. The city of Marietta, just outside Atlanta, has acquired – and demolished, or plans to demolish – four apartment complexes on Franklin Road containing more than 1,300 apartments. The demolition is funded by a bond issue approved by city voters in November 2013 by a 2,740 to 2,307 margin. The city has additional bond money and is in the process of acquiring more apartments, with plans to demolish them as well.

(Top: The entrance to Woodlands Park Apartments as it appeared in 2011. Bottom: The shuttered complex in May 2015. Source: Google Maps.)

Marietta officials take a dim view of the apartment complexes on Franklin Road on the city’s southeast side. They describe it as a blighted, high-crime area. US Senator Johnny Isakson said: “I go by Franklin Road as fast as I can every day.”

(If Marietta is a familiar name as a flashpoint for the problems of low income citizens living in suburbs, it should be. You may recall the case of Raquel Nelson, a single mother of three, who was convicted of manslaughter when she and her children were hit by a drunk driver when crossing a suburban highway from a bus-stop to their home.)

One Atlanta commentator described the project as removing ten “ancient” apartment complexes and “ushering” the residents to different locations. Most local citizens echo this view. The mayor sees it as a clear-cut opportunity to assemble land and develop new business. The city feels that it spends a disproportionate share of its tax revenues providing services to the neighborhood. One “benefit” of the demolitions, then, is lower enrollments at local schools. In just the past year, school officials reported a decline of 250 students from the Franklin Road area.

The project has produced little outcry. One of the few outspoken opponents is a local resident, Marty Heller, who argues that the demolitions are “class warfare”: “The people who voted for it want to eliminate the population on Franklin Road and raze the apartment complexes and replace it with commercial development. They want to eliminate the poor people on Franklin Road, they want to get the Hispanics out of the school system so that their test scores will go up, and it will make it easier for the school system.” The bond measure’s proponents respond that they are helping the poor who are now “trapped in high density crime-ridden slum like apartment complexes.”

What happens to the former residents of these apartments is far from clear. They will have to find housing elsewhere, and their children will have to be educated somewhere else. The demolitions are substantial, amounting to about 10 percent of all the multi-family housing in Marietta. The city says it will help relocate residents, but in press accounts at least, details are scant. Whether residents can continue to afford to live in Marietta, and whether students will end up in some other school district, doesn’t seem to be the city’s chief concern.

The apartments in question date from the 1960s, and when they were constructed were a desirable location for young couples and singles in suburban Atlanta. But as the region has sprawled and the apartments have aged, they’ve gradually moved downmarket. Apartments.com reports that the Marquis Place complex – which the city plans to acquire and demolish – offers 1 to 3 bedroom apartments for rents of $660 to $940 monthly.

It’s interesting to look back at the history of the neighborhood along Franklin Road. We’ve assembled some data from Brown University’s Longitudinal Tract Database that tracks Census data from 1970 through 2010. We examined data for Census Tracts 304.11, 304.12, and 304.14, which include the apartments in question. In 1970, when the apartments (and most of the housing in the surrounding areas) were still quite new, this was a high income, predominantly white area. The poverty rate was just 4 percent, and the median household income was about 70 percent higher than the national average. In each successive decade, the economic status of the area has slipped. Today, the poverty rate in these tracts has increased to 28 percent – just shy of the 30 percent threshold we use to define neighborhoods of concentrated poverty – and median household incomes are about 25 percent below the national average.

Over the past four decades, the racial and ethnic composition of this neighborhood has changed even more dramatically. In 1980, the residents of these three Census Tracts were nearly 95 percent white. Today, only 14 percent of the residents are non-Hispanic whites. The area’s population is now four-fifths persons of color: about 52 percent black and about 30 percent Hispanic.

As we’ve pointed out before, public interest in gentrification seems to be highly focused in just a few large – and generally liberal – metropolitan areas. The poster children of gentrification are hipster neighborhoods in Brooklyn, Washington, San Francisco and Portland. The data and scholarly research on the subject show that even in these areas, displacement is far less than imagined, and previous residents are less likely to move away from gentrifying neighborhoods than non-gentrifying ones, and benefit from neighborhood improvement.

Still, the narrative about urban gentrification is full of vitriol and conspiracy theories: city officials, in league with banks and developers, look to exploit poor neighborhoods. Often these theories overlook, or entirely discount, the growing demand for urban living, and the shortage of housing and neighborhoods created by restrictive single-family zoning. So it’s a bit surprising that no one calls it “gentrification” when the demolition of affordable multi-family housing and the displacement of low income residents is the explicit, stated strategy of a local government.

That no one uses the term “gentrification” to describe Marietta’s plan to purposefully de-populate the low income residents of the Franklin Road apartments says a lot about how we think about poverty, class and place in urban areas. It’s apparently acceptable for suburbs to actively discourage – and in this case, actually relocate – low income renters. This may be a by-product of our obsession with neighborhood change in just a handful of neighborhoods in New York, San Francisco and Chicago: we don’t even notice when the absolute worst-case scenario of low-income displacement for private development takes place in a major metropolitan area, because it doesn’t fit the sexy narrative we’re used to. By pretending this sort of thing only happens in Brooklyn or the Mission, we leave the low income households who used to live in these now-demolished Marietta apartments vulnerable to very real displacement.

What’s next for Franklin Road? Marietta officials are hoping to persuade the Atlanta Falcons to build a new practice facility for their professional soccer team on 50 acres formerly occupied by hundreds of apartments.

Paving Paradise

Vancouver and Seattle are regularly rated among the most environmentally conscious cities in North America. The Economist Intelligence Unit ranked them among the top five greenest cities in 2012. The State of Washington has enacted a law setting a goal of reducing greenhouse gas emissions by 25 percent from 1990 levels by 2035 (RCW 70.235.20); British Columbia’s government actually imposed a carbon tax. Clearly, this part of the world has a reputation for progressive environmental leadership. But is it deserved?

Even in Vancouver. Credit: Mark Woodbury, Flickr

If we dig deeper, the reality is that when it comes to transportation policy, there’s a lot of asphalt in this part of Ecotopia. Both Washington State and British Columbia are bent on major highway building binges – while at the same time forcing investments in transit to go through a tortuous and uncertain approval process.

Two developments this week demonstrate that in spite of stated goals and sweeping rhetoric about climate change, when it comes time to lay their money down, policy decisions by state and provincial governments mean that Seattle and Vancouver are roaring ahead with investments in a car-centric, carbon-intensive transportation system.

This week, Washington’s legislature is on track to pass a $16 billion state transportation package that provides $8.8 billion for new highways – plus an additional $2.8 billion to pay off debt on highways already under construction. The bill widens a major highway bridge connecting Seattle to its eastern suburbs, widens the I-5 and I-405 freeways in the Seattle area, builds a “Puget Sound Gateway” and widens roads to the airport. The environmental consequences are clear. As our friends at the Sightline Institute have documented, wider roads translate directly into greater carbon emissions: each additional lane mile of freeway produces an estimated 100,000 tons of carbon over fifty years.
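
The Sightline figure makes for simple back-of-the-envelope carbon math; the widening scenario below is invented purely to show the scale:

```python
# Carbon arithmetic from the Sightline estimate cited above:
# ~100,000 tons of CO2 per new lane-mile over 50 years.
TONS_PER_LANE_MILE_50YR = 100_000
annual_per_lane_mile = TONS_PER_LANE_MILE_50YR / 50   # 2,000 tons per year

# Hypothetical widening (invented for scale): one added lane in each
# direction along 10 miles of freeway = 20 new lane-miles.
lane_miles = 2 * 10
tons_over_50_years = lane_miles * TONS_PER_LANE_MILE_50YR  # 2 million tons
```

Even a modest-sounding project, on this estimate, commits the region to millions of tons of additional carbon over the life of the pavement.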

To be sure, the Washington legislation also contains a transit component, but it takes a very different form than the highway spending authorization. The bill authorizes “Sound Transit” – the regional transportation agency for Seattle — to go to local voters and ask for a $15 billion local tax and fee increase over a 15 year period to expand the region’s light rail system. So highway projects get statewide funding without a vote of the people, but transit projects will be funded only from local revenue and only if local voters approve. What’s more, the legislature has ended the sales tax exemption of transit projects, meaning that the transit agency will end up paying some $500 million to the state in sales taxes, which – you guessed it – will end up subsidizing highways. In a final slap to the environment, the bill includes a so-called “poison pill” provision, prohibiting Washington’s Governor from promulgating regulations to lower the carbon content of fuels used in Washington State. As Seattle Transit Blog summarizes it: full speed ahead with highway expansion; transit will have to wait on another vote of the people to tax themselves.

And Seattle. Credit: SounderBruce, Flickr

A similar scenario is playing out just north of the border in Canada. British Columbia’s provincial government forced local leaders in the Vancouver region to campaign for a half-cent sales tax increase to fund a proposed $7.5 billion package expanding local rail and bus service. It was just announced that voters rejected the measure by a margin of 62% to 38%. Votes were nearly evenly divided in the city of Vancouver, but the measure lost by a lop-sided margin in the suburbs. (Detailed election returns are shown at Elections BC.)

As in Washington State, while transit has to be funded locally, conditional on a referendum, the provincial government is more than happy to pour money into the highway system without a popular vote. On top of that, in BC’s case, it’s going into debt to do so. British Columbia has just finished a new $3 billion crossing of the Fraser River. The new tolled 12-lane Port Mann Bridge replaces a 6-lane, 1970s-vintage predecessor, but so far carries less traffic. As a result, the new bond-financed bridge is losing about $80 million a year. The new Golden Ears bridge nearby, facing the same issues, is losing about $45 million. Both bridges had overly optimistic projections of traffic and toll revenues that haven’t come close to being realized.

And the provincial government is moving to double-down on its costly Fraser River bridge building spree, proposing to replace the existing George Massey highway tunnel 15 miles from the new Port Mann bridge with yet another $3 billion ten-lane bridge. This project would not be subject to popular vote, and will likely be financed by borrowing and tolls, too, running the same risks that have plagued the Province’s other projects.

Finally, as we mentioned earlier this week, Oregon’s own proposed highway expansion package failed after it was shown that the carbon reduction estimates for operational improvements (ramp metering, signal timing and the like) were overstated by a factor of five.

Even in Ecotopia, there’s a profound disconnect between the high-minded rhetoric of public leaders and the way that deals actually get done when it comes to allocating transportation investment. As Seattle Transit Blog editorialized: “Our allegedly climate-focused Governor either doesn’t grasp or doesn’t care about the link between highways and carbon emissions, and therefore fought hard for the highways.” It’s one thing to take a pledge to reduce greenhouse gas emissions at some point in the future; it’s another to make the hard decisions that will change the path we’re on.

 

Climate concerns crush Oregon highway funding bill

While headlines focus on the nearly-bankrupt federal Highway Trust Fund, state and local departments of transportation across the country are facing declining revenues, maintenance backlogs, and an insatiable desire for funding new projects. As a result, this summer, a number of states are working on new highway funding packages. So far in 2015, eight states have enacted revenue increases, but others are still struggling to do so. Michigan’s Legislature is contemplating a $3.4 billion plan that would be financed in part by cutting the state’s earned income tax credit. Minnesota couldn’t muster the support for a proposed $6 billion statewide program, settling instead for a “lights on” alternative that actually reduces statewide highway construction funds by about one-sixth.

I-5 in Portland. Credit: Doug Kerr, Flickr

And this past week, Oregon’s effort to fund a new transportation package imploded, after carbon reduction estimates prepared by the State Department of Transportation were shown to be exaggerated by a factor of five.

The Oregon Legislature was considering a $343 million program financed by a higher gas tax and vehicle registration fees—but a critical element in the compromise to get the needed support for the bill was a repeal of the state’s recently enacted “clean fuels” law, which would have required lower carbon content in the state’s fuels.

Advocates of the repeal (and the transportation package) had argued that provisions in the new program—including a series of investments in alternative fuel vehicles and electric vehicle charging stations, coupled with “operational improvements” in state highways—would reduce carbon emissions by as much as the clean fuels law. According to the state transportation department, measures like additional ramp meters, variable electronic speed limit signs, and travel advisory signs would lower carbon emissions.

But in dramatic testimony to the State Legislature on June 24, Oregon Department of Transportation (ODOT) Director Matt Garrett conceded that his staff had overstated carbon emissions savings by a factor of five, and that rather than saving more than 2 million tons of carbon over a decade, the measures would save only about 400,000 tons. This admission vindicated the opposition of environmental groups, and led Governor Kate Brown and legislative leaders to withdraw support for the bill.

It’s striking that environmental considerations played such a key role in the defeat of this transportation measure. There’s little question that carbon emissions from transportation are a major contributor to greenhouse gases and climate change, but policymakers in very few states have made the connection between added road building and more carbon pollution.

A critical question is whether operational improvements like ramp metering actually reduce emissions. Data cited by Oregon environmental groups cast serious doubt on that assertion. The Oregon Environmental Council pointed to a Federal Highway Administration report showing that the evidence that transportation operations improvements reduce greenhouse gases is “largely inconclusive.” Yes, some measures may smooth out traffic flow, thereby reducing emissions, but ramp metering also leads to additional idling at ramps. What’s more, cars consume more gas per mile at higher speeds than at lower ones. Some studies find that such projects actually increase net emissions.

As environmental groups like the Oregon Environmental Council and the Oregon League of Conservation Voters pointed out, even ODOT’s new, much-reduced estimate of carbon savings is extremely suspect.

The Federal Highway Administration commissioned the RAND Corporation to study the scientific literature on the efficacy of alternative investments—including ramp metering—in reducing greenhouse gas emissions. RAND’s 2012 review concluded that studies of the impact of “operations improvements” on vehicle emissions have produced mixed results, to say the least.

For example, according to RAND, one of the few evaluations of the effect of ramp metering on emissions was carried out in Oregon. The state deployed 16 ramp meters on a stretch of I-5 in and around downtown Portland during the AM and PM peaks, and estimated that overall, emissions fell by about 1,000 tons per year. That implies per-meter savings of roughly 62 tons per year, on a very heavily used stretch of urban freeway, in the era of 1980s-vintage gas guzzlers. Even if you could duplicate that record today (with much more fuel-efficient cars, and on roads with much lighter traffic), it would require metering about 640 freeway ramps to achieve 400,000 tons of savings over a decade.
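This back-of-envelope arithmetic is easy to verify; the sketch below uses only the figures quoted above (the 16-meter and 1,000-ton numbers from the Portland study, and ODOT's 400,000-ton decadal claim):

```python
# Figures from the Portland I-5 ramp metering study cited by RAND.
meters_deployed = 16          # ramp meters installed on I-5
annual_savings_tons = 1_000   # estimated total emissions reduction, tons/year

# Implied savings per meter: about 62.5 tons per meter per year.
per_meter_tons = annual_savings_tons / meters_deployed

# ODOT's revised claim: roughly 400,000 tons over a decade, i.e. 40,000 tons/year.
annual_target_tons = 400_000 / 10

# Ramps that would have to be metered to hit the target at the historical rate.
ramps_needed = annual_target_tons / per_meter_tons
print(per_meter_tons, ramps_needed)   # 62.5 640.0
```

As the next paragraph notes, there aren't anywhere near 640 freeway ramps in the entire state.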

As it turns out, there aren’t anywhere near that many freeway ramps in the entire state. What’s more, ramp metering has already been installed on all of the most heavily traveled freeways in the Portland area, meaning that the segments that are left are likely to be much less “productive” in terms of carbon reductions.

But that’s just the visible tip of a much larger carbon emissions iceberg that this package represented. The $343 million bill earmarked $124 million for a series of highway expansion projects, including widening the I-205 freeway from four to six lanes in Portland’s suburbs. The scientific evidence on the effect of capacity increases on carbon emissions is unequivocal: providing more capacity generates “induced demand”—more traffic, longer trips, and greater sprawl—and therefore actually increases carbon emissions. As far as we can tell, ODOT’s modeling of HB 2801 made no allowance for the increased carbon emissions that would result from induced demand.

But here, the science is well established. One need look no further than Portland State University’s Transportation Research and Education Center. Two of its scientists, Alex Bigazzi and Miguel Figliozzi, in a paper published by the Transportation Research Board, showed that increasing capacity on congested roads to allow traffic to move faster and more smoothly actually increases total emissions.

As a society, we’re increasingly coming to understand that the threat of climate change is real, and we are also beginning to understand that it will necessitate a different approach to transportation investments than we’ve made in the past. It’s tempting—but simply wrong—to think that making cars move faster is a solution. This clash in Oregon is a harbinger that efforts to combat climate change and business-as-usual transportation spending are likely to be on a collision course in the years ahead.

Playing together is getting harder to do


In our CityReport, Less in Common, we explored a key symptom of the decline in social capital: Americans seem to be spending less time playing together. One major driver of this trend is a dramatic privatization of leisure space. Instead of getting together in public parks and pools (or just playing in the street), more of our recreation takes place in private backyards, private pools, and private gyms. Prior to World War II, for example, there were fewer than 2,500 homes in the US with in-ground private pools. Today, there are more than five million.

Everyone's got a pool in this southern California subdivision. Credit: Google Maps


While that may not seem like a big deal – isn’t it a good thing if people can swim in their own backyards? – pools are a particularly good example of the way the privatization of leisure space is tied up with the history of sprawl and racial segregation. When it became illegal to bar black swimmers from public pools, many of those pools lost all or nearly all of their white patrons, or simply shut down. Their replacements sprouted in places where exclusion was easier: behind the fences of private yards or gated communities. The stakes were demonstrated as recently as last week, when white residents of a gated community in McKinney, Texas objected to the presence of black residents and their friends at the community pool. The police officers who showed up handcuffed, manhandled, and pulled a gun on the unarmed teens – many, if not all, of whom had a right to be there – all in the name of keeping the pool exclusive.

And what’s true of swimming pools is true of many other kinds of recreation: we’re spending more time playing apart in private places than playing together in public ones. For example, while regular exercise has become an important priority for many Americans, we increasingly exercise in private facilities, rather than public parks or community centers. The membership of private gyms has increased from 13 million in 1980 to more than 50 million today. While it’s great that more people are working out, the membership of private gyms skews heavily towards younger, wealthier and better educated demographics.

Moreover, the pattern of privatized recreation starts at an early age. Opportunities for children to serendipitously engage in unstructured (and largely unsupervised) play have diminished.

One of the iconic images of recreation in the U.S. is the pickup game, whether it’s half-court hoops on a public playground, or baseball in the proverbial sandlot, or soccer on a grassy field. Whoever shows up can play, and the games have their own, largely self-organizing and self-regulating character. While there’s no published data on this kind of informal activity, the privatization of recreational space means these games are harder and harder to play – and when they do happen, the players are more likely to be of the same racial or economic background. Even participation in organized sports has been in decline. The number of 6- to 17-year olds participating in the four most popular team sports – baseball, basketball, football and soccer – has declined 4 percent nationally in the past five years.

Credit: Shad A Hall, Flickr


More generally, it has become increasingly rare for younger children to walk or cycle away from their homes and away from constant parental supervision. The few parents who promote greater independence are treated as eccentric for raising “free range kids.” Recently, in the suburbs of Washington DC, parents were taken to court for letting their children, ages 6 and 10, walk three blocks to a local park unaccompanied. The combination of physical distance and paranoia limits the amount of time kids spend unsupervised in the public realm.

And helicopter parents don’t have helicopters – they rely on SUVs to haul children from place to place. The result is that parents spend a not inconsiderable amount of time and money transporting children to and from school, play dates and other social activities – the kinds of trips that in earlier times (and denser communities) kids could have taken on their own. Todd Litman estimates the “chauffeuring burden” accounts for between 5 and 15 percent of all vehicle travel, and imposes costs greater than the high end estimates of congestion time loss. Plus, the added time spent traveling in private cars is time spent cocooned in a vehicle, out of the public realm.

In sprawling suburbs, the closest park or schoolyard may be too far away to walk or bike. The tendency toward building larger elementary and secondary schools, coupled with lower residential densities, means schools are farther from the average household (even though they may have ample open space).

The average size of an elementary school has increased about 20 percent in the past two decades, chiefly as older, smaller schools are closed and newer, larger schools are built: in 1982 the average elementary school had 399 students; by 2010, that had grown to 470.

This pattern has been reinforced by policies that make it hard to retain or renovate small schools, and that produce bigger schools on the urban fringe. Nationally adopted standards for school siting call for new elementary schools to have sites of at least ten acres, with the result that fewer, bigger schools are built, typically on the periphery of communities where large sites are available and land is cheap.

The lessened ability of kids to bike and walk to school and to travel independently, and their growing dependence on adults to be their chauffeurs, chaperones and social directors, has been identified as a major contributing factor to the rapid growth of childhood obesity. But it seems equally likely that inactivity and isolation are also contributing to the widespread malady of decreased social capital.

As one old saying goes, “the family that plays together, stays together.” The same might well be said of communities. Looking for opportunities to create more ways to play together in the public realm is likely to be an important strategy for reducing the erosion of our shared social capital.

The new trend in homeownership: Gerontrification

Two major reports in the last week have painted a stark picture of the future of the US housing market. Last week’s report from the Urban Institute predicted that the decline in homeownership over the past seven years will be “the new normal.” Then, on June 24, Harvard’s Joint Center for Housing Studies released its own report, “The State of the Nation’s Housing 2015,” echoing many of the same themes.

The bottom line: the single family homeownership market is not coming back for the foreseeable future. And the reasons are as much about demographics as economics.

Credit: Christopher Sessums, Flickr
Credit: Christopher Sessums, Flickr


RENTING UP; OWNING, NOT SO MUCH. As we’ve noted at City Observatory, the shift to renting has been strong over the last several years: since 2007, the number of rental households in the US has increased by 17 million, while the number of owner occupied homes has declined.

The two studies agree that this shift to renting will continue for the foreseeable future. According to the Urban Institute, 13 million of the 22 million net new households formed between 2010 and 2030 will be renters. For its part, the Joint Center for Housing Studies predicts that a majority of those under 30 today will form rental households during the decade ahead.

This trend toward rental housing has become well-established in the past few years. According to the Harvard report, the overall homeownership rate has fallen from a peak of 69% during the housing bubble to 64% in 2014. The decline since 2007 has erased all of the gains in homeownership of the past two decades, and the national homeownership rate is now back to the level of the early 1990s. (So much, apparently, for the much-ballyhooed efforts to create an “ownership society” through housing.)

As the Joint Center for Housing Studies (JCHS) report makes clear, there are lots of economic reasons for these trends. Most importantly, real (inflation-adjusted) household incomes are below the levels they were in the 1990s. Households with less income can afford less housing, and as a result are less likely to be homeowners. Also, today’s young adults are much more likely to be burdened by student debt: The JCHS reports that 41% of 25-34 year old renters have student debt, up from 30% a decade ago, and that debt averages more than $30,000 – up 50 percent from a decade earlier. Finally – and for very good reasons – lending standards are much tougher today, and households with weak credit can’t qualify for home loans as easily as they could in the era of NINJA (no income, no job or assets) lending during the housing bubble. But the growth in renting isn’t just a story of relatively impoverished millennials: JCHS shows that renting levels are up for those 45 to 64, and that households in the top half of the income distribution – which are far more likely to own – accounted for 43% of the growth in rental housing occupancy.

THE “GERONTRIFICATION” OF HOMEOWNERSHIP. The other major implication of these two studies hasn’t gotten much attention. The aggregate statistics conceal deeper changes in the pattern of homeownership by age. Over the next two decades, the typical homeowner will be older than today – much older, because all of the net growth in homeownership will be among households whose head is 65 years or older – and the number of homeowners under 45 will decline. The Urban Institute’s estimates are that by 2030, this will produce a pronounced “generation gap” in housing tenure: 34 million homes owned by those over 65 (up from 15 million in 1990) and just 22 million homes owned by those under 45 (down from 24 million in 1990).

Much of the gain in home ownership during the 1990s and early 2000s was the product of demographic forces – the Baby Boom generation maturing fully into its maximum home-owning years. As the JCHS data show, the only age cohort that has higher home ownership today than 20 years ago is those 65 and over. For all age groups under 65, homeownership rates are 3 to 5 percentage points lower today than they were in the early 1990s.

The Urban Institute predicts that the trend toward older homeowners will continue through 2030. All of the net increase in homeownership from 2010 through 2030 will be in households aged 65 and over. The net increase will be about 9 million more homeowners between 2010 and 2030. Homeowning households aged 65+ will increase by 13.6 million over those two decades. The number of homeowners aged 45 and under will decline by almost a million between 2010 and 2020, and rebound by only about 360,000 over the following decade. Thus there will be no net increase in homeownership by young homeowners over the two decades 2010-2030.
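The change for 45-to-64-year-old homeowners isn't stated directly in the figures above, but it can be inferred as a residual (a rough sketch: the inputs are the Urban Institute estimates quoted in this post; the 45-64 figure is our inference, not theirs):

```python
# All figures are millions of homeowning households, 2010-2030,
# from the Urban Institute projections described above.
net_increase = 9.0               # total net new homeowners
increase_65_plus = 13.6          # growth among households aged 65 and over
change_under_45 = -1.0 + 0.36    # decline to 2020, partial rebound after

# The 45-64 change falls out as a residual of the other three figures.
implied_change_45_to_64 = net_increase - increase_65_plus - change_under_45
print(round(implied_change_45_to_64, 2))   # -3.96: a decline of about 4 million
```

In other words, these projections imply that middle-aged homeownership shrinks substantially too; only the 65-and-over cohort grows.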


The trends in homeownership are heavily driven by the aging of the US population.  Nearly all of the growth in households between 2010 and 2030 will come in households headed by seniors, according to the JCHS projections. Households headed by those under 45 will increase by about 5 million through 2030; households with heads aged 45 to 64 will be nearly flat; and the number of households headed by those 65 and older will increase by about 21 million (JCHS, Table 6).

Rather than gentrification, maybe we need to be thinking about “gerontrification.” The shift in the age profile of the typical homeowner and the growing generation gap between renters and owners is likely to pose big challenges to housing policy.

Will older homeowners want to age in place? Will we experience a housing size and tenure mismatch, with smaller and older households owning homes and larger and younger households primarily renting? These two studies just scratch the surface of these important questions. Stay tuned: these are interesting times for the housing market.

Show Your Work: Getting DOT Traffic Forecasts Out of the Black Box

  • Traffic projections used to justify highway expansions are often wildly wrong
  • The recent Wisconsin court case doesn’t substitute better models, but it does require DOTs to show their data and assumptions instead of hiding them


The road less traveled:  Wisconsin Highway 23

There’s a lot of high-fiving in the progressive transportation community about last month’s Wisconsin court decision that stopped a proposed highway widening project. The reason? The state Department of Transportation (DOT) used inadequate traffic projections to justify the project.

The plaintiffs in the case were in a celebratory mood. Steve Hiniker, Executive Director of 1000 Friends of Wisconsin, said, “We have known for years that the state DOT has been using artificially high traffic forecasts to justify a number of highway expansion projects. Now a federal court has validated our claims.” Over at CityLab, Eric Jaffe calls it a court-ordered vindication of the peak car argument: “How Wisconsin residents cried peak car and won.”

But while the decision is hugely encouraging, it’s important to understand that 1000 Friends of Wisconsin v. US DOT wasn’t a conclusive win for better traffic projections — the case was actually decided on different, much narrower grounds.

The federal district court ruling is really a takedown of the opaque “black box” approach most state DOTs use in transportation forecasting. The project in question was a 20-mile long widening–from two lanes to four–of a stretch of state highway 23 between Sheboygan and Fond du Lac. The environmental group sued, charging that the Environmental Impact Statement prepared to justify the project and evaluate alternatives was based on faulty and outdated forecasts that overstated future traffic levels.

The court made it clear that it wasn’t in the business of adjudicating competing claims about the reasonableness of models or modeling assumptions. And it didn’t rule that 1000 Friends of Wisconsin’s arguments about declining traffic or peak car or lower population projections trumped or invalidated Wisconsin DOT’s modeling. What the court did do, however, was say that WisDOT failed to explain how its model worked in a way that the public (and the court) could understand. Essentially, the court ruled that Wisconsin DOT couldn’t use a “black box” to generate its projections — instead it had to present its data, assumptions and methodology in such a way that the public and outsiders could see how the results were produced. Judge Lynn Adelman wrote:

“In short, a reader of the impact statement and the administrative record has no idea how WisDOT applied TAFIS and TDM to produce the actual traffic projections that appear in the impact statement.”  (page 12)

The court was unpersuaded by the vague and repetitive assurances the DOT offered in its defense about techniques and the mechanics of modeling methodology. The court specifically found that the DOT staff failed to explain how they arrived at the projected traffic volumes that appear in the impact statement, which seem to conflict with the recent trend of declining traffic volumes. And it found that:

“. . .  the defendants repeated and elaborated on their general discussion of how TAFIS and TDM work and did not explain how those tools were applied to arrive at the specific traffic projections that appear in the impact statement.” (page 13).

It appears that the DOT’s position foundered over its inability to answer very basic questions about how a decline in population forecasts and a decline in recorded traffic levels squared with its modeling of future traffic levels.  The Wisconsin DOT didn’t explain to the court’s satisfaction why it was sticking with the same level of traffic predicted for 2035, when population growth rate forecasts–which were supposedly a key input to the model–were reduced by two-thirds.

As a legal matter, the court went out of its way to state that it wasn’t about to second guess the methodology and assumptions chosen by the state DOT.  Here the court ruled, as other courts have, that unless the methodology is “irrational,” it’s not in violation of the National Environmental Policy Act (NEPA).

While it falls short of a legal vindication of the “peak car” argument, requiring DOTs to open up their “black box” forecasts is still likely to prove a hugely consequential ruling. Official DOT traffic forecasts are frequently presented as the product of a special kind of technical alchemy. While model results are clothed with the illusion of precision (“this road will carry 184,200 cars in 2035”), there’s really much, much more ambiguity in the results. To pass muster under NEPA, the process used for calculating future traffic levels will now likely have to be laid bare.

Those who’ve worked with traffic models know that they’re clumsier, clunkier and more malleable than the precise, hyper-technical image that traffic engineers (or politically appointed transportation agency officials) typically paint of them in the introductions to environmental impact statements. The numerical outputs from computer simulations, for example, are often subjected to “post-processing” — the preferred euphemism for manually changing predicted traffic levels based on the judgment of the modeler (or the desires of the modeler’s client).

And there’s lots of room for manipulation. In his book, “Toll Road Traffic and Revenue Forecasts” Rob Bain, a pre-eminent international expert on traffic forecasting, lists 21 different ways modelers can inflate traffic forecasts and concludes “it is perfectly possible to inflate the numbers for clients who want inflated numbers” (page 75).

In practice, DOTs have often used traffic forecasts as a sales tool or a rationalization for new projects. Once the traffic modeling generates a sufficiently high number to justify additional capacity, the agencies stick with it in spite of evidence to the contrary. For the proposed $3 billion Columbia River Crossing between Oregon and Washington, the two state DOTs stuck with exaggerated vintage 2005 forecasts in a final environmental impact statement issued in 2013, ignoring actual declines in traffic that had occurred in the intervening years. And as in Wisconsin, they offered no explanation as to why the modeling didn’t change.

For years, we’ve known that DOT traffic forecasting models are frequently wrong and that they regularly over-estimate future traffic and congestion.  Multi-billion dollar projects are often predicated on traffic forecasts that fail repeatedly to be borne out by reality.  The Sightline Institute showed that for Washington’s SR-520 floating bridge project, the state always forecast a big increase in traffic, even though traffic levels continually declined.

[Chart: WSDOT traffic projections vs. actual traffic on SR-520, via the Sightline Institute]

The political acceptance of these kinds of errors is rampant in the industry. The State Smart Transportation Institute analyzed an aggregation of state traffic forecasts prepared annually by the US DOT, and found that the 20-year projections overestimated future traffic volumes in every single year for which the reports could be compared against data on actual miles driven by Americans.

[Chart: US DOT 20-year traffic projections vs. actual vehicle miles traveled, via the State Smart Transportation Institute]

A big part of the reason these flawed forecasts have continued to be made–and not corrected–is that the forecasting process is opaque to outsiders.  The federal district court’s ruling in 1000 Friends of Wisconsin v. U.S. DOT should make it much more difficult for highway builders to continue justifying projects based on this kind of “black box” modeling. As the old saying goes:  sunlight is often the best disinfectant. Greater transparency in the data and assumptions that underlie traffic forecasts could lead to much wiser decisions about where to invest scarce transportation resources.


Playing Apart

Our City Observatory report, Less in Common, catalogs the ways that we as a nation have been growing increasingly separated from one another. Changes in technology, the economy and society have combined to create more fragmentation and division.

As Robert Putnam described this trend in his 2000 book, we are “Bowling Alone.” And while work, housing and shopping have become more stratified and dispersed, there still ought to be opportunities for us to play together. Sports fandom is one of the few countervailing trends: within metropolitan areas, popular support for the “home team,” whether in pro sports or college athletics, cuts across demographic and geographic boundaries.

But in our personal lives our recreation is becoming more isolated, chiefly through the privatization of leisure.

Consider: instead of going to public parks and playgrounds, more children play in the copious backyards of suburban homes. This trend is amplified by helicopter parents. Free range children are an anomaly, and the combination of sprawl and insecurity adds to the chauffeuring burden of adults–which in turn means spending more time in cocooned private vehicles. And as we know, the decline in physical exercise among the nation’s children has been a key factor in the explosive growth of juvenile obesity.

One of the hallmarks of the decline in the public recreational commons is swimming.  In the early part of the 20th century, swimming pools were almost exclusively in the public domain.  Prior to World War II it was estimated that there were fewer than 2,500 homes with private, in-ground swimming pools.  Today, there are more than 5 million.

That’s one of the reasons we found Samsung’s television commercial “A Perfect Day” so compelling. It highlighted the adventures of a group of kids cycling around New York City, who end up spending time at a public pool. It’s encouraging that a private company can make our aspirations for living life in public a central part of its marketing message.

That’s certainly a contrast to the trend of commoditization of leisure. Increasingly, we pay to play, and play in the private realm.  The number of persons who belong to private gyms has increased from about 13 million in 1981 to more than 50 million today.  While gyms provide a great experience for those who join, they tend to draw disproportionately from wealthier and younger demographic groups–again contributing to our self-segregation by common background and interest.

Over just the past five years, the number of Americans classified as “physically inactive”–not participating in sports, recreation or exercise–has increased from 75 million to 83 million, according to the Physical Activity Council. And youth participation in the most common team sports–soccer, basketball, football and baseball–has declined 4 percent since 2008.

As we think about ways to strengthen and restore the civic commons, we will probably want to place special emphasis on parks and recreation.  Public parks are one of the places where people of different races, ethnicities and incomes can come together and share experiences.

Is gentrification a rare big city malady?

  • Gentrification is a big issue in a few places, and not an issue at all elsewhere.
  • Big cities with expensive housing are the flashpoint for gentrification.

The city-policy-sphere is rife with debate on gentrification. Just in the past weeks, we have a French sociologist’s indictment of bourgeois movement to the central city, the Mayor of Washington and the Secretary of Housing and Urban Development pointing to 300 new units of affordable rental housing as a bulwark against gentrification in DC’s fast-changing Shaw neighborhood, and continued debate over the merits of a moratorium on new housing development as a means of stemming change in San Francisco’s Mission District.

Most of the stated concern about gentrification revolves around the belief that neighborhood improvement automatically produces widespread displacement of the existing population.  But how widespread is the problem?

Outside of big cities with tight housing markets, the effects of gentrification may be much more benign. In an essay entitled “Is gentrification different in legacy cities?” Todd Swanstrom argues that in most of the nation’s metros, the effects of gentrification are more muted, and on balance positive. Because housing prices are low and there is a lot of slack in the housing market, the movement of better educated and higher income people into cities is far less likely to result in the displacement of the existing population.

The variety of opinions about the effects of gentrification are apparent when one talks to mayors. Consider the results of a 2014 survey of the nation’s mayors undertaken by Boston University. The survey explored mayoral attitudes about gentrification, asking them whether they agreed, disagreed or neither agreed nor disagreed with the proposition that “rising property values are good for a neighborhood.” Overall, of the 70 mayors surveyed, 45 percent agreed and 30 percent disagreed with this statement. The pattern of responses is highly correlated with property values: mayors of cities with median home values in the bottom and middle third of the national distribution agreed by a more than two to one margin that rising values are good, compared to only about 20 percent of the mayors of cities with the most expensive homes.

Mayoral Opinion on “Rising Property Values”


Source: Boston University Initiative on Cities, Mayor’s Leadership Survey


Another way of tracking public awareness of the issue is through data on Internet searches. Google data confirm that public interest in gentrification is increasing. The increase has mostly been strong and steady, with a very strong spike coinciding with Spike Lee’s famous anti-gentrification rant at the Pratt Institute in Brooklyn in February, 2014.

The Google data also show a distinctive geographic pattern to the interest in gentrification. Google Trends reports the metropolitan areas with the greatest relative propensity to search for specific terms, including gentrification. Searches for gentrification come disproportionately from a handful of large metropolitan areas, corresponding to some of the nation’s largest and most liberal cities: New York, Austin, Chicago, San Francisco, and Washington head the list. And 32 of the 51 largest US metropolitan areas have reported values of zero for searches related to gentrification.

Top Metropolitan Areas for Gentrification Searches


Source: Google Trends, Page Rank Index for “Gentrification” Relative to Top Metro (New York). Metros color-coded based on statewide presidential vote in 2012 (blue: Democratic; red: Republican).

All of the other metropolitan areas in the country have an index value of zero in Google Trends, indicating almost no interest in the subject.

Fifteen of the nineteen metropolitan areas on this list, including 12 of the top 13, are located in blue states (based on the statewide vote for president in 2012). It’s been argued elsewhere that liberals have done a lousy job of fighting gentrification, and these data at least superficially support this argument.

A common factor in gentrification is a surging demand for urban living in the face of a limited supply of urban housing. We haven’t undertaken a detailed analysis of the housing markets in the cities where gentrification interest is strongest, but they map to the largest cities with robust housing markets (with the exception of Detroit). In smaller markets, and where housing is relatively inexpensive, gentrification doesn’t seem to register as an issue, as measured either by Google Trends or mayoral opinion.

It’s instructive to look at the relationship between metro area population, housing prices and interest in gentrification. The following table stratifies the nation’s 51 largest metropolitan areas – all those with a population of one million or more – by population size, and looks at average home prices (reported by Zillow) in metros with, and without, a reported interest in gentrification (as indicated by the Google Trends data discussed above).

Several findings stand out.  First, interest in gentrification is universal among the 12 largest metropolitan areas, but decreases rapidly as metro area population falls:  half of the second quartile of large metros, two of the next quartile, and none of the last quartile had a measurable interest in gentrification, according to the Google search results.  Second, home values tend to be much higher in metros with an interest in gentrification:  average prices are about 50 percent higher in the second quartile, and about three times higher in the third quartile.   As the final column suggests, home prices tend to be higher in larger metros, but the smaller metros that have an interest in gentrification have average home prices that are higher than in the largest 12 markets.  Interest in gentrification is strongly related to market size and to high home prices. As John Buntin speculated in Slate, the high interest in gentrification in pricey coastal real estate markets may have more to do with middle-class concerns about affording real estate than about the displacement of the poor.

Average Home Value in Markets by Interest in Gentrification

Market Size   Number Interested   Interested   Not Interested   All Markets
12 Largest    12                  297,267      NA               297,267
13th-24th     6                   308,850      205,467          257,158
25th-36th     2                   550,600      154,720          220,700
37th-51st     0                   NA           164,513          164,513

Source:  Google (Gentrification Interest), Zillow (Home Prices)
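As a quick sanity check, the “All Markets” column can be reproduced as a weighted average of the interested and not-interested groups. The sketch below assumes tier sizes of 12, 12, 12, and 15 metros, inferred from the market-size labels rather than stated in the source:

```python
# Reproduce the "All Markets" column as a weighted average of the
# interested / not-interested groups. Tier sizes (12, 12, 12, 15) are
# inferred from the market-size labels, not stated in the source.
tiers = [
    # (label, metros in tier, number interested, avg interested, avg not interested)
    ("12 Largest", 12, 12, 297_267, 0),
    ("13th-24th",  12,  6, 308_850, 205_467),
    ("25th-36th",  12,  2, 550_600, 154_720),
    ("37th-51st",  15,  0, 0,       164_513),
]

for label, n, k, avg_in, avg_out in tiers:
    blended = (k * avg_in + (n - k) * avg_out) / n
    print(f"{label}: {blended:,.0f}")
```

The blended figures match the table’s last column (the 13th-24th tier comes out to 257,158.5, which the table rounds down).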

Most cities have strong limitations on dense development, particularly in the most desirable neighborhoods, so when housing demand surges, it leads to price increases and development pressure that is felt in lower income neighborhoods.  In sprawling markets where the housing supply is relatively elastic (Atlanta, Houston, Dallas) gentrification is far less of an issue; and as noted above, it seems to be a non-issue in most Sunbelt cities (for example Phoenix, Jacksonville, Tampa, Nashville, Charlotte).

These data show that interest in gentrification, while growing, is still a highly localized issue:  it tends to be a concern in large cities, not small ones; in expensive housing markets, not affordable ones; and it is disproportionately common in blue states and relatively rare in red ones.

 

The Convention Center Business Turns Ugly

There’s probably no better example of the faddish, “me too” approach to urban economic development than cities of every size pursuing a slice of the convention and trade show business. Cities have built and expanded convention centers for decades, and in the past few years it’s become increasingly popular to publicly subsidize the construction of “headquarters hotels” near convention centers in hopes of drumming up further business.

As Heywood Sanders pointed out in a commentary at City Observatory in April, the consultant reports prepared to justify these convention centers and hotels are brimming with Pollyanna-ish optimism. But occasionally, even upbeat prose and glossy presentation can’t conceal some of the bitter truths about this industry.

The latest report we’ve seen was prepared for a proposed Milwaukee convention center expansion project by the Chicago-based firm of Hunden Strategic Partners (HSP). Copies of the report are available on the website of the Greater Milwaukee Committee, the study’s sponsors. It’s interesting reading–full of data about the convention centers of more than a dozen cities around the country, as well as the economics of the convention business. Here are some of the highlights, from our perspective.

The sales pitch has changed from hope to fear.

Traditionally, the rationale for building convention centers was to tap into the supposed motherlode of the growing convention center business. By getting a larger share of the growing market for conventions, the theory went, a city could create new jobs and generate additional income and tax revenue.  Today, however, the message is much more grim: cities have to throw money at convention centers (and accompanying headquarters hotels) to avoid losing business to other cities. As Urban Milwaukee’s Bruce Murphy argues, the rationale for public investment is increasingly reduced to mimicking what other “peer” cities are doing. The sales pitch is now: expand your facilities because other cities are doing the same. There’s no expectation of growing profits or jobs; it’s all about avoiding losses and keeping up with the competition.

The convention market isn’t growing.  

The key reason for the grim outlook in this business is that the overall market for conventions and exhibitions is simply stagnant.  The peak year for the US convention business was 2000. Two recessions, and several generations of social media technology later (fifteen years ago, there was no Skype, Facebook, Twitter or Instagram), the market for exhibition space hasn’t grown at all.

ConCtrMkt

The HSP report tries to put a good face on the data, claiming “Exhibit space supply has increased every year since 1999, however paid exhibit space rises and falls with the economy.” Since 2000, however, the “falls” have more than offset the “rises,” and as a result, while the supply of exhibit space has expanded by more than 45 percent, the demand for space in 2014 (blue line on the chart above) was still about 5 percent lower than it was 14 years earlier. As the report dourly concedes, “the supply demand gap gives meeting and event planners an edge in negotiations.” Too much space chasing too few conventions is the real reason that so many cities are finding the convention business a consistent money-loser.

The public costs of headquarters hotels are now measured in billions.

Cities around the country are throwing scarce public resources into subsidizing headquarters hotels as a way to try and drum up more business for struggling convention centers.  According to the report, cities have put more than $1.4 billion in public funds into the construction of almost 20,000 hotel rooms in 28 projects around the country.

HQ_Hotel_List
Source: Hunden Strategic Partners

 

Cities are being conned by clearly unrealistic consultant studies into believing that their money-losing convention center, with just a bit more space or a somewhat newer or somewhat larger “headquarters” hotel, can turn things around. But outside of a handful of places like Orlando and Las Vegas — cities that dominate the market for big national conventions — the convention business is, by and large, a municipal money loser.  Caveat Emptor!

New evidence on integration and economic mobility

It’s unusual to flag an economics article as a “must-read” for general audiences: but if you care about cities and place, and about the prospects for the American Dream in the 21st Century, you owe it to yourself to read this new article by Raj Chetty and Nathaniel Hendren, “The Impacts of Neighborhoods on Intergenerational Mobility: Childhood Exposure Effects and County-Level Estimates.”  (The Executive Summary is just six pages long—you can download it here.)

This work strongly confirms the growing belief that the kind of community you grow up in has a huge impact on your lifetime economic opportunities.  Specifically, the Chetty-Hendren study shows that some communities do a much better job of helping kids from low income families achieve economic success than do others.  And these communities tend to be ones that have low levels of economic and racial segregation, better schools, less violent crime, and fewer single-parent families.  An important part of how we assure opportunity to all hinges on how we build communities.

Seattle ranks as one of the most mobility-friendly metropolitan areas. Credit: Jonathan Miske, Flickr

This study is remarkable for a number of reasons:  it’s clearly and simply presented, based on an extraordinarily large and powerful database, provides detailed findings (down to the county level), and provides strong evidence that its findings are cause-and-effect, not mere correlation.

One of the big bugaboos of economic research is that, unlike other scientific inquiries, economists are not generally allowed to run randomized controlled experiments on human beings—which we would all probably agree is a good idea.  Economic research, like most social science, typically must rely on statistical inference from sample data often gathered for other purposes, with its attendant margins of error.  And secondary statistical data make it especially difficult to make definitive cause-and-effect statements: for example, did a community’s environment cause children to have particularly high levels of economic mobility, or did the unseen choices of some parents to move in and out of particular neighborhoods lead “natural” high achievers to locate in some places and “natural” low achievers to locate elsewhere?

Chetty and Hendren get around this problem by combining a massive, long-term longitudinal data set (created from anonymized tax return data) with data from the federal government’s quasi-experimental “Moving to Opportunity” program, which gave low income families vouchers to enable them to move to non-poor neighborhoods.

It’s highly unusual in the world of economics to use the word “causal” to describe one’s reported findings, but in this new report you’ll see the term used early and often.  The use of data on siblings, and the exploitation of differential effects for boys and girls, is clever and impressive.  One of the criticisms leveled at other work is that it can’t control for the fact that intergenerational mobility for some families may reflect selection effects: the most energetic, ambitious families are more likely to move away from worse environments and toward better ones.  It is very rare in social science to be able to make such strong claims about causality.

And New Orleans ranks as one of the worst. Credit: Chuck Coker, Flickr

 

The great thing about the Chetty-Hendren research is that you can drill down to the county level to see what impact the local community has on economic outcomes for kids.  And the measure of success couldn’t be clearer: they show how much each additional year spent growing up in a particular neighborhood is likely to influence a child’s income as an adult.  As they explain in their report:

Every extra year spent in the city of Baltimore reduces a child’s earnings by 0.86% per year of exposure, generating a total earnings penalty of approximately 17% for children who grow up there from birth.

The differences among metropolitan areas are substantial: a poor child growing up in Seattle would be expected to earn about $29,000 (about $3,000, or 12 percent, more than the national average for children in the bottom quintile of the population), while a poor child growing up in New Orleans would be expected to earn a little more than $22,000 at the same age ($3,800, or almost 15 percent, less than the national average).  You can see data for individual counties and for commuting zones (metropolitan areas and their surrounding hinterlands) at the New York Times website.
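To make the exposure arithmetic concrete, here is a quick sketch of how a per-year penalty like Baltimore’s 0.86 percent accumulates. The 20-year exposure window is our assumption (roughly birth to adulthood); the paper’s ~17 percent figure corresponds to the simple linear sum:

```python
# How a per-year exposure penalty accumulates over childhood.
# Assumes ~20 years of exposure (birth to early adulthood); the 0.86%
# per-year figure is the Baltimore estimate quoted above.
per_year = 0.0086
years = 20

linear = per_year * years                  # simple sum of yearly penalties
compounded = 1 - (1 - per_year) ** years   # if penalties compound instead

print(f"linear: {linear:.1%}")       # roughly the ~17% figure in the paper
print(f"compounded: {compounded:.1%}")
```

The compounded version comes out slightly lower (just under 16 percent), so the quoted total is consistent with simply summing the yearly effects.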

To provide a quick snapshot for large metropolitan areas, we’ve created a graphic showing the Chetty-Hendren estimates for central counties (the county that includes the first-named city in a metropolitan area) and for the surrounding commuting zone.  These data show how much more (or less) than the national average a child in a family in the lowest quartile of the income distribution growing up in the central county or commuting area would make at age 26.  (Orange dots represent commuting zones; blue dots represent central counties.)

A couple of patterns are apparent: in general, central counties have lower rates of economic mobility for poor children than their surrounding commuting zones do.  Central counties tend, on average, to have more concentrated poverty, lower-performing schools, and higher rates of single-parent households—all of which are correlates of low economic mobility.

In a companion paper, Chetty and Hendren and Harvard economist Larry Katz re-examine an important, and previously discouraging, set of findings from the Moving to Opportunity (MTO) project.  MTO was a federal project that gave poor families vouchers to move from poor neighborhoods to middle income neighborhoods.  The previously reported results found that the moves produced little economic improvement for adults, and modest results for children.  In their re-analysis, Chetty, Hendren and Katz show that the age at which children moved made a huge difference:  those who moved as very young children (under age five) showed significant gains, while those who moved at an older age showed few if any gains.  Consistent with their larger analysis of inter-neighborhood moves, the gains to moving to better neighborhoods were directly correlated to how long children were exposed to better conditions.  (For a more detailed review of these studies and their import, it’s worth reading Justin Wolfers’ commentary.)

At City Observatory, we think these findings are the strongest evidence yet that addressing neighborhood and urban development is critical to promoting equal opportunity for all.  Chetty and Hendren conclude that while some children can gain opportunity by moving to a new, better neighborhood, this isn’t a scalable solution for everyone; as a result:

. . . one must also find methods of improving neighborhood environments in areas that currently generate low levels of mobility. . . our findings provide support for policies that reduce segregation and concentrated poverty in cities (e.g., affordable housing subsidies or changes in zoning laws) as well as efforts to improve public schools.

The Civic Commons & City Success


Why we wrote “Less in Common,” our latest CityReport.

We’ve come increasingly to understand the role of social capital in the effective functioning of cities and urban economies.  The success of both local and national economies hinges not just on machines and equipment, skilled workers, a financial system and the rule of law, but also on widespread norms of reciprocity and a sense of connectedness and mutual obligation and respect —  a combination of factors that has come to be called “social capital.”

Robert Putnam’s work—Making Democracy Work, Bowling Alone, and most recently Our Kids—deserves considerable credit for popularizing the term social capital.  Nobel economist Douglass North argued that social capital is one of the keys to the adaptive efficiency that enables economies to progress.  In his book Triumph of the City, Ed Glaeser describes how this process plays out in particular places: “Humans,” he says, “are a social species, and our greatest achievements are all collaborative. Cities are machines for making collaboration easier.”

The latest research from Raj Chetty, Nathan Hendren, and their colleagues, reinforces the critical role of place-based social ties in shaping the economic opportunities of the poor.  They found that metro areas with high levels of racial and economic segregation—a key correlate of declining social capital—also had far lower rates of economic mobility for the children of the poor.

The stakes are high.  In the pursuit of overall economic well-being and widespread opportunity for success, social capital in cities is critical.

Less in Common explores the ways in which the social fabric—the network of connections that tie us together in communities—has become generally thinner and more frayed over the past several decades.

Less_in_Common_Cover

In assembling this report, we purposely set out to be eclectic both in our scope and in the kinds of data and indicators we assembled.  Here you’ll find measures of everything from stated social trust, to the numbers of security guards, to the numbers of library books we borrow, to the numbers of swimming pools and farmers markets in our neighborhoods.  Unlike Gross Domestic Product, measures of social capital don’t have a single common denominator that enables them to be easily summed and compared.  Many, if not all, of these trends play out in cities, and have profound implications for city success.

While there are some counter-currents, the overall pattern of change is an ominous one.  Stated trust is declining.  Income segregation is increasing.  We are more isolated individually, and our governments and civic institutions are more fragmented and balkanized.  We spend less time in the shared public spaces that are open to people different from ourselves.

There is compelling evidence that the connective tissue that binds us together in cities is coming apart.   As we’ve spent more time in isolation and less time socializing with our neighbors, participation in the civic commons has suffered. Rebuilding social capital in America will require innovative approaches to spur community engagement.

How do we reinvigorate the civic commons?  While some solutions may be national in scope, many of the best opportunities for strengthening social capital will be in individual neighborhoods.

There’s no single policy solution that can accomplish the task alone: no infrastructure improvement, human capital investment, regulation or deregulation, tax or tax break can easily or comprehensively address the problem.  There are many facets to this issue, and consequently many dimensions along which we can pursue solutions.

We offer Less in Common as a rough portrait of some of the trends that have been playing out, and as one contribution to the discussion about how we can strengthen and rebuild social capital in our neighborhoods, our cities and our nation.  We look forward to the conversation.

The real welfare Cadillacs have 18 wheels

  • Truck freight movement gets a subsidy of between $57 and $128 billion annually in the form of uncompensated social costs, over and above what trucks pay in taxes, according to the Congressional Budget Office.
  • If trucking companies paid the full costs associated with moving truck freight, we’d have less road damage and congestion, fewer crashes, and more funding to pay for the transportation system.

 

Screen Shot 2015-06-01 at 2.02.10 PM

During National Infrastructure Week earlier this month, we again endured what has become a common refrain of woe about crumbling bridges, structurally deficient roads, and a lack of federal funding for infrastructure. This cry of alarm was quickly followed by yet another Congressional band-aid for the nearly bankrupt highway trust fund – and this one will hold for just sixty days.

It’s clear that our transportation finance system is broken. To make up the deficit, politicians frequently call for increased user fees – through increased taxes on gasoline, vehicle miles traveled, or even bikes. All the while, one of the biggest users of the transportation network – the trucking industry – has been rolling down the highway fueled by billions in federal subsidies.

A new report from the Congressional Budget Office estimates that truck freight causes between $57 and $128 billion annually in damages and social costs in the form of wear and tear on the roads, crashes, congestion and pollution – an amount well above and beyond what trucking companies currently pay in taxes.

CBO doesn’t report that headline number; instead, it computes the external social costs of truck freight on a cents-per-ton-mile basis, arriving at a range of 2.62 to 5.86 cents per ton mile. For the average heavy truck, they estimate that the cost works out to about 21 to 46 cents per mile traveled.

That might not sound like a lot, but the nation’s 10.6 million trucks generate an estimated 2.2 trillion ton miles of travel per year (Table A-1, page 32). When you multiply the per-ton-mile cost of 2.62 to 5.86 cents times 2.2 trillion ton-miles, you get an annual cost of between $57 and $128 billion per year.
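The aggregate figure follows directly from CBO’s per-ton-mile range; a quick back-of-the-envelope check:

```python
# Back out the aggregate truck subsidy from CBO's per-ton-mile estimates.
low_cents, high_cents = 2.62, 5.86   # external cost, cents per ton-mile (CBO)
ton_miles = 2.2e12                   # annual truck ton-miles (CBO Table A-1)

low_total = low_cents / 100 * ton_miles    # ~$57.6 billion per year
high_total = high_cents / 100 * ton_miles  # ~$128.9 billion per year

print(f"${low_total / 1e9:.1f}B to ${high_total / 1e9:.1f}B per year")
```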

Unfortunately, trucking companies don’t pay these costs. They are passed along to the rest of us in the form of damaged roads, crash costs, increased congestion and air pollution. Because they don’t pay the costs of these negative externalities, the firms that send goods by truck don’t have to consider them when deciding how and where to ship goods. This translates into a huge subsidy for the trucking industry of between 21 and 46 cents per mile.

For comparison, CBO looked at the social costs associated with moving freight by rail. Railroads have much lower social costs, for two reasons: first, rail transport is much more energy efficient and less polluting per ton mile of travel; second, because railroads are built and maintained by their private owners, most of the cost of wear and tear is borne by private users, not the public. Railroad freight does produce social costs associated with pollution and crashes, but the social costs of moving freight by rail are about one-seventh those for truck movements: about 0.5 to 0.8 cents per ton mile, compared to 2.62 to 5.86 cents per ton mile for trucks.

Screen Shot 2015-06-01 at 2.02.36 PM

As we always point out, getting the prices right – whether for parking or road use – is critically important to creating an efficient transportation system. When particular transportation system users don’t pay their full costs, demand is too high, and supply is too low.  In this case, large federal subsidies for trucking encourage too much freight to be moved by truck, worsening congestion, pollution and road wear, while the fees and taxes paid by trucking companies aren’t enough to cover these costs. The classic solution for these currently unpriced external costs is to impose an offsetting tax on trucks that makes truck freight bear the full cost associated with road use, crashes and environmental damage. The CBO report considers a number of policies that could “internalize” these external costs associated with trucking – including higher diesel taxes, a tax on truck VMT, and even a higher tax on truck tires.

The revenues produced would be considerable: a VMT tax that internalized social costs of trucking would generate an estimated $68 billion per year. To put that number in context, consider that in 2014, total public spending – federal, state and local – on roads and highways was $165 billion. In addition, the higher tax would reduce freight moving by road – mostly by shifting cargo to rail – and lead to benefits of lower pollution, less congestion and less wear and tear on roads. We’d also save energy: net diesel fuel consumption for freight transportation would fall by 670 million gallons per year – a savings of about $2 billion annually at current prices.
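The same quick arithmetic puts those numbers in context; note that the diesel price here is our assumption (roughly $3 per gallon, consistent with “current prices” at the time):

```python
# Put the VMT-tax revenue and fuel savings in context.
vmt_revenue = 68e9        # CBO estimate of annual revenue from the tax
road_spending = 165e9     # total public road/highway spending, 2014

revenue_share = vmt_revenue / road_spending   # ~41% of all road spending

gallons_saved = 670e6     # diesel gallons saved per year (CBO)
diesel_price = 3.0        # assumed $/gallon, roughly "current prices" (~2015)
fuel_savings = gallons_saved * diesel_price   # ~$2 billion per year

print(f"revenue covers {revenue_share:.0%} of road spending; "
      f"fuel savings about ${fuel_savings / 1e9:.1f}B/yr")
```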

There are good reasons to believe that the CBO report is conservative, and if anything, understates the social costs associated with trucking. For example, the report estimates the social costs associated with carbon emissions at somewhere between $5 and $45 per ton. Other credible estimates – from British economist Nicholas Stern – suggest that the cost today is about $32 to $103 per ton, rising to $82 to $260 per ton over the next two decades.

The external social costs of truck and rail freight, per ton mile, are estimated as follows:

Screen Shot 2015-06-01 at 2.03.03 PM
Source:  Congressional Budget Office, 2015

 

Such a tax would make truck freight more expensive, but other costs – now borne by the rest of us – would go down by a comparable amount. And there would be important savings in costs for freight either moved by other modes (especially rail, which is about two-thirds cheaper), or sourced from closer locations.

There’s a clear lesson here: It may seem like we have a shortage of infrastructure, or lack the funding to pay for the transportation system, but the fact that truck freight is so heavily subsidized means that there’s a lot more demand (and congestion) on the roads than there would be if trucks actually paid their way. On top of that, there’d be a lot more money to cover the cost of the system we already have.

So the next time someone laments the sad state of the road system, or wonders why we can’t afford more investment, you might want to point to the 18-wheelers that are getting one heck of a free ride, at everyone’s expense.

View the full report: “Pricing Freight Transport to Account for External Costs: Working Paper 2015-03”


Fake city, flawed thinking

There’s little question that technology is important to cities.  Without elevators and electricity, for example, it would be almost inconceivable that we could have dense urban centers.  So thinking about how advances in technology are likely to affect city success is critically important.  And while technology captures our imagination, sometimes we become so fixated on the technical details that we lose sight of the ultimate value of technology, which is that it should make people’s lives better.  A recent story illustrates that point.

Last week, the Atlantic reported that plans were afoot to build a $1 billion fake city in the New Mexico desert. The purpose of the city – which will have no human residents, but will have buildings, roads, sewer and water lines, electricity and telecommunications – will be to serve as a facility for testing new technologies. Hence its name, “The Center for Innovation, Testing and Evaluation,” or CITE.

Big city coming here. Credit: Andrew E. Larson, Flickr

 

The proposal is actually a warmed over version of a project originally announced four years ago – with a $200 million price tag – which has apparently never made it past the press release stage.

There are so many problems with this story, it’s hard to know where to begin. First, there’s simply a question of the credibility of the project itself: the project is proposed by a company called “Pegasus Global Holdings” – a corporate moniker that sounds like it could have been drawn from a Roger Moore-era James Bond movie. The company’s website indicates that its management team has been active in trying to develop satellite-based communication systems, but it’s not clear how this expertise will translate into building a billion-dollar city in the New Mexico desert.

What’s equally surprising is that none of the journalists at the Atlantic apparently bothered to examine whether Pegasus Global Holdings has the financial or technical capability to carry out such a project. While Pegasus has some big dreams, there’s little evidence it’s tackled anything of this scale or complexity. According to the Department of Defense contracts database, Pegasus has received two contracts to provide radio jammers for the Navy, one for $37 million and another for $7 million.

It’s also questionable whether the market for the kind of testing that would go on in a fake city would ever cover the rent on a $1 billion investment (or even a $200 million one). In 2011, a journalist who asked about likely customers for the facility reported: “Pegasus Global did not immediately respond to our request for comment about whether specific companies and organizations have already expressed interest in using the facility.” And the city manager of the first town expected to host the project told the Atlantic: “When we started pressing for details, that’s when they decided to look elsewhere.”

But there’s a far deeper problem here. A city without people is certain to perform poorly at helping us solve real world problems.

In fact, many of our urban problems stem directly from optimizing cities for technology, instead of people. For example, we’ve long prioritized the rapid movement of vehicles on city streets–with devastating consequences for pedestrian and bike safety and urban livability. Nationwide, adopted engineering standards require that we build wide, gently curved suburban streets. Ostensibly, these standards improve safety by minimizing conflicts, improving visibility and eliminating obstacles – but they actually make streets less safe by encouraging faster driving.

 

Unsafe at high speeds. Credit: Google Streetview

 

Ultimately, cities are about people. We have to think about how urban spaces make living and interaction easier and better for people – not for technology and inanimate objects. It’s tempting to view the city as just a collection of pavement, pipes, wires and buildings, but to do so misses the real social and human characteristics that underpin city success.

Cities are created by and for people. Viewing urban problems in narrow technical terms and crafting policies and solutions to achieve engineering efficiency, with no regard to how this will affect humans – and how human behavior will change and adapt in response – is a recipe for failure. Which is why, at City Observatory, we think there’s a lot more to be learned from the experience of real cities that can be a guide to how we harness policy, technology, markets and people towards achieving the goal of making more successful cities.

City of ideas, and the idea of cities

Cities have always been about bringing people together and creating new ideas

Editor’s Note:  City Observatory Director Joe Cortright will be giving the Harold Vatter Memorial Lecture in Economics at Portland State University on Thursday, May 2.  His theme will be “Cities in the Knowledge Economy.” As a prelude to that lecture, we’re offering this 2015 commentary based on a visit to Athens.

Though the local economy is still in turmoil, Athens is still awash in the steady tramping of tourists.  Compared to your correspondent’s last visit to this city three decades ago, the distinguishing mark of tourism is no longer the long lines of foreigners looking to exchange deutsche marks, yen, pounds and travelers’ checks for drachmas, but rather the parade of latter-day Narcissuses, smartphones affixed to selfie-sticks, dutifully capturing the ancient sights as backdrops for a digitized odyssey.


Credit: Galería de Faustino, Flickr

As ever, the Parthenon looms over the city, a visible reminder of the sublime accomplishments of the Ancient Greeks.  The building is a ravaged remnant of its former self, but is nonetheless majestic, and its image, like so much of what the Greeks created, is still deeply imprinted on our collective consciousness—which is probably why the city remains such a compelling destination.

The Greek original seems so familiar because it has been so widely copied. In my own American hometown, as in countless others, the headquarters of the local bank copies in proportions, details and material the design of the Parthenon—it is the temple of money.

But the imprint of Greek culture on current life, of course, runs much deeper than architecture.  So many of the concepts that guide or define modern life were either devised or at least given names here:  our notions of democracy, the polity, the agora, as well as enduring contributions to art, science and mathematics.  And Greece was fundamentally structured as a series of city-states.  The Greek nation was a purely modern invention; Ancient Greece was always a constantly shifting, often warring set of cities, occasionally–but always temporarily–welded together by invading empires (or the threat of invasion).

As Ed Glaeser says in Triumph of the City, cities are mankind’s greatest creation.  And much, though certainly not all, of what we treasure about cities can trace its roots to the city-states of this region, most notably Athens.  But there was a flowering of cities in this part of the world two to three millennia ago.

Some of the earliest known human settlements trace their roots to this region, in fact.  As Jane Jacobs famously relates in her book The Economy of Cities, the excavation of one particularly ancient settlement, Catal Huyuk, in Asia Minor—modern day Turkey—provides a compelling insight into the importance of cities to civilization.

The generally received wisdom about cities is that urbanization and permanent settlements were the accidental, or perhaps incidental by-product of improvements in agriculture:  that our hunter-gatherer ancestors stumbled upon or made some advances in crop raising that led them to settle permanently in some locations, and that as agricultural productivity improved, people had more time for alternate pursuits and managed to develop other skills.  Jacobs turns this agriculture-led creation myth on its head.

In her imaginative tale of how things could have happened, Jacobs describes the development of a settlement she calls New Obsidian, which begins as a place of assembly for nomadic groups, where the bartering of diverse commodities and crafts leads to the establishment of a permanent settlement. The settlement then becomes a place not just for trade, but also for animal husbandry, inadvertent cross-pollination of grains, the refinement of crafts and tool making, and ultimately increasingly sophisticated production. In Jacobs’ story, the creation of “new work” in cities leads to higher productivity in agriculture and stimulates development.

Ultimately, that process of developing more advanced technology, more complex economies, successively larger cities and the institutions needed to organize and govern them led to the city-states whose remnants we see today, in places like Athens. And that process continues apace today.  Cities are even today steadily creating the “new work” that propels economic growth and improves our standard of living.  There’s little reason to believe this process has reached its culmination.

As Paul Romer provocatively argues, we should be thinking about new cities as a way to tackle the problems of improving the living standards of the billions of people who still lag the most advanced nations. In his proposal for “Charter Cities” Romer argues for innovative institutional arrangements that would allow for experimentation with policies to deal with development, transportation, criminal justice and global warming.  Romer comes to cities from an interesting perspective:  in the 1980s, he authored two seminal papers on “New Growth Theory” that pointed to our ability to continually create new ideas as the driving force behind long term economic growth.  In recent years, he’s turned his attention to cities as the venue for devising new institutions that can easily give rise to and apply new ideas.  You can read a recent interview with Romer here.

There’s much glib talk of “Smart Cities,” with an excessive focus on how new technologies – from the so-called “Internet of Things” to autonomous vehicles – will reshape city life.  It is likely that these things will emerge, evolve and be applied in cities.  And it will be the cities that bring people together, that promote the free flow of ideas, that exhibit a certain democracy in their affairs, that are likely to be the most successful in realizing these technologies.  And while the technologies would amaze the ancient Greeks, the kind of values that such a community would embody would not seem unfamiliar to them.


On Baltimore: Concentrated Poverty, Segregation, and Inequality

Yet again, a black citizen dies at the hands of the police. This event and the ensuing riots in Baltimore are a painful reminder of the deep divisions that cleave our cities.  There’s little we can add to this debate, except perhaps to say that there’s strong evidence for a point made by Richard Florida:

The real problem in Baltimore is race & class division – persistent concentrated poverty.

We’ve chronicled the persistence and spread of concentrated poverty in our recent reports and blog posts at City Observatory.  Our Lost in Place report tracked the change in neighborhoods of concentrated poverty in the nation’s largest metro areas over the past four decades.  Our dashboard for Baltimore shows that the number of high poverty neighborhoods in Baltimore increased from 38 in 1970 to 55 in 2010.  And high poverty neighborhoods have hemorrhaged population.  Only one census tract in Baltimore saw its poverty rate fall from above 30 percent in 1970 to less than 15 percent in 2010.


 

And as our map shows, Baltimore has experienced persistent–and growing–concentrated poverty in many of its urban neighborhoods.  Concentrated poverty remains rooted in the neighborhoods adjacent to the central business district–and has spread outward in the decades since 1970.


Earlier this month, we highlighted the connection between racial segregation and black-white income disparities in the nation’s cities.  Those places with the greatest levels of segregation regularly also had the biggest differences in incomes between black and white households.  Segregation appears to be an important contributor to racial income disparities.  These data show that Baltimore is somewhat more segregated than the typical large US metro, with a black-white dissimilarity index of 64, ranking about 20th highest (most segregated) among the largest metropolitan areas in the country.  And on average, black incomes in Baltimore were about 28 percent lower than white incomes, a slightly greater disparity than in the typical large metropolitan area.  So while somewhat more severe than average, the levels of racial segregation and income differentials in Baltimore are hardly unusual in large metro areas.
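The dissimilarity index cited here has a standard formula: half the sum, across all tracts, of the absolute difference between each tract’s share of the metro’s black residents and its share of the metro’s white residents. A minimal sketch, using hypothetical tract counts rather than actual Baltimore data:

```python
def dissimilarity(black, white):
    """Black-white dissimilarity index, scaled 0-100: the share of
    either group that would have to move to a different tract to give
    every tract the same racial mix as the metro area overall."""
    total_black, total_white = sum(black), sum(white)
    return 50 * sum(abs(b / total_black - w / total_white)
                    for b, w in zip(black, white))

# Hypothetical tract populations (not Baltimore data):
black = [900, 50, 800, 100]
white = [100, 950, 200, 900]
print(round(dissimilarity(black, white), 1))  # → 77.9 (highly segregated)
```

A value of 0 means every tract mirrors the metro’s overall composition; 100 means complete separation. Baltimore’s 64 sits well toward the segregated end of that scale.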

Sadly, concentrated poverty is a problem which only becomes visible to many Americans when it erupts in the violence we’ve seen in the past few days in Baltimore.  We hope the data provided here give everyone a sense of the depth and seriousness of the problem.

There’s no such thing as a Free-Way*

(*  with apologies to Donald Shoup)

A new report from Tony Dutzik, Gideon Weissman and Phineas Baxandall confirms, in tremendous detail, a very basic fact of transportation finance that’s widely disbelieved or ignored: drivers don’t come close to paying the costs of the roads they use. Published jointly by the Frontier Group and U.S. PIRG Education Fund, Who Pays for Roads exposes the “user pays” myth.


The report documents that the amount that road users pay through gas taxes now accounts for less than half of what we spend to maintain and expand the road system. The shortfall is made up from other sources of tax revenue at the state and local level. This subsidization of car users costs the typical household about $1,100 per year – over and above what they pay in gas taxes, tolls and other user fees.

While recent congressional bailouts of the Highway Trust Fund have made the subsidy more apparent, it has actually never been the case that road users paid their own way. Not only that, but the amount of their subsidy has steadily increased in recent years. The share of the costs paid from road user fees has dropped from about 70 percent in the 1960s to less than half today, according to the study.

There are good reasons to believe that the methodology of Who Pays for Roads, if anything, considerably understates the subsidies to private vehicle operation. It doesn’t examine the hidden subsidies associated with the free public provision of on-street parking, or the costs imposed by nearly universal off-street parking requirements, which drive up the cost of commercial and residential development. It also ignores the indirect costs that come to auto and non-auto users alike from the increased travel times and travel distances that result from subsidized auto-oriented sprawl. And it doesn’t look at how the subsidies to new capacity in some places undermine the viability of older communities (a point explored at length by Chuck Marohn in his Strong Towns initiative).

These facts put the widely agreed proposition that increasing the gas tax is politically impossible in a new light: What it really signals is that car users don’t value the road system highly enough to pay for the cost of operating and maintaining it. Road users will make use of roads, especially new ones, but only if their cost of construction is subsidized by others.

The conventional wisdom of road finance is that we have a shortfall of revenue: we “need” more money to pay for maintenance and repair and for new construction. But the huge subsidy to car use has another equally important implication: because user fees are set too low, and because, in essence, we are paying people to drive more, we have excess demand for the road system. If we priced the use of our roads to recover even the cost of maintenance, driving would be noticeably more expensive, and people would have much stronger incentives to drive less, and to use other forms of transportation, like transit and cycling. The fact that user fees are too low not only means that there isn’t enough revenue, but that there is too much demand. One value of user fees would be that they would discourage excessive use of the roads, lessen wear and tear, and in many cases obviate the need for costly new capacity.

And these subsidies to car travel have important spillovers that affect other aspects of the transportation system. There’s a good argument to be made that part of the reason that subsidies to transit are as large as they are is that motorists are being paid not to use the transit system in the form of artificially low prices for road use and (thank you Don Shoup) parking.

Credit: David Gallagher

There’s another layer to this point about roads not paying for themselves: Most of these calculations are done on a highly aggregated basis, and look at the total revenue for the road system, and the total cost of maintaining the road system. What the study doesn’t explore is whether particular elements of the road system pay for themselves or not.

Think about air travel for a moment. Airlines don’t simply look at whether their total revenue from passengers (fares and all those annoying fees) covers the total cost of jets, crews, and fuel (although the stock market pays attention to this). Airlines look at each individual flight and each route, and examine whether the number of travelers and the amount of fares that will be paid cover the cost of providing that service—when not enough passengers use a route, they discontinue air service (as many small market cities know too well). While this calculus is routine and well-accepted in air travel and the private market, it’s unknown for public roads.

The Frontier Group/US PIRG study also significantly understates the economic cost of the transportation system. Their analysis looks only at how much we are actually spending to maintain and expand the current system. This is problematic for two reasons. First, there’s abundant evidence that we’re not spending enough to keep the system in repair, and there’s a growing hidden cost in higher future repair bills from the added deterioration of the system. These hidden costs are accumulating and not reflected in what users pay now. Second, we’re doing nothing to recognize the economic value of the existing road system: the replacement cost of the current road system –what it would take to rebuild the existing asset—is likely on the order of tens of trillions of dollars. Current road users get free use of that inherited, paid for (but depreciating) asset. Again, this is unlike other forms of transportation: just because United Airlines may have long since paid off the purchase price of the 737 you are riding in, doesn’t mean that they don’t charge you for the capital value of using that asset.

The real question for transportation public finance is whether new roads—additional capacity—pay for themselves. Does the traffic using a new bridge or additional lanes of freeway generate enough in road taxes to cover what that capacity costs? New projects are so expensive–$100 million or more for a mile of urban freeway–that road users, who pay the equivalent of 2-3 cents per mile of travel in gas taxes (depending on the tax rate and vehicle fuel efficiency), never contribute enough money to recoup the costs of the new capacity.
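That back-of-the-envelope claim is easy to check with simple arithmetic. In the sketch below, the tax rates, fuel economy, and traffic volume are illustrative assumptions, not figures from the report:

```python
# Illustrative figures (assumptions, not from the report):
federal_gas_tax = 0.184   # dollars per gallon, federal rate
state_gas_tax = 0.30      # dollars per gallon, a typical state rate
mpg = 22.0                # assumed fleet-average fuel economy

# Gas tax paid per mile driven, in cents:
cents_per_mile = (federal_gas_tax + state_gas_tax) / mpg * 100
print(f"{cents_per_mile:.1f} cents per mile")   # about 2.2 cents

# How long would that revenue take to pay off one $100M freeway mile?
vehicles_per_day = 20_000                        # assumed traffic volume
annual_revenue = vehicles_per_day * 365 * cents_per_mile / 100
years_to_recoup = 100_000_000 / annual_revenue
print(f"{years_to_recoup:.0f} years to recoup")  # centuries, even ignoring interest
```

Even with generous traffic assumptions, the payback period runs to centuries, which is the sense in which gas taxes never recoup the cost of new capacity.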

Credit: Richard Masoner, cyclelicio.us

The surprising evidence from road pricing demonstrations (tolled HOT lanes) is that the revenue gathered from tolling often fails to cover the costs of collecting the tolls and operating the toll collection system: they never come close to paying for the roadway. (To be sure, tolling improves the efficiency of use of the freeway—traffic flows more smoothly, capacity is increased—but the tolls don’t pay for constructing, or even maintaining the pavement).

But again, the highly visible toll collection mechanism, like the very visible gas tax, creates the illusion that user fees are paying the cost of the system.

As the Transit Center demonstrated in its recent report, Subsidizing Congestion, the $7.3 billion federal tax break for commuter parking encourages additional peak-hour car commuting, which has the effect of causing greater congestion.  The systematic under-pricing of roads has the same effect, with the result that taxpayers subsidize car use through higher taxes and also face greater congestion than they would if road users paid their way.

To be sure, these same questions can, and should, be raised about transit, biking and walking projects.  And for transit projects, close financial scrutiny is far more common than for roads.  A key difference with these other forms of transportation is that they arguably have big net social benefits–lower congestion, less pollution, greater safety–and they support important equity objectives by making transportation available to those who don’t own or can’t operate a motor vehicle.  The problem with hidden subsidies is that they’re hidden:  if we made them explicit, and considered our alternatives, we would likely choose differently and more wisely.

The problem of pricing roads correctly is one that will grow in importance in the years ahead. It’s now widely understood that improvements in vehicle fuel efficiency and the advent of electric vehicles are eroding the already inadequate contribution of the gas tax to covering road costs. The business model of companies like Uber and Lyft likewise hinges on paying much less for the use of the road system than it costs to operate. The problem is likely to be even larger if autonomous self-driving vehicles ever become widespread—in larger cities it may be much more economical for them to simply cruise “free” public streets than to stop and have to pay for parking.

As we’ve pointed out before, the root of many of our transportation problems is that the price is wrong.  Puncturing the widely held myth that cars pay their own way makes this report required reading for those thinking about transportation finance reform.

Young People are Buying Fewer Cars


Will somebody teach the Atlantic and Bloomberg how to do long division?

In this post, we take down more breathless contrarian reporting about how Millennials are just as suburban and car-obsessed as previous generations. Following several stories drawing questionable inferences from flawed migration data claiming that Millennials are disproportionately choosing the suburbs (they’re not) come two articles in quick succession from Bloomberg and the Atlantic, purporting to show the Millennials’ newfound love of automobiles.

Bloomberg wrote “Millennials Embrace Cars, Defying Predictions of Sales Implosion.” Hot on its heels came a piece from Derek Thompson at the Atlantic (alternately titled “The Great Millennial Car Comeback” and “Millenials not so cheap after all”) recanting an earlier column that predicted Millennials would be less likely than previous generations to own cars.

The Atlantic and Bloomberg stories are both based on new estimates of auto sales produced by JD Power and Associates. The data for this report are shown below.  We also examined a press release JD Power issued last summer making broadly similar claims; we relied on it to better understand their methodology and definitions.

The headline finding is that in 2014, Millennials (the so-called Gen Y) bought about 3.7 million cars, while their older GenX peers bought only 3.3 million.  (We extracted these numbers from the charts in the Atlantic story).  Superficially, that seems to be evidence that Millennials are in fact buying more cars.

But there’s a huge problem with this interpretation:  there are way, way more people in the so-called “GenY” than there are in “GenX.” Part of the reason is that the GenY group–also often called the “echo boom”–was born in years when far more children were born in the US.  The bigger, and less obvious, problem is the arbitrary and varying periods used to define “generations.”  According to the definitions used by JD Power, GenY includes people born from 1977 to 1994 (a 17-year cohort), while GenX includes those born between 1965 and 1976–just an 11-year cohort.  As a result, these definitions put nearly 78 million people in GenY and about 49 million in GenX.  There are nearly 29 million more GenYers than GenXers.*  It’s hardly surprising, and not at all meaningful, that the much larger group buys about 10 percent more cars than the much smaller group.

This is where long division comes in.  Let’s look at the rate of car buying on a per person basis for each of these two groups.  By normalizing the data to account for the different number of people in each group, we get a much more accurate picture of the behavioral differences of individuals in each group–this is dead simple, standard fare in statistical analysis.  The 78 million GenYers bought about 3.7 million cars, or about 47.5 cars per 1,000 persons in the generation.  Meanwhile, 49 million GenXers bought 3.3 million cars, or about 67.1 cars per 1,000.  Rather than being just as likely or more likely than GenX to buy cars, the typical member of GenY is actually 29 percent less likely to buy a car than the typical member of the previous generation.

Characteristic               Gen Y        Gen X        Boomers
Birth Year                   1977-1994    1965-76      1946-64
Age in 2013                  19-36        37-48        49-67
Birth Years in Cohort        17           11           18
Persons, 2013                77,970,996   49,211,709   75,900,696
Cars Bought, 2014            3,700,000    3,300,000    5,100,000
Market Share                 27%          24%          38%
Cars Purchased per 1,000     47.5         67.1         67.2
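The normalization is nothing more than division; a quick sketch reproducing the per-1,000 figures in the table above:

```python
# Population (Census, 2013) and 2014 new-car sales (JD Power) by cohort,
# as given in the table above.
cohorts = {
    "GenY":    (77_970_996, 3_700_000),
    "GenX":    (49_211_709, 3_300_000),
    "Boomers": (75_900_696, 5_100_000),
}

# Cars purchased per 1,000 persons in each cohort:
rates = {name: cars / people * 1000 for name, (people, cars) in cohorts.items()}
for name, rate in rates.items():
    print(f"{name}: {rate:.1f} cars per 1,000 persons")

# GenY's buying rate relative to GenX:
gap = (1 - rates["GenY"] / rates["GenX"]) * 100
print(f"GenY is {gap:.0f}% less likely to buy a car")  # → 29%
```

Dividing by cohort size flips the headline: the raw totals make GenY look like bigger buyers, while the per-person rates show exactly the opposite.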

Once you go to the trouble of normalizing the sales data to reflect the very different sizes of these “generations,” you get results that are pretty much exactly the opposite of what’s claimed in both the Bloomberg and Atlantic stories.  Today, Millennials are buying new cars at a rate far lower than older generations.  That’s consistent with other data we have showing Millennials are less likely to get drivers licenses, and when they do, they drive fewer miles per year than previous generations.

To be fair, a really good answer to this question would require a bit more data sleuthing:  Because automobile purchasing patterns vary over a person’s life cycle, you can’t accurately gauge the generational change in buying habits by comparing the current year buying habits of Millennials (average age, late 20s) with GenX (average age early 40s). The more interesting question to answer would be whether the average 25-year-old Millennial today is more or less likely to purchase a vehicle today than someone who was 25 in 2005, or in 1995 or in 1985.  Unfortunately, we don’t have access to that data. However, if the folks at JD Power would be willing to dip into their considerable archives, we’d gladly do the computations.

No doubt this kind of story generates lots of clicks and tweets—witness the Natural Resources Defense Council’s panicky “Uh-oh” retweet of this story.  Clearly that is the coin of the realm in journalism these days, but it’s just plain irresponsible to make an utterly phony claim based on data that hasn’t been adjusted to reflect the size of different groups in question. As Paul Krugman said in a simpler time, “don’t be making claims that can be disproved with a copy of the statistical abstract and a pocket calculator.”  There’s even less excuse for this today.

A couple of technical notes:  Our estimates of population by birth year are from the Census Bureau:  Annual Estimates of the Resident Population by Sex, Single Year of Age, Race, and Hispanic Origin for the United States: April 1, 2010 to July 1, 2013.  The car sales data are from JD Power for 2014, as reflected in the charts shown in the Atlantic article and confirmed by data provided by JD Power.  Our table above omits data for sales to “pre-boomers,” which make up approximately 10 percent of car sales; this explains why the total market share doesn’t add to 100%.  We use the terms “GenY” and “Millennials” interchangeably in this post.

_____________

* – Towards the end of his article, Derek Thompson acknowledges the big discrepancy in the sizes of GenX and GenY, allowing that there are “15 to 20 million” more Millennials than GenXers. Not only is the actual difference almost 29 million, it raises the question of why Thompson didn’t find the time to do the very basic long division normalization that would have given a much more reasonable, and much different, answer to the question posed by his article.

Revised and Corrected April 23.

We’ve corrected and updated this post.  Our original version had a math error which understated the number of persons in GenX:  I inadvertently assigned those born in 1965 to the Baby Boom generation rather than to GenX.  The correct number of persons in GenX (born between 1965 and 1976) is 49.2 million, not the 44.8 million I originally reported.  This changes the number of cars purchased per 1,000 persons by this generation from the 73.7 I originally reported to the correct figure of 67.1, which means that GenY is about 29% less likely than GenX to have purchased a car in 2014.  We’ve revised the text to reflect these corrections. My apologies for this error.

Also, JD Power and Associates graciously provided the data that served as the basis for the Bloomberg story.  It is shown below:

                    2010   2011   2012   2013   2014   2015YTD
Percent of Retail Sales
Y                   18%    21%    23%    25%    27%    28%
X                   23%    24%    24%    24%    25%    24%
Baby Boomer         43%    41%    40%    39%    37%    37%
Pre Baby Boomer     16%    14%    13%    12%    11%    11%

Retail Sales (MM)
Y                   1.7    2.2    2.7    3.2    3.7    0.9
X                   2.1    2.5    2.9    3.1    3.3    0.8
Baby Boomer         3.9    4.2    4.7    5.0    5.1    1.2
Pre Baby Boomer     1.5    1.4    1.5    1.5    1.4    0.3

Our six month anniversary!

It’s spring in the city

On October 20 of last year, just six months ago, we launched City Observatory, a website and think tank devoted to data-driven analysis of cities and the policies that shape them. We are delighted to have participated in ongoing national discussions about a number of important policy issues facing cities. It’s been a whirlwind–and here’s what we’ve been up to:

To date, we’ve released three national reports.

The Young and Restless detailed the migration patterns of educated 25- to 34-year-olds to the close-in neighborhoods of the nation’s large metropolitan areas and compared how cities across the country were faring in attracting them.

Lost in Place tracked the persistence and spread of concentrated poverty, and showed how poverty—not gentrification—is our biggest urban challenge.

Surging City Center Job Growth showed how urban populations are growing faster than suburban ones, and that jobs are coming back to the center of cities along with this increase.

Over on our blog, we’ve been continuing to provide commentary about a variety of subjects, from biotech to McMansions to the looming threat that “Cappuccino Congestion” poses to the nation’s economic productivity. We’re also weighing in with our views on the important issues confronting the nation’s cities.  Learn why we think that, contrary to some assertions in the media, young adults are increasingly moving to the nation’s urban centers, and how some of the measures of gentrification are misleading and wrong.  And be sure to take a look at our latest post showing the close connection between segregation and the racial income gap.

We’re pleased with the reception that City Observatory’s work has gotten.  In addition to those who’ve visited our website, we’ve gotten terrific coverage in the media, including the New York Times, Washington Post, The Economist, and USA Today.

Our aim is to be open-source and data-driven, which is why you’ll find all the detailed data behind each of our analyses freely available on our website: our data page provides data downloads and spells out our methodology.  In addition, we’ve constructed a series of dashboards that let you check how your city is performing in attracting talented young workers, addressing concentrated poverty, and growing city center jobs.

This month, we welcome a new face to the City Observatory staff: Daniel Kay Hertz. You may already have come across Daniel’s insightful writing on his blog City Notes, or you may be already following him on Twitter, but feel free to hop on over to our blog and check out his contributions. We’re thrilled to have Daniel on board.

We’re grateful to the John S. and James L. Knight Foundation for supporting our work, and we’re especially grateful to those who follow and comment on the discussions here at City Observatory. Our work is only as good as the commentary and discussion we provoke. Please comment on our blog, connect with us on Twitter or Facebook, or just email us to tell us what you think. Your continued interest, thoughts, and feedback push the conversation forward and make our work worth doing.

More evidence of surging city job growth

In February, we released our latest CityReport Surging City Center Job Growth, presenting evidence showing employment growing faster in the city centers of the nation’s largest metros since 2007. Another set of analysts has, independent of our work, produced findings that point to renewed job growth in the nation’s inner city neighborhoods.

A new report issued by the Federal Reserve Bank of Cleveland, using similar data but different definitions, reaches many of the same conclusions. The analysis, prepared by Fed economist Daniel Hartley with Nikhil Kaza and T. William Lester of the University of North Carolina, is entitled Are America’s Inner Cities Competitive?  Evidence from the 2000s.  The Fed study divides metropolitan areas into three parts: the central business district (CBD), a set of tracts that form the core of the commercial area in each metro’s largest city; the inner city, tracts within a principal city but outside the CBD; and the suburbs, the remainder of the metro area.

Hartley, Kaza and Lester report that inner cities added 1.8 million jobs between 2002 and 2011.  They also echo one of our key findings:  that job growth in city centers was stronger in the post-recession period than it was earlier in the decade.  In the aggregate, inner cities recorded relatively robust job growth over the past decade (up 6.1% between 2002 and 2011, compared to 6.9% for suburbs), and since the end of the recession in 2009 they have recorded faster job growth (3.6%) than either suburbs (3.0%) or central business districts (2.6%).

To get a sense of how the geography of job growth has shifted over the past decade, it’s useful to divide the data roughly in half, comparing growth trends in the 2002-07 period (the height of the housing bubble) with growth from 2007-11 (the collapse of the bubble, the Great Recession, and the first years of recovery).  These were the time periods used in our Surging City Center Job Growth report, and we’ve recalculated the Fed data to make it directly comparable to our analysis.  The chart below shows the data from the Fed report and computes the average annual growth rate of jobs for central business districts, inner cities, and suburbs for these two time periods.

These data show that in the earlier time period, suburbs were outperforming cities; inner cities were growing about half as fast as suburbs and CBD employment was actually declining.  From 2002 to 2007, the further you were from the center, the faster you grew.  This relationship reversed in the latter 2007-11 period.  Cities outperformed suburbs–suburbs saw a net decline in employment–and job growth was actually somewhat faster in the CBD than in inner cities.  Despite the recession, CBD job growth was much stronger in the 2007-11 period (+0.3%) than it was in the earlier 2002-07 period (-0.7%).  (Note that percentage figures in the following graph represent annualized growth rates.)
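The annualized rates cited here are compound growth rates over multi-year windows; a quick sketch of the computation (the job totals in the example are hypothetical, purely for illustration):

```python
def annualized_rate(start_jobs: float, end_jobs: float, years: int) -> float:
    """Average annual growth rate implied by a change over `years` years."""
    return (end_jobs / start_jobs) ** (1 / years) - 1

# Hypothetical example: 100,000 jobs growing to 101,500 over a
# five-year window implies roughly +0.3% per year.
rate = annualized_rate(100_000, 101_500, 5)
print(f"{rate:.1%} per year")
```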

Hartley_Jobs

There are some key differences between the Fed study and our recent City Observatory report. Both studies are based on geographically detailed employment data from the Census Bureau’s Longitudinal Employer-Household Dynamics (LEHD) program, but our definition of “city center” included all businesses within three miles of the center of the central business district, and the new Fed study reports data for 281 US metropolitan areas (our report looked at 41 of the largest metropolitan areas).

The authors conclude that while it is too soon to term this an urban renaissance, it’s a noticeable change from the long-term trend of employment decentralization.  Though not universal, the pattern of strong inner city growth is widespread, with more than two-fifths of metros (120 out of 281) recording gains in both overall employment and share of employment in inner cities.  The traditional decentralizing pattern of employment still holds for some metropolitan areas, like Houston and Dallas, but inner cities are flourishing in some unlikely places, like heavily suburbanized Los Angeles and San Antonio.

As we did in our report, the authors of the Federal Reserve study examine the industrial dimensions of job change.  Manufacturing jobs continue to suburbanize, and inner cities have been relatively more competitive for jobs in “eds and meds” education services and health care.  They also identify a key role for the consumer city and population-led theories of urban growth.  Within inner cities, job growth is positively associated with transit access and distance to the CBD, and seems to be driven more by population-serving businesses (like restaurants) than businesses dependent on infrastructure (manufacturing and distribution).

The full report has many more details, and identifies the metros with competitive inner cities (i.e. those places where inner city areas gained share of total metro employment between 2002 and 2011).

We’re expecting to get data for 2012 and 2013, to be able to judge whether these trends persisted as the US economy continued to recover.  If you’re keenly interested in urban economies, as we are, you’ll be eagerly awaiting the new numbers.  In the meantime, the Cleveland Fed study is a “must read.”

Hartley, Daniel A., Nikhil Kaza, and T. William Lester, 2015. “Are America’s Inner Cities Competitive? Evidence from the 2000s,” Federal Reserve Bank of Cleveland, working paper no. 15-03.  https://www.clevelandfed.org/en/Newsroom%20and%20Events/Publications/Working%20Papers/2015%20Working%20Papers/WP%2015-03%20Are%20Americas-Inner-Cities-Competitive-Evidence-from-the-2000s.aspx

 

More evidence on city center job growth

In February, we released our latest CityReport documenting a remarkable turnaround in the pattern of job growth within metropolitan areas.  After decades of steady job decentralization, the period 2007-2011 marked the first time that city centers in the nation’s largest metropolitan areas recorded faster job growth than their surrounding peripheries.  Much of that rebound seemed to be associated with the movement of talented young workers back to cities, and with the industrial composition of growth in cities, as high-skilled service and software firms choose urban locations.

Perhaps nowhere is this trend more in evidence than in San Francisco.  The city is in the midst of a boom in both population and employment.

All the controversy about the “Google Bus” and other corporate shuttles that ferry San Francisco residents to jobs in Silicon Valley, an hour or so to the south, misses the burgeoning growth of high tech firms in the city itself.  The growing desire of young, well-educated workers to live in cities is making central city locations much more advantageous for tech firms, relative to the traditional Silicon Valley office parks, than in decades past.  As a result, in the past several years, technology firms have increasingly started, expanded, or relocated in San Francisco.

A recent report from the City of San Francisco’s Planning Office chronicles the growth of tech jobs–in software, telecommunications, information services and related sectors of the economy– in San Francisco.  Over just the past four years, employment in the city’s tech sector has increased about 90 percent, from 19,700 jobs in 2009 to 37,600 in 2013.

SF_Tech_Job_Chart

The tech industry’s growth has been highly concentrated in the city’s fast-changing South of Market area.   CM Commercial Real Estate has mapped the significant leasing deals by tech firms over the past three years.  You can see the size and timing of these developments on their animated map (click on the image below to visit their website).

SF_Tech_Leasing

Data from CM Commercial Real Estate

The concentration of talented workers in San Francisco and the tight clustering of tech firms is a reminder of the real power of agglomeration effects in our knowledge-based economy.  Building on the strength of urban amenities to attract and retain well-educated workers with choices creates a strong talent base that leads firms to gain economic advantage by locating nearby.  San Francisco is now at a point where these two trends are mutually reinforcing:  the base of talent attracts more firms; the abundance of employment opportunities attracts more workers.  The key limiting factor going forward is the supply of housing in San Francisco.  As we’ve argued before, what this really illustrates is our shortage of great urban spaces.  As more Americans seek urban living, and as the firms that need to employ talented workers cluster nearby, the demand for housing in cities surges, and unless housing supply keeps pace, rising prices and affordability problems will likely worsen.

Want to close the Black/White Income Gap? Work to Reduce Segregation.

 

Nationally, the average black household has an income 42 percent lower than the average white household. But that figure masks huge differences from one metropolitan area to another. And though any number of factors may influence the size of a place’s racial income gap, just one of them – residential segregation – allows you to predict as much as 60 percent of all variation in the income gap from city to city. Although income gaps between whites and blacks are large and persistent across the country, they are much smaller in more integrated metropolitan areas and larger in more segregated ones.  The strength of this relationship strongly suggests that reducing the income gap will necessarily require reducing racial segregation.

To get a picture of this relationship, we’ve assembled data on segregation and the black/white earnings gap for the largest U.S. metropolitan areas. The following chart shows the relationship between the black/white earnings disparity (on the vertical axis), and the degree of black/white segregation (on the horizontal axis).   Here, segregation is measured with something called the dissimilarity index, which essentially measures what percent of each group would have to move to create a completely integrated region. (Higher numbers therefore indicate more segregated places.) To measure the black-white income gap, we first calculated per capita black income as a percentage of per capita white income, and then took the difference from 100. (A metropolitan area where black income was 100% of white income would have no racial income gap, and would receive a score of zero; a metro area where black income was 90% of white income would receive a score of 10.)
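For readers who want the mechanics, both the dissimilarity index and our income-gap score can be computed in a few lines. The tract counts and incomes below are made-up numbers, for illustration only:

```python
def dissimilarity_index(black_counts, white_counts):
    """Black/white dissimilarity on a 0-100 scale: half the sum, across
    tracts, of the absolute difference between each tract's share of the
    metro's black population and its share of the white population."""
    b_total, w_total = sum(black_counts), sum(white_counts)
    return 50 * sum(abs(b / b_total - w / w_total)
                    for b, w in zip(black_counts, white_counts))

# A hypothetical, completely segregated two-tract metro scores 100;
# identical tract compositions score 0.
print(dissimilarity_index([1000, 0], [0, 1000]))    # 100.0
print(dissimilarity_index([500, 500], [500, 500]))  # 0.0

# Income-gap score: 100 minus black per capita income as a percent of white.
black_income, white_income = 15_000, 30_000  # hypothetical per capita incomes
print(100 - black_income / white_income * 100)  # 50.0
```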

The positive slope of the line indicates that as segregation increases, the gap between black and white incomes grows, as black incomes fall relative to white incomes. On average, each five-percentage-point decline in the dissimilarity index is associated with a three-percentage-point decline in the racial income gap (the r² for this relationship is .59, suggesting a close relationship between relative income and segregation).

What’s less clear is which way the causality goes, or in what proportions. That is to say: there are good reasons to believe that high levels of segregation impair the relative economic opportunities available to black Americans. Segregation may have the effect of limiting an individual’s social networks, lowering the quality of public services, decreasing access to good schools, and increasing risk of exposure to crime, all of which may limit or reduce economic success.  This is especially true in neighborhoods of concentrated poverty, which tend to be disproportionately neighborhoods of color.

But there are also good reasons to believe that in places where black residents have relatively fewer economic opportunities, they will end up more segregated than in places where there are more opportunities. Relatively less income means less buying power when it comes to real estate, and less access to the wealthier neighborhoods that, in a metropolitan area with a large racial income gap, will be disproportionately white. A large difference between white and black earnings may also suggest related problems – like a particularly hostile white population – that would also lead to more segregation.

The data shown here are consistent with earlier and more recent research on the negative effects of segregation.  Cutler and Glaeser found that higher levels of segregation were correlated with worse economic outcomes for blacks.  Likewise, racial and income segregation was one of several factors that Raj Chetty and his colleagues found to be strongly correlated with lower levels of intergenerational economic mobility at the metropolitan level.

Implications

To get a sense of how this relationship plays out in particular places, consider the difference between two Southern metropolitan areas: Birmingham and Raleigh.  Birmingham is more segregated (dissimilarity 65) than Raleigh (dissimilarity 41), and the black-white income gap is significantly smaller in Raleigh (blacks earn 17 percent less than whites) than in Birmingham (blacks earn 29 percent less than whites).

The size and strength of this relationship point up the high stakes in continuing to make progress in reducing segregation as a means of reducing the racial income gap.  If Detroit had the same level of segregation as the typical large metro (a dissimilarity index of 60, instead of 80), you would expect its racial income gap to be 12 percentage points smaller, which translates to $3,000 more in annual income for the average black resident.
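The Detroit figure is straightforward arithmetic from the fitted slope described above (roughly three points of income gap per five points of dissimilarity):

```python
# Slope from the fitted line: ~3 points of income gap per 5 points of
# dissimilarity.  Detroit's index (~80) versus a typical large metro (~60).
slope = 3 / 5
detroit_dissimilarity = 80
typical_dissimilarity = 60
predicted_change = (detroit_dissimilarity - typical_dissimilarity) * slope
print(predicted_change)  # 12.0 percentage points
```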

The data presented here and the other research cited are a strong reminder that if we’re going to address the persistent racial gap in income, we’ll most likely need to make further progress in reducing racial segregation in the nation’s cities.

The correlations shown here don’t dispose of the question of causality:  this cross-sectional evidence doesn’t prove that segregation causes a higher black-white income gap.  It is entirely possible that the reverse is true:  that places with smaller income gaps between blacks and whites have less segregation, in part because higher relative incomes for blacks afford them greater choices in metropolitan housing markets.  It may be the case that causation runs in both directions.  In the US, there are few examples of places that have stayed segregated and managed to close the income gap, and few places that have closed the income gap without experiencing dramatically lower levels of segregation.  Increased racial integration appears to be at least a corollary, if not a cause, of reduced income disparity between blacks and whites in US metropolitan areas.

If we’re concerned about the impacts of gentrification on the well-being of the nation’s African American population, we should recognize that anything that promotes greater racial integration in metropolitan areas is likely to be associated with a reduction in the black-white income gap; and conversely, maintaining segregation is likely to be an obstacle to diminishing this gap.

Though provocative, these data don’t control for a host of other factors that we know are likely to influence the economic outcomes of individuals, including the local industrial base and educational attainment.  It would be helpful to have a regression analysis that estimated the relationship between the black-white earnings gap and education: the smaller racial income gap in less segregated cities may be attributable to higher rates of black educational attainment in those cities.  Industry mix may matter too; for example, the mix of industries in Raleigh may have lower levels of racial pay disparity than the mix in Birmingham.  But even industry mix may be influenced by the segregation pattern of cities; firms with more equitable practices may gravitate towards, or grow more rapidly in, communities with lower levels of segregation.

Brief Background on Racial Income Gaps and Segregation

Two enduring hallmarks of race in America are racial segregation and a persistent gap between the incomes of whites and blacks.  In 2011, median household income for white, non-Hispanic households was $55,412; for black households, $32,366 (Census Bureau, Income, Poverty, and Health Insurance Coverage in the United States: 2011, Table A-1).  For households, the racial income gap between blacks and whites is thus 42 percent.  Census Bureau data show that, on average, black men have per capita incomes about 64 percent those of non-Hispanic white men.  This gap has narrowed only slightly over the past four decades: in the early 1980s the income of black men was about 59 percent that of non-Hispanic whites.
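The 42 percent household figure can be checked directly from the median incomes cited:

```python
# Black median household income relative to white, from the 2011
# Census figures quoted above.
white_median = 55_412
black_median = 32_366
gap = 1 - black_median / white_median
print(f"{gap:.0%}")  # 42%
```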

Because the advantage of whites’ higher annual incomes compounds over time, racial wealth disparities are even greater than disparities in earnings.  Lifetime earnings for African-Americans are about 25 percent less than for similarly aged Non-Hispanic White Americans.   The Urban Institute estimated that the net present value of lifetime earnings for a non-hispanic white person born in late 1940s would be about $2 million compared to just $1.5 million for an African-American born the same year.

In the past half century, segregation has declined significantly.  Nationally, the black/non-black dissimilarity index has fallen from an all-time high of 80 in 1970 to 55 in 2010, according to Glaeser and Vigdor.  The share of all-white census tracts has declined from one in five to one in 427. Since 1960, the share of African-Americans living in majority-non-black areas has increased from less than 30 percent to almost 60 percent.  Still, as our chart shows, there are wide variations among metropolitan areas, many of which remain highly segregated.

Technical Notes

We measure the racial income gap by comparing the per capita income of blacks in each metropolitan area with the per capita income of whites in that same metropolitan area.  These data are from Brown University’s US 2010 project, and have been compiled from the 2005-09 American Community Survey.  The Brown researchers compiled this data separately for the metropolitan divisions that make up several large metropolitan areas (New York, Chicago, Miami, Philadelphia, San Francisco, Seattle, Dallas and others).  For these tabulations we report the segregation and racial income gaps reported for the most populous metropolitan division in each metropolitan area.

How Racial Segregation Leads to Income Inequality

Less Segregated Metro Areas Have Lower Black/White Income Disparities

Income inequality in the United States has a profoundly racial dimension.  As income inequality has increased, one feature has remained very much unchanged:  black incomes remain persistently lower than white incomes.  But while that pattern holds for the nation as a whole, it’s interesting to note that in some places the black/white income gap is much smaller. One characteristic of these more equal places is a lower level of racial segregation.

Nationally, the average black household has an income 42 percent lower than average white household. But that figure masks huge differences from one metropolitan area to another. And though any number of factors may influence the size of a place’s racial income gap, just one of them – residential segregation – allows you to predict as much as 60 percent of all variation in the income gap  from city to city. Although income gaps between whites and blacks are large and persistent across the country, they are much smaller in more integrated metropolitan areas and larger in more segregated metropolitan areas.  The strength of this relationship strongly suggests that reducing the income gap will necessarily require reducing racial segregation.

To get a picture of this relationship, we’ve assembled data on segregation and the black/white earnings gap for the largest U.S. metropolitan areas. The following chart shows the relationship between the black/white earnings disparity (on the vertical axis), and the degree of black/white segregation (on the horizontal axis).   Here, segregation is measured with something called the dissimilarity index, which essentially measures what percent of each group would have to move to create a completely integrated region. (Higher numbers therefore indicate more segregated places.) To measure the black-white income gap, we first calculated per capita black income as a percentage of per capita white income, and then took the difference from 100. (A metropolitan area where black income was 100% of white income would have no racial income gap, and would receive a score of zero; a metro area where black income was 90% of white income would receive a score of 10.)

The positive slope to the line indicates that as segregation increases, the gap between black income and white incomes grows as black incomes fall relative to white incomes. On average, each five-percentage-point decline in the dissimilarity index is associated with an three-percentage-point decline in the racial income gap (The r2 for this relationship is .59, suggesting a close relationship between relative income and segregation).

What’s less clear is which way the causality goes, or in what proportions. That is to say: there are good reasons to believe that high levels of segregation impair the relative economic opportunities available to black Americans. Segregation may have the effect of limiting an individual’s social networks, lowering the quality of public services, decreasing access to good schools, and increasing risk of exposure to crime, all of which may limit or reduce economic success.  This is especially true in neighborhoods of concentrated poverty, which tend to be disproportionately neighborhoods of color.

But there are also good reasons to believe that in places where black residents have relatively fewer economic opportunities, they will end up more segregated than in places where there are more opportunities. Relatively less income means less buying power when it comes to real estate, and less access to the wealthier neighborhoods that, in a metropolitan area with a large racial income gap, will be disproportionately white. A large difference between white and black earnings may also suggest related problems – like a particularly hostile white population – that would also lead to more segregation.

The data shown here is consistent with earlier and more recent research of the negative effects of segregation.  Glaeser and Cutler found that higher levels of segregation were correlated with worse economic outcomes for blacks.   Likewise, racial and income segregation was one of several factors that Raj Chetty and his colleagues found were strongly correlated with lower levels of inter-generational economic mobility at the metropolitan level.

Implications

To get a sense of how this relationship plays out in particular places, consider the difference between two Southern metropolitan areas: Birmingham and Raleigh.  Birmingham is more segregated (dissimilarity 65) than Raleigh (dissimilarity 41).  The black white income gap is significantly smaller in Raleigh (blacks earn 17 percent less than whites) than it is in Birmingham (blacks earn 29 percent less than whites).

The size and strength of this relationship point up the high stakes in continuing to make progress in reducing segregation as a means of reducing the racial income gap.   If Detroit had the same levels of segregation as the typical large metro (with an dissimilarity index of 60, instead of 80), you would expect its racial gap to be  12 percentage points smaller, which translates to $3,000 more in annual income for the average black resident.

These data presented here and the other research cited are a strong reminder that if we’re going to address the persistent racial gap in income, we’ll most likely need to make further progress in reducing racial segregation in the nation’s cities.

The correlations shown here don’t dispose of the question of causality:  this cross sectional evidence doesn’t prove that segregation causes a higher black-white income gap.  It is entirely possible that the reverse is true:  that places with smaller income gaps between blacks and whites have less segregation, in part because higher relative incomes for blacks afford them greater choices in metropolitan housing markets.  It may be the case that causation runs in both directions.   In the US, there are few examples of places that stay segregated that manage to close the income gap; there are few places that have closed the income gap that have not experienced dramatically lower levels of segregation.   Increased racial integration appears to be at least a corollary, if not a cause of reduced levels of income disparity between blacks and whites in US metropolitan areas.

If we’re concerned about the impacts of gentrification on the well-being of the nation’s African American population, we should recognize that anything that promotes greater racial integration in metropolitan areas is likely to be associated with a reduction in the black-white income gap; and conversely, maintaining segregation is likely to be an obstacle to diminishing this gap.

Though provocative, these data don’t control for a host of other factors that we know are likely to influence the economic outcomes of individuals, including the local industrial base and educational attainment.  It would be helpful to have a regression analysis that estimated the relationship between the black white earnings gap and education.  It may be the case that the smaller racial income gap in less segregated cities may be attributable to higher rates of black educational attainment in those cities.  For example, the industry mix in Raleigh may have lower levels of racial pay disparities and employment patterns than the mix of industries in Birmingham.  But even the industry mix may be influenced by the segregation pattern of cities; firms that have more equitable practices may gravitate towards, or grow more rapidly in communities with lower levels of segregation.

Brief Background on Racial Income Gaps and Segregation

Two enduring hallmarks of race in America are racial segregation and a persistent gap between the incomes of whites and blacks.  In 2011, median household income for White, Non-Hispanic Households was $55,412; for Blacks $32,366 (Census Bureau, Income, Poverty, and Health Insurance Coverage in the United States: 2011, Table A-1).  For households, the racial income gap between blacks and whites is 42 percent.  Census Bureau data shows on average, black men have per capita incomes that are about 64 percent that of Non-Hispanic White men.  This gap has narrowed only slightly over the past four decades: in the early 1980s the income of black men was about 59 percent that of Non-Hispanic whites.

Because the advantage of whites’ higher annual incomes compounds over time, racial wealth disparities are even greater than disparities in earnings.  Lifetime earnings for African-Americans are about 25 percent less than for similarly aged non-Hispanic white Americans.  The Urban Institute estimated that the net present value of lifetime earnings for a non-Hispanic white person born in the late 1940s would be about $2 million, compared to just $1.5 million for an African-American born the same year.

In the past half century, segregation has declined significantly.  Nationally, the black/non-black dissimilarity index has fallen from an all-time high of 80 in 1970 to 55 in 2010, according to Glaeser and Vigdor.  The share of all-white census tracts has declined from one in five to one in 427.  Since 1960, the share of African-Americans living in majority-non-black areas has increased from less than 30 percent to almost 60 percent.  Still, as noted in our chart, there are wide variations among metropolitan areas, many of which remain highly segregated.

Technical Notes

We measure the racial income gap by comparing the per capita income of blacks in each metropolitan area with the per capita income of whites in that same metropolitan area.  These data are from Brown University’s US 2010 project, and have been compiled from the 2005-09 American Community Survey.  The Brown researchers compiled this data separately for the metropolitan divisions that make up several large metropolitan areas (New York, Chicago, Miami, Philadelphia, San Francisco, Seattle, Dallas and others).  For these tabulations we report the segregation and racial income gaps reported for the most populous metropolitan division in each metropolitan area.

How important is proximity to jobs for the poor?

More jobs are close at hand in cities.  And on average the poor live closer to jobs than the non-poor.

One of the most enduring explanations for urban poverty is the “spatial mismatch hypothesis” promulgated by John Kain in the 1960s.  Briefly, the hypothesis holds that as jobs have increasingly suburbanized, job opportunities are moving further and further away from the inner city neighborhoods that house most of the poor. In theory, the fact that jobs are becoming more remote may make them more difficult to get, especially for the unemployed. How important is proximity to getting and keeping a job?

A new Brookings Institution report from Elizabeth Kneebone and Natalie Holmes, The Growing Distance Between People and Jobs, sheds some light on this old question.  Their data show that between 2000 and 2012, jobs generally decentralized in U.S. metropolitan areas, with the result that, on average, people live further from jobs than they did in 2000.  Put another way: there are fewer jobs within the average commute distance of the typical metropolitan resident.

While job access has diminished for most Americans, the report notes that the declines in job access have been somewhat greater for the poor and for racial and ethnic minorities than for non-poor and white metropolitan residents.  This, in the report’s view, has exacerbated the spatial mismatch between the poor and jobs.

The Kneebone/Holmes findings emphasize the change in job access over time.  As jobs decentralized, the average American had about 7 percent fewer jobs within a typical commuting radius in 2012 than in 2000.  But it’s illuminating to look at the level of job access as well.  Certain patterns emerge:

People who live in large metropolitan areas have access to many, many more jobs than do residents of smaller metropolitan areas.  The typical New Yorker has just shy of a million jobs within commuting distance; the typical Memphian, only 150,000.  This is what economists are talking about when they describe “thick” urban labor markets.

Dig deeper, and it turns out that within metropolitan areas, cities have much better job access than suburbs.  We’ve taken the Brookings data for 2012 and computed the relative job accessibility of cities compared to their suburbs for each of the nation’s 50 largest metro areas.  For example, an average city resident in Charlotte has about 320,000 jobs within typical commuting distance; the average suburban resident in the Charlotte metro has just 70,000.  (Metro level data are shown in the table below.)  This means that a Charlotte city resident has about 4.6 times as many jobs within commuting distance of her home as does her suburban counterpart.  For the typical large metro area, city residents have about 2.4 times as many jobs within commuting distance as their suburban neighbors.  This pattern of higher job accessibility in cities holds for every large metro area in the country save one: Las Vegas.
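The city-to-suburb comparisons here are simple ratios of the Brookings accessibility counts; a minimal sketch using the Charlotte figures quoted above:

```python
# Jobs within typical commuting distance, Charlotte metro (Brookings, 2012)
city_jobs = 320_000     # average city resident
suburb_jobs = 70_000    # average suburban resident

# Relative job accessibility of city vs. suburb
relative_access = city_jobs / suburb_jobs
print(f"{relative_access:.1f}x")  # 4.6x
```

The same ratio, computed metro by metro, yields the roughly 2.4x average city advantage reported above.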

At first this may seem counter-intuitive, but consider: even though jobs may have been decentralizing, central locations are often better able to access jobs in any part of the region.  It’s also the case that despite decentralization, job density–the number of jobs per square mile–still tends to be noticeably higher in urban centers than on the fringe.  It’s also interesting to note that the difference in job accessibility between cities and suburbs (+140 percent) dwarfs the average decline in job accessibility (-7 percent) over the past decade.  While aggregate job accessibility may have decreased slightly, individuals in every metropolitan area have wide latitude to influence their own access to jobs based on whether they choose to live in the city or the suburbs.

Perhaps even more surprisingly, on average the poor and ethnic minorities are generally closer to jobs than their white and non-poor counterparts.  We can do the same computation to compare relative job accessibility within each metro area for poor and non-poor populations, and for blacks and whites.  Despite job decentralization, and the fact that poorer neighborhoods often support fewer local businesses and jobs, the poor residents of the typical large metropolitan area have about 20 percent more jobs within typical commuting distance than do their non-poor counterparts.  The black residents of large U.S. metropolitan areas have on average about 50 percent more jobs within typical commuting distance than their white counterparts in the same metropolitan area.  Again, this pattern holds for virtually all large metropolitan areas.  Data showing relative job accessibility for poor and non-poor persons and black and white persons by metropolitan area are shown in the two right hand columns of the table above.

Of course, a pure distance-based measure of job accessibility may not fully reflect the transportation accessibility of particular jobs–especially for poor persons, who are disproportionately likely to lack access to automobiles for commute trips.  But the data show that city residents have strikingly better access to a large number of jobs, and other forms of transportation–transit, cycling and walking–generally work better in cities.  The density and proximity of jobs in cities, plus the availability of transit, is one reason why poor persons disproportionately concentrate in cities, according to research by Ed Glaeser and his colleagues.

The much higher level of physical job accessibility in cities, and the relative proximity of poor people and black Americans to employment opportunities, signal that physical employment mismatch is at best only a partial explanation for persistent urban poverty.  Other barriers–particularly a lack of education, concentrated poverty, and continued discrimination–are also important factors.

We’re deeply appreciative of our friends at Brookings for undertaking this analysis, and for making their methodology and findings accessible and transparent.  The metro-by-metro data they present add a new dimension to our understanding of urban land use and evolving labor markets.  While we strongly encourage everyone to explore this data, we offer one observation.  In measuring job accessibility, Kneebone and Holmes chose to use separate, locally customized estimates of commute distance.  For example, the average intra-metropolitan commute (according to data from the LEHD program) in Houston is 12.2 miles, while in New Orleans it is 6.2 miles.  This means that a big part of the difference in measured job accessibility between these two metropolitan areas reflects the fact that the typical commute shed for Houston covers a far larger area than the one for New Orleans.  While this may be an accurate reflection of typical commuting behavior in each city, it makes direct comparisons between different metropolitan areas problematic.

Walkability rankings: One step forward, one step back

To begin, let’s be clear about one thing:  we’re huge fans of Walk Score–the free Internet-based service that rates every residential address in the United States (and a growing list of other countries) on a scale of 0 to 100, based on its proximity to a series of common destinations.  The concept and implementation of Walk Score are brilliant, transparent, and well-documented: not only can you see the score for your house or any other, Walk Score shows you which destinations were used in calculating that score.  And did we mention, it’s free.

The power of Walk Score is its market-moving value:  Americans are increasingly looking to live in vibrant, walkable communities, and Walk Score gives home buyers (and now apartment renters) a clear and simple tool for assessing the relative merits of different locations. (Which is undoubtedly why it was acquired by real estate website Redfin.com last year).  While there’s a lot more to walkability than just proximity to destinations–urban design, the quality of the built environment and pedestrian infrastructure matter too–Walk Score has substantially advanced the conversation about how to measure and make walkable places.  To their credit, the team at Walk Score has responded to criticism and continued to refine and extend their product, incorporating a Street Smart algorithm to track the street grid rather than relying on straight line distances, and adding measures for transit and bike access.  All this is exciting and useful: We think that giving consumers better information about their choices is pretty much an unalloyed public good.

And–full disclosure–in 2009 we got the cooperation of the team at Walk Score to provide data for a research project looking at the connection between walkability and real estate values.  Our research–done independently from Walk Score–showed that in 14 of 15 cities that we examined, walkability was positively correlated with home values, even after controlling for a host of other observable factors (like neighborhood income, numbers of bedrooms and bathrooms, home size, distance to jobs) that we know influence home values.  You can read the study “Walking the Walk” here.

Yesterday, Walk Score released its latest analysis rating the walkability of major US cities.  Scores are produced by averaging the walk scores for different parts of the city, weighted by population.  According to Walk Score, New York is the most walkable large city in the U.S., with an average walk score of 87.6, followed by San Francisco (83.9) and Boston (79.5).  The complete rankings are here.

Because they’ve been gathering data for a number of years, Walk Score is now in a position to report the change in walkability at the city level.  This should be a key indicator for mayors, planners and citizens.  Becoming more walkable is likely to be a proxy for an improving local economy, and suggests a city is becoming more accessible to its residents.  Walk Score reports that several cities have notably better walkability than a few years ago:  Miami’s city-wide Walk Score increased by more than 3 points, Detroit saw an increase of 2.2 points, and New Orleans recorded an increase of 0.7 points.  These particular results are a bit muddled by some changes to the Walk Score algorithm since 2011, but going forward, this promises to be an important tool for tracking progress at the city and neighborhood level.  Kudos to Redfin and Walk Score for making this information available.

But enamored as we are of Walk Score, we’re compelled to point out one glaring flaw in their rankings: the use of municipal geographies to compute scores for ranking purposes.  Their methodology looks at the average level of walkability only for addresses located within the city limits of each city.  Because municipal boundaries are so varied from place to place, municipalities are a poor unit for comparison, particularly for this kind of spatial data.  Using municipal boundaries for comparative work inevitably ends up comparing apples to acorns, and produces rankings that are at best misleading, and at worst, arguably wrong.

Chicago and Miami provide a case in point.  According to the Walk Score ranking, Miami is more walkable than Chicago–Miami’s city-wide walk score is 75.6, edging out Chicago’s 74.8–a finding that immediately struck our colleagues who have lived in the two cities as counter-intuitive, to put it mildly.  But the problem isn’t a flaw in Walk Score; it’s that these two municipalities represent wildly different chunks of their respective metropolitan areas.  The City of Miami encompasses only a small portion of the Miami-Ft. Lauderdale metropolitan area (the densest parts of downtown Miami and close-in urban neighborhoods); Chicago covers a much larger swath.  The City of Miami comprises just the most densely housed 400,000 people in South Florida, while the City of Chicago comprises 2.7 million people.  There’s little doubt that if we measured the walkability of the Chicago neighborhoods that were home to that region’s 400,000 or so most densely housed residents, we’d find a much higher Walk Score.

It turns out that metropolitan areas are a much more sensible basis for making comparisons and presenting rankings.  While municipal units may be a valid geography for some comparisons (related say, to elections or public finance) they can easily be misleading or wrong for making comparisons that involve economics and geography.  Look for a future CityCommentary digging deeper into this problem–and outlining how to avoid it.

In the meantime, here’s an unsolicited suggestion for the team at Walk Score:  can you use your database to count the number of persons living in homes and apartments with a Walk Score of 80 or higher (“very walkable” and “walker’s paradise”) in each metro area?  This would be a much more compelling indicator of how metros stack up as walkable places than a single average score for a city–or a metro.

So, in the end, it’s one step forward (another year’s worth of data and the promise of tracking changes in walkability over time) and one step back (using municipal boundaries for comparisons).  This last glitch is easily fixed–and knowing the team at Walk Score, it certainly will be.  In the meantime, their excellent and informative Walk Score data for individual properties is performing a vital public service and helping move markets.

The Cappuccino Congestion Index

April First falls on Saturday, and that’s a good reason to revisit an old favorite, the Cappuccino Congestion Index.

We’re continually told that congestion is a grievous threat to urban well-being. It’s annoying to queue up for anything, but traffic congestion has spawned a cottage industry of ginning up reports that transform our annoyance with waiting in lines into an imagined economic calamity. Using the same logic and methodology that underpin these traffic studies, it’s possible to demonstrate another insidious threat to the nation’s economic productivity: costly and growing coffee congestion.


Yes, there’s another black fluid that’s even more important than oil to the functioning of the U.S. economy: coffee. Because an estimated 100 million American workers can’t begin a productive work day without an early morning jolt of caffeine, and because one-third of these coffee drinkers regularly consume espresso drinks, lattes and cappuccinos, there is significant and growing congestion in coffee lines around the country. That’s costing us a lot of money. Consider these facts:

  • Delays waiting in line at the coffee shop for your daily latte, cappuccino or mocha cost U.S. consumers $4 billion every year in lost time;
  • The typical coffee drinker loses more time waiting in line at Starbucks than in traffic congestion;
  • Delays in getting your coffee are likely to increase because our coffee delivery infrastructure isn’t increasing as fast as coffee consumption.

Access to caffeine is provided by the nation’s growing corps of baristas and coffee bars. The largest of these, Starbucks, operates some 12,000 locations in the U.S. alone. Any delay in getting this vital beverage is going to impact a worker’s start time–and perhaps their day’s productivity. It’s true that sometimes, you can walk right up and get the triple espresso you need. Other times, however, you have to wait behind a phalanx ordering double, no-whip mochas with a pump of three different syrups, or an orange-mocha frappuccino. These delays in the coffee line are costly.

To figure out exactly how costly, we’ve applied the “travel time index” created by the Texas Transportation Institute to measure the economic impact of this delay on American coffee drinkers. For more than three decades TTI has used this index to calculate the dollar cost of traffic delays–here we use the same technique to figure the value of “coffee delays.”

The travel time index measures the additional time required for a rush hour commute compared to the same trip in non-congested conditions. According to Inrix, the travel tracking firm, the travel time index for the United States in July 2014 was 7.6, meaning that a commute trip that took 20 minutes in off-peak times would take about 7.6 percent longer–an additional 91 seconds–at the peak hour.
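Reading the index as a percentage markup over off-peak travel time (our interpretation of the Inrix figure), the 91-second number checks out:

```python
# Inrix U.S. travel time index, July 2014, read as a percent markup
# on off-peak travel time (our interpretation of the figure above)
travel_time_index_pct = 7.6

off_peak_seconds = 20 * 60   # a 20-minute off-peak commute
peak_delay_seconds = off_peak_seconds * travel_time_index_pct / 100
print(f"{peak_delay_seconds:.0f} extra seconds at the peak hour")  # 91
```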

We constructed data on the relationship between customer volume and average service times for a series of Portland area coffee shops.  We used the 95th percentile time of 15 seconds as our estimate of “free flow” ordering conditions—how long it takes to enter the shop and place an order.  In our data-gathering, as the shop became more crowded, customers had to queue up. The time to place orders rose from an average of 30 to 40 seconds, to two to three minutes in “congested” conditions. The following chart shows our estimate of the relationship between customer volume and average wait times.

[Chart: estimated relationship between customer volume and average wait time]

Following the TTI methodology, we treat any additional time that customers have to spend waiting to place their order beyond what would be required in free flow times (i.e. more than 15 seconds) as delay attributable to coffee congestion.

Based on our observations of typical coffee shops and other data, we estimated the approximate flow of customers over the course of a day. We regard a typical coffee shop as one that has about 650 transactions daily. While most transactions are for a single consumer, some are for two or more, so we use a factor of 1.2 consumers per transaction. This means the typical coffee shop serves beverages (and other items) to about 780 consumers daily. We estimate the distribution of customers per hour over the course of the day based on overall patterns of hourly traffic, with the busiest times in the morning and volume tapering off in the afternoon.
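The daily volume assumption works out as follows; the hourly shares below are purely illustrative (the text describes the shape of the daily curve but doesn’t publish the actual shares):

```python
# Daily volume, per the estimates in the text above
transactions_per_day = 650
consumers_per_transaction = 1.2
consumers_per_day = transactions_per_day * consumers_per_transaction  # ~780

# Hypothetical hourly distribution for illustration: morning peak,
# tapering through the afternoon (shares sum to 1.0)
hourly_shares = {7: 0.12, 8: 0.15, 9: 0.14, 10: 0.13, 11: 0.10,
                 12: 0.10, 13: 0.08, 14: 0.07, 15: 0.06, 16: 0.05}
hourly_customers = {hour: round(consumers_per_day * share)
                    for hour, share in hourly_shares.items()}
print(hourly_customers)  # customers per hour; 8am is the busiest
```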

We then apply our speed/volume relationship (chart above) to our estimates of hourly volume to estimate the amount of delay experienced by customers in each hour.  When you scale these estimates up to reflect the millions of Americans waiting in line for their needed caffeine each day, the total value of time lost to cappuccino congestion costs consumers more than $4 billion annually. (Math below).

This is—of course—our April First commentary, and savvy readers will recognize it is tongue in cheek, but only partly so.  (The data are real, by the way!) The real April Fools’ joke here is the application of this same tortured thinking to the description and diagnosis of the nation’s traffic problems.

The Texas Transportation Institute’s best estimate is that travel delays cost the average American between one and two minutes on a typical commute trip. While it’s possible–as we’ve done here–to apply a wage rate to that time and multiply by the total number of Americans to get an impressively large total, it’s not clear that the few odd minutes here and there have real value. This is why for years, we and others have debunked the TTI report. (The clumping of reported average commute times in the American Community Survey around values ending in “0” and “5” shows Americans don’t have that precise a sense of their average travel time anyhow.)

The “billions and billions” argument used by TTI to describe the cost of traffic congestion is a rhetorical device to generate alarm. The trouble is, when applied to transportation planning it leads to some misleading conclusions. Advocates argue regularly that the “costs of congestion” justify spending added billions in scarce public resources on expanding highways, supposedly to reduce time lost to congestion. There’s just no evidence this works–induced demand from new capacity causes traffic to expand and the promised travel time savings to evaporate. Los Angeles just spent a whopping billion dollars to widen Interstate 405, with no measurable impact on congestion or traffic delays.

No one would expect Starbucks to build enough locations—and hire enough baristas—so that everyone could enjoy the 15-second order times you can experience when there’s a lull. Consumers are smart enough to understand that if you want a coffee at the same time as everyone else, you’re probably going to have to queue up for a few minutes.

But strangely, when it comes to highways, we don’t recognize the trivially small scale of the expected time savings (a minute or two per person), and we don’t undertake the kind of careful cost-benefit analysis that would tell us that very few transportation projects actually generate the kinds of sustained travel time savings that would make them economically worthwhile.

Ponder that as you wait in line for your cappuccino.  We’ll be just ahead of you ordering a double-espresso macchiato (and holding a stopwatch).


Want to know more?

Here’s the math:  We estimate that at peak times (around 10am) the typical Starbucks handles about 100 transactions, representing about 120 customers.  The average wait time is about two and one-half minutes–of which about two minutes and 15 seconds represents delay, compared to free flow conditions.  We make a similar computation for each hour of the day (customers are fewer and delays shorter at other hours).  Collectively, customers at a typical store experience about 21 person-hours of delay per day (an average of a little over 90 seconds per customer).  We monetize this delay at $15 per hour, and multiply by 365 days and 12,000 Starbucks stores.  Since Starbucks represents about 35 percent of all coffee shops in the US, we scale this up to get a total value of time lost to coffee service delays of roughly $4 billion.
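That back-of-the-envelope can be reproduced in a few lines; all inputs are the estimates stated in the paragraph above:

```python
# Inputs, per the estimates above
delay_hours_per_store_day = 21   # person-hours of delay per store per day
value_of_time = 15               # dollars per hour
days = 365
starbucks_stores = 12_000
starbucks_share = 0.35           # Starbucks' share of U.S. coffee shops

# Annual delay cost at Starbucks alone, then scaled to the whole sector
starbucks_cost = delay_hours_per_store_day * value_of_time * days * starbucks_stores
total_cost = starbucks_cost / starbucks_share
print(f"${total_cost / 1e9:.1f} billion")  # $3.9 billion
```

which rounds to the roughly $4 billion figure used throughout.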

The Cappuccino Congestion Index

The Cappuccino Congestion Index shows how you can show how anything costs Americans billions and billions

We’re continuing told that congestion is a grievous threat to urban well-being. It’s annoying to queue up for anything, but traffic congestion has spawned a cottage industry of ginning up reports that transform our annoyance with waiting in lines into an imagined economic calamity. Using the same logic and methodology that underpins these traffic studies, its possible to demonstrate another insidious threat to the nation’s economic productivity: costly and growing coffee congestion.

cappuccino_line

Yes, there’s another black fluid that’s even more important than oil to the functioning of the U.S. economy: coffee. Because an estimated 100 million of us American workers can’t begin a productive work day without an early morning jolt of caffeine, and because one-third of these coffee drinkers regularly consume espresso drinks, lattes and cappuccinos, there is significant and growing congestion in coffee lines around the country. That’s costing us a lot of money. Consider these facts:

  • Delays waiting in line at the coffee shop for your daily latte, cappuccino or mocha cost U.S. consumers $4 billion every year in lost time;
  • The typical coffee drinker loses more time waiting in line at Starbucks than in traffic congestion;
  • Delays in getting your coffee are likely to increase because our coffee delivery infrastructure isn’t increasing as fast as coffee consumption.

Access to caffeine is provided by the nation’s growing corps of baristas and coffee bars. The largest of these, Starbucks, operates some 12,000 locations in the U.S. alone. Any delay in getting this vital beverage is going to impact a worker’s start time–and perhaps their day’s productivity. It’s true that sometimes, you can walk right up and get the triple espresso you need. Other times, however, you have to wait behind a phalanx ordering double, no-whip mochas with a pump of three different syrups, or an orange-mocha frappuccino. These delays in the coffee line are costly.

To figure out exactly how costly, we’ve applied the “travel time index” created by the Texas Transportation Institute to measure the economic impact of this delay on American coffee drinkers. For more than three decades TTI has used this index to calculate the dollar cost of traffic delays–here we use the same technique to figure the value of “coffee delays.”

The travel time index is the difference in time required for a rush hour commute compared to the same trip in non-congested conditions. According to Inrix, the travel tracking firm, the travel time index for the United States in July 2014  was 7.6, meaning that a commute trip that took 20 minutes in off-peak times would take an additional 91 seconds at the peak hour.

We constructed data on the relationship between customer volume and average service times for a series of Portland area coffee shops.  We used the 95th percentile time of 15 seconds as our estimate of “free flow” ordering conditions—how long it takes to enter the shop and place an order.  In our data-gathering, as the shop became more crowded, customers had to queue up. The time to place orders rose from an average of 30 to 40 seconds, to two to three minutes in “congested” conditions. The following chart shows our estimate of the relationship between customer volume and average wait times.

Coffee_Speed_Volume

Following the TTI methodology, we treat any additional time that customers have to spend waiting to place their order beyond what would be required in free flow times (i.e. more than 15 seconds) as delay attributable to coffee congestion.

Based on our observations and of typical coffee shops and other data, we were able to estimate the approximate flow of customers over the course of a day. We regard a typical coffee shop as one that has about 650 transactions daily. While most transactions are for a single consumer, some are for two or more consumers, so we use a consumer per transaction factor of 1.2. This means the typical coffee shop provides beverages (and other items) for about 750 consumers. We estimate the distribution of customers per hour over the course of the day based on overall patterns of hourly traffic, with the busiest times in the morning, and volume tapering off in the afternoon.

We then apply our speed/volume relationship (chart above) to our estimates of hourly volume to estimate the amount of delay experienced by customers in each hour.  When you scale these estimates up to reflect the millions of Americans waiting in line for their needed caffeine each day, the total value of time lost to cappuccino congestion costs consumers more than $4 billion annually. (Math below).


 

This is—of course—our regular April First commentary, and savvy readers will recognize it is tongue in cheek, but only partly so.  (The data are real, by the way!) The real April Fools Joke here is the application of this same tortured thinking to a description and a diagnosis of the nation’s traffic problems.

The Texas Transportation Institute’s  best estimate is that travel delays cost the average American between one and two minutes on their typical commute trip. While its possible–as we’ve done here–to apply a wage rate to that time and multiply by the total number of Americans to get an impressively large total, its not clear that the few odd minutes here and there have real value. This is why for years, we and others have debunked the TTI report. (The clumping of reported average commute times in the American Community Survey around values ending in “0” and “5” shows Americans don’t have that precise a sense of their average travel time anyhow.)

The “billions and billions” argument used by TTI to describe the cost of traffic congestion is a rhetorical device to generate alarm. The trouble is, when applied to transportation planning it leads to some misleading conclusions. Advocates argue regularly that the “costs of congestion” justify spending added billions in scarce public resources on expanding highways, supposedly to reduce time lost to congestion. There’s just no evidence this works–induced demand from new capacity causes traffic to expand and travel times to continue to lag:  Los Angeles just spent a whopping billion dollars to widen Interstate 405, with no measurable impact on congestion or traffic delays.

No one would expect to Starbucks to build enough locations—and hire enough baristas—so that everyone could enjoy the 15 second order times that you can experience when there’s a lull. Consumers are smart enough to understand that if you want a coffee the same time as everyone else, you’re probably going to have to queue up for a few minutes.

But strangely, when it comes to highways, we don’t recognize the trivially small scale of the expected time savings (a minute or two per person) and we don’t consider a kind of careful cost-benefit analysis that would tell us that very few transportation projects actually generate the kinds of sustained travel time savings that would make them economically worthwhile.

Ponder that as you wait in line for your cappuccino.  We’ll be just ahead of you ordering a double-espresso macchiato (and holding a stopwatch).


Want to know more?

Here’s the math:  We estimate that a peak times (around 10am) the typical Starbucks makes about 100 transactions, representing about 120 customers.  The average wait time is about two and one-half minutes–of which about two minutes and 15 second represents delay, compared to free flow conditions.  We make a similar computation for each hour of the day (customers are fewer and delays shorter at other hours).  Collectively customers at an typical store experience about 21 person hours of delay per day (that’s an average of a little over 90 seconds per customer).  We monetize the value of this delay at $15 per hour, and multiply it by 365 days and 12,000 Starbucks stores.  Since Starbucks represents about 35 percent of all coffee shops in the US, we scale this up to get a total value of time lost to coffee service delays of slightly more than $4 billion.

The Cappuccino Congestion Index

The Cappuccino Congestion Index shows how you can show how anything costs Americans billions and billions

We’re continuing told that congestion is a grievous threat to urban well-being. It’s annoying to queue up for anything, but traffic congestion has spawned a cottage industry of ginning up reports that transform our annoyance with waiting in lines into an imagined economic calamity. Using the same logic and methodology that underpins these traffic studies, its possible to demonstrate another insidious threat to the nation’s economic productivity: costly and growing coffee congestion.

[Image: the line at a coffee shop]

Yes, there’s another black fluid that’s even more important than oil to the functioning of the U.S. economy: coffee. Because an estimated 100 million American workers can’t begin a productive work day without an early morning jolt of caffeine, and because one-third of these coffee drinkers regularly consume espresso drinks, lattes and cappuccinos, there is significant and growing congestion in coffee lines around the country. That’s costing us a lot of money. Consider these facts:

  • Delays waiting in line at the coffee shop for your daily latte, cappuccino or mocha cost U.S. consumers $4 billion every year in lost time;
  • The typical coffee drinker loses more time waiting in line at Starbucks than in traffic congestion;
  • Delays in getting your coffee are likely to increase because our coffee delivery infrastructure isn’t increasing as fast as coffee consumption.

Access to caffeine is provided by the nation’s growing corps of baristas and coffee bars. The largest of these, Starbucks, operates some 12,000 locations in the U.S. alone. Any delay in getting this vital beverage is going to impact a worker’s start time–and perhaps their day’s productivity. It’s true that sometimes you can walk right up and get the triple espresso you need. Other times, however, you have to wait behind a phalanx of customers ordering double, no-whip mochas with a pump of three different syrups, or an orange-mocha frappuccino. These delays in the coffee line are costly.

To figure out exactly how costly, we’ve applied the “travel time index” created by the Texas Transportation Institute to measure the economic impact of this delay on American coffee drinkers. For more than three decades TTI has used this index to calculate the dollar cost of traffic delays–here we use the same technique to figure the value of “coffee delays.”

The travel time index compares the time required for a rush hour commute to the same trip in non-congested conditions. According to Inrix, the travel tracking firm, the travel time index for the United States in July 2014 (the latest month for which they’ve released this data) was 7.6, meaning peak-hour travel takes about 7.6 percent longer than free-flow travel: a commute trip that took 20 minutes in off-peak times would take an additional 91 seconds at the peak hour.
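A quick sanity check on that arithmetic, sketched in code (this treats the 7.6 figure as a percentage over free-flow travel time, which is how the 91-second example works out):

```python
# Extra peak-hour delay implied by a travel time index,
# expressed as a percentage over free-flow travel time.
def extra_delay_seconds(free_flow_minutes, index_pct):
    return free_flow_minutes * 60 * index_pct / 100

# A 20-minute off-peak trip with a 7.6 percent index:
print(round(extra_delay_seconds(20, 7.6)))  # 91 seconds
```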

We constructed data on the relationship between customer volume and average service times for a series of Portland area coffee shops.  We used the 95th percentile time of 15 seconds as our estimate of “free flow” ordering conditions—how long it takes to enter the shop and place an order.  In our data-gathering, as the shop became more crowded, customers had to queue up. The time to place orders rose from an average of 30 to 40 seconds, to two to three minutes in “congested” conditions. The following chart shows our estimate of the relationship between customer volume and average wait times.

[Chart: customer volume vs. average wait time]

Following the TTI methodology, we treat any additional time that customers have to spend waiting to place their order beyond what would be required in free flow times (i.e. more than 15 seconds) as delay attributable to coffee congestion.
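In code, that delay definition is just the wait time net of the 15-second free-flow floor. (The sample wait times below are hypothetical illustrations, not our observed Portland data.)

```python
# Delay attributable to "coffee congestion": any wait beyond
# the 15-second free-flow ordering time.
FREE_FLOW_SECONDS = 15

def coffee_delay(wait_seconds):
    # Waits shorter than free flow count as zero delay, not negative.
    return max(0, wait_seconds - FREE_FLOW_SECONDS)

print(coffee_delay(12))   # 0 -- no queue, no delay
print(coffee_delay(150))  # 135 -- a 2.5-minute wait at the peak
```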

Based on our observations of typical coffee shops and other data, we were able to estimate the approximate flow of customers over the course of a day. We regard a typical coffee shop as one that has about 650 transactions daily. While most transactions are for a single consumer, some are for two or more consumers, so we use a consumer-per-transaction factor of 1.2. This means the typical coffee shop provides beverages (and other items) for about 780 consumers. We estimate the distribution of customers per hour over the course of the day based on overall patterns of hourly traffic, with the busiest times in the morning, and volume tapering off in the afternoon.

We then apply our speed/volume relationship (chart above) to our estimates of hourly volume to estimate the amount of delay experienced by customers in each hour.  When you scale these estimates up to reflect the millions of Americans waiting in line for their needed caffeine each day, the total value of time lost to cappuccino congestion costs consumers more than $4 billion annually. (Math below).


 

This is—of course—our regular April First commentary, and savvy readers will recognize it is tongue in cheek, but only partly so.  (The data are real, by the way!) The real April Fools’ joke here is the application of this same tortured thinking to the description and diagnosis of the nation’s traffic problems.

The Texas Transportation Institute’s best estimate is that travel delays cost the average American between one and two minutes on their typical commute trip. While it’s possible–as we’ve done here–to apply a wage rate to that time and multiply by the total number of Americans to get an impressively large total, it’s not clear that the few odd minutes here and there have real value. This is why, for years, we and others have debunked the TTI report. (The clumping of reported average commute times in the American Community Survey around values ending in “0” and “5” shows Americans don’t have that precise a sense of their average travel time anyhow.)

The “billions and billions” argument used by TTI to describe the cost of traffic congestion is a rhetorical device to generate alarm. The trouble is, when applied to transportation planning it leads to some misleading conclusions. Advocates argue regularly that the “costs of congestion” justify spending added billions in scarce public resources on expanding highways, supposedly to reduce time lost to congestion. There’s just no evidence this works–induced demand from new capacity causes traffic to expand and congestion to return:  Los Angeles just spent a whopping billion dollars to widen Interstate 405, with no measurable impact on congestion or traffic delays.

No one would expect Starbucks to build enough locations—and hire enough baristas—so that everyone could enjoy the 15-second order times you can experience when there’s a lull. Consumers are smart enough to understand that if you want a coffee at the same time as everyone else, you’re probably going to have to queue up for a few minutes.

But strangely, when it comes to highways, we don’t recognize the trivially small scale of the expected time savings (a minute or two per person) and we don’t undertake the kind of careful cost-benefit analysis that would tell us that very few transportation projects actually generate the kinds of sustained travel time savings that would make them economically worthwhile.

Ponder that as you wait in line for your cappuccino.  We’ll be just ahead of you ordering a double-espresso macchiato (and holding a stopwatch).


Want to know more?

Here’s the math:  We estimate that at peak times (around 10am) the typical Starbucks makes about 100 transactions, representing about 120 customers.  The average wait time is about two and one-half minutes–of which about two minutes and 15 seconds represent delay, compared to free-flow conditions.  We make a similar computation for each hour of the day (customers are fewer and delays shorter at other hours).  Collectively, customers at a typical store experience about 21 person-hours of delay per day (an average of a little over 90 seconds per customer).  We monetize the value of this delay at $15 per hour, and multiply it by 365 days and 12,000 Starbucks stores.  Since Starbucks represents about 35 percent of all coffee shops in the US, we scale this up to get a total value of time lost to coffee service delays of roughly $4 billion.
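For readers who want to check the arithmetic, here’s the same back-of-envelope calculation as a sketch. All inputs are the rounded figures quoted above; with these inputs the total lands just under $4 billion:

```python
# Back-of-envelope version of the estimate described in the text.
delay_hours_per_store_per_day = 21   # person-hours of delay at a typical store
value_of_time = 15                   # dollars per hour
days = 365
starbucks_stores = 12_000
starbucks_market_share = 0.35        # Starbucks' share of all U.S. coffee shops

# Dollar value of delay at Starbucks alone, then scaled to all coffee shops.
starbucks_cost = (delay_hours_per_store_per_day * value_of_time
                  * days * starbucks_stores)
total_cost = starbucks_cost / starbucks_market_share

print(f"${total_cost / 1e9:.1f} billion per year")  # $3.9 billion per year
```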


On the Road Again

The last few months have witnessed a notable rebound in vehicle miles traveled. The U.S. Department of Transportation reports that for the year ending December 2014, Americans drove 3.015 trillion miles, up about 1.7 percent from the previous year–the first noticeable increase in driving in more than a decade. The upward trend has led the highway lobby to excitedly claim that “demand on the roadway system is returning to historical trends of increased freight traffic and more overall use of passenger vehicles.”

But is that really the case? What is behind the increase in driving–and do the last few months of data really signal a return to a period of increasing driving?

It’s very clear what’s behind the surge in VMT. The big news in the energy market last year was the collapse in oil prices. Oil that had averaged roughly $100 a barrel for more than five years suddenly dropped to less than $50, taking gasoline prices down with it. According to the Energy Information Administration, the average price of a gallon of regular gas fell from $3.26 in January 2014 to $1.99 in February 2015.

The data for the last calendar year show that the rebound in driving roughly corresponds to the last quarter of the year–the time when gas prices dropped the most. The blue line shows the price of a gallon of gas in dollars, by week, and the orange line shows the percentage change in vehicle miles traveled compared to the same month one year earlier.

It would be extremely surprising if the lower price of gasoline didn’t prompt Americans to drive more. It’s quite clear that the run up in gas prices since 2004 was a major factor in reducing the amount of automobile travel. Academic studies suggest that the short-run elasticity (price responsiveness) of driving is about -0.1 to -0.2, meaning that a 10 percent increase (decrease) in fuel prices will result in a 1-2 percent decrease (increase) in miles driven.
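As a sketch, here’s what those cited short-run elasticities imply for a price decline of the size observed (roughly 39 percent):

```python
# Short-run driving response implied by a fuel-price elasticity.
# Price figures are from the text; elasticities are the cited academic range.
def vmt_change_pct(price_old, price_new, elasticity):
    price_change = (price_new - price_old) / price_old
    return elasticity * price_change * 100

# Gas fell from $3.26 (Jan 2014) to $1.99 (Feb 2015), about -39 percent:
for e in (-0.1, -0.2):
    print(f"elasticity {e}: VMT {vmt_change_pct(3.26, 1.99, e):+.1f}%")
# elasticity -0.1: VMT +3.9%
# elasticity -0.2: VMT +7.8%
```

The observed 1.7 percent increase is smaller than this range, consistent with the point above that prices fell only in the last quarter of the year, so the full short-run response had not yet materialized.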

As our colleague Clark Williams-Derry at the Sightline Institute has pointed out, the increase in driving is about what you’d expect given the historic relationship of gas prices and driving. The increase in driving is likely to lead to more traffic congestion and more crashes–externalities of automobile use that have been easing in recent years.

Lower gas prices also influence vehicle purchasing patterns. Sales of light trucks increased about 3.6 percent, year-over-year, from December 2012 to December 2013. Truck sales accelerated briskly in 2014, recording a 12.4 percent increase between December 2013 and December 2014, to a total of more than 9 million light trucks.

The net effect was to slow the rate of improvement in the average fuel economy of newly purchased cars. The University of Michigan computes the sales-weighted average fuel economy of newly purchased cars on a monthly basis. Over the past seven years, Americans have been purchasing progressively more fuel efficient vehicles–average fuel economy of newly purchased cars has increased from 20.8 miles per gallon in model year 2007 to 25.3 miles per gallon last year.

In the last quarter of 2014, the year-over-year rate of improvement in vehicle fuel efficiency declined sharply. For the first three quarters of the year, new cars sold in each month averaged about 0.7 more miles per gallon than cars sold in the same month in 2013. For the last quarter of 2014 (when gas prices were dropping precipitously) the fuel economy of new cars averaged only about 0.3 more miles per gallon than cars sold in the same month in the previous year.

There’s a lot of emphasis put on the raw mileage number–3 trillion miles–but a better way of thinking about whether we’re driving more or less as individuals is to adjust these gross mileage numbers by population. Nearly half of the 1.7 percent increase in miles driven is due simply to having more people. Adjusted for population, the increase isn’t as impressive: about 0.9 percent. Per capita driving is still well below its 2005 peak–we’re driving about as much as we did in 1998.
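The per-capita adjustment is simple arithmetic. The population growth figure below is an assumption, chosen to be consistent with the statement that nearly half of the 1.7 percent VMT increase reflects population growth:

```python
# Separating population growth from the raw VMT increase.
vmt_growth = 0.017        # year-over-year growth in total miles driven
population_growth = 0.008  # assumed, consistent with "nearly half" of 1.7%

# Per-capita growth is the ratio of the two growth factors, minus one.
per_capita_growth = (1 + vmt_growth) / (1 + population_growth) - 1
print(f"{per_capita_growth:.1%}")  # 0.9%
```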

[Chart: per capita vehicle miles traveled, FRED]

You can see the underlying data for this chart here.

The big question going forward is whether this uptick is the harbinger of a reversal in the decade-long decline in driving, or whether it’s just a temporary blip. The evidence so far is simply too fragmentary to draw hard conclusions. But the recent rebound in gas prices–up 18% to $2.36 per gallon, according to the Energy Information Administration– may mean that the boost to driving that was provided by cheap gas is already ebbing.


Misleading Medians & the McMansion Mirage

A story published by the Washington Post’s Wonkblog last week made the headline claim that “The McMansion is back, and bigger than ever.”  The article says that new homes are an average of 1,000 square feet larger than in 1982, and that the “death of the McMansion” has been highly exaggerated, as have claims that development is shifting to smaller, more urban and more walkable development. The Wonkblog article echoes a 2014 post in CityLab–“The Increasingly Bloated American Dream”–which claimed that “American homes are getting bigger and bigger.”

While the data superficially seem to support this argument, a closer reading shows that the apparent surge in McMansions is actually a bit of a statistical mirage. These analysts have overlooked a key limitation of the reported data. American homes are only getting bigger if one believes that people living in multi-family housing either aren’t Americans or don’t have homes.

If instead of looking at the median, we look at the actual number of houses built, a different story emerges. As with all single-family housing, the market for big houses remains depressed—housing starts of 4,000 square feet or more are down 59 percent from the peak and are lower now than they were in 2001.  Homebuilders built 137,000 of these huge homes in 2006, but only 56,000 in 2013, according to the Census Bureau.

The only reason these big houses have increased as a share of total new housing is because the market for affordable, smaller single family homes has done even worse. The smaller yet still catastrophic decline in McMansions is hardly evidence of a growing, or even a continuing consumer love-affair with big houses.

Medians are funny measures—they’re highly dependent on the composition of the population being measured. If the housing market were so bad that only Bill Gates had the wherewithal to build a house, the “median” new home would balloon to 66,000 square feet (the size of his Lake Washington mansion). While that’s an extreme example, that’s the kind of thing that has happened to the U.S. housing market since the bubble days of last decade.
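A toy example makes the point: drop the cheapest homes from the mix and the median rises, even though not a single house got bigger. (The house sizes below are illustrative, not market data.)

```python
# How a median shifts when the bottom of the market drops out,
# with no change at all in the demand for big houses.
from statistics import median

boom = [1400, 1600, 1800, 2200, 2600, 3200, 4500]  # sq ft, boom years
# Bust: the three smallest homes simply don't get built.
bust = [2200, 2600, 3200, 4500]

print(median(boom))  # 2200 -- median size during the boom
print(median(bust))  # 2900 -- "bigger" homes, though none grew
```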

When the housing market collapsed, the bottom fell out. The big decline has been in smaller houses. The apparent popularity of the McMansion is a statistical artifact of the misleading median in a still very depressed housing sector. If anything, the rising median size of new homes is more a testament to the continued growth of income inequality in the U.S., coupled with tougher (i.e. more realistic) lending standards by banks.

This becomes apparent when you look at the actual number of new houses built in the U.S. The growth in McMansions’ share of new single-family homes is not due to some burgeoning increase in the demand for McMansions—rather, it represents the bottom falling out of the market for smaller single-family homes. Since the housing bubble peaked in 2007, single-family housing construction is down 66 percent. The construction of 4,000 square foot and larger homes—the McMansions—is down 59 percent. Smaller single-family homes under 1,800 square feet are down 75 percent. Meanwhile, the number of multi-family homes constructed has been increasing steadily, and is now back to pre-recession levels. Multi-family housing now makes up 40 percent of new home starts, up from 20 percent a decade ago. If we recalculated the median new home size including both multi- and single-family homes, the increase in the McMansion share would look much smaller.

We’re far from having what by historical standards would be considered a “healthy” housing market. Total housing constructed over the past five years is lower than in any five-year period in the past 50 years. Does anyone believe that if the single-family housing market boomed back to 1.5 million housing starts, the demand would come proportionately from McMansions? Of course not: the only way to get unit growth in single-family housing is by getting households of more modest means back into homeownership—if that ever happens. They will be buying smaller houses.

Unlike the old days of NINJA (no income, no job or assets) lending, where even those with poor credit could qualify for loans, today’s credit standards are much higher. The other key factor has been the demise of the trade-up market. Because most people buy their new homes in significant part with the accumulated appreciation on their existing home, the decline in home values meant that very few middle-income households were in any position to trade-up in the real estate market.

There’s another problem with this median measure: it only looks at single-family housing, not all housing. The one bright spot in the housing market is not in single-family homes, but in multi-family units. By excluding the smaller multi-family homes, this automatically biases the median measure upward.

So in large measure, the only healthy segment of the single-family market is the one serving those with very high incomes. Even here, “health” is a relative thing: compared to the peak of the housing bubble years, sales of McMansions were lower in 2013 than in any year since 2001.

If anything, the growth of the median size of new houses is evidence of the continued and growing impact of income inequality. With growth in incomes occurring mostly among those with the highest incomes, it figures that to the extent there is demand for housing, it’s coming disproportionately from those in the highest income brackets who can afford larger homes, and who qualify for credit.

An accurate measure of the popularity of McMansions would look at the extent to which high-income households are buying large new houses. We don’t have a good annual public data series on wealth by household, but a number of private firms estimate the number of high-net-worth households that form the market for these very large single-family homes. The Spectrem Group has estimated the number of U.S. households with net financial worth of $5 million or more (exclusive of the value of their principal home). By their reckoning there are about 1.24 million such households in the U.S. The number fluctuates from year to year, chiefly due to changes in financial markets.

We can get a good contemporaneous gauge of the popularity of McMansions by dividing the number of new 4,000 plus square foot homes sold by the number of households with a net worth of $5 million or more: call it the McMansion/Multi-Millionaire ratio. (There’s no universally accepted definition of McMansion, but since the Census Bureau reports the number of newly completed single-family homes of 4,000 square feet or larger, most researchers take this as a proxy for these over-sized homes.)

The McMansion to Multi-Millionaire ratio started at about 12.5 in 2001 (the oldest year in the current Census home size series)—meaning that the market built 12 new 4,000 square foot-plus homes for every 1,000 households with a net worth of $5 million or more. The ratio fluctuated over the following few years, and was at 12.0 in 2006—the height of the housing bubble. The ratio declined sharply thereafter as housing and financial markets crashed.

Even though the number of high-net-worth households has been increasing briskly in recent years (it’s now at a new high), the rebound in McMansions has been tepid (still down 59 percent from the peak, as noted earlier). The result is that the McMansion/Multi-Millionaire ratio is still at 4.5–very near its lowest point. Relative to the number of high-net-worth households, we’re building only about a third as many McMansions as we did 5 or 10 years ago. These data suggest that even among the top one or two percent, there’s a much-reduced interest in super-large houses.
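The ratio itself is simple arithmetic. A minimal sketch (the McMansion count below is backed out from the reported 4.5 ratio and the Spectrem household estimate, so it is illustrative rather than an official Census figure):

```python
def mcmansion_ratio(mcmansions_sold: int, multimillionaire_households: int) -> float:
    """New 4,000+ sq ft homes sold per 1,000 households with $5M+ net worth."""
    return 1000 * mcmansions_sold / multimillionaire_households

# ~1.24 million households with $5M+ in net financial worth (Spectrem estimate);
# ~5,600 McMansions is an illustrative count implied by the reported ratio of 4.5.
print(round(mcmansion_ratio(5_600, 1_240_000), 1))  # 4.5
```

Plugging in the 2001 ratio of 12.5 against the same denominator shows just how far the top of the market has shrunk relative to the number of households who could afford it.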

There are a couple of key lessons here for thinking about the state of the U.S. housing market. Don’t be fooled by the misleading median, and don’t overlook the big rebound in multi-family housing.

Twenty-somethings are choosing cities. Really.

Over at 538, Ben Casselman offers up a provocative, contrarian article “Think Millennials prefer cities?  Think Again.” He claims that newly released census data show that, contrary to the “all the hipsters are moving to cities” meme, millennials–like previous generations–are actually migrating towards the suburbs.

This is a case where we think the usually reliable 538 gets it wrong.

Here’s the key problem:  Casselman’s data look only at the subset of migration between suburbs and cities within metropolitan areas–that is, he only counts people who move from the suburbs of a metro to the principal city of that metro (and vice versa). He ignores people who move to city centers from the city centers of other metropolitan areas, from non-metropolitan areas, and from abroad. So Casselman’s tabulation only captures whether people are moving from Scarsdale or Bethesda to Brooklyn (or vice versa).  A young adult moving to Brooklyn from the central city of another metro (like Washington DC or Portland), from a rural area, or from another country doesn’t count in this tabulation. As it turns out, this makes a big difference.

To get a more comprehensive picture of migration, we’ve pulled together data showing all the twenty-something migrants to principal cities and all the migrants to suburbs, and classified them by place of origin (where they lived in the previous year).  The top panel of our table shows the complete data on moves to cities and suburbs; the bottom panel presents an addenda showing only the data on city-suburb moves that 538 used.  (Like Casselman, we’ve excluded city-to-city and suburb-to-suburb moves within a metropolitan area, and non-metro to non-metro moves).

Movers to Principal Cities and to Suburbs, 2013-14

Destination of move   Origin of move                     Ages 20-24   Ages 25-29
To principal city     From own metro suburb                     436          302
                      From other metro suburb                   118          124
                      From other metro principal city           277          331
                      From non-metro area                       107           82
                      From abroad                               104          106
                      All moves to principal cities           1,042          945
To suburb             From own metro principal city             479          390
                      From other metro principal city           242          138
                      From other metro suburb                   136          193
                      From non-metro area                       109           74
                      From abroad                                56           85
                      All moves to suburbs                    1,022          880
Net migration, suburb to principal city                          20           65

Addenda:  City-Suburb/Suburb-City Moves Only (538 Analysis)
To principal city     From own metro suburb                     436          302
                      From other metro suburb                   118          124
                      Suburb-to-city moves                      554          426
To suburb             From own metro principal city             479          390
                      From other metro principal city           242          138
                      City-to-suburb moves                      721          528
Net migration, suburb to principal city                        -167         -102

Source:  Current Population Survey, 2014.  Table 16.  Metropolitan Mobility, by Sex, Age, Race and Hispanic Origin, Relationship to Householder, Educational Attainment, Marital Status, Nativity, Tenure, and Poverty Status:  2013 to 2014.  Numbers in Thousands.

These data show that there was actually a net inflow of about 85,000 20 to 29 year-olds into principal cities in 2014, in contrast to Casselman’s data showing a net outflow of more than one quarter million. The difference stems from the fact that young adults moving into a metropolitan area from some other metro, or a non-metro area, or from abroad, were much more likely to live in the principal city than young adults moving within a metropolitan area.
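Both headline numbers can be reproduced from the table above. A sketch (figures in thousands, ages 20-24 and 25-29 combined):

```python
# Moves to principal cities and to suburbs (thousands), ages 20-24 + 25-29,
# taken from the CPS table above.
to_city = {"own metro suburb": 436 + 302, "other metro suburb": 118 + 124,
           "other metro city": 277 + 331, "non-metro": 107 + 82,
           "abroad": 104 + 106}
to_suburb = {"own metro city": 479 + 390, "other metro city": 242 + 138,
             "other metro suburb": 136 + 193, "non-metro": 109 + 74,
             "abroad": 56 + 85}

# Counting all movers yields a net inflow to principal cities.
net_all = sum(to_city.values()) - sum(to_suburb.values())
print(net_all)  # 85 -> roughly 85,000 net in-migration to principal cities

# 538's subset counts only city<->suburb moves, which flips the sign.
net_538 = (to_city["own metro suburb"] + to_city["other metro suburb"]) - \
          (to_suburb["own metro city"] + to_suburb["other metro city"])
print(net_538)  # -269 -> an apparent net outflow of more than a quarter million
```

The sign flip comes entirely from which origins are counted, not from any difference in the underlying data.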

Let’s focus for a moment on 25 to 29 year-olds moving into principal cities. It turns out that more of them come from principal cities in other metropolitan areas (331,000) than move to the principal city from suburbs in the same metropolitan area (302,000). So inter-metropolitan moves are actually more important to this demographic shift than are within metro moves.  Also notice that principal city residents moving to a different metro are about two and a half times as likely to move to a principal city in that new metro as they are to move to a suburb in the new metro–331,000 residents of principal cities in other metros moved to principal cities in a new metro; only 138,000 residents of principal cities in other metros moved to suburbs in a new metro.  Among migrants from other metropolitan areas, only those who previously lived in suburbs were more likely to move to the suburb in a new metro (and this group was far less numerous). Movers from non-metro areas and from abroad were more likely to move to the principal city than to its suburbs.

The 538 result is skewed by the fact that principal city residents are much more likely to move, period, than are suburban residents.  Among 25 to 29 year olds living in principal cities in 2013, about 9.7% moved compared to only about 6.2% of 25 to 29 year-olds living in suburbs.  Fewer suburban residents move to cities simply because fewer suburban residents move anywhere.

By looking at only a subset of movers, 538 misses several important sources of migration of young adults to central cities.  This is important because central cities often serve as a kind of port-of-entry or “Ellis Island” in metropolitan areas.  New migrants to a region from other metropolitan areas, other states, and other countries seem disproportionately to settle, at least initially, in central city locations.  It’s also the case that better-educated workers are more likely to make longer moves, and to move between states and to different metropolitan areas.  As a result, city centers are disproportionately attracting well-educated young adults.  Our data show that between 2000 and 2012, the number of 25 to 34 year-olds increased twice as fast within 3 miles of the center of the central business district in the 51 largest metropolitan areas as it did outside that circle.

There’s an important technical limitation to using municipal boundaries of the largest city to separate metros into “city” and “suburb.”  Principal cities vary widely in how much of a metro area they cover.  Some, like Boston and Miami, are a small fraction of the urban area; other municipalities, like Jacksonville and San Antonio, encompass vast swaths of low-density development.  At City Observatory, we strongly prefer using radius-based measures for making metropolitan comparisons. Unfortunately, CPS migration data aren’t available at the finer geographic detail needed to perform this kind of analysis.

It must also be said that 538 has set up a bit of a straw man:  the point is not that all millennials want to live or are living in cities. The point is that preferences have demonstrably changed in favor of cities. The migration patterns of young adults today are very different from those we observed just a decade or two ago. Looking at aggregate population data–not just year-to-year moves–we noted that the probability that a 25 to 34 year-old lived in a close-in urban neighborhood, relative to all metro residents, quadrupled from 1990 to 2010.

In our view, Ben Casselman glosses over the really critical point about changing migration patterns:

Millennials are moving to the suburbs at a much lower rate than past generations did at the same age. In the mid-1990s, people ages 25 to 29 were twice as likely to move from the city to the suburbs as vice versa. Today, they’re only about a quarter more likely.

That’s a big change.  And where people move in their 20s is important because the probability of migration falls precipitously with age:  a 35 year-old is roughly half as likely to move as a 25 year-old, and that probability declines steadily with age. If principal cities are doing a better job of attracting people in their 20s, it has major ramifications for future city population and economic growth.  City population change is highly sensitive to relatively minor changes in the probability and duration of city residence of young adults:  even if they move to the suburbs as they age, the growing proportion and longer tenure of young adults in cities has a measurable and continuing impact on city demographics.

Young adults are highly mobile:  they’re voting with their feet for the kinds of metropolitan areas and neighborhoods they want to live in.  When you look at the entire sample of movers to cities and suburbs–and don’t arbitrarily narrow the analysis–the data show that young adults, especially the most well-educated, are increasingly choosing cities.

 

Has the Tide Turned?

Last month, City Observatory released a new report—Surging City Center Job Growth—chronicling a widespread rebound in city center jobs. For the first time in decades, job growth in city centers around the country has surpassed the rate of job growth in peripheral areas.

In an article called “Fool for the City,” Jacob Anbinder of The Week responded to recent media reports about the return of city centers, commenting that the issue may have been over-hyped in the media. You might think that a publication that bills itself as “All you need to know about everything that matters” might be a little more reticent about accusing others of hyperbole. Just the same, let’s take a minute to address the points raised in this article, and allay Mr. Anbinder’s fears.

As we described in our report, what’s remarkable about this trend is how it runs counter to the decades-long pattern of job decentralization. In analyzing data reaching back to the 1940s, we showed the steady ebbing of the relative economic importance of city centers. The message here isn’t a new era of urban triumphalism, so much as it is the end of a long period of unabated decentralization.


We were careful in our work to flag how the industrial composition of employment change through the business cycle shaped the observed patterns of job losses and gains. There’s no question that cities benefited from the strength of centralized industries, like professional services and finance, relative to the weakness of more decentralized industries like manufacturing, construction, and distribution. (In addition, contrary to the implication of the article that city center results were driven by government employment, our data excluded public administration employment.) But even after controlling for these industry variations, we showed that city centers had recorded a significant gain in their competitive position vis-a-vis suburbs.

The Week points out that for the entire nine-year period under consideration, many city centers were still below their 2001 level of employment. There is no question that the earlier 2002-07 period was a continuation of the historical trend, and that nearly all cities were then losing share of total metro employment. The key point in our study is that during the last four years for which we have data, the trend is quite different. (It is worth noting that 2002-07 was the height of the housing bubble and the peak of ex-urban development and job decentralization.) It’s hardly remarkable that most city centers didn’t grow fast enough in the four years coinciding with a very weak national economy to offset the relative decline they endured in the previous five.

Our report was quite clear that this pattern of city center revival isn’t universal. In the 2002-07 period, seven of 41 metropolitan areas outperformed their peripheries; in the 2007-11 period, 21 outperformed their peripheries. (21 of 41 is not an overwhelming majority; it is, however, a much bigger group than the 7 cities that saw this pattern in the previous time period.) As in politics, all economic geography is local: some city centers continue to follow the historic pattern of having growth that lags well behind their expanding suburban peripheries. We noted that job decentralization is still the order of the day in places like sprawling Houston and Kansas City.

It’s not surprising to find that the nascent urban comeback is happening faster in some places than in others: centralized employment did well in New York and San Francisco in the earlier 2002-07 period, when nearly all other city centers were lagging well-behind their peripheries in job growth. More analysis is needed to discern what sets of factors—industry mix, local policies, population movement—are at work in each metropolitan area.

There’s undoubtedly a lot to be learned from a closer, city-by-city and industry-by-industry examination of the data. This is the reason we published our report, and also why we’ve made our data available for others to download and analyze. Moreover, the underlying source of data, the Census Bureau’s Local Employment and Housing Dynamics series, is a powerful, yet under-used source of insight into the economic processes at work in our nation’s cities. We hope others will mine this data to generate an even richer picture of the changing geography of urban employment.

As we stressed in our report, four years of data drawn from a particularly turbulent time in our economic history is hardly the final word. We’re eager to see more recent data—the 2012 to 2014 data should be released by the Census Bureau later this year. When they are, we’ll be better able to judge whether the changes we’ve recorded in the past few years are a cross-current or a true turning of the tide.

Any Port in a Storm?

Over the past few weeks, there’s been a fair amount of media furor over the slowdown in container traffic handling on the West Coast as dockworkers and shipping companies negotiated the new terms of a labor deal.

You no doubt heard a fair amount of hyper-ventilation about the economic consequences of disruptions to this international supply line. Unsurprisingly, the longshoremen’s union took maximum advantage of its leverage over workflow to drive a hard bargain with the shippers. This is a kind of kabuki show that is repeated whenever these multi-year contracts are up for renewal. And, as almost always happens, the two parties have come to an agreement, and ports, especially the Ports of Los Angeles and Long Beach, are working quickly to move the backlogged traffic.

With the prospect of a new wider Panama Canal re-arranging the competitive environment for global shipping, many seacoast cities are giving new thought to how port traffic might influence their future growth prospects.  There’s little question that big “load-center” ports are hubs of commerce, where the economies of scale in shipping seem to be creating a winner-take-all situation.

But how big a deal is container traffic to the typical metropolitan economy? What if your city isn’t the big winner in the container traffic game?  That question is a very live one in Portland. Overshadowed by the furor generated by the coast-wide slowdown was an announcement earlier this month by Korean shipper Hanjin that it was terminating container service to the Port of Portland. Hanjin accounts for three-quarters of Portland’s container traffic.

In a new column in Oregon Business magazine, I examine the economic ramifications of Portland’s loss of dock-side container service.

For a city whose first name is “port,” the loss of container service seems like an economic body blow. We are constantly being told that Oregon has a “trade-dependent” economy. How will we survive without this iconic connection to the global marketplace?

The answer will surprise you: Just fine.

You can read the rest at OregonBusiness.Com

There’s little doubt that container service is a highly visible icon of any city’s connections to the global economy. It’s the sort of backdrop that lets television reporters stand in front of a camera and tell visually compelling stories.

But for most city economies, dockside container service has little to do with whether they succeed or fail in the global economy. The ability of cities to compete hinges not on whether they can cheaply move bulk goods, but on whether they can create world-class products. Particularly in high-cost countries like the United States, firms compete on product differentiation and performance, not transportation cost. This is true of the high-value products of advanced industries: everything from commercial jets to computer chips. This is even more true for services–software, motion pictures, financial services–for which physical movement of product is essentially irrelevant.

As we move toward an increasingly intangible, innovation-driven economy, the old metaphors we use to visualize the economy are becoming a less useful guide to thinking about how the world works.

What does it mean to be a “Smart City?”

The growing appreciation of the importance of cities, especially by leaders in business and science, is welcome and long overdue.  Many have embraced the Smart City banner.  But it seems each observer defines “city” in the image of their own profession.  CEOs of IT firms say that cities are “a system of systems” and visualize the city as an increasing and dense flow of information to be optimized.  Physicists have modeled cities and observed relationships between city scale and activity, treating city residents as atoms and describing cities as conforming to “laws.”

In part, these metaphors reflect reality.  In their function, cities do have information flows and physical systems.  However, a city is something more than its information flows and physical systems, and its citizens need to be viewed as something other than mindless atoms.

The prescriptions that flow from partial and incomplete metaphors for understanding cities can lead us in the wrong direction if we are not careful.  The painful lessons of seven decades of highway building in U.S. cities are a case in point.  Epitomized by the master builder Robert Moses, that era took an engineering view of cities, one in which we needed to optimize our cities to facilitate the flow of automobiles.  The massive investments in freeways (and the re-writing of laws and culture on the use of the right of way) in a narrow sense made cities safe for much greater and faster travel–but at the same time they produced massive sprawl, decentralization and longer journeys, and eviscerated many previously robust city neighborhoods.

If we’re really to understand and appreciate cities, especially smart cities, our focus has to be elsewhere:  it has to be on people.  Cities are about people, and particularly about the way they bring people together.  We are a social species, and cities serve to create the physical venues for interaction that generate innovation, art, culture, and economic activity.

What does it mean for a city to be smart?

The most fundamental way a city can be smart is to have highly skilled, well-educated residents.  We know that this matters decisively for city success.  We can explain fully 60% of the variation of economic performance across large U.S. metropolitan areas by knowing what fraction of the adult population has attained a four-year college degree.  There’s strong evidence that the positive effects of greater education are social: they spill over to all residents, regardless of their individual education.

Educational attainment is a powerful proxy measure of city economic success because having a smart population and workforce is essential to generating the new ideas that cause people and businesses to prosper.

So building a smart city isn’t really about using technology to optimize the efficiency of the city’s physical sub-systems.  There’s no evidence that the relative efficiency of water delivery, power supply, or transportation across cities has anywhere near as strong an effect on their success over time as does education.

It is in this process of creating new ideas that cities excel.  They are R&D facilities and incubators, and not just of new businesses, but of art, music, culture, fashion trends, and all manner of social activity.  In the process Jane Jacobs so compellingly described, by juxtaposing diverse people in close proximity, cities produce the serendipitous interactions that generate what she called new work.

We don’t have an exacting recipe for how this happens.  But we do know some of the elements that are essential.  They include density, diversity, design, discovery and democracy.

Density. The concentration of people in a particular place.  Cities, as Ed Glaeser puts it, are the absence of space between people.  The less space, the more people, and the greater the opportunities for interaction.  Cities are not formless blobs; what happens in the center–the nucleus–matters, because it is the place that provides key elements of identity and structure and connection for the remainder of the metropolitan area it anchors.

Diversity. The range of different types of people in a place.  We have abundant evidence that the diversity of the population–by age, race, national origin, political outlook, and other qualities–helps provide a fertile ground for combining and recombining ideas in novel ways.

Design.  We are becoming increasingly aware that how we populate and arrange the physical character of cities matters greatly.  The arrangement and aesthetic of buildings, public spaces, streetscapes and neighborhoods matters profoundly for whether people embrace cities or abandon them.  We have a growing appreciation for urban spaces that provide interesting variety and are oriented to walking and “hanging out.”

Discovery.  Cities are not machines; citizens are not atoms.  The city is an evolving organism that is at once host to, and constantly being reinvented by, its citizen inhabitants.  A part of the attraction of cities is their ability to inspire, incubate, and adapt to change.  Cities that work well stimulate the creativity of their inhabitants, and also present them all with new opportunities to learn, discover, and improve.

Democracy.  The “mayor as CEO” is a tantalizing analogy for both mayors and CEOs; CEOs are used to wielding unitary, executive authority over their organizations; many mayors wish they could do the same.  But cities are ultimately very decentralized, small “d” democratic entities.  Decision-making is highly devolved, and the opportunities for top-down implementation are typically limited.  Citizens have voice (through voting) and the opportunity to “exit” by moving, appropriately limiting unilateral edicts.  Cities also give rise to new ideas, and when they work well, city political systems are permeable to the changing needs and values of their citizens–this is when many important changes bubble up.

All of these attributes of cities are susceptible, at least in part, to analysis or description using the constructs of “information flows” or “systems of systems.”  They may be augmented and improved by better or more widespread information technology. But it would be a mistake to assume that any of them are capable of being fully captured in these terms, no matter how tempting or familiar the analogy.

Ultimately, when we talk about smart cities, we should keep firmly in mind that they are fundamentally about people; they are about smart people, and creating the opportunity for people to interact.  If we continuously validate our plans against this key observation, we can do much to make cities smarter, and help them address important national and global challenges.

Who’s Vulnerable to Retail Retrenchment?

This week comes news that Target is laying off 1,700 workers at its Minneapolis headquarters, looking to become leaner and more efficient. It’s just the latest move in a shifting retail landscape in the United States.

Target is not just downsizing its headquarters; it’s also shifting to smaller urban stores–Target Express. Other retailers like Walmart and Office Depot have also been developing smaller stores. The days of big boxes and power centers seem to be giving way to more urban-centered and smaller-footprint retailing, undermining the economics of larger-scale retailing. It’s estimated that there are over 1,200 dead or dying malls in the U.S. It appears that we’re way overbuilt for retail space. Finding productive uses for these disused spaces is now a major undertaking for communities around the nation.

Several factors seem to be driving the tectonic shifts in retailing. Part of the problem is that retail, like housing, was overbuilt during the bubble: commercial developers typically followed new housing development, and as the housing stock sprawled in the last decade, so too did the expansion of retail space.

Another important factor is the technological change in the form of growing e-commerce. More and more, we’re purchasing goods and services via the Internet and mobile devices. According to data compiled by Erik Brynjolfsson, e-commerce now accounts for about 30 percent of non-food, non-auto retailing, and is continuing to grow:

[Chart: FRED, e-commerce share of retail sales]

There’s a bit of irony here: big box stores only became economically feasible thanks to earlier technological advances, including universal product codes, computerized inventory management, real-time ordering, and global data networks. These same technologies now help enable smaller stores (tailoring inventory to localized demand) and empower consumers to order online at home and via pervasive mobile devices.

The shifting retail environment will have impacts on the transportation system as well. The latest transportation data show a decline in the number and length of shopping trips (which decreases the transport intensity of retailing), but this is at least partially offset by more travel by commercial delivery vehicles (like UPS and FedEx). It’s an open question as to how this will play out: will these shifts encourage (more) fleets of smaller transit trucks, or will increasing e-commerce retail sales and smaller urban stores mean larger trucks on urban roads? (Regardless, the D.O.T. believes e-commerce will significantly impact our road infrastructure by 2045, and that despite the hopes of Jeff Bezos, drones may not help solve that any time soon.)

To judge who’s most likely to be affected by these trends, we compiled some metropolitan level data on the amount of retail space per capita. The data come from Co-Star, a private firm that tracks retail space leasing throughout the nation. (They helpfully make their market reports available here). These data are for 2007 and we’ve computed retail space per capita in each market by dividing total square footage by each metropolitan area’s 2007 population.

The national average is about 46 square feet of retail space per capita, with most metropolitan areas having between 40 and 55 square feet per capita. There are a number of outliers, however.

Milwaukee/Madison has the highest amount of retail space per capita, and many southern, sprawled metros rank higher on this metric as well. These are the places most likely to struggle with a dwindling appetite for retail space, and the economic consequences that follow, be it in fewer retail jobs, large swathes of unused space, or transportation costs. At the other end of the spectrum, some metropolitan areas have far more space-efficient retailing: Portland has just 30 square feet of retail space per capita, fully one-third less than the national average.
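The per-capita measure is just total leased square footage divided by population. A sketch with hypothetical inputs (the square-footage total below is a made-up round number chosen to match the ~30 sq ft per capita figure cited for Portland; it is not actual Co-Star data):

```python
def retail_sqft_per_capita(total_retail_sqft: float, population: float) -> float:
    """Metro retail space (sq ft) per resident."""
    return total_retail_sqft / population

# Hypothetical: ~66 million sq ft of retail space over a metro population
# of 2.2 million yields the ~30 sq ft per capita figure cited for Portland,
# versus the ~46 sq ft national average.
print(retail_sqft_per_capita(66_000_000, 2_200_000))  # 30.0
```

The same division, run across Co-Star's market reports and Census population estimates, is how the metro rankings above were produced.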

By global standards, the U.S. has much more space devoted to retailing than anyone else; comparable estimates for other countries include 23 square feet per capita in the United Kingdom, 13 square feet per capita in Canada, and 6.5 square feet per capita in Australia. If the experience of these countries is any indication, it’s a good bet that there’s still lots of room for downsizing in the U.S. retail sector. However, despite these trends, Miami apparently isn’t concerned.

How much could US retail shrink? And where?

The first quarter of 2017 has marked a parade of announced store closures. The long-awaited axe has fallen on 68 more Macy’s stores around the country. J.C. Penney has announced it will close another 138 stores. Other major national retail chains, including The Limited, Gap, Walgreens, Aeropostale and Chico’s, have also announced similarly large closures.  These are just the latest moves in a shifting, mostly shrinking retail landscape in the United States.

One retailer, Target, is not just downsizing its store count; it’s shifting to smaller urban stores–Target Express. Other retailers like Walmart and Office Depot have also been developing smaller stores. The days of big boxes and power centers seem to be giving way to more urban-centered and smaller-footprint retailing, undermining the economics of larger-scale retailing. It’s estimated that there are over 1,200 dead or dying malls in the U.S. It appears that we’re way overbuilt for retail space. Finding productive uses for these disused spaces is now a major undertaking for communities around the nation.

Several factors seem to be driving the tectonic shifts in retailing. Part of the problem is that retail, like housing, was overbuilt during the bubble: commercial developers typically followed new housing development, and as the housing stock sprawled in the last decade, so too did the expansion of retail space.

Another important factor is technological change in the form of growing e-commerce. More and more, we’re purchasing goods and services via the Internet and mobile devices. Census Bureau data on retail sales show that e-commerce continues to increase its market share. Excluding restaurant sales and sales of vehicles and gasoline, e-commerce now accounts for about 12 percent of all retailing, a figure that has effectively doubled in the past six years.

There’s a bit of irony to the technological displacement at work here: big box stores only became economically feasible thanks to earlier technologies, like universal product codes, computerized inventory management, real-time ordering, and global data networks. These same technologies now help enable smaller stores (tailoring inventory to localized demand) and empower consumers to order online at home and via pervasive mobile devices.

The shifting retail environment will have impacts on the transportation system as well. The latest transportation data show a decline in the number and length of shopping trips (which decreases transport intensity of retailing), but this is at least partially offset by more travel by commercial delivery vehicles (like UPS and Fedex). It’s an open question as to how this will play out: will these shifts encourage (more) fleets of smaller transit trucks, or will increasing e-commerce retail sales and smaller urban stores mean larger trucks on urban roads? There’s some evidence that Internet delivery will mean less car travel, as the decline in shopping travel will more than offset the increased vehicle travel associated with deliveries. And delivery efficiency actually increases as volumes increase.

To judge who’s most likely to be affected by these trends, we compiled some metropolitan level data on the amount of retail space per capita. The data come from Co-Star, a private firm that tracks retail space leasing throughout the nation. (They helpfully make their market reports available here). These data are for 2007 and we’ve computed retail space per capita in each market by dividing total square footage by each metropolitan area’s 2007 population.

The national average is about 46 square feet of retail space per capita, with most metropolitan areas having between 40 and 55 square feet per capita. There are a number of outliers, however.
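The per-capita figure described above is a simple ratio of leasable square footage to population. A minimal sketch of the calculation (the square-footage and population figures below are made-up placeholders for illustration, not CoStar data):

```python
# Hypothetical illustration of the retail-space-per-capita calculation
# described above. Inputs are invented, not actual CoStar market data.
def retail_space_per_capita(total_sq_ft: float, population: float) -> float:
    """Square feet of leasable retail space per resident."""
    return total_sq_ft / population

# A metro with 92 million sq ft of retail space and 2.0 million residents
# works out to the 46 sq ft per capita national average cited above:
per_capita = retail_space_per_capita(92_000_000, 2_000_000)
print(round(per_capita))  # 46
```

By this yardstick, Portland’s 30 square feet per capita and Australia’s 6.5 are directly comparable to the 46 square foot U.S. average.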

Milwaukee/Madison has the highest amount of retail space per capita, and many southern, sprawled metros rank higher on this metric as well. These are the places most likely to struggle with a dwindling appetite for retail space, and the economic consequences that follow, be it in fewer retail jobs, large swathes of unused space, or transportation costs. At the other end of the spectrum, some metropolitan areas have far more space-efficient retailing: Portland has just 30 square feet of retail space per capita, fully one-third less than the national average.

By global standards, the U.S. has much more space devoted to retailing than anyone else: comparable estimates for other countries are 23 square feet per capita in the United Kingdom, 13 square feet per capita in Canada, and 6.5 square feet per capita in Australia. If the experience of these countries is any indication, it’s a good bet that there’s still lots of room for downsizing in the U.S. retail sector. However, despite these trends, Miami apparently isn’t concerned.

What does it mean to be a “Smart City?”

Cities are organisms, not machines; so a smart city has to learn, not be engineered

The growing appreciation of the importance of cities, especially by leaders in business and science, is welcome and long overdue. Many have embraced the Smart City banner. But it seems each observer defines “city” in the image of their own profession. CEOs of IT firms say that cities are “a system of systems” and visualize the city as an increasing and dense flow of information to be optimized. Physicists have modeled cities and observed relationships between city scale and activity, treating city residents as atoms and describing cities as conforming to “laws.”

In part, these metaphors reflect reality. In their function, cities do have information flows and physical systems. However, a city is something more than its information flows and physical systems, and its citizens need to be viewed as something other than mindless atoms.

The prescriptions that flow from partial and incomplete metaphors for understanding cities can lead us in the wrong direction if we are not careful. The painful lessons of seven decades of highway building in U.S. cities are a case in point. Epitomized by the master builder Robert Moses, this approach took an engineering view of cities, one in which we needed to optimize our cities to facilitate the flow of automobiles. The massive investments in freeways (and the re-writing of laws and culture on the use of the right of way) in a narrow sense made cities safe for much greater and faster travel–but at the same time they produced massive sprawl, decentralization and longer journeys, and eviscerated many previously robust city neighborhoods.

If we’re really to understand and appreciate cities, especially smart cities, our focus has to be elsewhere:  it has to be on people. We take the Jane Jacobs view: Cities are about people, and particularly about the way they bring people together.  We are a social species, and cities serve to create the physical venues for interaction that generate innovation, art, culture, and economic activity.

What does it mean for a city to be smart?

The most fundamental way a city can be smart is to have highly skilled, well-educated residents. We know that this matters decisively for city success. We can explain fully 60% of the variation in economic performance across large U.S. metropolitan areas by knowing what fraction of the adult population has attained a four-year college degree. There’s strong evidence that the positive effects of greater education are social: they spill over to all residents, regardless of their individual education.

Educational attainment is a powerful proxy measure of city economic success because having a smart population and workforce is essential to generating the new ideas that cause people and businesses to prosper.

So building a smart city isn’t really about using technology to optimize the efficiency of the city’s physical sub-systems.  There’s no evidence that the relative efficiency of water delivery, power supply, or transportation across cities has anywhere near as strong an effect on their success over time as does education.

It is in this process of creating new ideas that cities excel. They are R&D facilities and incubators, and not just of new businesses, but of art, music, culture, fashion trends, and all manner of social activity. In the process Jane Jacobs so compellingly described, by juxtaposing diverse people in close proximity, cities produce the serendipitous interactions that generate what she called new work.

We don’t have an exacting recipe for how this happens.  But we do know some of the elements that are essential.  They include density, diversity, design, discovery and democracy.

Density. The concentration of people in a particular place.  Cities, as Ed Glaeser puts it, are the absence of space between people.  The less space, the more people, and the greater the opportunities for interaction.  Cities are not formless blobs; what happens in the center–the nucleus–matters, because it is the place that provides key elements of identity and structure and connection for the remainder of the metropolitan area it anchors.

Diversity. The range of different types of people in a place. We have abundant evidence that the diversity of the population–by age, race, national origin, political outlook, and other qualities–helps provide a fertile ground for combining and recombining ideas in novel ways.

Design.  We are becoming increasingly aware that how we populate and arrange the physical character of cities matters greatly.  The arrangement and aesthetic of buildings, public spaces, streetscapes and neighborhoods matters profoundly for whether people embrace cities or abandon them.  We have a growing appreciation for urban spaces that provide interesting variety and are oriented to walking and “hanging out.”

Discovery. Cities are not machines; citizens are not atoms. The city is an evolving organism that is at once host to, and constantly being reinvented by, its citizen inhabitants. A part of the attraction of cities is their ability to inspire, incubate, and adapt to change. Cities that work well stimulate the creativity of their inhabitants, and also present them all with new opportunities to learn, discover, and improve.

Democracy.  The “mayor as CEO” is a tantalizing analogy for both mayors and CEOs; CEOs are used to wielding unitary, executive authority over their organizations; many mayors wish they could do the same.  But cities are ultimately very decentralized, small “d” democratic entities.  Decision-making is highly devolved, and the opportunities for top-down implementation are typically limited.  Citizens have voice (through voting) and the opportunity to “exit” by moving, appropriately limiting unilateral edicts.  Cities also give rise to new ideas, and when they work well, city political systems are permeable to the changing needs and values of their citizens– this is when many important changes bubble up.

All of these attributes of cities are susceptible, at least in part to analysis or description using the constructs of “information flows” or “systems of systems.”  They may be augmented and improved by better or more widespread information technology. But it would be a mistake to assume that any of them are capable of being fully captured in these terms, no matter how tempting or familiar the analogy.

Ultimately, when we talk about smart cities, we should keep firmly in mind that they are fundamentally about people; they are about smart people, and creating the opportunity for people to interact.  If we continuously validate our plans against this key observation, we can do much to make cities smarter, and help them address important national and global challenges.

“Smart Cities” have to be about much more than technology

A framework for thinking about smart cities

Cities are organisms, not machines

The growing appreciation of the importance of cities, especially by leaders in business and science, is welcome and long overdue. Many have embraced the Smart City banner. But it seems each observer defines “city” in the image of their own profession. CEOs of IT firms say that cities are “a system of systems” and visualize the city as an increasing and dense flow of information to be optimized. Physicists have modeled cities and observed relationships between city scale and activity, treating city residents as atoms and describing cities as conforming to “laws.”

In part, these metaphors reflect reality. In their function, cities do have information flows and physical systems. However, a city is something more than its information flows and physical systems, and its citizens need to be viewed as something other than mindless atoms.

The prescriptions that flow from partial and incomplete metaphors for understanding cities can lead us in the wrong direction if we are not careful. The painful lessons of seven decades of highway building in U.S. cities are a case in point. Epitomized by the master builder Robert Moses, this approach took an engineering view of cities, one in which we needed to optimize our cities to facilitate the flow of automobiles. The massive investments in freeways (and the re-writing of laws and culture on the use of the right of way) in a narrow sense made cities safe for much greater and faster travel–but at the same time they produced massive sprawl, decentralization and longer journeys, and eviscerated many previously robust city neighborhoods.

If we’re really to understand and appreciate cities, especially smart cities, our focus has to be elsewhere:  it has to be on people. We take the Jane Jacobs view: Cities are about people, and particularly about the way they bring people together.  We are a social species, and cities serve to create the physical venues for interaction that generate innovation, art, culture, and economic activity.

Technology shouldn’t be just about optimizing the status quo

So building a smart city isn’t really about using technology to optimize the efficiency of the city’s physical sub-systems.  There’s no evidence that the relative efficiency of water delivery, power supply, or transportation across cities has anywhere near as strong an effect on their success over time as does education.

The big gains from technology come not from marginal improvements to existing organizational arrangements, but from the ability to create entirely new arrangements that generate new value and open opportunities to do things in entirely different and better ways. When information technology was first introduced into the office, it was envisaged primarily as a way to “automate the typing pool”–improving the efficiency of a small army of women who did all the typing. What it turned out to be was a way to dramatically reorganize corporate and managerial activity, and it led to successive generations of entrepreneurship that have transformed economic activity and cities.

We can be blinded by big data

Much of the smart city discussion is highly technocratic: if we just had perfect, detailed, real-time information on, say, traffic demand, we could optimize the function of our existing systems. The trouble with this is that in reality, the data we have is always only partial, and importantly, it has a strong status quo bias. Take transportation data, for example: it reveals a pattern of behavior that has emerged in response to the current pattern of land use and highway infrastructure.

We have copious data about automobile travel, and that avalanche of data effectively dominates thinking about transportation. We don’t measure whole categories of activity, like walking and cycling, and so they are invisible in policy discussions. More importantly, a vast treasure trove of data about existing travel patterns doesn’t tell us anything about what kind of places we might aspire to build.

This isn’t simply a matter of somehow instrumenting bike riders and pedestrians with GPS and communication devices so they are as tech-enabled as vehicles. An exacting count of existing patterns of activity will only further enshrine a status quo where cars are dominant. For example, a perfectly instrumented count of pedestrians, bicycles, and cars in Houston would show—correctly—little to no bike or pedestrian activity. And no amount of calculation of vehicle flows will reveal whether a city is providing a high quality of life for its residents, much less meeting their desires for the kinds of places they really want to live in.

Unintended consequences

As our experience with the private automobile shows, the advent of a new technology can have powerful unintended consequences. As with previous advances in transportation technology, the car generated a vast decentralization of population and economic activity–and this new pattern of suburban sprawl made us vastly more dependent on cars for transportation and has been a principal contributor to air pollution and global warming. The consequences for cities have been devastating. For example, as Nathan Baum-Snow has illustrated, each additional radial freeway built to facilitate car travel reduced a city’s population by 18 percent.

Prices matter

While technology is important, new technologies are deployed and paid for in specific ways that materially shape their impacts. A key reason for the dominance of the automobile in US cities is a series of public policy decisions that have subsidized the car and insulated car users from the economic, social and environmental costs that they create. We’ve provided very extensive freeway systems, and don’t charge users directly for their use. Our failure to price peak-hour road use is the direct cause of recurring traffic congestion in US cities. We mandate that new housing and businesses build parking as a condition of development. Cars pay little or nothing toward offsetting the damage they do to the atmosphere.

If we project the existing system of financing and pricing forward with new technologies, we’re likely to only worsen many of the problems we face. Cheap autonomous vehicles could further flood the nation’s already crowded transportation infrastructure. But technological inflection points are good opportunities to revisit financial and institutional arrangements. It’s worth recalling that the gas tax was invented a little over a century ago as a way to pay for roads: in the horse-and-buggy era, we didn’t pay for roads with a tax on hay. The advent of robust communication and geolocation systems for cars, coupled with growing consumer familiarity with per-trip and per-mile pricing, illustrates the ways in which we could change how we pay for roads.

Ultimately, when we talk about smart cities, we should keep firmly in mind that they are fundamentally about people; they are about smart people, and creating the opportunity for people to interact.  If we continuously validate our plans against this key observation, we can do much to make cities smarter, and help them address important national and global challenges.

The Perils of Conflating Gentrification and Displacement: A Longer and Wonkier Critique of Governing’s Gentrification Issue

It’s telling that Governing calls gentrification the “g-word”—it’s become almost impossible to talk about neighborhood revitalization without objections being raised that almost any change amounts to gentrification. While we applaud the attempt to inject some rigor and precision into a debate that has too often been fueled by emotion and anecdote, Governing’s analysis serves only to muddy the waters of this contentious issue.

The Governing team explains that there is no agreed-upon definition of gentrification, and then goes on to choose a definition and use it to measure the number of neighborhoods that have, and haven’t, gentrified. The report notes that gentrification is not the same thing as displacement, but then repeatedly describes the harm of gentrification as being the displacement of the existing population.

Is Gentrification About Displacement? Or isn’t it?

The underlying problem confronting Governing’s analysis is the confusion of the terms “gentrification” and “displacement”. The Governing team begins by explaining that their definition of gentrification has nothing to do with displacement, but they then go on to detail how “gentrification signifies displacement of the poor, mostly people of color.”

The reason policy analysts and public officials are concerned about gentrification, to the extent it happens, is because it holds the potential to displace the poor from their longtime neighborhoods. As Governing acknowledges, the original definition by British sociologist Ruth Glass was that the middle class “invade” a neighborhood “until all or most of the working class occupiers are displaced,” and the social character of the neighborhood is changed.

Our own work has shown that over four decades, relatively few high-poverty neighborhoods have seen their poverty rates decline to below the national average. It’s also the case that far more of the long-suffering poor move out of high-poverty neighborhoods that stay poor than move away from high-poverty neighborhoods that see a significant reduction in poverty.

A striking omission from the Governing article is any more than a passing mention of the robust academic literature on the extent of displacement in gentrifying neighborhoods. Columbia University’s Lance Freeman reports that outmigration rates for low-skilled black residents of gentrifying neighborhoods are lower than in otherwise similar, non-gentrifying neighborhoods. The University of Colorado’s Terra McKinnish and Kirk White conclude that the demographic flows associated with the gentrification of urban neighborhoods are not consistent with displacement or harm to minority households. New York University’s Ingrid Gould Ellen and Katherine O’Regan write “. . . original residents are much less harmed than is typically assumed. They do not appear to be displaced in the course of change, they experience modest gains in income during the process, and they are more satisfied with their neighborhoods in the wake of the change.”

It’s also discouraging that this entire discussion of poor neighborhoods isn’t placed in a broader context of income segregation of the U.S. population. Income inequality has achieved a new level of visibility in public discussions in the past few years, thanks to the work of Thomas Piketty and others. A number of scholars—Brown University’s John Logan, Rutgers’ Paul Jargowsky, the University of Missouri’s Todd Swanstrom, and others—have carefully traced out how income inequality has played out in the form of greater spatial separation by income in the nation’s metropolitan areas. The latest research from Stanford’s Sean Reardon and his colleagues shows that income segregation is increasing, driven by the increasing secession of the rich from neighborhoods of lower- and middle-income households. The focus on gentrification—the very limited and small-scale movement of some higher-income and better-educated households into lower-income communities—completely misses the fact that income segregation is being driven by the decisions of higher-income families to increasingly isolate themselves in higher-income enclaves, often in exclusive suburbs and established high-income areas.

One of the ironies of the fear of gentrification is that it is often used as an excuse to steer limited dollars for assisted housing predominantly or exclusively into low-income neighborhoods, a practice the University of Minnesota’s Myron Orfield has shown reinforces historical patterns of racial and economic segregation.

Levels vs. Change

In quantitative analysis, we can measure the level of something (whether incomes are high or low, for example). We can also measure the “change” in something—whether incomes are increasing or decreasing.

The limitation of the Governing definition of gentrification is that it is all about change, and it largely ignores levels. Even if education levels or incomes increase in a poor neighborhood, it still may be much less well-educated and lower-income than the average neighborhood in the metropolitan area of which it is a part. This absence of clearly defined thresholds is a perennial omission of gentrification discussions: if one wealthier, whiter, better-educated person moves into a neighborhood, does that constitute gentrification?

As the University of Pennsylvania’s Mark Stern and Susan Seifert note:

Clearly, there is no objective measure of when neighborhood improvement—or, in Jane Jacobs’ striking phrase, ‘unslumming’—becomes gentrification. But if we see neighborhood revitalization as desirable, we cannot afford to label all population change as gentrification. (2007)

There’s something odd about a measurement that purports to examine the extent of “gentrification,” but excludes from its analysis all city neighborhoods with income levels over 40 percent of the metropolitan average. In San Francisco, a city with 196 census tracts, Governing concluded that only 16 were “eligible” to gentrify, and that only 3 tracts did gentrify. Superficially, this creates the impression that gentrification affects only 3 neighborhoods in San Francisco, as opposed to say, 84 neighborhoods in Philadelphia, and 39 neighborhoods in Baltimore. But is it meaningful to conclude that gentrification, income disparities, and displacement are a tenth as widespread in San Francisco as these other cities, simply because San Francisco already had most of its neighborhoods dominated by “the gentry?” Should we put any stock in a measure that says gentrification is more prevalent in Detroit (7 neighborhoods), Cleveland (10) and Fresno (5), than it is in San Francisco (3 neighborhoods)?

The levels of income and education are still very low, and the levels of poverty still very high, in many of the neighborhoods that Governing says have “gentrified.” Can it be said that a neighborhood has gentrified if it has, by comparison to the rest of its metropolitan area, a higher fraction of low-income households than the rest of the metropolitan area? Is this really a helpful way to describe what’s happening in metropolitan areas?

No Apparent Displacement in “Gentrifying” Tracts

The identifiable harm of gentrification is displacement: if Governing has identified tracts that are gentrifying, and if gentrification is a problem, then we should be able to find evidence of displacement. Or, put another way, if neighborhoods are gentrifying and displacement isn’t happening, is it a serious problem?

Let’s compare what happened to gentrifying and non-gentrifying tracts, according to Governing’s definition. Of a total of 4,750 central city census tracts “eligible” to gentrify, 948 gentrified, but 3,802 did not. The tracts that did not gentrify lost 2.4 percent of their population in aggregate. The tracts that did gentrify actually saw their population increase by 6.7 percent. (This is consistent with the “up or out” pattern we identified in high-poverty tracts nationally: either poverty rates decline and population increases, or high poverty rates persist and population declines).

One recurring theme in overly simplistic gentrification analyses is housing as a zero sum game. If one whiter, wealthier, or better-educated person moves into a neighborhood, that must necessarily mean that one poorer, less-educated person of color must move out. Both Governing’s report and our analysis of high poverty neighborhoods shows that gentrifying or rebounding neighborhoods are actually seeing population increases.

It’s also the case that more poor people live in Governing’s “gentrified” tracts today than in 2000: the poverty rate in gentrified tracts declined by just 0.7 percentage points, while the total population increased by 6.5 percent. Assuming the poverty rate in these tracts exceeded 13 percent in 2000, the population living below the poverty line in these tracts had to have actually increased between 2000 and 2013; hardly evidence, on its face, of widespread displacement.
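The arithmetic behind that claim can be checked directly. A quick back-of-the-envelope sketch (the 100,000 base population is an arbitrary illustration; the 13 percent starting rate is the assumption stated above):

```python
# Back-of-the-envelope check: if the 2000 poverty rate exceeded roughly
# 13 percent, a 0.7-percentage-point decline in the rate combined with a
# 6.5 percent population gain still implies MORE poor residents in 2013.
pop_2000 = 100_000               # arbitrary illustrative base population
rate_2000 = 0.13                 # assumed 2000 poverty rate (13 percent)
rate_2013 = rate_2000 - 0.007    # rate falls 0.7 percentage points
pop_2013 = pop_2000 * 1.065      # total population grows 6.5 percent

poor_2000 = pop_2000 * rate_2000   # 13,000 poor residents in 2000
poor_2013 = pop_2013 * rate_2013   # ~13,100 poor residents in 2013
print(poor_2013 > poor_2000)       # True: the count of poor residents rose
```

The breakeven starting rate is about 11.5 percent: any tract group poorer than that in 2000 ends up with more poor residents under these rate and population changes.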

Another recurring theme in this kind of analysis is the implication that if a neighborhood doesn’t gentrify, it somehow stays the same. But our analysis of poor neighborhoods showed that places of high poverty aren’t stable; if these neighborhoods don’t see a reduction in poverty, people leave: on average, high-poverty neighborhoods that didn’t rebound lost 40 percent of their population over four decades. The same pattern holds for Governing’s “non-gentrifying” neighborhoods—they lost 2.4 percent of their population. This contrasts with the gentrifying places, which were adding population.

Understanding the Nuance of Neighborhood Change

Rather than just a binary classification of neighborhoods as either “gentrified” or “not gentrified,” it’s worth looking at the actual numbers of people involved, poor and non-poor, to get a sense of what’s really happening and what it means.

At City Observatory, we frequently work with tract-level data, and closely follow developments in Portland, where we’re based. We took a close look at Governing’s data for Portland, in part because they concluded that 58% of “eligible” census tracts gentrified in the past decade.

Here are their headline findings for Portland: According to Governing, 62 of 143 census tracts in the City of Portland were “eligible” to gentrify. Of these, 36 gentrified, and 26 did not, according to their calculations.

We gathered the data on population and poverty for all the City of Portland census tracts for 2000 and 2009-13, and categorized them according to the Governing methodology (we noted the classifications as coded on Governing’s map of Portland tracts). We have one more census tract in the City of Portland (in the not eligible category). We ignore this minor difference for the purpose of our analysis. Here’s what the data show about the gentrifying neighborhoods in Portland in the aggregate:

The total population in “gentrifying” neighborhoods actually increased from about 152,000 to about 165,000. This is consistent with our observation that neighborhood change is not a zero sum game. Neighborhoods can gain new residents without necessarily losing existing ones.

The number of poor persons living in “gentrifying” neighborhoods increased. In 2000, there were 25,037 persons in poverty in these neighborhoods; in 2013 there were 34,499 persons living in poverty, about 9,500 more. Even allowing for the increase in poverty rates in the nation and the region over the decade, it’s hard to argue that there was widespread displacement of the poor from these “gentrifying” neighborhoods if they now have 9,500 more poor residents.

As a result, the poverty rate in “gentrifying” neighborhoods in Portland increased between 2000 and 2013 in the aggregate. In 2000 the poverty rate was 16 percent; in 2013, the poverty rate had risen to 20 percent. Even after “gentrifying,” these neighborhoods had a poverty rate that was higher than the regional average of about 13.5 percent.

Meanwhile, the number of non-poor residents also increased by a net of 3,800, from about 127,000 to about 130,000. The “gentrifying” neighborhoods gained about two-and-a-half times more net additional poor residents than net additional non-poor residents. Spread over 36 census tracts and about 11 years (2000 through 2011, the mid-point of the 2009-13 five-year census period), this works out to a net increase of about 10 non-poor persons per “gentrifying” census tract per year—hardly a sweeping change in typical neighborhood demographics.
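Both figures in the paragraph above follow from the table below; reproducing the arithmetic:

```python
# Reproducing the "two-and-a-half times" and "10 non-poor persons per
# tract per year" arithmetic from the Portland figures in the table below.
net_new_poor = 9_462      # net gain in poor residents, gentrifying tracts
net_new_nonpoor = 3_806   # net gain in non-poor residents
tracts = 36               # Governing's "gentrifying" Portland tracts
years = 11                # 2000 through 2011 (midpoint of the 2009-13 ACS)

ratio = net_new_poor / net_new_nonpoor
per_tract_per_year = net_new_nonpoor / tracts / years
print(round(ratio, 1))               # 2.5
print(round(per_tract_per_year))     # 10
```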

Poverty, Population, City of Portland, 2000 and 2009-13

Organized by Governing Magazine Gentrification Typology
Governing Category                 "Gentrified"  "Not Gentrified"  "Not Eligible"  City Total
Tracts                                       36              26              81         143
Population 2013                         165,436         120,742         317,422     603,600
Calculated Persons in Poverty 2013       34,499          27,948          44,845     107,292
Average Tract Poverty Rate 2013           20.0%           22.7%           13.8%       17.0%
Population 2000                         152,168         105,583         280,454     538,205
Calculated Persons in Poverty 2000       25,037          15,000          30,242      70,279
Average Tract Poverty Rate 2000           16.8%           14.0%           10.9%       12.9%
Growth in Poverty, 2000-2013              37.8%           86.3%           48.3%       52.7%
Non-Poor 2000                           127,131          90,583         250,212     467,926
Non-Poor 2013                           130,937          92,794         272,577     496,308
Percent Change                             3.0%            2.4%            8.9%        6.1%
Change in Non-Poor                        3,806           2,211          22,365      28,382
Growth in Population                     13,268          15,159          36,968      65,395
  Poor                                    9,462          12,948          14,603      37,013
  Non-Poor                                3,806           2,211          22,365      28,382

This is not to say that all of these neighborhoods experienced this same pattern. By our count, 22 of Governing’s 36 gentrifying Portland census tracts experienced increased numbers of persons living in poverty, and 14 saw decreases in their poverty population. Of these 14, nine had decreases of less than five percent since 2000, four had decreases of between five and ten percent, and only one census tract saw a decrease of more than ten percent in its population living in poverty. Arguably, these tracts may be experiencing a measure of displacement. But for reference, it’s worth noting that between 1970 and 2010, the typical urban high-poverty tract that stayed high-poverty lost about 40 percent of its population. And 31 of the 36 “gentrifying” tracts still had poverty rates above the regional average “post-gentrification.”

Defining the Undefinable

After musing about whether gentrification is synonymous with displacement, Governing concludes that we should treat them as two different things, and cites the Centers for Disease Control as agreeing with this view. The authors quote the CDC as saying: “gentrification is merely the transformation of neighborhoods from low value to high value.” The CDC definition, says Governing, has nothing to do with displacement.

It’s worth looking at the CDC’s work here. It turns out that the Centers for Disease Control hasn’t actually done its own research on gentrification. What you’ll find at the CDC website is one wiki-like page of secondary citations, compiled by an unidentified author: http://www.cdc.gov/healthyplaces/healthtopics/gentrification.htm. The CDC is hardly regarded as an expert or arbiter on the subject: Google Scholar reports a total of four citations to this CDC webpage.

It’s important to note that the CDC website—like the rest of the literature on gentrification—clearly flags the harm of gentrification as displacement of the existing population. They use the term “transformation” in their definition. If you read on in the sentences immediately following the definition cited by Governing, this is clear: “This change has the potential to cause displacement of long-time residents and businesses. Displacement happens when long-time or original neighborhood residents move from a gentrified area because of higher rents, mortgages, and property taxes.”

There’s one more wrinkle here. Notice that the CDC definition refers to the transformation of neighborhoods from “low value to high value.” The CDC definition, unlike Governing’s, is about levels, and not just changes. Governing’s definition is not that formerly poor neighborhoods become “high value” neighborhoods, but that they become “higher” value neighborhoods—that their property values increase faster than for the region as a whole. It is entirely possible for a low-value neighborhood to have a higher percentage increase in prices, and still be a low-value neighborhood.

Is there some reason we might want to be cautious about leaning on housing price data from the past decade or so?

A linchpin of the Governing analysis is its reliance on a decade’s worth of housing price data as reported by homeowners to the Census Bureau. They compare the reported value of owner-occupied housing in the 2000 Census, with data from the 2009-13 editions of the American Community Survey, and use this to identify lower income neighborhoods that have experienced higher than average rates of home price appreciation.

As we all recall, that decade represented the period of the biggest volatility in home prices and housing markets in at least eight decades. Home prices were inflated by a giant bubble through the mid-part of the decade, and collapsed in a downturn that wiped out trillions of dollars of household equity, produced millions of foreclosures, and resulted in 17 million more households renting their homes.

Another important development during the past decade was the big increase in gas prices in 2007-08, which persisted until just the past few months. Arguably, higher gas prices had a big impact on the housing market, depressing the price of suburban and exurban homes that were sold to what the real estate community called the “drive-til-you-qualify” crowd.

It’s obvious that this was a decade where a range of market forces were buffeting housing prices. It’s a stretch to assume that gentrification was the sole cause of higher prices in some previously poor neighborhoods.

There’s another, deeper technical issue here: Governing’s estimates are based on median home value data that are subject to bias from composition effects. (For a discussion of the volatility and noise that median measures create for housing price inflation estimates, see Jordan Rappaport, “A Guide to Aggregate House Price Measures,” Federal Reserve Bank of Kansas City, 2007.)  The Census Bureau asks for estimates of home values only for owner-occupied homes—not rentals. As long as the number and mix of these homes doesn’t change from one census to another, this isn’t a problem for using the census data to estimate average prices. But the median home value is subject to composition effects: if a different number and mix of homes is in the sample in the base year and the final year, the quality of the estimate is compromised. If many formerly owner-occupied low-value homes are foreclosed upon and converted to rentals, they are no longer included in the median. Because foreclosure was more common among low-value houses, this effect drives up the reported median for the remaining higher-value houses. Foreclosure problems are far more common in lower-income neighborhoods than in middle- and upper-income neighborhoods. It’s also the case that if new market-rate housing gets built in a neighborhood, it is usually at a higher price point than existing housing; this too tends to cause measured median prices to rise, especially in neighborhoods with lots of older, smaller, lower-value homes.
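The composition effect described above is easy to demonstrate. The sketch below uses entirely made-up home values, not census data: if the cheapest fifth of owner-occupied homes drop out of the sample—say, through foreclosure and conversion to rentals—the measured median rises even though no individual home changed price.

```python
import random
import statistics

random.seed(0)

# Hypothetical tract: 500 owner-occupied home values (thousands of dollars).
values_2000 = sorted(random.gauss(200, 60) for _ in range(500))

# Suppose prices are frozen, but the cheapest 20 percent of homes are
# foreclosed on and converted to rentals, dropping out of the
# owner-occupied sample that the Census median is computed from.
values_2013 = values_2000[100:]

print(statistics.median(values_2000))
print(statistics.median(values_2013))  # higher, though no price changed
```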

What do we do?

As the Governing team well understands, the metropolitan U.S. is dramatically segregated by income—and income segregation is increasing. An abundance of social science literature shows that concentrated poverty magnifies all of the negative effects of growing up poor (Patrick Sharkey, Jargowsky & Swanstrom). Newly released research on intergenerational mobility shows the devastating effects of concentrated poverty, and that even poor kids who grow up in integrated places have much greater opportunities than their counterparts in neighborhoods of more concentrated poverty (Chetty, Massey and Rothwell).

We live in a nation increasingly segregated by income. Even as racial segregation has waned, income segregation has increased. If we’re serious in our rhetoric about equality of opportunity, we have to do something to tackle the growing spatial class segregation we see in cities.

As Daniel Kay Hertz has observed, critics of gentrification need to say how they expect to achieve a more integrated nation and more integrated cities if they are somehow opposed to some higher-income people moving into what have become lower-income neighborhoods. “The kind of cognitive dissonance that allows someone to decry segregation while they wish to ‘reverse’ the process of integration makes it impossible to articulate a real vision for what a just city might look like. Those who would declare themselves firmly anti-gentrification need to grapple with whether they’re comfortable defending a racial geography born of discrimination and violence.”

Those who raise “gentrification” as an impending threat to American cities owe us a coherent vision of how we can create more just and equitable neighborhoods. Lamenting—and exaggerating—gentrification generates plenty of heat, but precious little light on how cities ought to respond to the twin challenges of income segregation and neighborhood change.

Florida’s Biotech Bet

For more than a decade, one of the hottest trends in economic development has been pursuing biotechnology. Cities and states around the nation have made considerable investments in biotech research, ranging from California’s voter-approved $3 billion research program, to smaller efforts in cities around the country, including Indianapolis, St. Louis, and Phoenix.

One of the states that made the biggest bets on biotech was Florida, which in 2003 committed state funds to lure the Scripps Research Institute to build a new campus in Palm Beach County. The Scripps deal served as a template for subsidies to other life sciences research institutions opening similar research labs in other Florida cities. The total cost of the program is estimated to reach more than $800 million.

In a new article, Reuters has questioned whether Florida has gotten its money’s worth for the investments it has made in biotechnology. The biotech investments were originally sold based on the promise that they would lead to a flourishing new industry employing more than 44,000 people. But a decade later, there’s little evidence of progress.

We’ve long followed the biotechnology industry. In 2002, my colleague Heike Mayer and I undertook an extensive study of the clustering of the US biotech industry, published by the Brookings Institution—Signs of Life—which showed that the industry’s economic impact was tightly concentrated in just a few leading centers around the nation. While life sciences research was becoming slightly more widespread as more cities competed for National Institutes of Health (NIH) funding, all of the measures of commercialization—new firm startups, venture capital investment, and privately funded research and development partnerships—were becoming more concentrated in a few leading cities. Our analysis showed that three biotech leaders (Boston, San Francisco, and San Diego) had decisive competitive advantages in starting and growing new biotech firms that other cities would find difficult, if not impossible, to overcome.

The succeeding decade has confirmed our original analysis. The three leading centers are even more dominant today than they were a decade ago. In 2000-01, Boston, San Francisco, and San Diego accounted for about 54 percent of venture capital invested in biotechnology. In 2010-12, the three metros accounted for 60 percent of biotech venture capital. (Data on venture capital flows come from the PriceWaterhouseCoopers Moneytree survey.)

There’s probably no better indicator of the growth of biotechnology commercialization than the flow of venture capital funds to new startup companies. By this measure, the state of Florida’s position is essentially no different than it was a decade ago. While venture capital funding fluctuates from quarter to quarter, Florida’s share of national biotechnology venture capital funding is still less than 1 percent–in the same range that it was before its subsidies to Scripps and other research laboratories.

As it turns out, doing biomedical research doesn’t automatically lead to new companies and job creation. The hard and costly work of turning promising research ideas into marketable products happens in only a few places. The challenge in growing a commercial biotechnology hub is in overcoming the overwhelming competitive advantages that established clusters have in being places that have the financial, human, and institutional resources to succeed in this complicated and risky business. Despite the time and expense that Florida and other states have invested in biotech research, there’s almost no evidence that anyone has made anything more than marginal changes to the landscape of the U.S. biotech industry.

Jobs Return to City Centers

(This post coincides with the newly released report, Surging City Center Job Growth. The report and more details are found here.)

For decades, urban economists have chronicled the steady decentralization of employment in our metropolitan areas. First people moved to the suburbs for low density housing, and then businesses followed—especially retail and service businesses that catered to the decentralizing population. Over time, the manufacturing and distribution businesses that had traditionally chosen city-centered transportation hubs also moved to more sprawling locations, enabled by the shift to truck transportation and the growth of the nation’s highway system.

A few industries continued to be disproportionately found downtown. Banks, insurance companies, government offices, and many professional service firms still preferred central office locations that facilitated easy face-to-face contact. But retail moved increasingly to suburban malls and highway strip centers, manufacturing and distribution to industrial parks, and many clerical and administrative functions moved from the center to more dispersed office parks.

At City Observatory, we’ve tracked the growing movement of talented young workers back to urban neighborhoods. The growing attractiveness of urban living is leading to measurable increases in skill level of the labor force near city centers. Employers are taking notice: a growing number of firms report that they are choosing downtown locations in order to tap into the growing talent pool of young workers.

We’ve identified dozens of examples of these downtown moves and expansions, which led us to ask whether this was actually moving the needle in city center employment levels. We tapped a novel and relatively new data source, the Census Bureau’s Longitudinal Employer-Household Dynamics (LEHD) series. It maps, block-by-block, the location of jobs in most of the nation’s metropolitan areas. Building on a research methodology developed originally by Ed Glaeser and Matt Kahn, and further applied by Brookings researcher Elizabeth Kneebone, we focused on the number of jobs within a three-mile circle surrounding the center of the central business district of each of the nation’s largest metro areas. We used a similar technique, and different data, as part of our Young and Restless report in October.

Looking back over the past decade, we found a remarkable reversal in the pattern of job growth. During the economic expansion from 2002 to 2007, the historic trend of job decentralization was very much present. City centers saw employment growth of barely one-tenth of a percent per year, while the more outlying areas grew ten times as fast.

But since 2007—the period coinciding with the onset of and early recovery from the Great Recession—the picture changed dramatically. In the aggregate, the 41 metropolitan areas for which we have comparable data showed a 0.5 percent per year growth in city center employment and a 0.1 percent per year decrease in employment in the periphery. While only 7 city centers outperformed their surrounding metros in the 2002-07 period, 21 outperformed the periphery in 2007-11. This is a widespread trend; however, the change isn’t yet universal. The growth rate of outlying areas still widely outstrips that of the city center in a half dozen metropolitan areas, including Houston, Kansas City, and Las Vegas.

Data documenting this reversal come from an extremely volatile period in our recent economic history—2007 to 2011, covering the time from the peak of the last economic cycle through the trough of the Great Recession, and the first two years of recovery. We know that cyclical factors, particularly the decline in construction and goods-producing industry, caused the economic blow to fall heavily on more decentralized businesses.

To separate out the effects of the economic cycle from underlying trends in city center competitiveness, we developed a shift share analysis that looks at the change in employment by industry sector. This analysis shows that while more centralized industries outperformed decentralized ones, this factor alone didn’t account for the city center growth. Compared to the previous period, city centers actually erased their competitive disadvantage relative to suburbs, and in some industries (arts, entertainment, dining, lodging, and finance, insurance, and real estate) clearly outperformed more peripheral locations.

We think that there are a number of reasons to believe that the relatively strong performance of city centers will be maintained in this economic expansion. As we noted in our Young and Restless report last year, talented young workers are increasingly choosing to live in and near city centers. Just as the outward migration of population propelled employment decentralization in the last century, it may well be that the movement of population back to the center will sustain employment growth in city centers.

We look forward to following this trend as more data becomes available; to read the full report, go here.

How is economic mobility related to entrepreneurship? (Part 2: Small Business)

We recently featured a post examining how venture capital is associated with economic mobility. We found that the two are strongly correlated—and that, if we are concerned with the ability of children today to obtain ‘The American Dream,’ we should be concerned with how to increase economic mobility.

To understand more about how cities can increase intergenerational economic mobility, we wanted to take a look at another measure of entrepreneurship: small businesses per capita.

We follow Glaeser et al. and measure the number of businesses with 20 or fewer employees per 1,000 population in each of the nation’s largest metropolitan areas. As in the previous post, we measure economic mobility as the probability that children born in the bottom quintile rise to the top quintile as adults.

The chart below shows the results: cities with a larger number of small businesses per capita have higher rates of economic mobility. The relationship is positive, but the fit is statistically weaker (R-squared: 0.16) than for venture capital.
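For readers curious how such a fit is computed, here is a minimal sketch of the bivariate regression and R-squared calculation. The metro-level numbers below are invented for illustration; they are not the actual small-business or mobility data.

```python
import statistics

# Hypothetical data: x = small businesses per 1,000 residents;
# y = probability a bottom-quintile child reaches the top quintile.
x = [18, 22, 25, 27, 30, 33, 35, 38, 41, 45]
y = [0.05, 0.06, 0.07, 0.06, 0.08, 0.09, 0.08, 0.10, 0.09, 0.11]

mx, my = statistics.mean(x), statistics.mean(y)

# Ordinary least squares slope and intercept.
beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
alpha = my - beta * mx

# R-squared: share of variance in y explained by the fitted line.
ss_res = sum((b - (alpha + beta * a)) ** 2 for a, b in zip(x, y))
ss_tot = sum((b - my) ** 2 for b in y)
r_squared = 1 - ss_res / ss_tot

print(round(beta, 4), round(r_squared, 2))
```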

Taken together, the data from this post and the previous one suggest that there is a positive relationship between entrepreneurship and economic mobility: mobility is somewhat correlated with higher numbers of small businesses, and more strongly correlated with venture capital.

This analysis is both partial and preliminary. We know from Chetty, et al, that there are other factors (segregation, schools, family structure) that influence economic mobility. A more comprehensive analysis would consider whether, after controlling for the variation explained by these other factors, any remaining variation was explained by entrepreneurship. Moreover, these relationships are simple correlations, and do not necessarily indicate cause and effect. For example, it could be that economic mobility causes entrepreneurship. Furthermore, our data on small businesses and venture capital are taken from recent years; a more rigorous analysis would look at whether small business and venture capital levels of two or more decades ago were correlated with economic mobility over the succeeding time period.

Still, taken as a whole, the data suggest that more entrepreneurial places have higher levels of economic mobility. Why this relationship exists and what implications it may have for policy are questions worthy of further research.

To learn more about innovation and entrepreneurship from a metro perspective, go to our cards here. (We also feature information on economic mobility and opportunity, economic segregation, and more here.)

Less in Common

The essence of cities is bringing people—from all walks of life—together in one place.  Social interaction and a robust mixing of people from different backgrounds, of different ages, with different incomes and interests is part of the secret sauce that enables progress and creates opportunity.  This ease of exchange underpins important aspects of our personal lives, civic effectiveness and economic development.

But over the past several decades, a number of trends–some social, some economic, some political, and others technological–have interacted to dramatically change the ways, the places, and the amounts of interaction between different groups in our society.  By many measures, we now spend less time in social settings, and are less likely to regularly interact with people whose experiences are different from our own.  In our schools, communities, work, shopping and personal activities, we’re increasingly separated from one another.

Our new report, Less in Common, surveys a wide range of measures of how Americans have grown apart from one another over the past several decades.  We’ve intentionally drawn promiscuously from a variety of fields to illustrate the breadth and variety of ways in which this trend seems to be unfolding.

Many of these changes are reflected in the physical landscape of our cities.  In North America, development patterns, particularly the growth of suburbs after World War II, diminished access to an easily shared urban life.  Space and experiences became more private, fueled by suburban expansion, large lots, and the predominance of single-family homes. These development patterns have resulted in Americans having “less in common.”  This phenomenon appears to play out in many different ways:

Distrust among Americans is increasing.  A key marker of social capital that is regularly used in comparing nations and tracking trends over time is the generalized feeling of trust.  The General Social Survey reports that the share of the population that says “most people can be trusted” has fallen from a majority in the 1970s, to about one-third today.

Americans spend significantly less time with their neighbors.  In the 1970s, nearly 30 percent of Americans frequently spent time with neighbors, and only 20 percent had no interactions with them.  Today, those proportions are reversed.

The biggest portion of our leisure time is spent watching television.  TV watching is up to 19 hours per week today compared to about 10 hours in the 1960s.  We spend less time socializing and communicating.

Our recreation is increasingly privatized.  Since 1980, the number of members of private health clubs has quadrupled to more than 50 million.  We used to swim together—prior to World War II, almost all pools were public.  Today, we swim alone in the 5 million or so private swimming pools, compared to just a few thousand public ones.

Driving alone has become the norm, with transit reserved for the poor. Today, 85 percent of American commuters travel to work in private automobiles, up from 63 percent in 1960.  Carpooling has fallen by half since 1980, and the share who commute via transit has declined from 12 percent in 1960 to less than 5 percent today.

Economic segregation trends upward as middle-income neighborhoods decline. High-income and low-income Americans have become more geographically separated within metropolitan areas. Between 1970 and 2009 the proportion of families living either in predominantly poor or predominantly affluent neighborhoods doubled from 15 percent to 33 percent.

Many of us live in gated communities. By 1997 it was estimated that there were more than 20,000 gated community developments of 3,000 or more residents. By design, gated communities restrict access and carefully control who is allowed into a community to separate residents from outsiders.

Politically, America sorts itself into like-minded geographies.  Nearly two-thirds (63 percent) of consistent conservatives and about half (49 percent) of consistent liberals say most of their close friends share their political views.

There are some counter-trends to the general pattern of isolation and separation.  Racial segregation, though still high, has declined steadily for decades. New community spaces—like farmers markets—have grown rapidly.  Widespread availability of the Internet combined with social media has made it easier and more democratic to connect with others and with all forms of information.

A broadly shared sense of common interest, anchored in a society that promotes social mobility and easy interaction, is a vital underpinning of effective political institutions and the economy.  If we’re going to make progress in tackling a range of our nation’s challenges, and live up to our full potential, we need to reinvigorate the civic commons.

You can also see the findings in the form of an easy-to-share infographic:

Click to see the full infographic.

Surging City Center Job Growth

For over half a century, American cities were decentralizing, with suburban areas surpassing city centers in both population and job growth. It appears that these economic and demographic tides are now changing. Over the past few years, urban populations in America’s cities have grown faster than outlying areas, and our research shows that jobs are coming with them.

Our analysis of census data shows that downtown employment centers of the nation’s largest metropolitan areas are recording faster job growth than areas located further from the city center. When we compared the aggregate economic  performance of urban cores to the surrounding metro periphery over the four years from 2007 to 2011, we found that city centers—which we define as the area within 3 miles of the center of each region’s central business district—grew jobs at a 0.5 percent annual rate. Over the same period, employment in the surrounding peripheral portion of metropolitan areas declined 0.1 percent per year. When it comes to job growth, city centers are out-performing the surrounding areas in 21 of the 41 metropolitan areas we examined. This “center-led” growth represents the reversal of a historic trend of job de-centralization that has persisted for the past half century.

As recently as 2002-2007, peripheral areas were growing much faster (1.2 percent annually) and aggregate job growth was stagnant in urban cores (0.1 percent). While the shift of metropolitan job growth toward services is aiding job centralization, the strong central growth of 2007-11 appears to be driven by the growing competitiveness of central cities relative to peripheral locations.

Our analysis shows that city centers had unusually strong job growth relative to peripheral locations in the wake of the Great Recession. Some of the impetus for central city growth comes from the relatively stronger performance of industries that tend to be more centralized, such as finance, entertainment, restaurants, and professional services.  The story is not just that job growth in central cities is improving when compared to outlying areas – city centers have also erased their competitive disadvantage relative to peripheral locations.

We undertook a shift-share analysis that allowed us to separate out the effects of changing industry mix from relative competitiveness. The data make it clear that city centers were more competitive in 2011 than they were in 2007. While city centers had a negative competitive effect in the 2002-07 period, their relative competitiveness across industries was equal to that of peripheral locations in 2007-11.
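The mechanics of a shift-share decomposition can be sketched in a few lines. The employment figures below are hypothetical, not the report’s data; the point is how total job change splits into a metro-wide growth share, an industry-mix effect, and a residual competitive effect.

```python
# Hypothetical two-industry example of a shift-share decomposition.
center_2007 = {"finance": 50_000, "manufacturing": 10_000}
center_2011 = {"finance": 52_000, "manufacturing": 9_000}
metro_2007  = {"finance": 120_000, "manufacturing": 80_000}
metro_2011  = {"finance": 123_000, "manufacturing": 70_000}

# Overall metro growth rate over the period.
metro_growth = sum(metro_2011.values()) / sum(metro_2007.values()) - 1

national_share = industry_mix = competitive = 0.0
for ind in center_2007:
    metro_ind_growth = metro_2011[ind] / metro_2007[ind] - 1
    actual_change = center_2011[ind] - center_2007[ind]
    # Jobs the center would add if it simply tracked the metro economy.
    national_share += center_2007[ind] * metro_growth
    # Extra (or fewer) jobs due to the center's industry composition.
    industry_mix += center_2007[ind] * (metro_ind_growth - metro_growth)
    # Residual: the center growing faster or slower than its industries.
    competitive += actual_change - center_2007[ind] * metro_ind_growth

# The three components sum exactly to the center's total job change.
print(round(national_share), round(industry_mix), round(competitive))
```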

 

The strength of city centers appears to be driven by a combination of the growing attractiveness of urban living, and the relatively stronger performance of urban-centered industries (business and professional services, software) relative to decentralized industries (construction, manufacturing) in this economic cycle. While it remains to be seen whether these same patterns continue to hold as the recovery progresses, (the latest LEHD data on city center job growth are for calendar year 2011), there are structural forces that suggest the trend of center-led growth will continue.

To hear a podcast on the report and its ramifications, go here. 

Download the full report on this page to learn more about this shift and read our complete analysis.

How Governing got it wrong: The problem with confusing gentrification and displacement

Here’s a quick quiz:  Which of the following statements is true?

a) Gentrification can be harmful because it causes displacement

b) Gentrification is the same thing as displacement

c) Gentrification is a totally different thing than displacement

d) All of the above

If the only studying you did was a reading of the latest series on gentrification from Governing Magazine, you’d have answered “d.” And of course, you’d have a tough time defending your answer.

In a set of feature articles in its February 2015 issue entitled “The ‘G’ word—a special report on gentrification,” Governing attempts to assemble a strong, data-driven definition of this controversial buzzword, but succeeds only in making the tortured debate over gentrification even more contentious and unclear.

The most basic flaw of its analysis is coming down squarely on all sides of whether “gentrification” is the same thing as “displacement.”  While the authors claim that these two terms are different things, all of the harms from gentrification that they point to involve displacement: the problem of previous, generally poor residents being forced out of a neighborhood as it changes.

Governing has impressive maps and data—but maps and data are only as sound as the assumptions they are built on. The assumptions here—that gentrification can be accurately measured solely by looking at changes in house prices and education levels in relatively poor city neighborhoods—are flat out wrong, if we are concerned, as Governing tells us we should be, about the displacement of the poor.

There’s precious little evidence that there has been, in the aggregate, any displacement of the poor from the neighborhoods Governing flags as “gentrifying.”  If there were displacement, you’d expect the number of poor people in these neighborhoods to be declining.  In fact, nationally, there are more poor people living in the neighborhoods that they identify as “gentrifying” in 2013 than there were in 2000. Here’s the math*. Governing’s gentrifying neighborhoods have gained poor AND nonpoor residents according to Census data. And even after “gentrifying,” these neighborhoods still have higher poverty rates, on average, than the national average.

Careful academic studies of gentrifying neighborhoods, by Columbia’s Lance Freeman and the University of Colorado’s Terra McKinnish, show that improving neighborhoods actually do a better job of hanging on to previous poor and minority residents than poor neighborhoods that don’t improve. The University of Washington’s Jacob Vigdor has estimated that even when rents go up, existing residents generally attach a value to neighborhood improvements that more than compensates for the higher costs.

This confirms our own analysis of 1,100 urban high-poverty neighborhoods over the past four decades. Only about one in twenty of the census tracts we analyzed saw their poverty rate drop below the national average, and three-quarters remained very high poverty. Far from holding steady, these chronically poor tracts continued to deteriorate, losing on average 40 percent of their population over 40 years.

In contrast to gentrification, which is rare and seems to be seldom associated with actual displacement, concentrated poverty is real—a growing and devastating challenge that is damaging the futures of millions of Americans, especially children of color. In the past forty years, the number of high-poverty urban neighborhoods has tripled and their population has doubled, to 4 million. Growing up poor is difficult; growing up in a neighborhood where a large fraction of your neighbors are also poor is worse, exposing kids to higher crime and lower quality schools, increasing mental health problems, narrowing job and educational opportunities, and—according to new research by Patrick Sharkey, Raj Chetty, and Jonathan Rothwell & Doug Massey—permanently lowering life prospects relative to otherwise similar kids who grow up in mixed income neighborhoods.

Raising a false alarm about gentrification is the policy equivalent of shouting “fire” in a crowded theatre: it promotes mindless panic and does nothing to help us understand and tackle our real urban problems. A magazine that calls itself “Governing” should know the difference between sensationalism and thoughtful analysis.

 

*:  Here’s the math:

Mathematically it’s clear that more poor people live in Governing’s “gentrified” tracts today than in 2000: according to Governing, between 2000 and 2009-13 the poverty rate in gentrified tracts declined by 0.7 percentage points, while the total population of these tracts increased by 6.5 percent.  Assuming the poverty rate in these tracts exceeded 13 percent in 2000, the population living below the poverty line in these tracts had to have actually increased between 2000 and 2013:  hardly evidence, on its face, of widespread displacement.
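The footnote’s arithmetic can be verified directly. Solving (r − 0.007) × 1.065 > r shows that the count of poor residents grows whenever the 2000 poverty rate exceeds roughly 11.5 percent, so the 13 percent assumption is more than sufficient:

```python
def poor_change_ratio(rate_2000):
    """Ratio of the 2009-13 poor population to the 2000 poor population,
    given a 0.7 percentage point drop in the poverty rate and a 6.5
    percent rise in total population."""
    return (rate_2000 - 0.007) * 1.065 / rate_2000

# Break-even poverty rate: (r - 0.007) * 1.065 = r
breakeven = 0.007 * 1.065 / 0.065  # about 0.115

print(round(breakeven, 3))
print(poor_change_ratio(0.13) > 1)  # poor population increased
```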

 

Lost in Place

Lost in Place: Why the persistence and spread of concentrated poverty–not gentrification–is our biggest urban challenge.

A close look at population change in our poorest urban neighborhoods over the past four decades shows that the concentration of poverty is growing and that gentrification is rare.

While media attention often focuses on those few places that are witnessing a transformation, there are two more potent and less mentioned storylines. The first is the persistence of chronic poverty. Three-quarters of 1970 high-poverty urban neighborhoods in the U.S. are still poor today. The second is the spread of concentrated poverty: three times as many urban neighborhoods have poverty rates exceeding 30 percent as was true in 1970 and the number of poor people living in these neighborhoods has doubled.

The result of these trends is that the poor in the nation’s metropolitan areas are increasingly segregated into neighborhoods of concentrated poverty. In 1970, 28 percent of the urban poor lived in a neighborhood with a poverty rate of 30 percent or more; by 2010, 39 percent of the urban poor lived in such high-poverty neighborhoods. The data, methodology and results of our study are spelled out in our full report, available in PDF format here. The highlights are as follows:

  • High poverty is highly persistent. Of the 1,100 urban census tracts with high poverty in 1970, 750 still had poverty rates double the national average four decades later.
  • Though poverty persisted, these high-poverty neighborhoods were not stable—in the aggregate they lost population, with chronic high-poverty neighborhoods losing 40 percent of their population over four decades.
  • Moreover, few high-poverty neighborhoods saw significant reductions in poverty. Between 1970 and 2010, only about 100 of the 1,100 high-poverty urban neighborhoods experienced a reduction in poverty rates to below the national average. These 100 formerly high-poverty census tracts accounted for about five percent of the 1970 high-poverty neighborhood population. In contrast to chronically high-poverty neighborhoods, which lost population, these “rebounding” neighborhoods recorded an aggregate 30 percent increase in population.
  • Urban high-poverty neighborhoods proliferated between 1970 and 2010. The number of high-poverty neighborhoods in the core of metropolitan areas has tripled and their population has doubled in the past four decades. A majority of the increase in high-poverty neighborhoods has been accounted for by “fallen stars”—places that in 1970 had poverty rates below 15 percent, but which today have poverty rates in excess of 30 percent.
  • The growth in the number of poor persons living in “fallen star” neighborhoods dwarfs the decrease in the poverty population in “rebounding” neighborhoods. Since 1970, the poor population in rebounding neighborhoods fell by 67,000 while the number of poor persons living in fallen star neighborhoods increased by 1.25 million.
  • The data presented here suggest an “up or out” dynamic for high-poverty areas. A few places have gentrified, experienced a reduction in poverty, and generated net population growth. But those areas that don’t rebound don’t remain stable: they deteriorate, lose population, and overwhelmingly remain high-poverty neighborhoods. Meanwhile, we are continually creating new high-poverty neighborhoods.

To be poor anywhere is difficult enough, but a growing body of evidence shows the negative effects of poverty are amplified for those who live in high-poverty neighborhoods—places where 30 percent or more of the population live below the poverty line. Quality of life is worse, crime is higher, public services are weaker, and economic opportunity more distant in concentrated poverty neighborhoods. Critically, concentrated poverty figures prominently in the inter-generational transmission of inequality: children growing up in neighborhoods of concentrated poverty have permanently impaired economic prospects.

Our analysis focuses on the 51 largest US metropolitan areas–all those with a population of 1 million or more in the latest Census. The following tables summarize, by metro area, the key variables in our research–the number of high poverty neighborhoods in 1970 and 2010, and the numbers of neighborhoods transitioning between various categories over time.

Listen to the author speak about the report on Think Out Loud, Oregon Public Broadcasting, December 9, 2014:

The Strong Towns Podcast also had the author on to speak about the report.

 

 

For people interested in tracking the performance of a single metropolitan area across all of our measures of concentrated poverty, we offer a Metro-level dashboard. You can select an individual metropolitan area and see how it performs on each of our indicators.

Finally, you can drill down to the level of individual census tracts to examine population change, and the change in the number of persons living in poverty in each metropolitan area covered in our report. (See the full-sized version here)

You can also see the findings in this easy-to-share infographic:

Click for full infographic.

America’s Most Diverse Mixed Income Neighborhoods

In a nation increasingly divided by race and economic status, where our life prospects are increasingly defined by the wealth of our zip codes, some American neighborhoods are bucking the trend.

These neighborhoods—which we call America’s most diverse, mixed-income neighborhoods—have high levels of racial, ethnic and income diversity. This report identifies, maps and counts the nation’s most diverse mixed-income neighborhoods. In these neighborhoods, residents are much more likely than the average American to have neighbors from different racial/ethnic groups than themselves, and neighbors with different levels of income. We find that:

  • Nearly 7 million Americans live in neighborhoods with both high levels of racial/ethnic and economic diversity.
  • Roughly half of these neighborhoods are found in three of the nation’s largest, most diverse metropolitan areas: New York, Los Angeles and San Francisco.
  • Most large metropolitan areas have several neighborhoods that are among the nation’s most diverse and mixed income. Forty-four of the nation’s 52 largest metro areas have at least one diverse, mixed-income neighborhood.
  • The racial and ethnic diversity of a metropolitan area sets the context for having diverse, mixed income neighborhoods. Whether metropolitan diversity is reflected in the lived experience in the typical neighborhood depends on how segregated a metropolitan area is by race, ethnicity and class.
  • Some metropolitan areas come much closer to realizing their potential for neighborhood racial/ethnic diversity, given their metropolitan demographic composition.

We identified the nation’s most diverse, mixed income neighborhoods using Census data on the race, ethnicity and household income of neighborhood residents. For each of more than 31,000 urban neighborhoods, we computed a Racial and Ethnic Diversity Index (REDI), which corresponds to the probability that any two randomly selected individuals in a neighborhood would be from different racial/ethnic categories. (Using Census data, we tabulated the number of white, black, Asian, Latino and all other persons in each neighborhood). We used a similar approach to compute an Income Diversity Index (IDI) which measures the variety of household incomes. Neighborhoods that ranked in the top 20 percent of all urban neighborhoods nationally on both of these measures were classified as diverse mixed income neighborhoods.
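The REDI as described—the probability that two randomly selected residents belong to different groups—matches the standard Gini-Simpson diversity index. The report’s exact formulas aren’t reproduced here, but a minimal sketch of that computation, with hypothetical tract counts, looks like this:

```python
# A sketch of the diversity indices described above, assuming the
# Gini-Simpson form: 1 - sum(p_i^2), the chance that two randomly
# chosen residents fall in different groups. Counts are hypothetical.

def diversity_index(counts):
    """Probability two random residents belong to different groups."""
    total = sum(counts)
    if total == 0:
        return 0.0
    return 1.0 - sum((c / total) ** 2 for c in counts)

# Hypothetical tract: white, black, Asian, Latino, other residents
redi = diversity_index([1600, 900, 600, 700, 200])

# The same function applied to counts of households per income bracket
# yields an analogous income diversity measure (IDI).
idi = diversity_index([300, 420, 380, 350, 250])
```

A perfectly homogeneous tract scores 0; the index rises toward 1 as population is spread evenly across more groups, which is why top-quintile tracts on both measures qualify as diverse, mixed income neighborhoods.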

Which cities have the highest levels of diversity and mixed income?

Nearly all of the nation’s largest cities have at least one neighborhood that meets our definition of being both racially and ethnically diverse and mixed income. Three large cities–New York, Los Angeles and San Francisco–account for nearly half of such neighborhoods, but some smaller cities also rank high in the fraction of their population living in these diverse, mixed income neighborhoods.

Which cities are performing up to their potential?

Whether a city has many diverse, mixed income neighborhoods depends directly on the demographics of the metropolitan area in which it is located. There is still a wide range of racial and ethnic diversity among metropolitan areas. The following chart shows the relationship between a metropolitan area’s overall racial and ethnic diversity (shown on the horizontal axis) and the percentage of that region’s population that lives in diverse, mixed income neighborhoods. More diverse metros generally have a larger share of their population living in diverse, mixed income neighborhoods. The regression line shows the typical relationship between metro diversity and the share of population living in diverse, mixed income neighborhoods. Cities above that line are performing better, on average, than one would expect based on their diversity; cities below that line are performing less well.

Some cities do a better job of realizing their diversity at a neighborhood level than others. For each large metropolitan area we’ve computed the racial and ethnic diversity of the median neighborhood–reflecting the lived experience of the typical resident. We’ve then compared that with the racial and ethnic diversity of the metropolitan area to see how closely the experience of the typical neighborhood resident comes to matching the diversity of the metropolitan area in which they live. Cities at the top of the list have neighborhood diversity that closely resembles metro diversity; those at the bottom are much more segregated, and don’t experience at the neighborhood level much of the diversity of their region.

Where are the most diverse, mixed income neighborhoods?

We’ve mapped the locations of the most racially and ethnically diverse and most mixed income neighborhoods in each of the nation’s 52 largest metropolitan areas. The map for San Francisco–one of the higher ranking metro areas–shows strong concentrations of diverse, mixed income neighborhoods in the City of San Francisco and the East Bay.

Detailed maps of the location of diverse, mixed income neighborhoods for each of the nation’s 52 largest metropolitan areas are available here. These on-line maps enable you to see the patterns of diversity in each metro area, and drill-down to the census tract level to inspect data for individual neighborhoods.

Why integration matters

A growing body of social science research confirms the importance of diversity to economic success. Greater socioeconomic mixing is facilitated in neighborhoods that reflect America’s racial and ethnic diversity, and which offer housing that is affordable to people with a range of incomes. In a series of studies led by Stanford’s Raj Chetty and his colleagues at the Equality of Opportunity Project, racial and economic segregation have been shown to reduce intergenerational economic mobility (the probability that children of low income families will, as adults, earn higher incomes than their parents). A recent post at City Observatory presents a synopsis of the literature on this subject, with citations to key works.

For a long time, we’ve known that neighborhoods of concentrated poverty are toxic to the life prospects of children who grow up there. Rothwell and Massey have shown that the effect of your neighbors’ educational attainment on your life prospects is nearly half as large as the effect of your parents’ educational attainment. Living in a neighborhood with greater diversity and a mix of incomes generally means that families enjoy better-resourced public services and civic assets (including schools, parks and libraries) and develop stronger, more diverse social networks. Diverse, mixed-income neighborhoods are a platform for helping kids from lower-income families to escape poverty and realize the American dream.

Want to know more?

We’ve laid out our data, methodology and more detailed findings on our analyses of racial and ethnic diversity, and of income diversity in our technical report “Identifying America’s Most Diverse Inclusive Neighborhoods.”

How is economic mobility related to entrepreneurship? (Part 1: Venture Capital)

The work of Raj Chetty and his colleagues at the Equality of Opportunity project has spurred intense interest in the extent of economic mobility, measured by the likelihood that children born to low-income parents achieve higher economic status when they are adults. Their work shows a remarkable degree of geographic variation in intergenerational economic mobility. In many communities, the chances of measurably improving one’s economic prospects are dramatically lower than in others. The variations aren’t random: their analysis finds that intergenerational economic mobility is correlated with a number of community characteristics, such as residential segregation, income inequality, school quality, social capital, and family structure.

In theory, we believe that entrepreneurship is a key mechanism for promoting economic mobility. Entrepreneurs can create new businesses that give themselves—and their employees—the chance to improve their economic position. We already know that entrepreneurship is one of the critically important factors in stimulating metropolitan economic growth: job growth is strongly correlated with an abundance of small firms, and metro areas with more small firms relative to the size of their population see faster employment growth (Glaeser, Kerr, & Ponzetto, 2010).

Fast growing, entrepreneurial firms may be particularly important for providing opportunities for upward mobility because they tend to hire more young workers than larger firms do (Ouimet & Zarutskie, 2013). Having a large number of young, small, entrepreneurial firms may create more opportunities for young workers from all economic strata to progress through the economic spectrum.

So, what is the relationship between entrepreneurial activity and economic mobility? One way we look at this is to examine venture capital per capita. (For this analysis, like most others we produce, we focus on the nation’s 51 largest metropolitan areas—those with populations larger than 1 million in 2012.)

Venture capital investments are a key indicator of entrepreneurial activity. We tabulate data from the National Venture Capital Association on the dollar value of venture capital in 2011, divided by the population of the metropolitan area. Because of the very large disparities in venture capital per capita among metropolitan areas, we took the log of this variable.

We compare the venture capital per capita in each metropolitan area with the level of intergenerational mobility by metropolitan area. We use Chetty, et al’s measure of intergenerational economic mobility: the probability that children born to families in the lowest income quintile had incomes as adults that put them in the highest income quintile. Among the nation’s largest metropolitan areas the probability of moving from the lowest quintile to the highest varied by a factor of about three: a four percent chance in the least economically mobile areas to a nearly 12 percent chance in the most economically mobile areas.

The chart below shows the relationship between venture capital and economic mobility for these large metropolitan areas. The data show a positive relationship: cities with higher levels of venture capital per capita also have higher levels of economic mobility. (The R2 of .31 indicates a moderately strong relationship.)
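The computation behind a chart like this—log-transforming venture capital per capita and fitting a least-squares line against mobility—can be sketched as follows. The metro figures here are synthetic placeholders, not the NVCA or Chetty et al. data:

```python
# Sketch of the log-linear fit described above, with synthetic data.
import math
import statistics

# Hypothetical metros: (venture capital $ per capita, upward-mobility %)
metros = [(25.0, 4.5), (60.0, 6.0), (140.0, 7.5), (400.0, 9.0), (900.0, 11.5)]

x = [math.log(vc) for vc, _ in metros]   # log of VC per capita
y = [mob for _, mob in metros]           # low-to-high quintile probability

# Ordinary least-squares slope, intercept, and R^2 computed by hand
mx, my = statistics.mean(x), statistics.mean(y)
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = my - slope * mx
ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - my) ** 2 for yi in y)
r_squared = 1 - ss_res / ss_tot
```

With real data, a positive slope and an R² around .31 would correspond to the pattern shown in the chart; the log transform simply keeps outlier metros like San Francisco from dominating the fit.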

This strong positive relationship is not something we can immediately claim as a causal link—however, it has implications for further study. It also raises interesting questions: if cities attract more venture capital, will they be able to attract more young talent? And how will that impact economic mobility and inequality within the city?

In a future post we will examine the link between the number of small businesses in a metro and economic mobility, and conclude this segment. (To read more on economic opportunity, go here, and to read more about innovation and entrepreneurship, see our work here.)

One tip for a prosperous city economy

Local media over the course of the last several months have asked us variations on one question repeatedly: if our city wants to do better – be more productive, retain more young people, reduce poverty—how can it do that?

That’s a very complicated question, of course, and each metro area and urban core has its own challenges shaped by current policies and laws, history, and geography, among other factors. However, there is one indicator that above all else predicts the success of city residents: the college attainment rate. It predicts success even for those without a 4-year degree; essentially, if your neighbors are better educated, you are more likely to have a higher income. And with higher incomes come their correlates: better health, educational opportunities for children, and even happiness.

It’s striking how strong and consistent the correlation between education and higher personal incomes is.  Economists attribute this to a number of factors.  Better educated workers command a high skill premium, because they’re more adaptable and productive, and are critical to growing knowledge based firms.  Education has important spillover benefits:  on average, workers of all education levels are more productive (and higher paid) if they live in cities that are better educated.  A well-educated population makes a city more resilient in the face of economic and technological change, and better able to quickly adapt to new circumstances and opportunities.

Cities around the nation pursue a range of different economic strategies–pursuing new industries, promoting innovation, encouraging entrepreneurship, expanding infrastructure, and building civic amenities.  While there are merits to all of these approaches, every one of them takes a back seat to improving educational attainment as a way to raise incomes.  Put another way, all of these strategies will work better in a place with strong educational attainment, and communities with weak educational attainment will find only meager returns.

Improving educational attainment isn’t the only economic strategy, but it’s a fundamental one, and if your city fails to move forward in this important area, it will find it more difficult to successfully implement all of its other tactics.

At CityObservatory, we track attainment rates closely, as we believe talent is the biggest driver of positive (or negative) change in a city. The figure above shows the most updated figures on educational attainment and per capita income, from the 2013 American Community Survey data. To learn more about how talent drives city success, go here, and be sure to check back often, as we will continue to discuss how talent and success are tied to complex urban problems (and solutions to those problems).

How segregation limits opportunity

The more segregated a metro area is, the worse the economic prospects of the poor and people of color

Our City Observatory report, Lost in Place, closely tracks the growth of concentrated poverty in the nation’s cities; this is particularly important because of the widespread evidence of the permanent damage high-poverty neighborhoods do to children of poor families.

Two recent studies shed additional light on the importance of economic and racial integration to the life chances of students from low income families and children of color.

Writing in the journal Social Problems, Lincoln Quillian explores the question “Does Segregation Create Winners and Losers?” Quillian uses data from the Panel Study of Income Dynamics, a federal survey program that gathers longitudinal data on a representative group of Americans over several decades.

Quillian shows that increases in segregation at the metropolitan level are associated with lower rates of high school completion for poor and black students. Poor and black students who live in more segregated metropolitan areas are less likely to graduate from high school after controlling for other observable factors that influence individual success, such as the level of their parents’ education. Significantly, higher rates of segregation do not appear to have any statistically significant effects on the high school completion rates of whites or the non-poor. Taken together, these findings suggest that increasing racial and economic integration improves the educational outcomes for black and poor students without any negative effect on the educational outcomes of white and non-poor students.

This is important. If increased economic integration does not affect educational prospects for higher-income students, then the notion that more integrated neighborhoods will “drag down” the potential success of current residents is just that: a myth. The implication of this research for housing policy is particularly salient.

In another article, published in the Annals of the American Academy of Political and Social Science, Sean Reardon, Lindsay Fox, and Joseph Townsend look at the trends in income segregation. Using data from the American Community Survey, they look at the trends behind the growing overall levels of income segregation in most metropolitan areas.

Their analysis finds that aggregate household income segregation has increased mostly because of the increasing isolation of the highest income households from low- and moderate-income households. This is what Robert Reich famously labeled “the secession of the successful.” Higher-income households are more likely to live in neighborhoods with other high-income households than was true two or three decades ago. The authors also estimate changes in income segregation for each of the 50 largest metropolitan areas in the nation, and point out wide variations across the country.

Differences in income levels and residential segregation patterns among metropolitan areas produce very different experiences for the urban poor in different metros. In some higher income metro areas with less segregation, the poorest residents live in neighborhoods with noticeably higher incomes than the poorest residents of poorer, more segregated metros. For example, those in the tenth percentile of household income in Washington D.C. and Minneapolis live in neighborhoods that have average household incomes equal to the levels experienced by the median-income households in Atlanta and Los Angeles. You can see these differences in the figure below, excerpted from the paper:

Reardon Figure 4

This plots household income against neighborhood income. Most metros are similar, with the typical low-income family living in a neighborhood with a median income of about $45,000. Washington and Minneapolis have higher average incomes and are more economically integrated than other large metropolitan areas: families at the 25th percentile of household income in these cities live in neighborhoods with median incomes of $60,000 (Minneapolis) and $70,000 (Washington). In the typical large metro area, you have to have an income of $75,000 (or more) to have such well-to-do neighbors.

Finally, this paper also presents major findings on racial integration and associated effects on economic integration. Black and Hispanic households tend to be highly concentrated into black and Hispanic neighborhoods, which has implications for poverty and economic mobility that we outline in our report here and blog post here. Most importantly, households with the same yearly income live in very different neighborhoods depending on their race:

“Black middle-class households (with incomes of roughly $55-$60,000), for example, typically live in neighborhoods with median incomes similar to those of very poor white households (those with incomes of roughly $12,000). For Hispanic households the disparity is only slightly smaller. Moreover, even high-income black and Hispanic households do not achieve neighborhood income parity with similar-income white households.”

While the growing gap between rich and poor is capturing greater policy attention, these two studies remind us that the spatial patterns of integration within metropolitan areas have a big impact on quality of life and life prospects, especially for low-income households. They also indicate that how we build and inhabit our cities influences educational attainment and economic success, and can play an important role in ameliorating the long-lasting effects of income inequality.

A hat tip to City Observatory’s friend Bridget Marquis for flagging these articles.


Why integration matters

Socioeconomic mixing, in neighborhoods that are diverse in race, ethnicity and income, benefits everyone

To some extent, we take for granted that integration and equal opportunity should be valued for their own sake. But it’s worth noting that achieving greater integration along both racial/ethnic and income dimensions is also important to achieving more widespread prosperity and combatting poverty.

A growing body of sociological and economic research has demonstrated the high costs associated with racial and income segregation. While a comprehensive review of this literature is beyond the scope of this paper, we highlight here some of the key research findings that bear on the economic consequences of neighborhood diversity. Neighborhoods of concentrated disadvantage are not simply places where many households suffer from their own individual problems. The segregation of poverty (or a marginalized racial group) creates its own additional, collective burden on residents of these communities.

Galster and Sharkey undertake an extensive literature review of data on neighborhood effects of poverty. They find that segregation is associated with lower cognitive development and weaker academic performance, greater likelihood of teen pregnancy and risky behaviors, reduced physical and mental health, lower incomes, lower probability of employment, and greater likelihood of being affected by or engaged in crime. Looking at more than 100 studies which they regard as quantitatively rigorous, they conclude:

. . . the findings on the number of (methodologically rigorous) studies that have found substantial, statistically significant effects of spatial context (for at least some set of individuals) and those that have not, by outcome domain. The tally makes it clear that the preponderance of evidence in every outcome domain is that multiple aspects of spatial context exert important causal influences over a wide range of outcomes related to socioeconomic opportunity, though which aspects are most powerful depends on the outcome and the gender and ethnicity of the individuals in question.
(Galster & Sharkey, 2017)

Part of this burden is evident in day-to-day quality of life issues, such as greater exposure to crime. Studies of the “Moving to Opportunity” program, in which families were given assistance to move from low-income to middle-income neighborhoods, showed a marked improvement in self-reported well-being. Moving to a neighborhood whose poverty rate was 13 percentage points lower was associated with an increase in self-reported quality of life equivalent to an increase of $13,000 in household income (Ludwig et al., 2012). But perhaps the most serious effects of concentrated disadvantage are the ways in which it acts to reproduce inequality and quash economic opportunity and mobility—the very promise of the American dream.

High-poverty neighborhoods put their residents at a significant and immediate economic disadvantage. They typically have fewer local jobs than other neighborhoods, and often are distant from, or poorly connected to, other major job centers. These communities also often lack social networks that allow residents to find job openings (Bayer, Ross, & Topa, 2004).

For these and other reasons, people who grow up in high-poverty neighborhoods, on average, have worse economic outcomes than people who grow up in other kinds of neighborhoods, even if their family backgrounds are identical. The Equality of Opportunity Project has shown that inter-generational income mobility is significantly higher in metropolitan areas with lower levels of income segregation (Chetty, Hendren, Kline, & Saez, 2014). The effect is so strong that, for children whose families move from high-segregation to low-segregation metropolitan areas, each additional year spent in the high-segregation region before the move is associated with lower income as an adult.

Chetty and Hendren find that, across metropolitan areas, both income and racial/ethnic segregation have a negative effect on children’s incomes as adults (Chetty & Hendren, 2016):

“. . . our analysis strongly supports the hypothesis that growing up in a more segregated area – that is, in a neighborhood with concentrated poverty – is detrimental for disadvantaged youth.”

But they go on to say that this is not because of the parents’ access to jobs, but because of the children’s exposure to a different set of peers:

“Areas with less concentrated poverty, less income inequality, better schools, a larger share of two-parent families, and lower crime rates tend to produce better outcomes for children in poor families. Boys’ outcomes vary more across areas than girls’ outcomes, and boys have especially negative outcomes in highly segregated areas. One-fifth of the black-white income gap can be explained by differences in the counties in which black and white children grow up.”

Other studies have found similar effects. For example, black children who grow up in high-poverty neighborhoods that transition to low levels of poverty have incomes that are 30 to 40 percent higher than black children with similar backgrounds who grow up in neighborhoods that remain at high levels of poverty (Sharkey, 2013). Observing the results of a natural experiment that relocated families from public housing in Chicago, Eric Chyn found that children who moved even relatively short distances to neighborhoods with somewhat lower poverty rates also experienced noticeable gains in earnings (Chyn, 2016).

Another analysis suggests that the educational level of one’s neighbors has an effect on a child’s economic future nearly as large as that of the educational level of the child’s own parents. The effect of neighborhood educational level on children’s future earnings has been estimated to be two-thirds as powerful as the influence of the children’s own parents’ education (Rothwell & Massey, 2014).

The effects that are observed at the neighborhood level appear to compound to produce the variations in economic results we observe across metropolitan areas. Quillian shows that increases in segregation at the metropolitan level are associated with lower rates of high school completion for poor and black students (Quillian, 2014). Quillian uses data from the Panel Study of Income Dynamics, a federally funded survey that gathers longitudinal data on a representative group of Americans over several decades. Poor and black students who live in more segregated metropolitan areas are less likely to graduate from high school, after controlling for other observable factors that influence individual success, such as the level of their parents’ education. Significantly, higher rates of segregation do not appear to have any statistically significant effect on the high school completion rates of whites or the non-poor. Taken together, these findings suggest that increasing racial and economic integration improves educational outcomes for black and poor students without any negative effect on the educational outcomes of white and non-poor students.

A recent study prepared by the Urban Institute and the Metropolitan Policy Center estimated the cumulative economic and social costs associated with segregation in the Chicago metropolitan area. The authors found that segregation cost Chicago more than $4 billion annually in lost income, and meant that fewer residents achieved a college education while more were victims of crime, including homicide (Acs, Pendall, Treskon, & Khare, 2017).

Taken together, the weight of social science evidence shows that racial/ethnic and economic segregation have profound consequences for individuals, for neighborhoods, and for entire cities. Much of the persistence and severity of poverty is due to continued segregation. More integrated neighborhoods and more integrated cities enjoy better economic results, and produce better lifetime opportunities for their children. These findings underscore the critical importance of the nation’s racially and ethnically diverse, mixed-income neighborhoods.

References

Acs, G., Pendall, R., Treskon, M., & Khare, A. (2017). The Cost of Segregation: National Trends and the Case of Chicago, 1990–2010. Washington, DC: Urban Institute. Retrieved from http://www.urban.org/research/publication/cost-segregation

Bayer, P., Ross, S. L., & Topa, G. (2004). Place of Work and Place of Residence: Informal Hiring Networks and Labor Market Outcomes (Working paper No. 2004–07). University of Connecticut, Department of Economics. Retrieved from https://ideas.repec.org/p/uct/uconnp/2004-07.html

Chetty, R., & Hendren, N. (2016). The Impacts of Neighborhoods on Intergenerational Mobility II: County-Level Estimates. National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w23002

Chetty, R., Hendren, N., Kline, P., & Saez, E. (2014). Where is the Land of Opportunity? The Geography of Intergenerational Mobility in the United States. National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w19843

Chyn, E. (2016). Moved to opportunity: The long-run effect of public housing demolition on labor market outcomes of children. Unpublished Paper. University of Michigan, Ann Arbor.

Galster, G., & Sharkey, P. (2017). Spatial Foundations of Inequality: A Conceptual Model and Empirical Overview. RSF, 3(2), 1–33. https://doi.org/10.7758/RSF.2017.3.2.01

Ludwig, J., Duncan, G. J., Gennetian, L. A., Katz, L. F., Kessler, R. C., Kling, J. R., & Sanbonmatsu, L. (2012). Neighborhood effects on the long-term well-being of low-income adults. Science, 337(6101), 1505–1510.

Quillian, L. (2014). Does Segregation Create Winners and Losers? Residential Segregation and Inequality in Educational Attainment. Social Problems, 61(3), 402–426.

Rothwell, J. T., & Massey, D. S. (2014). Geographic Effects on Intergenerational Income Mobility. Economic Geography. https://doi.org/10.1111/ecge.12072

Sharkey, P. (2013). Stuck in place: Urban neighborhoods and the end of progress toward racial equality. University of Chicago Press. Retrieved from http://books.google.com/books?hl=en&lr=&id=R-b_NlPJeuUC&oi=fnd&pg=PR5&dq=patrick+sharkey+stuck+in+place&ots=xJkeq39Kje&sig=0lmKDBM6OxHGMNk0jBga4EtDFqM

 

Consuming the city: Ranking restaurants per capita

The number of eating places per capita is one useful indicator of a city’s livability

Cities are great places for consumers.  They provide an abundance and variety of choices, especially in the form of experiences. While our conventional economic indicators don’t fully capture the nature and depth of choices in cities, there are some measures that shed light on which places offer the most.  Today we offer our index of restaurants per capita as one such indicator of where choice is greatest.

There are plenty of competing rankings for best food cities floating around the internet. You can find lists for cities with the most restaurants, the best restaurants, the most distinctive local restaurants… and of course none of these seem to agree (although the “winners” tend to be similar among these lists).

But what about the cities that provide the most dining options per person? And what does restaurant variety have to do with a city’s livability?

One of the hallmarks of a great city is a smorgasbord of great places to eat. Cities offer a wide variety of choices of what, where, and how to eat, everything from grabbing a dollar taco to seven courses of artisanally curated locally raised products (not to mention pedigreed chickens). The “food scene” is an important component of the urban experience.

Restaurants are an important marker of the amenities that characterize attractive urban environments. Ed Glaeser and his colleagues found that “Cities with more restaurant and live performance theaters per capita have grown more quickly over the past 20 years both in the U.S. and in France.”

Matthew Holian and Matthew Kahn have found that an increase in the number of restaurants per capita in a downtown area has a statistically significant effect in reducing driving and lowering greenhouse gas production.

We’ve assembled data on the number of restaurants per capita in each of the nation’s largest metropolitan areas. These data are from the County Business Patterns data compiled by the US Census Bureau for 2012. Note that this category, technically NAICS 72251, includes both sit-down, table-service restaurants and simpler fast-food and self-service establishments. We’re also looking at metro-wide data to assure that the geographical units we’re comparing are defined in a similar fashion—political boundaries like city limits and county lines are arbitrary and vary widely from place to place, making them a poor basis for constructing this kind of comparison.
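
The underlying arithmetic is simple enough to sketch. Here is a minimal Python illustration of the per-10,000 rate and the resulting ranking; the establishment counts and populations are invented placeholders, not actual County Business Patterns figures:

```python
# Hypothetical sketch: ranking metros by restaurants per 10,000 residents.
# The counts and populations below are illustrative stand-ins, not real
# Census Bureau data.
metros = {
    "San Francisco": {"restaurants": 10_500, "population": 4_500_000},
    "Riverside":     {"restaurants": 6_200,  "population": 4_400_000},
}

def restaurants_per_10k(establishments, population):
    """Establishments per 10,000 residents."""
    return establishments / population * 10_000

# Sort metros from highest to lowest rate.
ranked = sorted(
    ((name, restaurants_per_10k(m["restaurants"], m["population"]))
     for name, m in metros.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, rate in ranked:
    print(f"{name}: {rate:.1f} per 10,000")
```

Normalizing by population (rather than comparing raw counts) is what lets a small metro like Providence outrank much larger regions on this measure.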

As you might guess, the metro areas with the most restaurants per capita are found predominantly in the Northeast and on the West Coast. Elsewhere, New Orleans scores high as well. While the average metropolitan area has about 17 restaurants per 10,000 residents, the range is considerable. The San Francisco metropolitan area has more than 23 restaurants per 10,000, while Riverside and Grand Rapids have only about 14 per 10,000. (On this map, areas shaded green have the highest number of restaurants per capita; areas shaded red have the fewest. Detailed data on individual metropolitan areas are shown in the table below.)

The top six metropolitan areas on this indicator are San Francisco, New York, Providence, Boston, Seattle and Portland. Each of these cities has twenty or more restaurants per 10,000 population. With the possible exception of Providence, all of these are recognized as major food cities in the US. (And Portland achieves its high ranking without counting the city’s more than 500 licensed food carts.)

In an important sense, the number of different restaurants in an area tracks the range of choices available to consumers. Cities that have more restaurants per capita tend to have smaller restaurants (measured by the average number of employees per restaurant). Interestingly, Las Vegas, which we think of as a tourism mecca, has fewer restaurants per capita than the average metropolitan area. A lot of this has to do with scale—the average restaurant in Las Vegas tends to be much larger than in other metropolitan areas.

This ranking doesn’t capture quality–simply quantity–but higher restaurants per capita can indicate greater competition (and therefore better-quality options), or higher demand (a signal that more diversity of options is valued, allowing for more valuable experiences).

While this isn’t a perfect listing of best food culture — each person’s measure of the ‘best food town’ is subjective — it does settle the debate over where you should go to have the largest selection of eatery options.

 

You are where you eat.

The Big Idea: Many metro areas vie for the title of “best food city.” But what cities have the most options for grabbing a bite to eat — and what does that say about where you live?


There are plenty of competing rankings for best food cities floating around the internet. You can find lists for cities with the most restaurants, the best restaurants, the most distinctive local restaurants… and of course none of these seem to agree (although the “winners” tend to be similar among these lists).

But what about the cities that provide the most dining options per person? And what does restaurant variety have to do with a city’s livability?

One of the hallmarks of a great city is a smorgasbord of great places to eat. Cities offer a wide variety of choices of what, where, and how to eat, everything from grabbing a dollar taco to seven courses of artisanally curated locally raised products (not to mention pedigreed chickens). The “food scene” is an important component of the urban experience.

Restaurants are an important marker of the amenities that characterize attractive urban environments. Ed Glaeser and his colleagues found that “Cities with more restaurant and live performance theaters per capita have grown more quickly over the past 20 years both in the U.S. and in France.”

Matthew Holian and Matthew Kahn have found that an increase in the number of restaurants per capita in a downtown area has a statistically significant effect in reducing driving and lowering greenhouse gas production.

We’ve assembled data on the number of full service restaurants per capita in each of the nation’s largest metropolitan areas. These data are from the County Business Patterns data compiled by the US Census Bureau for 2012. Note that the “full service” definition basically applies only to sit down, table service restaurants, not the broader category that includes fast food and self-service. We’re also looking at metro-wide data to assure that the geographical units we’re comparing are defined in a similar fashion—political boundaries like city limits and county lines are arbitrary and vary widely from place to place, making them a poor basis for constructing this kind of comparison.

As you might guess, the metro areas with the most restaurants per capita are found predominantly in the Northeast and on the West Coast. Elsewhere, New Orleans and Denver score high as well. While the average metropolitan area has about seven full-service restaurants per 10,000 residents, the range is considerable. The San Francisco metropolitan area has more than 11 restaurants per 10,000, while Riverside has only five and seven other metropolitan areas have fewer than six.

The top five metropolitan areas on this indicator are San Francisco, Providence, Portland, New York, and Seattle. Each of these cities has nine or more full-service restaurants per 10,000 population. With the possible exception of Providence, all of these are recognized as major food cities in the US. (And Portland achieves its high ranking without counting the city’s more than 500 licensed food carts.)

Interestingly, Las Vegas, which we think of as a tourism mecca, has fewer restaurants per capita than the average metropolitan area. A lot of this has to do with scale—the average restaurant in Las Vegas tends to be much larger than in other metropolitan areas. According to the Census Bureau, almost eight percent of Las Vegas restaurants employed more than 100 workers; nationally the average is only two percent.

This ranking doesn’t capture quality–simply quantity–but higher restaurants per capita can indicate greater competition (and therefore better-quality options), or higher demand (a signal that more diversity of options is valued, allowing for more valuable experiences). It is also highly correlated with per capita income, which makes sense: the more people who can afford frequent restaurant outings, the more restaurants there will be.

While this isn’t a perfect listing of best food culture — each person’s measure of the ‘best food town’ is subjective — it does settle the debate of where you should go to have the largest selection of eatery options. If you’re going to travel 2,000 miles for dinner, it might be wise to make a reservation. Or if you’re going to Portland, at least be ready to wait in line.

 

Photo courtesy of Janet at Flickr Creative Commons

How productive is your city?

Which metropolitan economies are the most productive?  Our broadest measure of economic output is gross domestic product — the total value of goods and services produced by our economy.  Economists usually compare the productivity of national economies by looking at GDP per capita or per worker.  At the sub-national level, the Bureau of Economic Analysis estimates an analogous concept, “Gross Metropolitan Product” — the total value of goods and services produced in a metropolitan area.

If we divide metropolitan GDP by population, we get a rough idea of which metropolitan economies are the most productive on a per person basis.  Nationally, gross metropolitan product averages about $55,000 per person in the nation’s largest metropolitan areas.
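
As a quick back-of-the-envelope illustration of that division, here is a hedged Python sketch; the dollar and population figures are round placeholders, not actual BEA estimates:

```python
# Illustrative sketch of gross metropolitan product per capita.
# All figures below are invented round numbers, not BEA data.
def gmp_per_capita(gmp_dollars, population):
    """Gross metropolitan product divided by population."""
    return gmp_dollars / population

san_jose  = gmp_per_capita(200_000_000_000, 1_900_000)   # ~$105k per person
riverside = gmp_per_capita(130_000_000_000, 4_400_000)   # ~$30k per person
gap = san_jose - riverside

print(f"San Jose:  ${san_jose:,.0f} per person")
print(f"Riverside: ${riverside:,.0f} per person")
print(f"Gap:       ${gap:,.0f} per person")
```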

The distribution is characterized by two distinct outliers: Riverside, CA, on the low end, and San Jose on the high end. The two cities are 400 miles apart, but San Jose’s GDP per capita is almost $75,000 higher than Riverside’s — a gap larger than the total per-person output of most metropolitan areas.

In general, it’s clear that the productivity of a few big cities in the northeast and west coast is much higher than those in the middle of the country. Nine metros have gross domestic product over $65,000 per capita, and the only one of these not on the east or west coast is Houston.

It should be noted that this pattern looks quite similar to the map of educational attainment: GDP per capita and educational attainment are highly correlated, and an increase in the level of talent in one’s city is associated with an increase in GDP.

We should keep in mind that gross product is a broad measure of economic activity:  it picks up the value of goods and services produced in an area, including the rental value of owner-occupied homes and returns to physical capital.  While most labor income in a metropolitan area goes to residents of that area, capital income often goes to owners who live elsewhere.  Since GMP measures the value of services where businesses are located, rather than where shareholders live, it apportions the capital returns for banks in New York to New York, and for software firms in Seattle to Seattle, rather than to the locations of these firms’ shareholders.

Some technical notes:  The Bureau of Economic Analysis measures gross domestic product of metropolitan areas in chained 2009 dollars.  These data are for calendar year 2013; annual data for 2014 should be released in the third quarter of this year.  You can explore GDP by industry sector to see which industries make the biggest contribution to regional output in each metropolitan area.  Detailed data are available on the BEA website:  http://www.bea.gov/regional/index.htm

Keeping it Weird:  The Secret to Portland’s Economic Success

Note: This article appeared originally in the February 13, 2010, edition of The Oregonian. Forgive any anachronistic references.

These are tough economic times. Although economists tell us the recession is officially over, a double-digit unemployment rate tells us something different. The bruising battle over the economic consequences of tax Measures 66 and 67 underscored deep disagreement — and uncertainty — about Oregon’s economic future.

What will we do for a strategy? I think you can find the answer hidden in plain sight. Keep Portland Weird. You’ve seen the bumper sticker around town. It’s funny and controversial. It’s spawned imitators (Keep Portland Beered, Keep Portland Wired) and competitors (Keep Vancouver Normal). But it’s not just a bumper sticker — it’s an economic strategy.

In a turbulent economy, being different and being open to new ideas about how to do things are remarkably important competitive advantages.

The bumper sticker may not be original — apparently the idea was imported from a buy-local campaign in Austin, Texas — but it is popular, with more than 18,000 of the stickers sold. And make no mistake, Portland is weird, at least compared with other major U.S. metropolitan areas.

We developed a weirdness index for the national organization CEOs for Cities that measures the differences in behavior based on 60 different indicators of what people do, watch, read and consume.

We used these data to rank the 50 largest metro areas, based on how closely their patterns tracked the overall national average. Portland ranks 11th of the 50.
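
We won’t reproduce the full index here, but one plausible way to score deviation from the national average can be sketched as follows. The indicator names and shares below are invented for illustration; the actual index drew on 60 indicators:

```python
# Hypothetical sketch of a "weirdness" score: how far a metro's behavior
# shares deviate, on average, from national norms. Indicator names and
# values are invented for illustration only.
national = {"camping": 0.10, "hybrid_ownership": 0.02, "golf": 0.08}
portland = {"camping": 0.20, "hybrid_ownership": 0.04, "golf": 0.11}

def weirdness(metro, baseline):
    """Mean absolute deviation from national norms, relative to each norm.
    A score of 0 means the metro exactly matches the national average."""
    devs = [abs(metro[k] - baseline[k]) / baseline[k] for k in baseline]
    return sum(devs) / len(devs)

print(f"Portland weirdness score: {weirdness(portland, national):.2f}")
```

Ranking metros by this kind of score puts the places that match national consumption patterns (the "most normal" Midwest metros) at the bottom and the most distinctive places at the top, without saying anything about *how* each weird city is weird.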

The most normal places in the country are in the Midwest. Consumption patterns, attitudes and behaviors in St. Louis, Kansas City, Cincinnati and Columbus almost exactly match national norms.

Trying to summarize weirdness in a single index is, of course, a contradiction in terms. Every weird city is weird in its own unique way. San Francisco and Salt Lake City rank among the weirdest — most different from the U.S. average in attitudes, activities and behaviors — but are nothing alike. So it makes sense to drill down to find out what makes each place distinctive.

In what ways is Portland weird? As you might expect, recreation, environmentalism, and great food and drink figure prominently. Compared with the U.S. average, Portlanders are twice as likely to go camping, 60 percent more likely to go hiking or backpacking and 40 percent more likely to golf or hunt. Portland has the highest per-capita ownership of hybrid vehicles of any city, and more people belong to environmental groups. We also rank above average in consumption of alcohol, coffee and tea.

Another way to track local weirdness is to look at what terms people are searching for on the Internet. According to Google over the past year, Portland ranks first among U.S. metro areas for the search terms “sustainability,” “vegan,” “farmers market,” “cyclocross,” “microbrew” and “dragonboat,” and second — after Seattle — for “espresso.”

But aside from winning bar bets or playing Trivial Pursuit, what’s the economic importance of being weird?

As it turns out, a lot.

When it comes to economic success in today’s economy, the key is to differentiate yourself from your competitors. Harvard Business School’s Michael Porter counsels businesses that “competitive strategy is about being different.” And the late, great urbanist Jane Jacobs told us, “The greatest asset that a city can have is something that’s different from every other place.”

Practical examples of how distinctive local behaviors translate into economic activity are right in our own backyard.

Back in the ’60s, at a time when most adults didn’t sweat in public if they could avoid it, people in Oregon started the trend of jogging and running for health. One guy started selling these people Japanese sneakers out of the back of his station wagon: Phil Knight. The company he founded is a global powerhouse.

A similar story could be told about two avowed ex-hippie home brewers, who as soon as it was legal to do so, started selling kegs of their beer to local restaurants out of the back of their Datsun pickup. Kurt and Rob Widmer, and a host of other amateurs turned entrepreneurs, ignited a trend that is even today reshaping the brewing industry.

The conventional business wisdom of the 1960s or 1970s would never have forecast that Portland would become a hotbed for two industries that were either in steep decline (shoes) or increasingly monopolized by giant corporations (beer). But with local consumers who were willing to take a flier on something new — and whose tastes anticipated a much larger shift in global attitudes — athletic apparel and microbrewing both became signature industry clusters in metropolitan Portland.

True entrepreneurship is about deviant behavior: starting a business that makes a product that no one else has thought of or thinks there’s a market for. Entrepreneurs and open-minded, experimental customers go hand-in-hand.

Openness to change isn’t just about new products or services; it’s about community and government as well. Oregonians’ willingness to test novel or untried ideas of all kinds — urban growth boundaries, modern streetcars, vote-by-mail, death-with-dignity — is both representative of a widely held attitude towards change and a powerful advantage in a fast-moving world.

And in many cases, innovative public policies are essential to growing new industries. Microbrewers owe their early start, in part, to Oregon’s decision to be one of the first states to legalize craft brewing. Many Portland businesses are exporting the knowledge gained from the region’s pioneering work in urban planning, streetcars, green buildings and cycling.

Openness to new ideas also is critical to attracting and retaining mobile, talented young people — the college-educated 25- to 34-year-olds I call the “young and restless.” Our in-depth national study of migration trends showed that over the past decade, Portland has seen a 50 percent increase in this group, the fifth-fastest growth of any large metro area.

Portland’s special character, and the sense that one can live their values and make a mark, are key to this migration. As one interviewee put it: “This place communicates to newcomers that it ‘isn’t done yet’ and that there’s an opportunity for me to contribute to what it will become.”

To be sure, the Keep Portland Weird mantra has spawned detractors and wags: Keep Vancouver Normal. Keep Portland Sanctimonious. We shouldn’t do things just to be different, but we should never be dissuaded from trying something simply because it is different or would make us different from other places.

Decades ago, Gov. Tom McCall understood and gave voice to this sense of Oregon exceptionalism, when he famously said, “Come visit, but don’t stay.” Our pioneering spirit runs deep. Remember, the state’s motto is “She flies with her own wings,” which in today’s parlance would be translated as marching to the beat of a different drum.

Keeping Portland Weird ought to be the theme of our economic strategy. Especially today. As Hunter S. Thompson advised, when the going gets weird, the weird turn pro. We can be reasonably certain that the U.S. and world economies will need to change dramatically to meet the challenges we face in coping with climate change, providing health care and building livable communities. In the days ahead, being weird can be a competitive advantage.

Making weirdness your marketing slogan turns the usual logic of boosterism on its head. The conventional wisdom prescribes emphasizing a “good business climate” — usually consisting of the same things you find everywhere else, just cheaper. Traditional strategies chiefly involve clinging to the past or shamelessly copying what everyone else is doing.

If one buys into the view that the “world is flat” — the metaphorical reference to a level playing field in a global market — the temptation is to focus on making yourself “flatter.” In reality, the world, though smaller and more tightly linked, isn’t flat. There are giant spikes of industry, creativity and inventiveness in particular places. So the key is to understand what your “spikes” are and capitalize on them. The alternative strategy — making Portland flatter — is a recipe for mediocrity and failure in a global knowledge-based economy, where the ability to generate new ideas and turn them into businesses and better communities is the only source of sustainable competitive advantage.

No one can predict what will be the industries of the future. They have to be invented and created through trial and error — lots of trials, and almost as many errors. A place that is open to new ideas — especially weird ones — is by its nature better positioned to generate the kinds of trials that lead to these new industries.

Why biotech strategies are often 21st century snake oil

Thanks to technological innovations, our lives are in many ways better, faster, and safer: We have better communications, faster, cheaper computing, and more sophisticated drugs and medical technology than ever before. And rightly, the debates about economic development focus on how we fuel the process of innovation. At City Observatory, we think this matters to cities, because cities are the crucibles of innovation, the places where smart people collaborate to create and perfect new ideas.

While the emphasis on innovation is the right one, like any widely accepted concept, there are those who look to profit from the frenzy of enthusiasm and expectation.

Around the country, dozens of cities and many states have committed themselves to biotech development strategies, hoping that by expanding the local base of medical research, they can generate commercial activity—and jobs—at companies that develop and sell new drugs and medical devices. There’s a powerful allure to trying to catch the next technological wave and using it to transform the local economy.

Over the past decade, for example, Florida has invested in excess of a billion dollars to lure medical research institutions from California, Massachusetts and as far away as Germany to set up shop in the Sunshine State. Governor Jeb Bush pitched biotech as a way to diversify Florida’s economy away from its traditional dependence on tourism and real estate development.

The historic Florida capitol. Credit: Stephen Nakatani, Flickr

 

Of course, it hasn’t panned out; Florida’s share of biotech venture capital—a key leading indicator of commercialization—hasn’t budged in the past decade. And several of the labs that took state subsidies are downsizing or folding up their operations as the state subsidies are largely spent. Massachusetts-based Draper Laboratories (which got $30 million from the state) recently announced it was consolidating its operations at its Boston headquarters and closing outposts in Tampa and St. Petersburg—in part because it was apparently unable to attract the key talent it needed. The Sanford-Burnham Institute, which got over $300 million in state and local subsidies, is contemplating leaving town and turning its Orlando facilities over to the local branch of the University of Florida.

And while Florida’s flagging biotech effort might be well-meant but unlucky, in one recent case the spectacular collapse of a development scheme has to be chalked up to outright fraud. As the San Francisco Chronicle’s Thomas Lee reports, both private and public investors have succumbed to the siren song of biotech investment. Last month, the Securities and Exchange Commission issued a multi-million-dollar fine, and a lifetime investment ban, to Stephen Burrill, a prominent San Francisco-based biotech industry analyst and fund manager. Burrill diverted millions of dollars meant for biotech startup funds to his personal use. Not only that, but Burrill was a key advisor to a private developer who landed $34 million in state and federal funds to build a highway interchange to serve a proposed biotech research park in rural Pine Island, Minnesota, based on Burrill’s promise that he could raise a billion-dollar investment fund to fill the park with startups. In the aftermath of the SEC action, Burrill is nowhere to be found, and the Elk Run biotech park sits empty.

But puffery and self-dealing are nothing new on the technological frontier, or indeed in the world of economic development. The most recent example is biomedical equipment maker Theranos, which claimed that it had produced a new technology for performing blood tests with just a single drop of blood. The startup garnered a $9 billion valuation, and conducted nearly 2 million tests before conceding that its core technology didn’t in fact work. Theranos has told hundreds of thousands of its patients that their test results are invalid. As ZeroHedge’s Tyler Durden relates, the company rode a wave of fawning media reports that praised its disruptive “nano” breakthrough technology (WIRED) and lionized its CEO as “the world’s youngest self-made female billionaire” and “the next Steve Jobs.” All that is now crashing to earth.

When it comes to biotech breakthroughs, consumers, investors and citizens are all easy prey for hucksters who simultaneously appeal to our fear of illness and disease and our hope—born of the actual improvements in technology—that theirs is just the next step in a long chain of successes. Investors pony up their money for biotech—even though nearly all biotech firms end up money losers, according to the most comprehensive study, undertaken by Harvard Business School’s Gary Pisano. And as my colleague Heike Mayer and I pointed out nearly a decade ago, it’s virtually impossible for a city that doesn’t already have a strong biotech cluster to develop one now that the industry has locked into centers like San Francisco, San Diego and Boston.

At first glance, biotech development strategies seemed like political losers: you incur most of the costs of building new research facilities and paying staff up front, and it takes years, or even decades, for the fruits of research to show up in the form of breakthroughs, products, profits and jobs. No mayor or governor could expect to still be in office by the time the benefits of their strategy were realized. But as it turns out, the distant prospects of success always enable biotech proponents to argue that their efforts simply haven’t yet been given enough time (and usually, also resources) to succeed. And likewise, no one can pronounce them failures. When asked why the struggling Scripps Institute in West Palm Beach hadn’t produced any of the spin-off activity expected, local economic developers had a ready explanation, reported the Palm Beach Post:

“Biotech officials urge patience and repeat the mantra that a science cluster needs decades to evolve. ‘This takes a lot of time to develop,’ said Kelly Smallridge, president of the Business Development Board of Palm Beach County.”
“The biotech bonanza Jeb Bush hoped for? It didn’t go as planned,” Palm Beach Post, June 15, 2015

So rather than being a liability, the long gestation period of biotech emerges as a political strength. Apparently, you’ve got to give the snake oil just a little bit more time to kick in.

Is life really better in Red States (and cities)?

The red state/blue state divide is a persistent feature of American politics. Political differences among states are also associated with important economic differences, and similar patterns hold across and within metro areas. Big cities are more likely to be blue, and smaller towns and rural areas red. The more densely populated portions of every metro area are also more likely to be blue. Understanding and eventually bridging these fissures is a major challenge for the nation.

In an article in last week’s New York Times, urbanist Richard Florida seems to have, if perhaps only inadvertently, given aid and comfort to the persistent myth that people are somehow worse off in big cities compared with smaller towns and suburbs.

It could be that this impression is amplified by the headline writer’s provocative question: “Is life better in America’s Red States?” While he doesn’t directly answer this question, Florida seems to imply that because housing is on average cheaper in red states, people who live there must be better off.

But is it the case that cheap housing is a reliable marker of economic well-being?

While it’s true that average home prices are higher in blue states, it’s important to consider why that is, and what it signifies. First and most importantly, blue state housing prices are driven higher because incomes and economic productivity are higher in bluer states and bluer cities. GDP per capita tends to be higher in metro areas that favored President Obama’s re-election by the widest margin, as shown here:

Note: if you hover over the orange trend line, you will see that the p-value is low and significant at the 1 percent threshold. (The p-value measures the statistical likelihood that the observed relationship between vote margin and productivity–measured by GDP per capita–could have arisen by chance; it tells us about correlation, not causality.) You can see the now familiar red-blue pattern on the attached map, where the size of the circle for each city corresponds to GDP per capita.
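For readers curious about the mechanics, the significance test behind a p-value like this can be sketched in a few lines: fit an ordinary least squares line and ask whether its slope is statistically distinguishable from zero. The figures below are hypothetical illustrations, not the chart’s actual data.

```python
import math

# Hypothetical illustrative data (NOT the chart's actual values):
# Obama 2012 vote margin (percentage points) and GDP per capita ($000s)
margin = [-10, -5, 0, 5, 12, 18, 25, 30, 35, 42]
gdp = [38, 41, 40, 45, 47, 52, 55, 58, 61, 66]

n = len(margin)
mx, my = sum(margin) / n, sum(gdp) / n
sxx = sum((x - mx) ** 2 for x in margin)
syy = sum((y - my) ** 2 for y in gdp)
sxy = sum((x - mx) * (y - my) for x, y in zip(margin, gdp))

slope = sxy / sxx                          # OLS slope of gdp on margin
r = sxy / math.sqrt(sxx * syy)             # correlation coefficient
# t statistic for the null hypothesis that the true slope is zero
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

print(f"slope={slope:.3f}, r={r:.3f}, t={t:.2f}")
# With n-2 = 8 degrees of freedom, the two-tailed 1 percent critical
# value is about 3.36; a |t| above that implies p < 0.01.
```

A significant slope, of course, establishes only that the association is unlikely to be a statistical fluke; as the note above says, it tells us nothing about which way the causality runs.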

The question then is, are higher housing prices in blue places an indication that the standard of living is lower?

Focusing on dollars per square foot misses the important fact that, unlike our stone age ancestors, we don’t rely on shelter solely as a means of warding off the cold, dark and wild beasts. We don’t value houses just as boxes—location matters. The reason a square foot of land in Manhattan is worth as much as an acre of farmland in North Dakota has everything to do with the access it provides to a range of services, experiences and goods.

To an economist, if people are willing to pay a higher price for something—like housing in Manhattan or San Francisco or Honolulu—it’s a good indication that it has a higher value. A big part of the reason housing prices are higher in bigger cities than small ones is that we value the personal and economic opportunities that come from being close to lots of other people. As University of Chicago economist and Nobel Laureate Robert Lucas famously put it: “What can people be paying Manhattan or downtown Chicago rents for, if not being near other people?”

Harvard’s Ed Glaeser, author of Triumph of the City, has explored this theme in great depth. Increasingly, he argues, the biggest driver of city growth is the consumption advantages of living in cities, with close proximity to a wide range of goods, services, experiences, social interactions and cultural activities. This “consumer city” theory means that cities increase the well-being of their residents by facilitating all kinds of consumption. Indeed there are whole categories of goods, and especially services that are simply unavailable at any price outside major cities: think of everything from diverse ethnic restaurants to specialized medical care to cutting edge live art and music.

Provocative new work by Jessie Handbury shows that once you adjust for the variety and quality of goods available in different places, the cost of living in big cities is actually lower than in smaller ones. Her work looks at variations in the price and availability of food. It’s almost certain that differences in services are even more skewed in favor of city residents.

Moreover, looking just at differences in housing costs ignores important city advantages of density, proximity and convenience. Higher rents invariably provide city residents with better physical access to jobs, shopping, culture and social interaction. As Scott Bernstein and his colleagues at the Center for Neighborhood Technology have shown, savings in transportation costs in cities largely, and in some cases fully, offset differences in rents.

People who live in blue cities drive much less on average than those who live in red cities, and the savings in time and expense are substantial. My own work shows that residents of blue Portland, Oregon drive about 20 percent less than residents of other large metro areas, saving them more than a billion dollars a year in transportation costs.

Florida makes one point that we all ought to pay attention to: as a nation we’d be much better off if we created more opportunities for people to live and work in blue cities. Because residents in big blue cities are so much more productive than otherwise identical workers in smaller red cities, we take a substantial hit to national economic productivity and growth. Enrico Moretti estimates GDP would be 13 percent or so higher if it weren’t for constrained population growth in these highly productive cities.

There’s an old adage that claims that an economist is someone who knows the price of everything and the value of nothing. Assuming that differences in house prices per square foot across metropolitan areas accurately reflect cost of living differences is arguably wrong. Cheap houses entail high costs for other things—like transportation—and believing that cheap houses automatically equal a better quality of life misses the huge and tangible differences in the price and availability of a whole range of goods, services and experiences that make life nicer.

The political message here ought to be that the high prices in blue cities generally, and the growing market premium for housing in dense, urban neighborhoods particularly, are a signal that Americans want more cities, and more opportunities for urban living. It’s a fair criticism of blue cities to say that they haven’t done a good enough job of making it possible for more people to live there—and this has a lot to do with local land use planning. But the shortage has also been amplified by decades of federal subsidies to sprawling low-density development.

One final addendum on Richard Florida’s political analysis: as troubling as the persistent red/blue divide is among states and cities, it’s probably wrong to attribute the 2014 election results to this dichotomy. The huge fall off in turnout, especially among younger voters compared to 2012, is clearly the big driver of November’s red tide. Not only was 2014 the lowest off-year election turnout—only 37 percent—in six decades, but the electorate skewed far older in 2014 than in 2012. Voters over 65 made up 22 percent of voters in 2014, up from 16 percent in 2012; voters under 30 made up 13 percent of the electorate down from 19 percent in 2012. The 2014 red surge wasn’t so much geographic as it was demographic.

Where are the food deserts?

One of the nation’s biggest health problems is the challenge of obesity: since the early 1960s, the share of Americans who are obese has increased from about 13 percent to 35 percent.

The problem is complex and deep-seated, and everything from our diet to our inactive lifestyles to the built environment has been implicated as a contributing factor.

Over the past five years, a new term has crept into our common lexicon of cities:  food desert.  (Google Trends reports almost no use of the term prior to 2009, and mentions have grown steadily since then).

The image of a food desert conveys a strong, specific image:  people who live so far from a grocery store with healthy food that they have little alternative but to subsist on the unhealthy alternatives close at hand.  But what exactly is a food desert?  And how many Americans have poor diets because of the distance they have to travel to reach a grocery store?

Judged by proximity to grocery stores nearly all of rural America is a food desert.  Nathan Yau at FlowingData uses Google maps data to construct a compelling map of how far it is to the nearest grocery store across the entire nation. The bleakest food deserts are the actual deserts of the American West, in Nevada and Wyoming.

City dwellers, particularly those in the biggest, most dense cities tend to live closest to supermarkets and have the best food access.  At City Observatory, we’re big fans of WalkScore, the app that computes a walkability index for any residential address in the U.S. based on its proximity to common destinations like stores, parks and schools.  Earlier this year, WalkScore used their data and modeling prowess to develop some clear, objective images of who does (and doesn’t) have a good grocery store nearby.  They estimate that 72 percent of New York City residents live within a five-minute walk of a grocery store.  At the other end of the spectrum, only about five percent of residents of Indianapolis and Oklahoma City are so close.  If you want to walk to the store, this data shows the real food deserts are in the suburbs.

There are other ways of measuring food access and mapping food deserts.  The U.S. Department of Agriculture and PolicyMap have both worked to generate their own maps of the nation’s food deserts.  They use a combination of physical proximity (how far it is to the nearest grocery store) and measurements of neighborhood income levels.

While it’s clear that income plays a big role in food access, it’s far from clear how to combine income and proximity to define food deserts.  The USDA uses an overlay which identifies low-income neighborhoods with limited food access.  PolicyMap has a complicated multi-step process that compares how far low-income residents have to travel to stores compared to higher income residents living in similarly dense neighborhoods.

In practice, combining neighborhood income and physical proximity actually muddles the definition of food access. First, and most important, it acknowledges that income, not physical distance, is the big factor in nutrition. Both of these methods imply that having wealthy neighbors or living in the countryside means that physical access to food is not a barrier. Second, it is your household’s income, not your neighbors’ income, that determines whether you can buy food. Third, these methods implicitly treat low income families differently depending on where they live. For example, PolicyMap excludes middle income and higher income neighborhoods from its definition of “limited supermarket access” areas—and therefore doesn’t count lower income families living in these areas as having poor food access.

The fact that both of these systems use a different yardstick for measuring accessibility in rural areas suggests that proximity isn’t really the issue. Rural residents are considered by USDA to have adequate food access if they live within ten miles of a grocery store, whereas otherwise identical urban residents are considered to have adequate access only if they live within a mile or half-mile of a store.

If we’re concerned about food access, we probably ought to focus our attention on poverty and a lack of income, not grocery store location.  The argument here parallels that of Nobel Prize-winning economist Amartya Sen, who pointed out that the cause of starvation and death in famines is seldom the physical lack of sufficient food, but instead the collapse of the incomes of the poor.  Sen’s conclusion was that governments should focus on raising incomes if they wanted to stave off hunger, rather than stockpiling or distributing foodstuffs.

It’s tempting to blame poor nutrition and obesity on a lack of convenient access to healthier choices, but the problem is more difficult and complex than that.  Poverty and poor education are strong correlates of poor nutrition and obesity.

Finally, it’s reasonable to question whether the physical proximity to healthier eating choices is the big driver of our hunger and nutrition problems.

Millions of Americans, rich and poor, walk right past the fresh vegetables and buy chips, soda, and other calorie-rich processed foods.  The “food desert” narrative is a convenient way of making it sound like personal choice doesn’t enter into the problem.  But studies show that there is no apparent relationship between a store’s mix of products and its customers’ body mass index (BMI) (Lear, Gasevic, and Schuurman, 2013). Limited experimental evidence suggests that improving the supply of fresh foods has little impact on food consumption patterns.  Preliminary results of a study of consumers in a Philadelphia neighborhood that got better supermarket access showed no improvement in fruit and vegetable consumption or body mass index, even for those who patronized the new store.

Of course, we have good reasons to believe that the built environment does play an important role in obesity—but that may have more to do with how easy it is to walk to all our daily destinations, and not just the distance to the fresh food aisle.

How Should Portland Pay for Streets?

For the past several months, Portland’s City Council has been wrestling with various proposals to raise additional funds to pay for maintaining and improving city streets. After considering a range of ideas, including fees on households and businesses, a progressive income tax, and a kind of Rube Goldberg income tax pro-rated to average gasoline consumption, the council has apparently thrown up its hands on designing its own solution.

The plan now is for the street fee solution to be laid at the feet of Portland voters in the form of a civic multiple choice test: Do you want to pay for streets with a monthly household street fee, a higher gas tax, a property tax, an income tax or something else entirely?

Given voter antipathy toward taxes of any kind, it’s likely that “none of the above” would win in a landslide if it were included as an option on the ballot (which is unlikely).

All of these options have their own merits and problems, and it’s doubtful that there is a majority consensus for any one of them. How, how much, and who pays for streets is a key issue for every city. From an urbanist and public finance perspective, and as a guide to thinking about which—if any—of these approaches Portland should adopt, here are my eight suggested rules for paying for streets:

1. Don’t tax houses to subsidize cars. Despite mythology to the contrary, cars don’t come close to paying for the cost of the transportation system. The Tax Foundation estimates that only 30% of the cost of roads is covered by user fees like the gas tax. Not only do cars get a free ride when it comes to covering the cost of public services—unlike homes, they’re exempt from the property tax—but we tax houses and businesses to pay for car-related costs. Here are three quick examples: While half of storm runoff is from streets, driveways and parking lots, cars aren’t charged anything for stormwater—but houses are. A big share of the fire department’s calls involve responding to car crashes—and cars pay nothing toward fire department costs. Similarly, the police department spends a significant amount of its energy enforcing traffic laws—this cost is borne largely by property taxes—which houses pay, but cars don’t. If we need more money for streets, it ought to be charged on cars.

Adding a further charge on houses to subsidize car travel only worsens a situation  in which those who don’t own cars subsidize those who do. One in seven Portland households doesn’t own a car, and because they generally have lower incomes than car owners, fees tied to housing redistribute income from the poor to the rich.

2. End socialism for private car storage in the public right of way. Except for downtown and a few close-in neighborhoods, we allow cars to convert public property to private use for unlimited free car storage. Not asking those who use this public resource to contribute to the cost of its construction and upkeep makes no sense and ultimately subsidizes car ownership and driving. This subsidy makes traffic worse and unfortunately—but understandably—makes it harder and more expensive to build more housing in the city’s walkable, accessible neighborhoods. If, as parking expert Don Shoup has suggested, we asked those who use the streets for overnight car storage to pay for the privilege, we’d go a long way in reducing the city’s transportation budget shortfall—plus, we’d make the city more livable.  We should learn from the city’s success in reforming handicapped parking that getting the prices right makes the whole system work better.

3. Reward behavior that makes the transportation system work better for everyone. Paying for the transportation system isn’t just about raising revenue—it should be about providing strong incentives for people to live, work and travel in ways that make the transportation system work better and make the city more livable. Those who bike, walk, use transit, and who don’t own cars (or own fewer cars) actually make the street system work better for the people who do own and use cars. We ought to structure our user fee system to encourage these car-free modes of transportation, and provide a financial reward to those who drive less. The problem with a flat-household fee or an income tax is it provides no incentive for people to change their behavior in a way that creates benefits for everyone.

4. Prioritize maintenance. There’s a very strong argument that we shouldn’t let streets deteriorate to the point where they require costly replacement. Filling potholes and periodically re-surfacing existing streets to protect the huge investment we’ve already made should always be the top priority. Sadly, this kind of routine maintenance takes a back seat to politically sexier proposals to expand capacity. We need an ironclad “fix it first” philosophy. Also, we need to guard against “scope creep” in maintenance. There’s a tendency, once a “repair” project gets moving, to opt for the most expensive solution (see bridges: Sellwood, Columbia River Crossing). That’s great if your project gets funded, but a few gold-plated replacements drain money that could produce much more benefit if spread widely.  We need to insist on lean, cost-effective maintenance.

5. Don’t play “bait and switch” by bonding revenue to pay for shiny, big projects. There’s an unfortunate and growing tendency for those in the transportation world to play bait-and-switch with maintenance needs. They’ll tell us about the big maintenance backlog to justify tax and fee increases. Then they bond two or three decades’ worth of future revenue to pay for a shiny new project; the Sellwood Bridge and the local share of the Portland-Milwaukie light rail have been funded largely by tying up the increase in state gas tax revenue, vehicle registration fees, and flexible federal funds for the next two decades. The state, which once routinely financed construction on a pay-as-you-go basis, has also maxed out its credit card: in 2002 ODOT spent less than 2% of its state revenue on debt service; today, it spends 35%. Now it is pleading poverty on highway maintenance. Politically, this makes a huge amount of sense.  You get to build the projects today, and pass the costs into the future. Unfortunately, in practice it leads to a few gold-plated projects now, while jeopardizing the financial viability of the transportation system in the long run.

6. Promote fairness through the “user pays” principle. We all want the system to be “fair.” In the case of general taxes, we often put a priority on progressivity—that taxes ought to be geared toward ability to pay. But for something like transportation (as with water rates, sewer rates, or parking meter charges), fairness is best achieved by tying the cost to the amount of use, or what economists call the “benefit principle.” Charges tied to use are fair for two important reasons: higher income people tend to use (in this case, drive) more than others, and therefore will end up paying more. Also, charges tied to use enable people to lower the amount they pay by changing their behavior.

7. Don’t buy the phony safety card. We’ll hear all about the need to spend money to make our streets safer. The safety argument is an all-purpose smokescreen to justify almost any expenditure, no matter how distantly related to safety. (Ostensibly, the $3.5 billion Columbia River Crossing project was justified as a “safety” project, even though the I-5 bridge had a lower crash rate than the Fremont Bridge). Here’s the key fact of street safety: Smaller, slower streets are safer. Metro’s region-wide analysis of crash data showed that fast-moving, multi-lane arterials are by far the most dangerous streets in the region for cars, cyclists, and pedestrians. The more we get people out of cars, the more crashes and injuries decline. The most effective thing we can do to improve safety turns out to be the cheapest: implement features that slow and calm traffic, and make walking, cycling, and transit more attractive.

Correction:  Commissioner Steve Novick points out correctly that his proposal contains a specific list of laudable safety projects that he proposes undertaking with street fee proceeds if his proposal is adopted.  These projects don’t fall into the “phony safety” category outlined above.  My apologies if this commentary implied otherwise.  Still, voters should consider two other things.  First, while the proposed list is a good one, it is “preliminary and subject to change” and isn’t binding on future city commissions, and the “safety” category is an elastic one.  Safety projects are defined as those that “reduce the likelihood of a person being killed or injured and address the perception of risk.”  Second, transportation money is very fungible.  It’s always possible to re-arrange the budget to tell someone that this “new” money is only being used for good purposes.  The larger question is the overall priorities for the entire transportation budget.  If safety spending out of current revenues is reduced, the net gain could be less than advertised. (Revised, 10:20 PM, January 8.)

8. Don’t write off the gas tax yet. There’s a widely repeated shibboleth that more fuel-efficient vehicles have made the gas tax obsolete. Despite its shortcomings as a revenue source—chiefly that it bears no relationship to the time of day or roadway that drivers use—there’s nothing wrong with the gas tax as a way to finance street maintenance that a higher tax rate wouldn’t solve. While other methods like a vehicle-miles-traveled fee make a lot of sense, the reason they’re popular with the transportation crowd is because they would be set high enough to raise more money. And there’s the rub: people are opposed to the gas tax not because of what is taxed, but because of how much they have to pay. As an incremental solution to our maintenance funding shortfall, there’s a lot to like about a higher gas tax: it requires no new administrative structure, it’s crudely proportionate to use, and it provides some incentives for better use of streets. So when very serious people gravely intone that the gas tax is “obsolete” or “politically impossible”—you should know what they’re really saying is that people simply don’t want to pay more for streets.

Transportation and urban livability are closely intertwined. Over the past few decades, it has become apparent that building our cities to cater to the needs of car traffic has produced lower levels of livability. There are good reasons to believe that throwing more money at the existing system of building and operating streets will do little to make city life better. How we choose to pay for our street system can play an important role in shaping the future of our city. As Portlanders weigh the different proposals for a street fee in the coming months, they should keep that thought at the top of their minds.

City Report: Lost in Place

Here’s a summary of our latest CityReport: Lost in Place: Why the persistence and spread of concentrated poverty–not gentrification–is our biggest urban challenge.

Lost in Place traces the history of high poverty neighborhoods in large US cities, and constructs a new view of the process of neighborhood change.  This article summarizes some of our key findings.  A complete guide to our report, including a PDF of the report narrative, sortable tables of metro area data and links to our neighborhood level maps are available here.

We were drawn to examine this question because of concerns frequently raised that efforts to promote urban revitalization have the unintended negative consequence of making life worse for the urban poor. The argument is made that improving a neighborhood simply results in one population (wealthier, whiter) moving in and the existing residents (poorer persons of color) moving out, with the result that the poor are worse off than they would have been had no revitalization occurred. There are some well-known examples of places that are now very high income–like Chelsea in Manhattan–which were once poor neighborhoods.

But the question is seldom asked: How representative are these instances? And how prominently do they figure in shaping the overall pattern and prevalence of urban poverty?

The term gentrification itself is fraught with discord. It is widely used and seldom precisely defined. In this study, we’ve set out to shed some light on the question by focusing on a single index of neighborhood well-being: the poverty rate. Despite its flaws, the poverty rate is a good marker of a neighborhood’s relative economic status over time. Moreover, critiques of gentrification flag its harm to the poor, so logically, we should find the most dramatic effects of gentrification in high poverty neighborhoods.

There are very good reasons we ought to be concerned about the plight of those living in high poverty neighborhoods. A growing body of social science research confirms that concentrated poverty magnifies all of the pathologies associated with poverty. Most troubling, new research shows that these effects make a major contribution to the intergenerational transmission of poverty: children growing up in neighborhoods of high poverty have permanently impaired life chances compared to otherwise identical children growing up in neighborhoods with low poverty.

Our data show that while striking when it happens, instances of gentrification of previously high-poverty neighborhoods are quite rare. Only about 5 percent of the poor living in urban high-poverty neighborhoods in 1970 would have found that their neighborhood saw its poverty rate decline to less than the national average four decades later.

Three-quarters of high-poverty neighborhoods were still places of high poverty four decades later. But they were far from stable; on average these chronically-poor neighborhoods lost 40 percent of their population over four decades. High-poverty neighborhoods are not stable or sustainable; they are in a steady process of decay. It’s an illusion to suggest that in the absence of gentrification, a poor neighborhood will remain the same.

Infographic

If we’re concerned about the poor and about concentrated poverty, our attention should be riveted to a much larger and more ominous trend: the growth of new neighborhoods of high poverty. Between 1970 and 2010, the number of urban neighborhoods with poverty rates exceeding 30 percent nearly tripled, to 3,100, and the number of poor persons living in these neighborhoods doubled from 2 million to 4 million.

A majority of these newly-poor neighborhoods were places that in 1970 had poverty rates below the national average–places we call “falling stars.” They were arguably middle class 40 years ago, and today are neighborhoods of high poverty.

The sheer scale of the spread of concentrated poverty emphasizes how modest the effect of gentrification has been on the location of the poor. The number of poor living in high-poverty neighborhoods that rebounded since 1970 declined by about 67,000. This is a good maximum estimate of the “displacement” associated with gentrification. Over the same time, the number of poor persons living in newly poor neighborhoods increased by 1,250,000. This suggests that at most, the relocation of the poor attributable to gentrification accounts for perhaps five percent of the increase of population living in concentrated poverty.
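The arithmetic behind that five percent figure is straightforward, and a quick back-of-the-envelope check using the report’s two totals confirms it:

```python
# Totals cited in the report (1970-2010):
displaced_max = 67_000         # decline in poor living in rebounded tracts
newly_poor_growth = 1_250_000  # increase in poor living in newly poor tracts

# Maximum plausible displacement as a share of the growth in
# concentrated poverty over the same period.
share = displaced_max / newly_poor_growth
print(f"{share:.1%}")  # on the order of five percent
```

In other words, even crediting every poor resident of a rebounded neighborhood as “displaced,” displacement is a small fraction of the far larger flow of poor residents into newly high-poverty neighborhoods.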

We’re coming to understand that place plays a big role in shaping economic opportunity. How we build our cities–and whether we allow concentrated poverty to persist and spread–will have a profound impact on whether future generations will continue to share the American dream.


For more information regarding economic opportunity, economic segregation, and concentrated poverty, go here, and to see the full report including metro-level dashboards and maps, go here.

Understanding Your City’s Distinctiveness Through Occupational Data

At City Observatory, we’ve come to the conclusion that every city has its own unique characteristics that both define its identity and play a key role in shaping its economic opportunities.  These distinctive traits don’t always shine through in conventional economic data, which leads us to look for the rare statistics that convey more nuance about each place.

One such data source is the Bureau of Labor Statistics Occupational Employment Statistics (OES).  The OES includes metropolitan-level estimates of the number of workers in occupational categories, as well as estimates of the range of pay levels.  It’s possible to use the occupational employment estimates to calculate a location quotient–a measure of specialization that shows how much larger or smaller a share of a region’s employment base a particular occupation accounts for, relative to the national average.  We used the OES data to identify the occupation in each metropolitan area with the highest location quotient–the occupation that is most disproportionately likely to be found in that region compared to all others.  Note that the occupation with the highest location quotient is not necessarily the most common occupation in the region, just the one that is more concentrated there than any other occupation, relative to the typical metropolitan area.
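The location quotient calculation can be sketched in a few lines of Python. The employment counts below are invented for illustration, not actual OES estimates:

```python
# Location quotient: an occupation's share of metro employment divided by
# its share of national employment. Values above 1.0 indicate specialization.

# Hypothetical employment counts (occupation -> jobs), for illustration only.
national = {"fashion designers": 20_000, "software developers": 1_500_000,
            "total": 150_000_000}
metro = {"fashion designers": 8_000, "software developers": 60_000,
         "total": 9_000_000}

def location_quotient(occupation: str) -> float:
    local_share = metro[occupation] / metro["total"]
    national_share = national[occupation] / national["total"]
    return local_share / national_share

# The metro's most distinctive occupation is the one with the highest LQ --
# not necessarily its most common occupation.
occupations = [occ for occ in metro if occ != "total"]
most_distinctive = max(occupations, key=location_quotient)
print(most_distinctive, round(location_quotient(most_distinctive), 1))
# prints: fashion designers 6.7
```

Here fashion designers are a tiny occupation in absolute terms, but they make up nearly seven times as large a share of the hypothetical metro's employment as they do nationally, so they top the specialization ranking.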

Occupations with high location quotients are indicators of a city’s knowledge specializations.   While it’s hard to measure knowledge directly, occupational data give us a window into where the most highly developed knowledge is located. These knowledge specializations have important economic development implications.  If you’re looking to grow a business and be successful, you want a pool of talented people who understand your industry, its technology, and its markets.  The occupational data shed light on the concentrations of specially talented workers.  The leading specializations for each of the nation’s largest metropolitan areas are shown here.



 

These data confirm many of our intuitive notions about the clustering of industries, knowledge, and occupations.  New York’s leading occupation is fashion designers, Los Angeles’s is media and communication workers.  Las Vegas is the leader for gaming workers, Washington for political scientists, and blue collar Milwaukee for foundry mold and coremakers.

It’s interesting to compare metro specializations to those for entire states.  We compared our results to a similar analysis of state occupational distinctiveness–“Which job is most unique to your state?”–published by the website Mental Floss a couple of months ago.

Many cities share their principal occupational specialization with the state they are located in, but in other cases, there’s evidence of an urban-rural divide.  In Louisiana the most distinctive occupation is captains, mates and pilots of water vessels, while in New Orleans, it’s entertainers, performers, and sports workers. Oregon’s leading occupation is logging workers, but in metro Portland, the most specialized occupation is semiconductor processors.

Occupational data provide a rich source of insight into the knowledge, skills, and abilities of a region’s workers.  Those who want to explore the occupational approach to understanding city distinctiveness should read this paper by Ann Markusen and Greg Schrock.

 

City Report: America’s Most Diverse, Mixed Income Neighborhoods

Today we’re releasing our latest CityReport: America’s Most Diverse, Mixed Income Neighborhoods.

In this report, we use Census data to identify those neighborhoods that have the highest levels of both racial/ethnic and income diversity among all urban neighborhoods in the US.

We were motivated to take on this analysis, in part, because so much attention is focused on the cleavages and segregation of American cities. There’s little question that we’ve become increasingly divided by income, and that racial and ethnic segregation still underpin the persistence of poverty and the lack of opportunity for too many Americans.  And while our country is divided in many ways, we thought it would be helpful to look at those places where our growing diversity was reflected in a neighborhood that was occupied by households of every economic strata.

That’s what this report does:  we look at the places that have the highest levels of racial/ethnic diversity, measured by the likelihood that any two randomly selected neighborhood residents would be from two different racial/ethnic groups (white, black, Latino, Asian or other).  We constructed a parallel measure of income diversity based on the representation of five different household income groups in a neighborhood.  In both cases, we identified the neighborhoods that are in the top twenty percent of all urban US neighborhoods based on each of our measures of diversity.
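The racial/ethnic diversity measure described here–the chance that two randomly selected residents belong to different groups–can be computed as one minus the sum of squared group shares (a Gini-Simpson-style index). A minimal sketch, with hypothetical neighborhood shares:

```python
# Diversity index: probability that two randomly selected residents belong
# to different racial/ethnic groups = 1 - sum of squared group shares.

def diversity_index(shares):
    assert abs(sum(shares) - 1.0) < 1e-9, "shares must sum to 1"
    return 1.0 - sum(s * s for s in shares)

# Hypothetical neighborhoods; shares for (white, black, Latino, Asian, other).
homogeneous = [1.00, 0.00, 0.00, 0.00, 0.00]   # one group only
mixed       = [0.30, 0.25, 0.25, 0.15, 0.05]

print(diversity_index(homogeneous))  # 0.0 -- not diverse at all
print(diversity_index(mixed))        # 0.76 -- two random residents usually differ
```

This makes the report's definitional point concrete: a neighborhood that is 100 percent any one group scores zero, no matter which group it is.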

Our core finding is that there are more than 1,300 such neighborhoods in the US that are home to nearly 7 million Americans.  While about half of these neighborhoods are in just three large metro areas (New York, San Francisco and Los Angeles), nearly every large US metropolitan area has at least one neighborhood that is among the nation’s most diverse, mixed income neighborhoods.

One challenge we face in reporting our results is that the word diversity has become a colloquial euphemism for “people of color.” This report uses the word diversity in a more precise, mathematical context:  diverse means people of different racial and ethnic groups, not simply people of color.  A neighborhood that is 100% Asian or 100% Latino or 100% white or 100% black is not diverse.

Our interest in identifying diverse, mixed income neighborhoods is heightened by the growing body of social science research that shows the widespread negative effects of segregation for cities, for neighborhoods, for families and especially for children. The American Dream, that any child can grow up to achieve success, has been effectively denied to many of those who grow up in neighborhoods that are segregated, where children are cut off from resources and networks that lead to opportunity.

In a sense, these diverse mixed income neighborhoods may provide examples and insights about how we can fashion our cities to be more inclusive.

At least some of the neighborhoods we’ve identified as the most diverse, mixed income are those that are also frequently described as gentrifying. Gentrification is a hot topic in all three of the metro areas we count as having the most diverse, mixed income neighborhoods (New York, Los Angeles and San Francisco). Places like Bedford-Stuyvesant, San Francisco’s Mission District, and downtown Los Angeles all show up as being among the most racially and ethnically diverse, and mixed income, of any metropolitan neighborhoods.

The big question, going forward, is whether rapidly changing gentrifying neighborhoods can maintain this income and ethnic diversity, or whether they will inexorably transition to being all upper income and predominantly white. The available evidence suggests that there’s little likelihood of that happening. Of the neighborhoods that transitioned to multi-ethnic status between 1970 and 1990, fully 90 percent were still multi-ethnic in 2010.  In addition, what happens in these gentrifying neighborhoods is subject to public policy. Cities that use the increase in property values and attendant tax revenues from revitalization to help support affordable housing construction can help assure that gentrifying neighborhoods remain accessible to a wide range of income groups. In addition, how we invest in public space can create opportunities to build bridging social capital between new arrivals and long time residents.


For the full report, including metro level data and maps, visit our CityReport page here. 

Young and Restless

The Young and Restless—25 to 34 year-olds with a bachelor’s degree or higher level of education—are increasingly moving to the close-in neighborhoods of the nation’s large metropolitan areas.   This migration is fueling economic growth and urban revitalization.

Using data from the recently released American Community Survey, this report examines population change in the 51 metropolitan areas with 1 million or more population, and focuses on the change in population in close-in neighborhoods, those places within 3 miles of the center of each metropolitan area’s primary central business district.

Urban cores attracted increased numbers of young adults even in metropolitan areas that were losing population and hemorrhaging talented young workers.  Metropolitan Buffalo, Cleveland, New Orleans and Pittsburgh, all of which experienced population declines over the past decade, saw an increase in the number of young adults with a college degree in their close-in neighborhoods.  (In these cases, the numerical increases were from small bases, but show that the urban core is attractive even in these economically troubled regions).

Overall these close-in neighborhoods have higher levels of educational attainment among their young adult population than the overall metropolitan areas of which they are a part.  The college attainment rate of young adults living in close-in neighborhoods in the largest metropolitan areas increased to 55 percent from 43 percent in 2000.  Outside the three-mile urban core, educational attainment rates increased slightly from about 31 percent to about 35 percent.

Talented young workers are both economically important in their own right—playing especially important roles in meeting the labor needs of fast-growing knowledge-based firms—and also as a kind of indicator of the overall health and attractiveness of a metropolitan area.  And despite the decline in overall migration rates in the U.S., they remain highly mobile.  With a million young adults moving each year, the stakes are large.

To see how your city fares, peruse the tables and map below:

 

Cover Photo courtesy of Total Due and Flickr Creative Commons.

Measuring “anti-social” capital

The number of security guards is a good measure of a city’s level of “anti-social” capital

In his book Bowling Alone, Robert Putnam popularized the term “social capital.” Putnam also developed a clever series of statistics for measuring social capital. He looked at survey data about interpersonal trust (can most people be trusted?) as well as behavioral data (do people regularly visit neighbors, attend public meetings, belong to civic organizations?). Putnam’s measures try to capture the extent to which social interaction is underpinned by widely shared norms of openness and reciprocity.

It seems logical to assume that there are some characteristics of place which signify the absence of social capital. One of these is the amount of effort that people spend to protect their lives and property. In a trusting utopia, we might give little thought to locking our doors or thinking about a “safe” route to travel. In a more troubled community, we have to devote more of our time, energy, and work to looking over our shoulders and protecting what we have.

The presence of security guards in a place is arguably a good indicator of this “negative social capital.” Guards are needed because a place otherwise lacks the norms of reciprocity that are needed to assure good order and behavior. The steady increase in the number of security guards and the number of places (apartments, dormitories, public buildings) to which access is secured by guards indicates the absence of trust.

The number of security guards in the United States has increased from about 600,000 in 1980 to more than 1,000,000 in 2000 (Strom et al., 2010). These figures represent a steep increase from earlier years. In 1960, there were only about 250,000 guards, watchmen and doormen, according to the Census (which used a different occupational classification scheme than is used today). The Bureau of Labor Statistics reports that the number of US security guards has increased by almost 100,000 since 2010, to a total of more than 1.1 million. As a measure of how paranoid and unwelcoming we are as a nation, security guards outnumber receptionists by more than 100,000 workers nationally.

Sam Bowles and Arjun Jayadev argue that we have become “one nation under guard” and say that the growth of guard labor is symptomatic of growing inequality. The U.S. has the dubious distinction of employing a larger share of its workers as guards than other industrialized nations and there seems to be a correlation between national income inequality and guard labor.

Just as the U.S. has a higher fraction of security guards than other nations, some cities have more security guards than others. To understand these patterns, we’ve compiled Bureau of Labor Statistics data from the Occupational Employment Statistics survey on private security guards. BLS defines security guards as persons who guard, patrol, or monitor premises to prevent theft, violence, or infractions of rules, and who may operate x-ray and metal detector equipment. (The definition excludes TSA airport security workers.)

These occupational data report the number of security guards in every large metropolitan area in the country. Adjusting these counts by the size of the workforce in each metro area tells us which places have proportionately the most security guards–which are arguably the least trusting–and which places have the fewest security guards, which may tend to indicate higher levels of social trust. We rank metropolitan areas by the BLS estimates of the number of security guards per 1,000 workers.  (For particularly large metro areas, we report BLS estimates for the largest metropolitan division in the metro area.)
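The per-1,000-worker adjustment and ranking is a straightforward rate calculation; a sketch with invented counts (not the actual BLS figures):

```python
# Security guards per 1,000 workers, ranked from most to fewest.
# The counts below are invented for illustration, not actual BLS estimates.
metros = {
    "Las Vegas":   {"guards": 19_000, "workers": 1_000_000},
    "Miami":       {"guards": 21_600, "workers": 1_200_000},
    "Minneapolis": {"guards": 10_800, "workers": 1_800_000},
}

def guards_per_1000(name: str) -> float:
    d = metros[name]
    return 1000 * d["guards"] / d["workers"]

# Rank metros by rate, most guard-intensive first.
for name in sorted(metros, key=guards_per_1000, reverse=True):
    print(f"{name}: {guards_per_1000(name):.1f}")
```

Normalizing by workforce rather than raw counts is what lets a mid-sized metro like Las Vegas outrank much larger regions on this measure.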

Security Guards per 1,000 Workers, 2017

At the top of the list is Las Vegas. While the typical large metro area has about 8 security guards per 1,000 workers, Las Vegas has 19 per 1,000.  Miami ranks second, with more than twice as many (18 per 1,000) as the average large metro. Other cities with high ratios of security guards to workers are Memphis, New Orleans, and Baltimore. Washington D.C., with its high concentration of government offices, defense and intelligence agencies, and federal contractors, also has a high proportion of security guards.

At the other end of the spectrum are a number of cities in which the ratio of security guards to workforce is one-third lower than in the typical metro area. At the bottom of the list are Minneapolis-St. Paul, Grand Rapids and Portland, all with fewer than six security guards per 1,000 workers. (The Twin Cities and Portland also do well on most of Putnam’s measures of social capital.)

It seems somewhat paradoxical, but the salaries paid to security guards get treated as a net contribution to gross domestic product. Yet, in many important senses, security guards don’t add to the overall value of goods and services so much as they serve to keep the ownership of those goods and services from being rearranged. As Nobel prize winning economist Douglass North has argued, we ought to view the cost of enforcing property rights as a “transaction cost.” In that sense, cities that require lots of guards to assure that property isn’t stolen or damaged and that residents, workers, or customers aren’t victimized, actually have higher costs of living and doing business than other places. These limits on easy interaction may stifle some of the key advantages to being in cities.

The varying thickness of the blue line

Cops per capita: An indicator of “anti-social” capital?

Why do some cities have vastly fewer police officers relative to their population than others?

In the 1966 film “The Thin Blue Line,” director William Friedkin explored the role police officers play in protecting the broader populace from violence and disorder. As we’ve frequently noted at City Observatory, there’s been a marked and, in many ways, under-appreciated decline in crime rates in American cities.  In the typical large city, crime is less than half what it was when Friedkin filmed.  Interestingly, the thickness of the “blue line” varies widely across US metro areas. We think that’s a possible indicator of which places perceive they need more police in order to live safely.  The fact that some cities have far fewer police than others suggests that social capital and other factors deterring crime may be more important in explaining variations in crime rates.

If it seems like there are a lot of police in New York, you’re right.

Previously, we’ve used counts of the number of security guards per capita as an indicator of “anti-social” capital. Our measurement built on the idea of social capital explained by Robert Putnam, in his book Bowling Alone. Putnam developed a clever series of statistics for measuring social capital. He looked at survey data about interpersonal trust (can most people be trusted?) as well as behavioral data (do people regularly visit neighbors, attend public meetings, belong to civic organizations?). Putnam’s measures try to capture the extent to which social interaction is underpinned by widely shared norms of openness and reciprocity.

It seems logical to assume that there are some characteristics of place which signify the absence of social capital. One of these is the amount of effort that people spend to protect their lives and property. In a trusting utopia, we might give little thought to locking our doors or thinking about a “safe” route to travel. In a more troubled community, we have to devote more of our time, energy, and work to looking over our shoulders and protecting what we have.

We argued that the presence of security guards in a place is arguably a good indicator of this “negative social capital.” Guards are needed because a place otherwise lacks the norms of reciprocity that are needed to assure good order and behavior. The steady increase in the number of security guards and the number of places (apartments, dormitories, public buildings) to which access is secured by guards indicates the absence of trust.

Might the same notion apply to public safety officers? If some places feel the need to hire more police to feel safe, doesn’t that suggest an absence of social capital? A few weeks back, we were introduced to an analysis of the police-to-population ratio by state. Compiled by Bill McGonigle, this analysis used data from the FBI’s Crime in the United States to estimate the total number of police in each state, and then divided the result by population. That got us thinking about creating a similar index for metropolitan areas. The FBI’s data aren’t reported by MSA, so instead we looked to the Census Bureau.

We undertake this comparison at the metropolitan level, using data from the Census Bureau’s American Community Survey. For the most part, using metro data nets out the effects of the wide variations in central city boundaries from place to place, which tend to confound municipal comparisons. (For example, the cities of Miami and Atlanta include less than 10 percent of the population of their metro areas, while Jacksonville and San Antonio include a majority, including areas that would be regarded as “suburban” elsewhere.) The ACS asks respondents about their occupation; three occupations correspond to police officers:

3710:  First-line supervisors of police and detectives

3820:  Detectives and criminal investigators

3870:  Police officers

We used the University of Minnesota’s invaluable IPUMS* data source to tabulate these data by metropolitan area. The underlying data are from the 2014-2018 five-year American Community Survey.  There’s one underlying quirk of the ACS data to be aware of:  respondents are classified according to where they live, rather than where they work. Because most metropolitan areas are large and encompass entire labor markets, that’s a reasonably accurate way of counting; but in metro areas where people commute from outside the metro area, this may not accurately count the number of police employed locally.
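Tabulating the three police occupation codes from microdata amounts to a filtered group-by. A minimal sketch using hypothetical person records (the non-police codes below are placeholders, and real IPUMS extracts also carry person weights, which are omitted here):

```python
from collections import Counter

POLICE_OCC_CODES = {3710, 3820, 3870}  # supervisors, detectives, officers

# Hypothetical microdata records (metro, occupation code), one per person;
# the non-police codes here are placeholders, not real ACS codes.
records = [
    ("New York", 3870), ("New York", 3820), ("New York", 5240),
    ("Portland", 3870), ("Portland", 4720),
]

# Count police per metro. A production tabulation from IPUMS would sum
# person weights (PERWT) rather than counting raw records.
police_by_metro = Counter(metro for metro, occ in records
                          if occ in POLICE_OCC_CODES)
print(police_by_metro)  # Counter({'New York': 2, 'Portland': 1})
```

Dividing each metro's count by its population and scaling by 1,000 then yields the police-per-1,000-residents figures discussed below.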

When we tabulate the data for metropolitan areas with a million or more population, and divide the number of police by the population of each metro area, we get the following ranking.  (We report the number of police officers per 1,000 population; metro areas with the fewest police per capita are shown at the top of the list.)

There’s a wide variation in the number of police per capita across metro areas.  While the median metropolitan area has about 3.3 police officers per 1,000 population, some have as few as 2.4, while others have 5 or more.

The cities with the fewest police officers include San Jose, Portland, Salt Lake City, Minneapolis and Seattle.  The top cities on our list mostly coincide with the top states on McGonigle’s list of police-population ratios.  Oregon, Washington, Minnesota and Utah rank first, second, fourth and fifth, respectively, among the states with the fewest police officers per capita. (The Twin Cities, Seattle, Salt Lake and Portland also do well on most of Putnam’s measures of social capital.)

Recall that our data count the number of police living in each metro area. We suspect that the relatively low number of police per thousand population in San Jose (1.6) and Los Angeles (2.4) reflects the high cost of housing and long-distance commuting in these areas. Riverside, which is adjacent to Los Angeles, has a much higher than average number of police per 1,000 population (4.5).  It seems likely that proportionately more police officers commute from adjacent areas outside the Los Angeles and San Jose metro areas, which have lower housing costs.

The metro areas with the most police officers per capita include Virginia Beach, Las Vegas, and Miami.  Some of the cities with high numbers of police fit our media stereotypes:  Law and Order (New York) and The Wire (Baltimore) both rank in the top five for police per capita, and both have at least 50 percent more police per capita than the typical large US metro.

Security Guards and Police Officers

As we mentioned, we’ve previously looked at the number of security guards per capita as another indicator of “anti-social capital.” We thought we’d look at the relationship between the number of police officers per capita and the number of security guards per capita. In theory, private security guards could be filling a gap–that is, being more common in places where the public sector isn’t providing “enough” security. Alternatively, fear or security concerns could lead to both more public police and more security guards in some cities, and fewer in others.

The data strongly support the latter interpretation. The following chart shows the per capita number of police (from the chart above) and the per capita number of security guards (from the same ACS survey from which we drew our police officer counts). Each dot represents one of the largest US metro areas. We’ve excluded three metro areas from our calculations: San Jose and Los Angeles (because of the commuting issue discussed above) and Las Vegas, because it is a wide outlier, with far more security guards per capita than any other city.

There’s a strong positive correlation between the number of police per capita and the number of security guards per capita in a metropolitan area. Places that tend to have more police also tend to have more security guards. Portland, Seattle and Minneapolis all rank low in both the number of security guards and police per capita.  Conversely, New York, Washington, Baltimore and New Orleans have high numbers of both police and security guards. Most cities fall relatively close to the regression line we’ve plotted on the chart, but there are some outliers. Miami and Orlando have relatively more private security guards than police, while Virginia Beach has many more police than security guards. This tends to reinforce our view that our metric is reflecting anti-social capital, or perhaps more accurately, the absence of social capital in some cities. Both the public sector and the private sector spend considerably more resources in some metro areas than others in order to protect persons and property, almost certainly because they believe that localized norms of behavior and reciprocity are inadequate.
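A correlation of this kind can be checked with an ordinary Pearson coefficient; a sketch in plain Python, using made-up per-capita figures rather than the actual values behind the chart:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (police per 1,000, guards per 1,000) pairs for five metros.
police = [2.4, 2.8, 3.3, 4.1, 5.0]
guards = [5.5, 6.0, 8.0, 9.5, 12.0]
print(round(pearson_r(police, guards), 2))  # a value close to 1
```

A value near +1 corresponds to the pattern in the chart: metros with more police per capita also tend to have more security guards per capita. In practice, one would also drop outliers like Las Vegas, as the text describes, before fitting the regression line.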

 

* – Steven Ruggles, Sarah Flood, Ronald Goeken, Josiah Grover, Erin Meyer, Jose Pacas and Matthew Sobek. IPUMS USA: Version 10.0 [dataset]. Minneapolis, MN: IPUMS, 2020. https://doi.org/10.18128/D010.V10.0

Anti-Social Capital?

In his book Bowling Alone, Robert Putnam popularized the term “social capital.” Putnam also developed a clever series of statistics for measuring social capital. He looked at survey data about interpersonal trust (can most people be trusted?) as well as behavioral data (do people regularly visit neighbors, attend public meetings, belong to civic organizations?). Putnam’s measures try to capture the extent to which social interaction is underpinned by widely shared norms of openness and reciprocity.

It seems logical to assume that there are some characteristics of place which signify the absence of social capital. One of these is the amount of effort that people spend to protect their lives and property. In a trusting utopia, we might give little thought to locking our doors or thinking about a “safe” route to travel. In a more troubled community, we have to devote more of our time, energy, and work to looking over our shoulders and protecting what we have.

The presence of security guards in a place is arguably a good indicator of this “negative social capital.” Guards are needed because a place otherwise lacks the norms of reciprocity that are needed to assure good order and behavior. The steady increase in the number of security guards and the number of places (apartments, dormitories, public buildings) to which access is secured by guards indicates the absence of trust.

The number of security guards in the United States has increased from about 600,000 in 1980 to more than 1,000,000 in 2000 (Strom et al., 2010). These figures represent a steep increase from earlier years. In 1960, there were only about 250,000 guards, watchmen and doormen, according to the Census (which used a different occupational classification scheme than is used today).

This trend has led Sam Bowles and Arjun Jayadev to argue that we have become “one nation under guard” and that the growth of guard labor is symptomatic of growing inequality. The U.S. has the dubious distinction of employing a larger share of its workers as guards than other industrialized nations and there seems to be a correlation between national income inequality and guard labor.

Just as the U.S. has a higher fraction of security guards than other nations, some cities have more security guards than others. To understand these patterns, we’ve compiled Bureau of Labor Statistics data from the Occupational Employment Statistics survey on private security guards. BLS defines security guards as persons who guard, patrol, or monitor premises to prevent theft, violence, or infractions of rules, and who may operate x-ray and metal detector equipment. (The definition excludes TSA airport security workers.) In 2015, there were more than 1,050,000 security guards in the US.

These occupational data report the number of security guards in every large metropolitan area in the country. Adjusting these counts by the size of the population in each metro area tells us which places have proportionately the most security guards–arguably the least trusting–and which places have the fewest, which may indicate higher levels of social trust.

Here are the data:

At the top of the list is Las Vegas. While the typical large metro area has about 39 security guards per 10,000 population, Las Vegas has more than twice as many (86 per 10,000). Other cities with high ratios of security guards to population are Memphis, New Orleans, Miami and Baltimore. Washington D.C., with its high concentration of government offices, defense and intelligence agencies, and federal contractors, also has a high proportion of security guards.

At the other end of the spectrum are a number of cities in which the ratio of security guards to population is one-third lower than in the typical metro area. At the bottom of the list are Minneapolis-St. Paul, Providence and Portland. (The Twin Cities and Portland also do well on most of Putnam’s measures of social capital.)

It seems somewhat paradoxical, but the salaries paid to security guards get treated as a net contribution to gross domestic product. Yet, in many important senses, security guards don’t add to the overall value of goods and services so much as they serve to keep the ownership of those goods and services from being rearranged. As Nobel prize winning economist Douglass North has argued, we ought to view the cost of enforcing property rights as a “transaction cost.” In that sense, cities that require lots of guards to assure that property isn’t stolen or damaged and that residents, workers, or customers aren’t victimized, actually have higher costs of living and doing business than other places. These limits on easy interaction may stifle some of the key advantages to being in cities.

Anti-Social Capital?

In his book Bowling Alone, Robert Putnam popularized the term “social capital.” Putnam also developed a clever series of statistics for measuring social capital. He looked at survey data about interpersonal trust (can most people be trusted?) as well as behavioral data (do people regularly visit neighbors, attend public meetings, belong to civic organizations?). Putnam’s measures try to capture the extent to which social interaction is underpinned by widely shared norms of openness and reciprocity.

It seems logical to assume that there are some characteristics of place which signify the absence of social capital. One of these is the amount of effort that people spend to protect their lives and property. In a trusting utopia, we might give little thought to locking our doors or thinking about a “safe” route to travel. In a more troubled community, we have to devote more of our time, energy, and work to looking over our shoulders and protecting what we have.

The presence of security guards in a place is arguably a good indicator of this “negative social capital.” Guards are needed because a place otherwise lacks the norms of reciprocity that are needed to assure good order and behavior. The steady increase in the number of security guards and the number of places (apartments, dormitories, public buildings) to which access is secured by guards indicates the absence of trust.

The number of security guards in the United States has increased from about 600,000 in 1980 to more than 1,000,000 in 2000 (Strom et al., 2010). These figures represent a steep increase from earlier years. In 1960, there were only about 250,000 guards, watchmen and doormen, according to the Census (which used a different occupational classification scheme than is used today).

This trend has led Sam Bowles and Arjun Jayadev to argue that we have become “one nation under guard” and that the growth of guard labor is symptomatic of growing inequality. The U.S. has the dubious distinction of employing a larger share of its workers as guards than other industrialized nations and there seems to be a correlation between national income inequality and guard labor.

Just as the U.S. has a higher fraction of security guards than other nations, some cities have more security guards than others. To understand these patterns, we’ve compiled Bureau of Labor Statistics data from the Occupational Employment Survey on private security guards. BLS defines security guards as persons who guard, patrol, or monitor premises to prevent theft, violence, or infractions of rules, and whom may operate x-ray and metal detector equipment. (The definition excludes TSA airport security workers).

This occupational data reports the number of security guards in every large metropolitan area in the country. Adjusting these counts by the size of the workforce in each metro area tells us which places have proportionately the most security guards (arguably the least trusting) and which have the fewest (arguably a marker of higher levels of social trust).

Here are the data:

At the top of the list are Las Vegas and Miami. While the typical large metro area has about 9 security guards per 1,000 workers, these two cities have roughly twice as many (Las Vegas at 21 per 1,000; Miami at 17 per 1,000). Washington D.C., with its high concentration of government offices, defense and intelligence agencies, and federal contractors, also has a high proportion of security guards.

At the other end of the spectrum are a number of cities in which security guards account for a share of the workforce about one-third smaller than in the typical metro area. At the bottom of the list are Minneapolis-St. Paul, Providence and Portland. (The Twin Cities and Portland also do well on most of Putnam’s measures of social capital.)

It seems somewhat paradoxical, but the salaries paid to security guards get treated as a net contribution to gross domestic product. Yet, in many important senses, security guards don’t add to the overall value of goods and services so much as they serve to keep the ownership of those goods and services from being rearranged. As Nobel Prize-winning economist Douglass North has argued, we ought to view the cost of enforcing property rights as a “transaction cost.” In that sense, cities that require lots of guards to assure that property isn’t stolen or damaged and that residents, workers, or customers aren’t victimized, actually have higher costs of living and doing business than other places. These limits on easy interaction may stifle some of the key advantages to being in cities.

Ten more things you should read about Gentrification, Integration and Concentrated Poverty

Gentrification and neighborhood changes are hotly contested subjects.  In the past few years some very thoughtful and provocative work has been done that helps shed light on these issues.  Here we offer ten more of the most interesting arguments that have been put forward, as a follow-up to our previous post, as well as our report on gentrification and poverty.

  1. Myron Orfield and Thomas Luce looked at the racial composition of urban neighborhoods over the past three decades and conclude that contrary to widespread fears of gentrification, the data clearly show that once a neighborhood becomes predominantly non-white it virtually never reverts to predominantly white. Just two census tracts out of the nearly 1,500 that were predominantly non-white in 1980 became predominantly white in the next three decades, and only seven percent of them became diverse.
  2. Writing in Next City, Sandy Smith outlines some of the strategies that cities are pursuing to minimize displacement of populations in those neighborhoods that are experiencing gentrification.
  3. Daniel Hartley’s study for the Cleveland Federal Reserve Bank of gentrifying neighborhoods shows that neighborhood upgrading is associated with economic improvements for existing residents, in the form of higher credit scores than otherwise similar residents living in neighborhoods that don’t experience gentrification.  Hartley studied credit scores in the gentrifying neighborhoods of 55 cities and found the numbers went up for original residents, whether they owned property or rented.
  4. In his new book, The Concentration of Poverty in the New Millennium, Paul Jargowsky presented data on the number of persons living in census tracts with extremely high rates of poverty (40 percent or greater).  His work shows that the biggest increases in concentrated poverty have been in the Midwest and in smaller to medium sized metropolitan areas.
  5. In their 2011 paper for the Brookings Institution, Alan Berube and Elizabeth Kneebone track the number of neighborhoods of extreme poverty (census tracts with poverty rates of 40 percent or higher) using data from the 2005-09 American Community Survey.  While concentrated poverty had eased during the 1990s, their analysis–The Re-Emergence of Concentrated Poverty: Metropolitan Trends in the 2000s–showed that it had increased substantially and especially affected Midwestern metropolitan areas.
  6. For the past several months, the Furman Center at New York University has been sponsoring a “slow debate” on gentrification, neighborhood change and integration.  Entitled “The Dream Revisited: A Discussion on Neighborhood Gentrification,” it features a series of point-counterpoint essays by experts in the field including Lance Freeman and Rachel Godsil.
  7. Writing this year in a paper prepared for the American Assembly, Todd Swanstrom considers whether the process of gentrification is different in “legacy cities”–older slower growing or declining industrial cities.  Swanstrom argues that gentrification has been studied mostly in “strong market” cities with high and rising real estate prices, and that the nature and impacts of gentrification are far different in places with weaker real estate markets.   
  8. Concerns about the adverse effects of gentrification on rents often prompt local alliances between renters and community groups to oppose new development.  In an article in Dissent, “Fighting Gentrification, but to what end?” Ben Ross challenges whether opposing development actually protects affordability.  Limiting development limits supply, pushing prices–and rents–higher.  As long as the demand for dense, walkable neighborhoods exceeds the supply, lower income households will find it difficult to afford such neighborhoods.  Instead of opposing density, he argues, we ought to be looking for ways to increase it in places where it makes the most sense.
  9. Kendra Bischoff and Sean Reardon trace out the connections between growing income inequality and growing economic segregation in the nation’s metropolitan areas in the Russell Sage report “More Unequal and More Separate: Growth in the Residential Segregation of Families by Income, 1970-2009.”  Their analysis shows that the number of families living in middle income neighborhoods has declined, and that we are increasingly segregated into high-income and low-income neighborhoods.
  10. In their pathbreaking work studying intergenerational economic mobility, Raj Chetty and his colleagues at Harvard and Berkeley have generated an impressive body of data about the connections between place and economic opportunity.  They look at the chances that children born into the poorest families go on to earn higher incomes, and find that one of the correlates of economic mobility is income segregation:  metropolitan areas with areas of concentrated poverty have less economic mobility.

Are suburbs really happier?

A few months back our friends at CityLab published the results of a survey looking at differences in attitudes about cities and suburbs under the provocative headline, “Overall, Americans in the suburbs are still the happiest.”

Their claim is buttressed with a reported finding that 84 percent of all the respondents in suburbs said that they were “satisfied with their communities”, while only 75 percent of those who reported living in cities felt the same.

While at first glance, this seems to be pretty cut and dried, a closer look at the data suggests that the answer is far less clear.

As with all surveys, it’s worth paying very close attention to the actual question asked, the size of the survey’s margin of error, and the other factors that determine how respondents answer particular questions.

When we consider each of these factors, it actually turns out that it is difficult to make a strong claim that suburban residents are happier than their urban kin.

First, consider the question asked in the State of the City survey.  It isn’t about “happiness”—it’s actually about satisfaction, something more akin to a consumer satisfaction measure.   There’s a well-developed happiness literature that asks people about their overall level of happiness. The conventional question is very internally focused, and doesn’t refer to place. The Pew Center has a good introduction to this subject here.   So when we interpret these data, we should think of them not as showing whether people are more or less happy than others living elsewhere, but as showing whether they are satisfied or dissatisfied with their communities.

Second, in interpreting survey results, it’s important to consider the sample size and the sampling distribution of error.  The overall survey included 1,656 respondents and the reported margin of error for the survey was plus or minus 3.4 percentage points.  But that margin of error holds only for the entire sample—subgroups of the population (like just the one-third or so of respondents living in cities) are fewer in number and therefore have a larger percentage point margin of error.  That number isn’t reported.  But differences of less than four or five percentage points between sub-groups of the sample are likely to be borderline significant at best.  When differences between sub-groups are small, we shouldn’t make too much out of them.
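The subgroup problem is easy to see with the standard margin-of-error formula. Here's a minimal sketch in Python, assuming simple random sampling (a naive assumption: the survey's reported 3.4-point margin is larger than this formula yields, which reflects real-world design effects):

```python
import math

def moe(n, p=0.5, z=1.96):
    """95 percent margin of error for a proportion, simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

full_sample = moe(1656)       # about 2.4 percentage points
urban_third = moe(1656 // 3)  # about 4.2 percentage points
```

Even under these generous assumptions, a one-third urban subsample carries a margin of error of roughly four points, which is why a four-or-five-point gap between subgroups is borderline significant at best.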

Third, we know that happiness (or in this case, satisfaction) is correlated with income.  Higher-income people are more likely to say they are happy; lower-income people less likely.  So if suburbs have more higher-income people and cities have more lower-income people, the apparent difference in reported satisfaction could be the product of income, rather than location.  This appears to be the case for the data reported here.

Like published happiness research, the State of the City survey shows that reported satisfaction is highly correlated with income. Some 88 percent of those with incomes over $75,000 said their community was excellent or good; only 66 percent of those with incomes of less than $30,000 said the same. It’s worth noting here that the impact of income on satisfaction is larger than the impact of location (a 9 percentage point difference between city and suburb, as opposed to a 22 percentage point difference between high- and low-income groups). These data suggest that the real headline finding of this survey should be “higher income people are more satisfied with the places they live.”  The unsurprising takeaway here is that more income enables you to afford to live in a community that makes you happy. We could get a more direct answer to this question by looking directly at income data.  While the CityLab article didn’t publish the findings on income by city and suburb, we can observe the differential effect of income on satisfaction levels in cities and suburbs by looking at two other variables—education and home ownership—which tend to be correlated with income.

If we look just at the college educated, we find that the differences between cities and suburbs shrink by about half:  college-educated urban residents are almost exactly as satisfied as college-educated suburban residents (80 percent v. 85 percent).

If we look just at homeowners—who in general have higher incomes than renters—we find that the differences between cities and suburbs almost entirely disappear.  Urban homeowners, for example, are almost exactly as satisfied as suburban homeowners (84 percent v. 87 percent).

Finally, we might want to consider how the race of respondents influences community satisfaction.  When you dig into the data by race and ethnicity, the entire difference in reported levels of satisfaction appears to be the result of the differential racial and ethnic composition of cities and suburbs.  Non-Hispanic whites living in cities were almost exactly as likely to report being satisfied with their communities (84 percent) as non-Hispanic whites living in suburbs (83 percent).

It’s definitely worth looking hard at data on personal happiness and community satisfaction, but in doing so, it’s critical that we take care to understand what the data are—and aren’t—telling us.

Ten things you should read about Gentrification, Integration and Concentrated Poverty

Gentrification and neighborhood changes are hotly contested subjects.  In the past few years some very thoughtful and provocative work has been done that helps shed light on these issues.  Here we offer ten of the more interesting arguments that have been put forward.

  1. Daniel Kay Hertz explores the contradictions that emerge between our widely voiced aspiration for integration and the knee-jerk tendency to condemn segregation.
  2. Jonathan Rothwell and Douglas Massey present research showing that the education of one’s neighbors is nearly half to two-thirds as powerful in influencing children’s long term economic prospects as is the education of their own parents.
  3. In “Beyond Gentrification”, Stephanie Brown explores the complex and contradictory concepts that are conflated in the common use of the word gentrification and describes a new framework for thinking about neighborhood change in mixed-income multi-cultural communities.
  4. Margery Turner of the Urban Institute speaks to the connections between place and economic improvement.  She describes how mobility turns out to be an important way that families actually escape poverty. She argues that we need to move from place-based to place-conscious strategies, and explicitly allow for the fact that some neighborhoods will best be viewed as “launch pads” to help people get going.
  5. Barbara Sard and Douglas Rice document the limited reach of housing assistance programs.  They summarize the evidence that high-poverty neighborhoods, which are often violent, stressful, and environmentally hazardous, can impair children’s cognitive development, school performance, mental health, and long-term physical health.  Their article notes that despite abundant evidence of the negative effects of living in high poverty neighborhoods, federal housing assistance programs tend to concentrate the poor in existing neighborhoods of high poverty.
  6. Ed Glaeser and Jacob Vigdor review the evidence on changing patterns of racial segregation in the United States provided in Census 2010.  They conclude that the nation is becoming less segregated, chiefly by the decline of all-white neighborhoods.  But predominantly African-American neighborhoods have not transitioned to mixed race status.  They conclude that for every prominent example of a black neighborhood undergoing gentrification—in Harlem, Roxbury, or Columbia Heights—there are countless more neighborhoods witnessing no such trend. Instead, the dominant trend in predominantly black neighborhoods nationwide has been population loss.
  7. The media epicenter of the gentrification debate has been the Google buses that carry high tech workers from their homes in San Francisco to jobs in the Silicon Valley.  TechCrunch’s Kim-Mai Cutler offers a far-ranging analysis of the connections between economic growth, building restrictions, housing affordability and income distribution in her provocative essay:  “How Burrowing Owls Lead to Vomiting Anarchists (or SF’s Housing Crisis Explained).”
  8. Planner Pete Saunders writes that the move of talented young people back into cities creates the opportunity to strengthen the historically weak job and social networks that have limited the economic opportunities of those living in neighborhoods of concentrated poverty. The challenge is that too often, neighborhood change is treated as a zero sum game.  We’re missing opportunities to use the attachment to place–through simple neighborhood-level means like picnics, sports leagues and similar events mediated by community groups–to knit stronger bonds between long time residents and newcomers.
  9. Robert Sampson’s work points up the strong racial component to income segregation.   More than half of black kids born in 1995 in high poverty neighborhoods remained there in 2012; fewer than 15 percent had moved up to “low poverty.”  A third of black children growing up in low poverty ended up in high poverty neighborhoods, compared with 2 percent of white children.
  10. Patrick Sharkey’s book, Stuck in Place, shows that when neighborhoods change, the original residents benefit substantially. In adulthood, black children whose neighborhoods changed around them in ways that led to less concentrated poverty did much better in terms of income, earnings and wealth compared with other black children who started in very similar neighborhoods but whose neighborhoods did not see the same degree of change.

There is more to explore on this topic elsewhere, and this is by no means an exhaustive list. However, we think it’s a good start. To read more about associated topics like economic opportunity on our site, go here.

Metro’s “Why Bother” Climate Change Strategy

If you’ve hung around enough espresso joints, you’ve probably heard someone order a “tall, non-fat decaf latte.” This is what baristas often call a “why bother?” That would also be a good alternate description for the Metro Climate Smart Communities Plan.

Framed in glowing rhetoric, the plan purports to be a two-decade long region-wide strategy for meeting our responsibility to address this serious global problem. But in reality it sets goals so low that they actually call for reducing the pace at which we’re reducing driving and greenhouse gas emissions.

This weak plan is all the more surprising given the area’s history. The Portland area has long prided itself on being a forward-looking first mover, when it comes to seriously addressing climate change. More than two decades ago the City of Portland became the nation’s first local government to adopt a greenhouse gas reduction plan.

It’s increasingly apparent that climate change is a serious menace. Now, Portland’s elected regional government is working on a new effort to develop what it calls a “climate smart communities plan.”

Transportation is the region’s single largest source of greenhouse gases, so it makes sense to focus on transportation. Metro’s plan sets a number of targets to guide regional transportation planning that in theory might help the region reduce its carbon emissions from transportation over the next two decades. The key performance measure is “vehicle miles traveled,” or VMT– basically a count of how much driving we do in the region. Right now, the average Portland area resident drives about 19 miles per person per day. There are a lot of ways to measure the transportation system–bike mode share, number of bus hours, total number of transit passengers, counts of pedestrians. But if you have to pick one number that tells you how car-dependent and emissions heavy your transportation system is, it’s VMT. And by the rules of thumb prescribed by transportation engineers, VMT levels translate in a very straightforward way into the “need” for more road capacity for cars. If VMT goes up, they’ll say, you need more roads. If it goes down, you’ll need less.

Over the past decade, Portland has made good progress in reducing VMT. Since 2006–when we drove about 20.1 miles per person per day–we’ve cut driving at an average annual rate of about 1.7 percent per year. Since 1996–a period that includes an era of much cheaper gas prices–driving has fallen about 1 percent per year. But that was all in the past when we were un-enlightened and pretty un-motivated about the threat of climate change. Now that we’re serious–and we’re “smart” about this issue–we’re really going to get aggressive, right? Not so much.

Metro’s plan is that we reduce driving by a grand total of an additional two miles per person per day between now and 2035, or from a current level of 19 miles per person per day to about 17 miles per person per day. That’s right–over the next two decades Metro’s climate smart plan calls for reducing driving at about 0.4% per year–about one-fourth as fast as we have been reducing driving over the past several years without a climate smart plan. In effect, Metro’s very feeble target for VMT reductions means that they are planning for a world where there’s a lot more private car driving–and demand for roads and expensive road projects–than even current, business-as-usual trends suggest.

Rather than reducing driving, this assumption is likely to lead to an investment strategy that enables or encourages more driving than would otherwise occur if we simply assumed that recent trends continue.

To get an idea of just how feeble this planned reduction is, consider the recent travel demand forecast prepared by the Washington State Department of Transportation. They predict that over the next two decades, per capita vehicle miles traveled will decline about 1.1 percent per year. Keep in mind, this isn’t some rabid environmentalist’s stretch goal: it’s the highway department’s prediction of driving trends, without any regard to climate change. (Even this rate of decline is still only about 60% as fast as the region has managed over the past decade).

WSDOT’s baseline prediction (a decline in per capita VMT of 1.1 percent per year) would suggest that the real trend for regional driving would be a decline to 17 miles per day by 2021, and a further decline to 14 miles per person per day by 2035.
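The compounding arithmetic behind these comparisons is straightforward. Here is a rough sketch in Python, assuming (our simplification of the plan's timeline) a 19 mile-per-day baseline and a 20-year horizon:

```python
def annual_rate(start, end, years):
    """Compound annual rate of change implied by moving from start to end."""
    return (end / start) ** (1 / years) - 1

# Metro's target: 19 -> 17 miles per person per day by 2035.
metro_rate = annual_rate(19, 17, 20)      # roughly half a percent decline per year

# WSDOT's baseline: a 1.1 percent annual decline, compounded for 20 years.
wsdot_2035 = 19 * (1 - 0.011) ** 20       # roughly 15 miles per person per day
```

However you run the numbers, Metro's target rate is a fraction of both the WSDOT baseline and the region's recent performance.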

Whatever this is, it should be apparent that it’s nothing resembling bold climate leadership; if anything it is technocratic foot-dragging, providing an opaque statistical rationalization for actually slowing the rate of progress we’ve already made as a region in addressing the problem of climate change.

And there’s one more thing the Metro plan largely overlooks. Reducing VMT doesn’t just reduce carbon emissions. It also saves the region’s households money. Lots of money. Cutting VMT by an additional one mile per person per day would save the region’s households roughly $250 million per year, every year, in reduced fuel and auto costs. Setting a more aggressive target for VMT reductions would actually be good for the local economy–because it would mean local consumers have more money to spend on things other than cars and gasoline.
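A back-of-the-envelope check of the $250 million figure, assuming (these are our illustrative numbers, not the post's) a regional population of about 2.3 million and a marginal driving cost of about 30 cents per mile:

```python
population = 2_300_000        # assumed Portland-area population
cost_per_mile = 0.30          # assumed marginal cost per mile: fuel, wear, maintenance
miles_saved_per_day = 1       # the additional reduction discussed above

# One mile less per person, every day of the year, priced at the per-mile cost.
annual_savings = population * miles_saved_per_day * 365 * cost_per_mile
# works out to roughly $250 million per year
```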

My colleagues working in the education field often talk of the “soft bigotry of low expectations”–that we don’t ask much of students from challenged schools, and that as a result, they have little incentive or motivation to dramatically improve their performance. The Portland region has a proud history of being a risk-taking pioneer in the environmental field, with its original goal of reducing greenhouse gases, implementing an urban growth boundary and cleaning up the Willamette. Arguably, this is the time to be bold. If it were serious, Metro could explore setting a goal of reducing driving to twelve or even ten miles per person per day by 2035. It’s likely that the public savings from lower road construction costs and the household savings from less spending on cars and gasoline would add up to billions of dollars in additional resources for the local economy–not to mention lower greenhouse gas emissions.

Climate change may be the most profound existential challenge we’ve ever faced, but the proposed Metro plan sets the bar for progress so low as to be meaningless. There’s certainly nothing in this plan that expresses any ambition to do more than is already baked into the cake. In fact, it does a lot less than we’ve managed with no plan, and a lot less than our neighbors to the North predict will happen, even with no further policy intervention. Paradoxically, it may end up being used to plan for higher levels of driving than can reasonably be forecast to occur if we do nothing. There are a lot of phrases that could be used to describe such a plan, but “climate smart” isn’t one of them.

Why bother?

Our Shortage of Cities: Portland Housing Market Edition

The big idea: housing in desirable city neighborhoods is getting more expensive because the demand for urban living is growing. The solution? Build more great neighborhoods.

To an economist, prices are an important signal about value:  rising prices for an object or class of objects signal increasing value relative to other objects.  In our conventional supply and demand framework, rising prices are often symptomatic of a growing demand or a limited supply:  that consumers now want more of some commodity or product than is currently available in the market.

Trends in housing prices point to some significant shifts in consumer demand, especially in the value that consumers attach to urban, as opposed to suburban, locations.  The rising relative price of housing in cities is a strong indication of the growing demand for urbanity–and its unfortunate short supply.

Case in point:  Portland, Oregon.  Let’s take a quick glance back at housing prices in the Portland area over the past decade (courtesy of Zillow’s comprehensive archive of monthly housing price estimates).  For simplicity, we’ll look at four Portland metro area sub-markets–the central city of Portland (home to a little more than a quarter of the region’s population), and the region’s three principal suburban counties–Clackamas and Washington Counties in Oregon, and Clark County, Washington.  The following table shows single-family home prices for 2005, 2007 (the peak year for the region’s housing market), 2010 (the bottom of the bust) and the data for the latest quarter (3rd Quarter 2014).  To simplify comparison between the city and suburbs, I’ve calculated the un-weighted average price for the three suburban counties.

In 2005, in the heyday of the housing bubble, the average price of a single-family home in the city of Portland was $236,000–on average about $20,000 lower than in the three suburban jurisdictions, ranging from $3,000 lower than Clark County to $30,000 lower than Clackamas County.  According to Zillow’s latest estimates, the average Portland single-family home is now worth about $309,000. Portland’s prices today are about $20,000 higher than the average of the three suburban counties, and its price level is equal to that of Clackamas, the priciest suburban county.

Not only have houses in the City of Portland re-couped all the value lost to the collapse of the housing market, they are now worth on average about 6 percent more than they were at the peak of the housing bubble.  Meanwhile, the average suburban home is still about 7 percent below its peak price.

The verdict of this shift in housing markets is unequivocal:  housing in the city is now more valuable, and has appreciated faster than suburban housing.  In less than a decade, the city has reversed geographic polarity of the regional housing market:  the average city house sold at a nine percent discount to the average suburban house in 2005; today the average city house commands a seven percent premium.
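The premium and discount figures follow directly from the price levels. Here's a quick sketch using the rounded numbers in the text, inferring a 2014 suburban average of about $289,000 from the stated $20,000 gap (the post's unrounded data presumably explain why it reports a nine percent 2005 discount where these rounded figures give about eight):

```python
def city_premium(city_price, suburban_avg):
    """City price relative to the suburban average, in percent.

    Negative values are a city discount; positive values a city premium.
    """
    return (city_price / suburban_avg - 1) * 100

discount_2005 = city_premium(236_000, 256_000)  # roughly -8 percent
premium_2014 = city_premium(309_000, 289_000)   # roughly +7 percent
```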

There are doubtless many reasons for this shift.  We know that young, well-educated workers are increasingly choosing to live in close-in urban neighborhoods.  Over the past decade, the big increase in gasoline prices has made car-dependent suburban locations more expensive and less attractive than urban living.  My 2008 CEOs for Cities report “Driven to the Brink,” reported some early evidence that housing prices fell most on the suburban fringe, and held up best in urban centers.

The falling price of suburban housing relative to city housing is the most persuasive evidence possible about consumer preferences.  Citing the results of a recent opinion survey, some have claimed that Portland-area consumers prefer suburban locations to urban ones. But the fact that consumers are not willing to pay as much for suburban housing as they are for urban housing, and that while urban home prices are setting new highs, suburban prices are still well below their peaks, shows that the reverse is actually true:  consumers value urban single family housing more than its suburban alternative.

The rising price of urban housing is a market signal that housing in the city has value to consumers–and that we should be making more of it.  Prices are rising because the demand for city living is rising faster than we’ve expanded the supply of urban housing.  The clear public policy implication of the market data is that city and regional governments should be looking seriously at ways to expand the supply of urban housing.  And expanding supply in the city is especially important to addressing concerns about housing affordability:  unless supply expands, we can reasonably expect the growing demand for urban living to push prices still higher, reducing affordability.

While this commentary focuses on the city of Portland, there are good reasons to believe that the nation is experiencing a significant and growing shortage of cities as well.  As Americans rediscover–and recreate–the attractiveness of urban living, this shortage is likely to grow.  Paying close attention to the signals provided by the housing market is key to understanding the nature of this challenge and implementing appropriate solutions.

Technical Notes:  Data for this analysis were obtained from Zillow.com’s city and county ZHVI-Single Family Residential index.  Values are annual averages of monthly data, rounded to the nearest $1,000.  The suburban average presented here is the unweighted average of the price index values reported for the three suburban counties.  Since these four areas are all part of a single, larger metropolitan economy that shares the same industrial and job base, it’s pretty straightforward to interpret the change in relative prices among sub-markets as indicative of the relative change in consumer preference for these areas.  Also, because so little new single family housing has been constructed in the past several years, these numbers are not meaningfully skewed by the construction of a large number of new houses in any one jurisdiction.

The four biggest myths about cities – #3: Crime is rising in cities

The Myth: Crime in cities is on the rise

The Reality: Cities are getting safer

For decades, the common perception has been that cities are dangerous, dirty, and crowded. A look at the facts tells a different story: our cities are cleaner, safer, quicker, and healthier than ever. Today I’ll take a look at how urban neighborhoods have become safer despite public attitudes to the contrary.

On the whole, violent crime is declining in the United States. The overall murder rate has dropped by more than half since 1991 and property crimes like burglary have been on the decline. As a result, American concern about crime has ebbed: in 1994 a majority of Americans told Gallup crime was the nation’s most pressing issue; only 1 percent gave that answer in 2011. Even though we individually regard crime as less of a problem, people still tend to think of big cities as somehow dangerous. Consider the New York paradox: According to YouGov, Americans who have never been to the Big Apple are evenly divided on whether it’s safe or not, while those who have traveled there regard it as safe by a two-to-one margin.

This drop in crime has been greatest in the nation’s largest cities. Violent crimes of all kinds declined 29 percent in the central cities of the nation’s 100 largest metropolitan areas — a significantly steeper decline than in the nation’s suburbs (down 7 percent). Property crimes in central cities fell even more — down 46 percent, compared to a 31 percent decrease in suburbs.

Survey evidence demonstrates that the drop in crime is not widely understood by the general public. A September 2014 survey by YouGov found that most Americans believe crime rates have increased over the past two decades. Their data show that 50 percent of Americans think crime rates are up; 22 percent think they are down, 15 percent think crime rates are unchanged, and 13 percent don’t know.

Hollywood continues to peddle the storyline of cities of the future as savage, crime-ridden dystopias (see for example this year’s remake of Robocop). Meanwhile the good news about safer cities goes almost unnoticed: a 2011 study by the Brookings Institution pointing to significant declines in crime in 80 of the nation’s 100 largest cities has garnered just seven citations in other work, according to Google Scholar (as of August 19, 2014).

While crime has dropped, it’s not the only factor making cities better places to live. Wednesday, I’ll conclude the series by showing how traffic jams aren’t actually as bad as they used to be.

Photo courtesy of Danni Naeil on Flickr.

The four biggest myths about cities – #1 Cities aren’t safe for children

If your impression of cities came entirely from watching the evening news, you might think that cities are saddled with ever-increasing traffic congestion and rising crime rates. From talking to your Great Aunt Ida at Thanksgiving, you’d think that New York was more dangerous for children than the suburbs and that Los Angeles was still covered in a cloud of smog.

But the truth is quite different. A look at the facts shows that cities are cleaner, safer, quicker, and healthier than ever.

Urban crime has fallen sharply over the past two decades, and many of the nation’s biggest cities, like New York, are statistically the safest.

And while the thought of raising kids in the city makes some parents quake with fear for the personal safety of their children, there’s growing evidence that city living is safer and healthier for kids than growing up in the suburbs. This week and next, I’ll be taking a look at some of the most common, and most mistaken views about cities.

Up first: Safety.

KidFountain

The Myth: Cities are dangerous places to live, especially for children

The Reality: Cities are actually quite safe while suburbs and rural areas are more dangerous

For decades, the common perception of cities has been that they are dangerous, dirty, and crowded. A look at the facts tells a different story: our cities are cleaner, safer, quicker, and healthier than ever. Urban neighborhoods are some of the safest places to raise a family.

For an entire generation of Americans, safety and serenity meant living on a quiet, wide suburban street, where trips by car were a necessity to avoid the vulnerability of pedestrian travel.

Most of us still feel pretty safe getting into our cars, but traffic crashes are actually the leading cause of “non-intentional” death (i.e. not from disease) in the United States – and people who live in suburbs and rural areas are much more likely to die in car crashes. For those under 25, car crashes are the leading cause of death.

A University of Pennsylvania study looked at the geographic location of nearly 1.3 million injury deaths in the United States over two decades. It found that, on average, death rates from injuries were about 22 percent higher in the least dense counties than in the most dense counties.

Not surprisingly, there is a strong correlation between sprawling metropolitan areas, where people have to drive further on an everyday basis, and death rates from car crashes. Data from the National Highway Traffic Safety Administration’s Fatality Analysis Reporting System show that each additional mile driven per capita daily in a metropolitan area was associated with five additional car crash deaths per million in population. Comparing two metropolitan areas with populations of 2 million, a metro area where people drove 30 miles per person per day would be expected to have 100 more car crash fatalities annually than a metro area where people drove only 20 miles per person per day.
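That comparison is simple arithmetic; here is a back-of-the-envelope sketch using the coefficient cited above (the function name is ours, for illustration):

```python
# Coefficient from the FARS-based estimate cited above: roughly 5
# additional annual car crash deaths per million residents for each
# additional mile driven per person per day.
DEATHS_PER_MILLION_PER_DAILY_MILE = 5

def extra_crash_deaths(population, daily_miles_a, daily_miles_b):
    """Expected difference in annual crash deaths between two metros of
    the same population, driving daily_miles_a vs daily_miles_b per person."""
    return ((daily_miles_a - daily_miles_b)
            * DEATHS_PER_MILLION_PER_DAILY_MILE
            * population / 1_000_000)

# Two metros of 2 million people, 30 vs. 20 miles per person per day:
print(extra_crash_deaths(2_000_000, 30, 20))  # 100.0
```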

OK, so cities have fewer car crashes than the suburbs, but won’t we all die of lung cancer from all the smog? Next week, I’ll take on the myth that urban neighborhoods are choked by air pollution.

Photo courtesy of Wolfy (Pete) Hanson on Flickr

The four biggest myths about cities – #2: Cities are dirty

city clouds

The Myth: Cities are polluted and have dirty air

The Reality: Urban air quality has improved dramatically since 1990

For decades, the common perception of cities has been that they are dangerous, dirty, and crowded. A look at the facts tells a different story: our cities are cleaner, safer, quicker, and healthier than ever. Today I’ll take a look at the story of smog and how smart policy has cleaned the urban air we breathe.

Over the past two decades, the United States has made significant progress in reducing air pollution. Emissions of the six major air pollutants—including nitrogen oxides, carbon monoxide, hydrocarbons and particulates—are collectively down 56 percent since 1990. We’ve also made tremendous progress in reducing toxic air pollutants: lead levels in ambient air have been reduced 84 percent; benzene levels are down 66 percent. This progress has been widespread in the nation’s cities. The number of days in which air quality failed to meet health standards has declined in all 35 of the nation’s largest metropolitan areas.

While we often perceive that cities are dirtier and produce more pollution than the suburbs, cities actually generate significantly less air pollution than suburbs on a per person basis. Urban residents drive fewer miles, are more likely to use public transit, consume less electricity, and have smaller heating bills than their suburban counterparts.

Every day, the storyline of American cities is changing. We’re hearing more about the social and economic opportunities that cities provide, but it’s important to recognize that urban neighborhoods have become some of the best places to raise a family.

Later this week, I’ll take on the myth that criminals are running rampant in the streets of the country’s major cities. Spoiler alert: Robocop isn’t coming to a city near you.

Photo courtesy of Holly Clark on Flickr

The four biggest myths about cities – #4: Traffic is getting worse

Fast_Lane

The Myth: Traffic congestion is getting worse

The Reality: Congestion has declined almost everywhere

It’s a common movie trope: a busy commuter rushes out of his downtown office at 5pm, hoping to get home, only to enter a citywide traffic jam. In reality, traffic congestion across the country has been in steady decline thanks to Americans choosing to drive fewer miles every year and increasingly biking, walking and taking transit for many of their trips—especially in cities.

Using data from GPS devices in millions of vehicles, Inrix tracks highway travel times in the nation’s large metropolitan areas (when it isn’t fear-mongering about the costs of congestion). In its past two annual reports, Inrix has pointed out that time lost to congestion has fallen dramatically in the United States: congestion levels declined 30 percent nationally in 2011, and a further 22 percent in 2012. Its travel time index measures the additional time that a typical peak-hour trip takes compared to the same trip taken under free-flowing road conditions. A travel time index of 12 means that a trip that takes 20 minutes under free-flowing conditions takes 12 percent longer—about 22 and a half minutes—during the peak travel period. Traffic congestion, as measured by the travel time index, has fallen by about forty percent, from between 11 and 12 in 2010 to about 7 in 2013.
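A quick sketch of how the index translates into minutes, and of the size of the decline, using only the numbers quoted above:

```python
def peak_minutes(free_flow_minutes, travel_time_index):
    """Peak-period duration of a trip, given a travel time index expressed
    as the percentage by which the peak trip exceeds free-flow time."""
    return free_flow_minutes * (1 + travel_time_index / 100)

# An index of 12 turns a 20-minute free-flow trip into:
print(round(peak_minutes(20, 12), 1))   # 22.4 -> "about 22 and a half minutes"

# Decline in the index from roughly 11.5 in 2010 to about 7 in 2013:
decline = (11.5 - 7) / 11.5
print(round(decline * 100))             # 39 -> "about forty percent"
```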

The distance we’re driving has decreased as well. Americans have cut their driving from a peak of 27.5 miles per person per day in 2005, to about 25.5 miles per person per day now.

traffic_chart

Cities are remarkably effective at reducing commute times – the closer you live to work, the less time you spend in the car.

You can learn more about traffic congestion and the dreaded “Carmageddon” in the Questioning Congestion Costs card deck.

Congestion and crime are dropping, kids in cities are safer and healthier than their suburban counterparts, and urban air quality is better than it’s been in decades. The sooner we can shed these outdated myths about city living, the sooner we’ll be on a path to building better places for Americans to live.

Photo courtesy of Neil Kremer on Flickr.

Parking: The Price is Wrong

One of the great ironies of urban economies is the wide disparity between the price of parking and the price of housing in cities. Almost everyone acknowledges that we face a growing and severe problem of housing affordability, especially in the more desirable urban neighborhoods of the nation’s largest and most prosperous metropolitan areas. As we’ve frequently pointed out at City Observatory, much of this affordability problem is self-inflicted, due to the severe limits that local zoning codes put on new development. In sharp contrast to the high cost of housing is the low, and mostly zero, price we charge for parking in the public right-of-way. This under-pricing of parking is a central and unacknowledged problem in urban transportation: The price is wrong. Underlying traffic congestion, unaffordable housing, and the shortage of great urban places is the key fact that we charge the wrong price for using roads.

the-carey-price-is-wrong_design

Nowhere are the effects of mispriced roads more apparent than in on-street parking. Only for car storage do we regularly allow people to convert a scarce and valuable public space to exclusively private use without paying for the privilege. In neighborhoods that don’t charge for on-street parking, we have a system that can only be described as socialism for private car storage. The public sector pays the entire cost of building and maintaining roads, and even in dense urban settings with high demand, we allow cars to occupy those spaces without paying a cent.

As chronicled in painstaking detail by the godfather of parking wonks, UCLA professor Don Shoup, free parking encourages additional driving, reduces the vitality of urban neighborhoods, makes it harder for local retailers to survive and needlessly drives up the cost of housing. A growing number of urbanists are coming to embrace Shoup’s viewpoint, spelled out in his 700-page tome “The High Cost of Free Parking,” but many of us still cling to the outdated illusion that parking is and should forever be “free.”

For the most part, the pitfalls of poorly priced parking go unrecognized and unexamined — we get stuck in congestion and complain about the shortage of parking. But we don’t typically recognize how the wrong price is the root cause of these problems.

Every once in a while though, there’s an event that shines a bright light on the consequences of parking socialism, and demonstrates how getting the prices right can fix things in a hurry. The most recent example is Portland Oregon’s reform of its handicapped parking system.

For years, the rampant abuse of Portland’s generous handicapped parking system was obvious and well-known. On downtown streets, a blue handicapped placard traditionally entitled users to park for free, as long as they liked. In Oregon, all that is required to get a handicapped permit is a note from one’s doctor and a trip to the DMV. As casual visitors to downtown have observed, entire blocks were occupied from early morning until night by rows of cars, each with a deep blue handicapped placard hanging in its rear view mirror. In an apparent epidemic of frailty, the number of handicapped permits in use in downtown Portland almost doubled between 2007 and 2012. In September 2013, handicapped placard users occupied fully 1,000 of the central city’s 8,000 metered on-street spaces. Portland’s situation is hardly unique: the City of San Francisco reports that 20 percent of its on-street parking spaces are occupied by vehicles with handicapped parking permits.

In July 2014, that all changed. Led by Commissioner Steve Novick (full disclosure: Steve is a long time friend), the city limited free parking to wheelchair users who possessed a special permit. Those with generic handicapped placards can still largely ignore maximum time limits, but they have to pay for the space they use. The city even created special “scratch off” parking tags so that users wouldn’t have to walk to meters to pay:  you can see all the details of the new system here.

Overnight, the parking landscape in downtown Portland changed. Spaces occupied by placard users dropped 70%. Getting the price right freed up 720 parking spots for other, paying users, expanding the effective supply of parking by nearly 10%. The results of the change are described in a report prepared by the Portland Bureau of Transportation.

Press accounts in the days following the change described an eerie abundance of vacant on-street parking spaces.

Two weeks after Portland began charging drivers with disabled placards to park in the city’s metered spots, enforcement officer J.C. Udey says he doesn’t recognize his downtown beat. The days of patrolling block after block — after block — lined with the same cars displaying blue placards appear to be over. “It’s open spaces,” he said. “We have so much more parking.”

Brad Gonzalez of Gresham, who was shopping at the Portland Apple Store on Monday, said he couldn’t remember the last time he found a curbside parking spot in downtown so easily. “I found one right away near the store,” Gonzalez said. “It used to be about circling the block and getting lucky, and getting frustrated. Most of the spots were taken by cars with disabled parking signs. It was obvious that there was a lot of abuse.”

The change is even more remarkable in the heart of the central business district. I looked at the six most central parking beats in the city—for those familiar with Portland, an area bounded by Burnside Street on the north, the Willamette River on the east, Jefferson and Market Streets on the south, and 10th and 11th Avenues on the west. (These are beats 1,2,3,4,6 and 11). This area contains a total of about 1,850 on-street metered parking spaces. A year ago, 450 spaces–nearly a quarter of them–were occupied by vehicles with handicapped placards. That’s fallen to 105 placard users–a reduction of 75 percent from the free-parking era. This is the equivalent of adding about 350 parking spaces to the supply of street parking in the heart of downtown Portland.
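The beat-level arithmetic checks out; rounding explains the small gaps between these results and the figures quoted:

```python
# Placard-occupied metered spaces in the six central beats,
# using the counts cited above.
placards_before = 450   # a year ago, under free placard parking
placards_after = 105    # after the July 2014 pricing change

freed = placards_before - placards_after
reduction = freed / placards_before

print(freed)                    # 345 -> "about 350" spaces added to supply
print(round(reduction * 100))   # 77 -> roughly the 75 percent cited
```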

Freeing up on-street parking spaces makes the transportation system work better: people don’t circle endlessly searching for a “free” parking space; paying customers eager to make purchases can park closer to their destinations, and local governments can use meter revenue to make improvements to the neighborhood that make it more pleasant for residents.

The city’s new report doesn’t spell out how much additional revenue the city stands to make as a result of the change.  A good rough estimate would be that the city nets about $10 per meter per business day; if so, it would clear an additional $1.4 million per year (700 meters times $10 times 200 business days). Parking meter revenues help pay for street maintenance and improvements, which the city says are badly under-funded—so this change will help reduce that gap.
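The revenue estimate is just the product of the three rough assumptions stated above (none of them official figures); spelled out:

```python
# All three inputs are this article's rough assumptions, not city data.
meters_freed = 700          # spaces converted to paying use
net_per_meter_per_day = 10  # dollars netted per meter per business day
business_days = 200         # business days per year

annual_revenue = meters_freed * net_per_meter_per_day * business_days
print(annual_revenue)       # 1400000, i.e. $1.4 million per year
```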

Portland’s Bureau of Transportation is currently undertaking an effort to study and recommend new parking tools and policies for city neighborhoods. Hopefully it will take the lessons from pricing handicapped spaces downtown and apply sensible pricing schemes in other areas to make the city’s neighborhoods even greater.

The larger lesson here should be abundantly clear: charging users for something approaching the value of the public space that they are using produces a transportation system that works better for everyone. When we get the prices right, or even closer to right, good things happen. We can’t solve our parking problems until we admit that when it comes to city streets, the price is wrong.   

And the Talent Dividend Prize Winner is . . .

Akron, Ohio!  With a 20.2 percent increase in post-secondary degrees awarded over the past three years, Akron outpaced the 56 other metro areas entered in the Talent Dividend Prize contest.  As the winner of the Talent Dividend Prize, Akron will receive one million dollars to promote further efforts to raise college attainment in Northeast Ohio.  For more details about the prize winners, visit Living Cities.

The prize contest was launched four years ago with the support of the Kresge and Lumina Foundations.

The competition was built on the observation that education is the single most important factor in driving metropolitan economic success. Research shows that about 60 percent of the variation in per capita personal income among large U.S. metropolitan areas is explained by the fraction of the adult population that has earned a four-year college degree.

As Bill Moses of the Kresge Foundation noted at the prize announcement ceremony, one of the hallmarks of the Talent Dividend prize is cross-sector collaboration. The competition recognized from the beginning that increasing educational attainment was not just an issue of interest or importance to colleges and universities; to be successful, it has to engage the business community, local government, and others, and to tie education directly to the economic development agenda. By offering the award for a collective, community-wide performance, the prize competition catalyzed broader partnerships in participating communities.

The prize was awarded to Akron because it was the city that achieved the largest population-weighted increase in 2-year, 4-year and advanced degrees awarded between 2009-10 and 2012-13. (Four-year and advanced degrees are double-weighted in this calculation, reflecting their greater economic impact). Over the past three years, the 57 competing cities have increased the number of 2-year degrees by 69,000 and the number of 4-year degrees by 55,000. In the aggregate, the competing cities increased the number of degrees awarded by 7.6 percent more than their growth in population.
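One plausible reading of that scoring rule, sketched below; the function names and the example city are ours, and the official formula may differ in detail:

```python
def weighted_degree_count(two_year, four_year, advanced):
    """Degrees awarded, with 4-year and advanced degrees double-weighted,
    as the prize description indicates."""
    return two_year + 2 * (four_year + advanced)

def attainment_growth(degrees_start, degrees_end, pop_start, pop_end):
    """Population-weighted growth: change in weighted degrees per capita."""
    per_capita_start = degrees_start / pop_start
    per_capita_end = degrees_end / pop_end
    return per_capita_end / per_capita_start - 1

# Hypothetical city: degrees of every type grow 20% while population grows 2%.
growth = attainment_growth(weighted_degree_count(1000, 2000, 500),
                           weighted_degree_count(1200, 2400, 600),
                           700_000, 714_000)
print(round(growth * 100, 1))  # percent increase in weighted degrees per capita
```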

So the great thing about the Talent Dividend Prize is that every community is a winner. Increasing educational attainment pays a big dividend to communities. On average, every one percentage point improvement in educational attainment is associated with an $835 increase in per capita personal income. The participating cities have also strengthened their capability for local collaboration around these issues, which will likely pay additional dividends going forward.

(Updated October 30, to reflect the announcement of the winner).

 


Is Portland really where young people go to retire?

Forget the quirky, slacker stereotype, the data show people are coming to Portland to start businesses.

Portland

A recent New York Times magazine article, “Keep Portland Broke,” echoed a meme made popular by the satirical television show “Portlandia,” asking whether the city will always be a retirement community for the young.

Far from being a retirement venue for the precocious indolent, the city is in fact a beehive of social and cultural innovation and entrepreneurship.

Critics are to be forgiven if they mistake a different set of interests, and sometimes values, for indifference to “traditional” work.

And we’re not talking individually pedigreed free-range chickens or artisanal pickles (although you’ll find those, too).

The truth is the young adults in Portland are disproportionately entrepreneurial. Among college educated 25 to 34 year olds, fully nine percent are self-employed, a rate half again greater than that of other large metropolitan areas—and ranking Portland third for self-employment among metros with a million or more population.

Among the nation’s 51 largest metro areas—all those with a million or more population, Portland ranks fourth in small businesses per capita, fourth in self-employment, seventh in patents per capita and fourteenth in venture capital per capita. And true to form, the city shines when it comes to edgy and creative: according to Forbes, Portland ranked seventh in Bandcamp, third in Kickstarter, and third in Indie-Go-Go among US cities.

And the city is alive with creative endeavors of all kinds. It has more than 600 food carts and the largest concentration of microbreweries of any large city in the US.

Portland is home to the nation’s leading cluster of athletic and outdoor gear and apparel firms, including the world headquarters of Nike and Columbia Sportswear and the North American headquarters of adidas. And there are more than 400 other firms in the industry cluster—most of them started locally—making Portland one of the hottest places on the planet for designers.

Arts and culture. Indy bands. A prolific, inventive food scene. Strong and innovative clusters of software and semiconductor firms. A robust, world-class athletic gear and apparel cluster.

And, at the end of the day, the claims of indolent retirement fall in the face of simple and compelling data about the region’s unemployment rate. In 2012, the unemployment rate for 25 to 34 year olds with a four-year degree or higher level of education in Portland was 4.8 percent—a bit higher than the average for large metropolitan areas (4.0%), but the same as Houston, and lower than Atlanta and Chicago (5.2%), Los Angeles (8.3%), Las Vegas (7.2%) and—attention New York Times—New York (5.7%).

This new generation is doing new and different things. It is keeping Portland weird. And some of what is happening will seem bizarre or disconcerting to those who’ve grown settled in their ways. But Oregonians are pretty much oblivious to this kind of derision from outsiders: we’re content to do what we want because it makes sense to us, not because it passes muster with critics from somewhere else.

There’s actually a long history of that here in Portland. Back in the 1960s, at a time when adult Americans generally didn’t sweat in public if they could avoid it, people in this area started running and jogging for health. What was originally odd behavior—grown men and women in shorts and t-shirts out running along public streets—presaged a national and global trend in fitness. And one guy started a business selling Japanese sneakers to these joggers out of the back of his 1964 Plymouth Valiant station wagon. The company Phil Knight started—Nike—is today one of the world’s most recognized brands and the global leader in sports apparel. Not every weird little habit ultimately leads to a Fortune 500 company, but a surprising number of successes emerge from people who weren’t afraid to be different.

Joe Cortright’s earlier rejoinder to the New York Times article on Portland appears on CityLab.

Photo courtesy of Frank Fujimoto at Flickr Creative Commons