The High Cost of Segregation

A new report from the Urban Institute shows the stark costs of economic and racial segregation

Long-form white paper policy research reports are our stock in trade at City Observatory. We see dozens of them every month, usually read them with great interest, and flag the best ones for the “must read” list we publish as part of the Week Observed. Usually that’s enough. Yesterday’s report from the Urban Institute–The Cost of Segregation–is different. It’s not just a must read: it’s a must read, digest, understand, and use.

We’ve known for a long time that segregation is “a bad thing.” But the new Urban Institute report offers a stark, comprehensive and compelling calculation of the economic and social costs that segregation imposes every day on the residents of the nation’s large metropolitan areas. Higher levels of segregation are associated with lower levels of black per capita income, lower rates of educational attainment, and higher levels of crime. As a result, segregation is more than just wrong or unfair; it imposes serious economic costs. Conversely, more inclusive metropolitan areas are more prosperous.

The Urban Institute has computed how large the gains might be from simply reducing the level of segregation in some of the more segregated cities to the level typically found in large metro areas. In the case of Chicago–one of the dozen or so most segregated metro areas–lowering economic and racial segregation to the national median would have these effects:

  • raising black per capita income by $3,000 per person (for a total metro gain of $4.4 billion)
  • increasing the number of college graduates by 80,000
  • reducing the number of homicides by almost one-third (from about 6.6 per 100,000 to 4.6 per 100,000 per year)
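A quick back-of-the-envelope check connects the first bullet’s two figures; this sketch uses only the numbers quoted above:

```python
# The per-person gain and the total metro gain together imply the
# population being multiplied over.
per_capita_gain = 3_000      # dollars per black resident (from the report)
total_metro_gain = 4.4e9     # dollars, total metro gain (from the report)

implied_population = total_metro_gain / per_capita_gain
print(f"implied black population of metro Chicago: {implied_population:,.0f}")
# roughly 1.5 million
```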

While the report is ostensibly about the Chicago metropolitan area, what you’ll really find is a careful tabulation of segregation data for all of the nation’s 100 largest metropolitan areas, plotting trends over the 20-year period from 1990 through 2010. As a quick summary, they’ve mapped the ranking of metro areas based on a composite measure that combines economic and racial/ethnic segregation. On this map, reddish brown areas have the highest levels of segregation, and dark blue areas have the lowest.

Some Technical Details

The report has a wealth of data on segregation. It uses a slightly different geography than most other analyses of segregation, reporting data for commuting zones, city-centered regions that are somewhat larger than federally defined metropolitan statistical areas. (Economist Raj Chetty and his colleagues used this same geography for their Equality of Opportunity analysis.) The report also uses two new measures of segregation. Its measure of racial and ethnic segregation is the Spatial Proximity Index, which is computed for pairs of groups (Whites and Blacks, and Whites and Latinos). The SPI equals one if the two groups are distributed across the same neighborhoods; values higher than one indicate the degree to which members of each group are clustered with others in their own group (whites with whites, and so on). Higher values indicate greater degrees of segregation between groups.

For economic segregation, the report uses the Generalized Neighborhood Sorting Index, which measures the extent to which high-income and low-income groups tend to live in the same or different parts of a metropolitan area. The GNSI runs from zero (evenly distributed) to one (completely segregated). The index has a spatial component: it considers whether, for example, poor neighborhoods are primarily adjacent to other poor neighborhoods, or are more intermingled with higher-income neighborhoods.
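To make these indices concrete, here’s a minimal sketch of how a spatial-proximity-style segregation index can be computed for a toy corridor of census tracts. The negative-exponential distance decay, the tract layout, and the population counts are all our own illustrative assumptions, not the report’s actual specification:

```python
import math

def proximity(tracts, key_a, key_b, decay=1.0):
    """Average proximity between members of groups a and b, using a
    negative-exponential distance decay between tract locations."""
    total_a = sum(t[key_a] for t in tracts)
    total_b = sum(t[key_b] for t in tracts)
    s = 0.0
    for ti in tracts:
        for tj in tracts:
            d = abs(ti["x"] - tj["x"])
            s += ti[key_a] * tj[key_b] * math.exp(-decay * d)
    return s / (total_a * total_b)

def spatial_proximity_index(tracts, a="white", b="black"):
    """SPI-style index: 1 when the two groups share the same spatial
    distribution; above 1 when each group clusters with itself."""
    X = sum(t[a] for t in tracts)
    Y = sum(t[b] for t in tracts)
    T = X + Y
    for t in tracts:
        t["total"] = t[a] + t[b]
    Paa = proximity(tracts, a, a)
    Pbb = proximity(tracts, b, b)
    Ptt = proximity(tracts, "total", "total")
    return (X * Paa + Y * Pbb) / (T * Ptt)

# Toy corridor: two groups clustered at opposite ends...
segregated = [{"x": 0, "white": 100, "black": 0},
              {"x": 1, "white": 100, "black": 0},
              {"x": 2, "white": 0, "black": 100},
              {"x": 3, "white": 0, "black": 100}]
# ...versus identical mixes in every tract
integrated = [{"x": i, "white": 50, "black": 50} for i in range(4)]

print(spatial_proximity_index(segregated))  # above 1: groups cluster apart
print(spatial_proximity_index(integrated))  # 1.0: identical distributions
```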

The report includes detailed data for each of the nation’s 100 largest commuting zones, as well as a clearly constructed on-line calculator that illustrates where a selected metropolitan area stands in relation to all others.  Here are the calculator’s data for Chicago:

This is just a quick overview of what’s in the report. We’ll be digging into its content more in the next few days, and sharing some of our thoughts. But don’t wait for our analysis; there’s lots to learn by downloading the report and poring over the data for your metro area.

Autonomous vehicles: Peaking, parking, profits & pricing

13 propositions about autonomous vehicles and urban transportation

It looks more and more like autonomous vehicles will be a part of our urban transportation future. There’s a lot of speculation about whether their effects will be for good or ill. While there’s a certain “techno-deterministic” character to these speculations, we’re of the view that the policy environment can play a key role in shaping the adoption of AVs in ways that support, rather than undermine, the transportation system and the fabric of cities.

A rocky road for autonomous vehicles? A March 24, 2017 crash of an Uber self-driving vehicle in Tempe, Arizona, via REUTERS.

Our thinking is still evolving on this subject, but to start the conversation, we’ll pose 13 propositions about the nature of urban travel demand, autonomous vehicles, and what we’ll need to do to change our policies and institutions to cope with them. Given that we think that many of the persistent problems with our current transportation system stem from getting the prices wrong, we think that the way that autonomous vehicles will change the cost and price of urban transportation will be key to shaping their impacts.

  1. Urban travel demand is highly peaked. As a rule, we have plenty of capacity in our transportation system for about twenty of the twenty-four hours of the day. Because we disproportionately tend to travel at the same times, in the morning and afternoon peaks, streets are taxed to their limits at peak hours, usually for an hour or an hour and a half in the morning, and for two and a half to three hours in the late afternoon. As Jarrett Walker observes, this is a geometry problem: single-occupancy vehicles are not space-efficient enough to accommodate all travelers in peak periods in most urban environments. But it would be more accurate to call this a “space-time” problem: we don’t have enough space at certain times. Analyses of AV adoption and deployment routinely abstract from these issues. The peaked nature of demand has important implications: more economic value is associated with peak period travel than travel at other times of the day, due both to its volume and to the nature of demand. Demand for peak period travel is more inelastic—which is why travelers routinely endure longer travel times in peak hours rather than simply waiting and making those trips at some other hour when congestion is less and travel times are faster: we willingly endure an extra five or ten minutes in our peak-hour commute when, if we waited an hour or ninety minutes, we could shorten our trip by that amount of time.
  2. Parking costs shape mode choice decisions. Where parking is “free” to end users, they are far more likely to drive. More than four-fifths of all households own automobiles. The costs of owning cars are largely fixed (depreciation, insurance), and the marginal cost of taking a trip by car is often regarded by users as just the incremental cost of fuel. The major additional cost of many trips, especially to urban environments, is the cost of paying for car storage when the vehicle isn’t being used. The cost of parking in city centers is a major incentive to use other modes of transportation, and there is a very strong correlation between parking costs and transit use. In effect, parking costs act as a surrogate road pricing mechanism for trips with origins or destinations in the CBD. The advent of autonomous vehicles (AVs) will greatly reduce or entirely eliminate the cost of parking as a factor in mode choice. Many people who currently don’t drive to the central business district in order to avoid parking costs will want to choose AVs.
  3. Autonomous vehicle costs will be low enough to compete against transit. The cost of AV travel may be something on the order of 30 to 50 cents per mile (and could be considerably less). Most transit trips are less than four miles in distance. Most transit fares are in excess of two dollars per ride. AVs may be cost-competitive, and potentially offer much better service (point-to-point travel, less or no waiting, privacy, greater comfort, and so on). It’s fair to assume that widespread deployment of fleets of AVs will stimulate a huge demand for urban travel, both among car-owning households who don’t currently drive because of parking costs, and among car-owning households who do commute by car (because they can avoid the cost of parking).
  4. Suburbs will be relatively poor markets for autonomous vehicles. Conversely, where parking is free and density is low, fleet AV service will be a far less attractive option for travelers and a far less lucrative market for fleet AV operators. Because they don’t currently have to pay for parking, commuters don’t save this cost when paying for an AV. Also, less dense areas will by definition be “thinner” markets for car sharing; for companies this means less revenue per mile or per hour and lower utilization, and for customers it means longer waits for vehicles. People who live and work in low-density areas may find it more attractive to own their own vehicle.
  5. AVs will tend to concentrate in urban centers. The markets are denser there, the technical challenges of mapping the roadway are more tractable, and the cost of mapping can be spread over more trips per road mile traveled. And, importantly, operators will be able to surge price in these locations. Surge pricing is possible because the demand for travel, particularly at the peak hour, is higher: demand is greater (more people traveling) and people attach a greater value to their travel time. Companies will want to concentrate their fleets in places that have lots of customers, both to optimize utilization (less waiting and dead-heading) and to maximize revenue (surge-priced trips are more profitable than regular fares).
  6. The demand for peak period travel in urban centers will tend to overwhelm available road capacity, even more so than it does today. More commuters will seek to travel by AV, and AV fleet operators will concentrate their vehicles in lucrative dense locations.
  7. Surge pricing by AV operators will help equilibrate supply and demand. While AVs may cost only 30 to 50 cents per mile to operate, surge prices in dense urban environments could be many times higher than this amount. Operators will use dynamic pricing to ration vehicles to the highest-value users. Others who might like to travel by AV will choose other modes or times (travel by transit, pay the price of parking and drive their own cars, wait for a cheaper AV at an off-peak time, walk, bike, etc.). AVs will tend to fill up existing road capacity.
  8. AV fleet operators will capture a significant portion of the economic rent associated with use of the limited peak period capacity of roads.  Pricing will result in a more efficient allocation of road use among users (in a technical sense, and abstracting from distributional issues).  But the profits from the limited capacity will go to the AV fleet operators, and not the public sector, which is responsible for building and maintaining the roadway, and is typically asked to incur huge expense for additions to capacity to lessen congestion.
  9. Under current road financing policies, AVs might end up paying almost nothing for the use of the public roadway. The gasoline tax is the principal source of revenue for road construction and maintenance. Electric AVs pay nothing in most states toward road costs.  A hallmark of current transportation network companies has been their “disruptive” policies of avoiding (or shifting) the fees and taxes imposed on conventional taxis. We assume this behavior will continue.
  10. In addition, AVs will disproportionately make use of the most congested, most expensive parts of the public street and road system. Unlike typical vehicles, which as widely noted are parked 90-plus percent of the time, AVs will see much higher use, and as noted here, will tend to gravitate toward the densest markets and, thanks to surge pricing, will be drawn to the most congested locations. With fuel taxes, privately owned vehicles pay the same per mile cost for road use whether they use lightly trafficked roads at off-peak times or congested urban roads at peak times. As noted, parking costs effectively discourage peak use in dense locations. And to some extent, the off-peak and low-density use of cars means that some roads cross-subsidize others. Parking fees and private ownership of cars have in effect limited the ability of cars to overwhelm city streets. Both of these constraints will be largely erased by fleets of autonomous vehicles.
  11. Some regime change in road pricing is needed.  The gasoline tax won’t work for electric vehicles. Fees tied simply to energy consumed or vehicle miles traveled ignore the very different system costs imposed by travel in different places and at different times.  A VMT fee still allows private fleet operators to capture all or most of the economic rent associated with peak travel in dense urban places, and provides no added revenue to address road or transportation system capacity constraints.
  12. What we really need is surge pricing for road use. The key constraint on urban transportation system performance is peak hour capacity. Single occupancy vehicles represent a highly inefficient way to make use of very expensive peak hour capacity. Without surge pricing for roads, AV fleet operators have strong incentives to capture the economic rents associated with peak period travel, shifting costs and externalities to the public sector and non-user travelers.
  13. Surge pricing should be established before AV fleets are widely deployed.  Once deployed, AV fleet operators will have a powerful incentive to fight surge pricing because it will reallocate economic rents from them to the public sector.
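The arithmetic behind proposition 3 is simple enough to sketch. The figures below are the propositions’ own illustrative assumptions, not measured costs:

```python
# Rough trip-cost comparison from proposition 3: a typical short urban
# transit trip, priced at fleet-AV per-mile costs vs. a flat fare.
av_cost_per_mile_low = 0.30   # dollars; low end of the 30-50 cent range
av_cost_per_mile_high = 0.50  # dollars; high end of the range
trip_miles = 4.0              # "most transit trips are less than four miles"
transit_fare = 2.00           # "most transit fares are in excess of two dollars"

av_trip_low = av_cost_per_mile_low * trip_miles
av_trip_high = av_cost_per_mile_high * trip_miles
print(f"AV trip: ${av_trip_low:.2f}-${av_trip_high:.2f}; "
      f"transit fare: ${transit_fare:.2f}")
```

Even at the high end of the cost range, the AV trip only matches the flat fare, before accounting for the service-quality advantages listed above.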

Please consider this a first draft. We invite your comments, and expect to periodically revise, expand and annotate these 13 propositions.

Breaking Bad: Why breaking up big cities would hurt America

New York Times columnist Ross Douthat got a lot of attention a few days ago for his Jonathan Swiftian column–”Break up the liberal city“–suggesting that we could solve the problems of lagging economic growth in rural and small town America by whacking big cities into pieces and spreading their assets more widely. Douthat views himself as a latter-day Teddy Roosevelt, busting up the big concentrations of urban power the way Roosevelt took on Standard Oil. Simply put, this is one of the most spectacularly wrong-headed policy prescriptions for economic development ever offered. Far from spreading wealth, diminishing cities would actually destroy value and make the nation worse off.

Cities don’t extract rent, they create value

Douthat’s reasoning is based on a simplistic zero-sum view of economic assets like industries and universities: cities have somehow unfairly monopolized the nation’s wealth, and we ought to redistribute it. The implied analogy here is to anti-trust law: cities have somehow cheated to monopolize resources. What this misses is that cities actually create value through increasing returns, what economists call agglomeration economies. People in cities are more productive, more innovative, and have higher skills because they live in cities. Absent cities, the innovation and productivity upon which these industries depend for their success simply wouldn’t exist. As Ed Glaeser told the Washington Post:

“Cities enable workers to search over a wider range of firms, and to hop from one firm to another in case of a crisis. They enable service providers to reach their customers, and customers to access a dizzying range of service providers. Perhaps most importantly they enable the spread of ideas and new information. . . . cities are forges of human capital that enable us to get smart by being around other smart people.”

Economists have come to widely embrace the view advanced by Jane Jacobs that cities succeed in large part because of their diversity and density, which produce the kinds of spontaneous collisions of people that give rise to new ideas and new industry (what Jacobs called “new work”). The nation’s largest metros produce a disproportionate share of its new patents and economically successful new businesses because of these agglomeration economies: just 20 metros produce 63 percent of all patents. In biotechnology, for example, just three metro areas (Boston, San Diego and San Francisco) produce a majority of new biotech firms. Dispersing these researchers–who rely on critical mass and close and serendipitous interaction–would reduce the flow of new ideas that drive economic growth.

The signal characteristic of our economic recovery is that it has been led and driven by the nation’s large metros. Since the economic peak of the last expansion, large metro areas have accounted for about 87 percent of net new jobs in the US economy. This isn’t because they’ve somehow unfairly monopolized resources, but because the kinds of knowledge-based industries that we depend on to propel economic growth–software, business and professional services, and creative industries–all flourish in dense urban environments. Disperse these industries and you undercut the agglomeration economies that underpin their success.

The economic problem with cities is that we don’t have enough of them, or rather, that it’s so difficult and expensive to accommodate more people in the places with the highest levels of productivity. The definitive bit of research on this subject comes from University of California, Berkeley economist Enrico Moretti and his University of Chicago colleague Chang-Tai Hsieh, who have estimated how much less productive the US is than it might be if the growth of the most productive metro economies weren’t limited. Their estimate: 13.5 percent of annual GDP, or more than $1.6 trillion annually.

The irony here, also, is that the wealth and productivity of the nation’s cities underwrite a disproportionate share of the cost of the national government. The nation’s largest metropolitan areas have higher incomes and, given the progressivity of the federal income tax, pay a larger share of the nation’s income taxes. Rural areas–and red states generally–are net recipients of the redistribution produced by federal taxing and spending. Shifting economic activity away from metro areas would reduce productivity and federal tax revenues. And one final twist: current federal tax and spending policies (including the home mortgage interest deduction and highway spending) effectively penalize city dwellers, who are more likely to be renters and to depend on transit.

And finally, it would be worth considering the environmental consequences of dispersing economic activity in cities. Because city residents drive less, walk, bike and take transit more, and live in smaller and more energy-efficient dwellings, large cities turn out to be much more energy efficient and produce fewer greenhouse gases per capita than smaller cities and rural areas. So redistributing city assets would increase carbon emissions and accelerate global warming.

The lesson is not that we need to break up cities, but create more of them

More and more Americans are looking to move to cities.  This is especially true of younger, well-educated workers. Because the growing demand for urban living is facing a slowly changing supply of urban housing, rents are rising, effectively pricing some workers out of the opportunity to live in these highly productive places. At City Observatory we’ve called out the nation’s “Shortage of Cities,” and argued for policies that would help create more housing opportunities in the most productive places, and promote reinvestment and revitalization of lagging cities.



Transit and home values

Homes with better transit access command higher prices, especially in cities with good transit.

Our friends at Redfin, the real estate data and analytics company, have an interesting new report exploring the connection between transit access and home prices. Redfin computes and freely publishes a Transit Score for all of the nation’s houses. Transit Score is a sibling of Walk Score, and is a measure of the number and frequency of bus, train and other transit lines that run in close proximity to a given property.  The closer your house is to a bus stop or train station, and the more lines that serve that stop, the higher your Transit Score.

Moving people and moving real estate markets? (Flickr: BeyondDC)

Their new study uses house price data from 14 cities to look at the relationship between home prices and Transit Score. Redfin uses a workhorse economic tool, called hedonic modeling, to tease out the contribution of different aspects of a home (its size, number of bedrooms, bathrooms, age, neighborhood characteristics, etc.) to the home’s sales price. What they find is that, after controlling for these other observable characteristics that we know influence home values, each additional point of Transit Score is associated with an average of about $2,000 in higher home value. They’ve modeled the impact of an additional point of Transit Score on different cities, and shown how it relates to local housing prices.

Selected city level results, from Redfin
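For readers curious about the mechanics, here’s a minimal sketch of how a hedonic regression of this kind works. Everything below is synthetic: the variables, coefficients, and noise are invented for illustration, not Redfin’s actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Invented housing characteristics
sqft = rng.uniform(800, 3000, n)
bedrooms = rng.integers(1, 6, n).astype(float)
transit_score = rng.uniform(0, 100, n)

# "True" price process: each Transit Score point is worth $2,000,
# plus unexplained noise
price = (50_000 + 120 * sqft + 15_000 * bedrooms
         + 2_000 * transit_score + rng.normal(0, 20_000, n))

# The hedonic regression: price on observable characteristics,
# recovering the marginal contribution of each one
X = np.column_stack([np.ones(n), sqft, bedrooms, transit_score])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
print(f"estimated value per Transit Score point: ${coef[3]:,.0f}")
```

With enough observations, the regression recovers a per-point value close to the $2,000 built into the synthetic data, even though each individual sale price is noisy.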

The value of accessibility

If that finding sounds a bit familiar, it should. We, and separately researchers at Zillow and Redfin, have conducted this kind of analysis looking at the connection between home values and Walk Score. Although there’s been a bit of variation in the exact details, all three analyses have found a statistically significant connection between home prices and walkability. The more walkable the neighborhood a home is located in, the higher its value tends to be.

So does this mean that you can add the value associated with a point of Transit Score to the increased value associated with Walk Score? Probably not. First, this Redfin study looked only at Transit Score, and not at Walk Score, so statistically, it can’t say anything about the separate (or joint) impacts of the two different measures. Second, and more importantly, we know that there’s a pretty strong correlation between transit access and walkability. Using transit and walking are strong complements (most bus and train riders are also pedestrians at both ends of their trip), and transit lines tend to provide higher levels of service to denser and more walkable neighborhoods. As a result, given the overlap between walkable neighborhoods and transit-served ones, it’s probably difficult, and maybe impossible, to tease out the separate contributions of the two factors.
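A small simulation illustrates why. When two amenity scores move together, a regression can pin down their combined effect while leaving their separate effects very imprecisely estimated. The data and coefficients here are invented, not the actual Redfin or Walk Score figures:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical scores: Transit Score tracks Walk Score closely,
# as it does in real neighborhoods
walk = rng.uniform(0, 100, n)
transit = 0.9 * walk + rng.normal(0, 5, n)

# "True" price process: each point of each score is worth $1,000
price = 1_000 * walk + 1_000 * transit + rng.normal(0, 50_000, n)

X = np.column_stack([np.ones(n), walk, transit])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)

# Standard errors balloon when regressors are nearly collinear,
# so the separate walk and transit effects are poorly identified...
resid = price - X @ coef
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

# ...but the combined effect along the shared dimension is pinned down
combined = coef[1] + 0.9 * coef[2]
print(f"walk: {coef[1]:.0f} (se {se[1]:.0f}), "
      f"transit: {coef[2]:.0f} (se {se[2]:.0f}), "
      f"combined: {combined:.0f}")
```

The individual coefficients bounce around with large standard errors, while their combined effect is estimated tightly; that is exactly the identification problem described above.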

While that may be frustrating statistically, it’s probably much less of a problem for policy. Given the complementarity between walking and transit, we probably want to build neighborhoods that foster both kinds of transportation. In effect, measures like Walk Score and Transit Score (and a third sibling, Bike Score) are actually alternate ways of measuring a home’s accessibility: how easy it is to reach a range of common destinations. The greater the level of accessibility, by all these modes, as well as by automobile, the greater the value a home is likely to command.

Is transit worth more in transit-rich cities?

Keeping in mind that some of what is being measured in the Redfin study is the lagniappe from walkability and bikeability being greater in transit-served areas, we thought there was one other interesting aspect of the study’s data: the effect of Transit Score on home values varied considerably across cities. While the average effect of an additional point of Transit Score was to increase home values by $2,000, or between one-half of one percent and one percent in most cities, the impact was noticeably greater in some cities than others. In Boston, where the citywide Transit Score grades out at a very high 74, the impact of a point of Transit Score is 1.1 percent. In the cities with the lowest levels of citywide transit accessibility, the value impact of one point of Transit Score was much less. In Phoenix (citywide Transit Score 32) it was just 0.14 percent; in San Diego (citywide score 37) only 0.18 percent. And in Orange County (an outlier), where the citywide Transit Score was just 27, the effect of transit access was negative. This suggests a couple of things. First, in some places (like Orange County), access to transit is a disamenity: homes are worth less, all other things equal, if they’re near the (relatively few and infrequent) bus lines. Second, and more optimistically, transit is even more valuable in cities where the whole transit system works well. This suggests a kind of network externality: if a city has a good, well-connected transit system, access to transit has an even bigger impact on an individual house’s value.

This latest research is another reminder that consumers place a positive value on city living. Neighborhoods that are walkable, have great transit access, and are bikeable command higher prices because consumers value them more than places without these characteristics. Even though we face some statistical obstacles to separating out the different contributions of each ingredient, it’s clear that the combination of biking, walking and transit helps underpin urban property values. And you can take that to the bank.

Going faster doesn’t make you happier; you just drive farther

Speed doesn’t seem to be at all correlated with how happy we are with our local transportation systems.

Yesterday, we presented some new estimates of the average speed of travel in different metropolitan areas developed by the University of California’s Victor Couture. His data show that average travel speeds in some metropolitan areas (like Louisville) are 22 percent faster than in the typical large metro area, while in other areas they are slower. Miami’s speeds average about 12 percent less than the typical metro’s. We’ve long assumed that one of the goals of our transportation system is to enable us to move as quickly as possible when we travel, so it stands to reason that the people who live in “faster” cities ought to be happier with their transportation systems.

Faster, but not happier. (Flickr: Opengridscheduler)

To test that hypothesis, we had a look at some survey data generated by real estate analytics firm Porch. They commissioned a nationally representative survey of residents of the nation’s large metropolitan areas and asked them to rate their satisfaction with their local transportation system on a scale of 1 to 5, with 5 being very satisfied. We compared these metro-level satisfaction ratings to Couture’s estimates of relative speeds in each metro area. There’s a bit of a time lag between the two data sources: the survey data are from 2015 while the speed data are from 2008; but as we showed yesterday, the 2008 speed data correlate closely with an independent study of traffic congestion levels in 2016, suggesting that the relative performance of city transportation systems hasn’t changed much in that time period.

Faster Metros don’t have happier travelers

The following chart shows happiness with the regional transportation system on the vertical axis, and average speed on the horizontal axis. Higher values on the vertical (happiness) scale indicate greater satisfaction; larger values on the horizontal (speed) scale indicate faster than average travel speeds. The data show a weak negative relationship that falls short of conventional significance tests (p = .16). While there isn’t a strong relationship between speed and happiness, if anything it leans toward being a negative one; those who live in “faster” cities are not happier with their transportation systems than those who live in slower ones.
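For readers who want to see the mechanics, here’s how a correlation and its significance test of this general kind can be computed. The metro figures below are hypothetical stand-ins chosen for illustration, not the actual Couture or Porch data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def t_statistic(r, n):
    """t statistic for H0: no correlation, with n - 2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# Hypothetical metro-level data: a relative speed index (1.0 = typical
# metro) and a 1-5 transportation satisfaction rating
speed = [0.88, 0.90, 0.95, 0.97, 1.00, 1.00, 1.05, 1.10, 1.15, 1.22]
satisfaction = [3.3, 3.6, 3.2, 3.5, 3.1, 3.4, 3.3, 3.0, 3.4, 3.1]

r = pearson_r(speed, satisfaction)
t = t_statistic(r, len(speed))
print(f"r = {r:.2f}, t = {t:.2f}")  # a weak negative relationship
```

With only a handful of metros, even a visibly downward-sloping scatter can produce a t statistic that falls short of conventional significance thresholds, which is the pattern the chart shows.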


We have a strong hunch as to why traveling faster might not generate more satisfaction with the transportation system. Faster travel is often correlated with lower density and longer travel distances to common destinations, such as workplaces, schools and stores. If you have a sprawling, low-density metropolitan area, with great distances between destinations, much of the potential savings in travel time may be eaten up by having to travel longer distances. A complementary explanation is that places with faster speeds may be ones where proportionately more travel occurs on higher-speed, higher-capacity roads, such as freeways, parkways and major arterials, as opposed to city streets. The higher measured speed may be a product of traveling long distances at high speeds in some cities, as opposed to cities with much shorter average trips on slower city streets.

Faster travel is correlated with more driving

To explore this hypothesis, we compared average vehicle miles traveled (VMT) per person per day, as reported by the US Department of Transportation, to the average estimated speeds for metropolitan areas.  Both of these sets of observations are for 2008. The following chart shows VMT per capita on the vertical axis and average speed on the horizontal axis. As we thought, there’s a strong positive relationship between speed and distance traveled. People who live in places with faster speeds drive more miles per day.

More driving is associated with less satisfaction with metro transportation

To tie this all together, we thought we’d look at one more relationship:  How does distance traveled affect happiness with an area’s transportation system? This final chart shows the happiness (on the vertical axis) and vehicle miles traveled (on the horizontal axis). Here there is a strong negative relationship: the further residents drive on a daily basis, the less happy they are with their metro area’s transportation system.

We think this chart has an important implication for thinking about cities and transportation. Instead of focusing on speed, which seems to have little if any relationship to how people view the quality of their transportation system, we ought to be looking for ways to influence land use patterns so that people don’t have to travel as far. If we could figure out ways to enable shorter trips and less travel, we’d have happier citizens.

Are restaurants dying, and taking city economies with them?

Alan Ehrenhalt is alarmed. In his tony suburb of Clarendon, Virginia, several nice restaurants have closed. It seems like an ominous trend. Writing at Governing, he’s warning of “The Limits of Café Urbanism.” Café urbanism is a “lite” version of the consumer city theory propounded by Harvard’s Ed Glaeser, who noted that one of the chief economic advantages of cities is the benefits they provide to consumers in the form of diverse, interesting and accessible consumption opportunities, including culture, entertainment and restaurants.

While the growth of restaurants has coincided with the revival of Clarendon in the past decade, all this seems a bit insubstantial to Ehrenhalt. He worries that if the urban economic revival is built upon the fickle tastes of restaurant consumers–as it were, on a foundation of charred octopus and bison carpaccio–city economies could be vulnerable. What, Ehrenhalt worries, will happen if the growth of these restaurants peters out?

That may already be happening. In 2016, according to one reputable study, the number of independently owned restaurants in the United States — especially the relatively pricey ones that represent the core of café urbanism — declined by about 3 percent after years of steady growth. The remaining ones were reporting a decline in business from a comparable month in the previous year.

There are a couple of problems with this “restaurant die-off” story. First, it’s a bit over-generous to suggest that restaurants themselves are the principal economic force behind urban economic revival. The growth of restaurants is more a marker of economic activity than the driver. Restaurants are growing because cities are attracting an increasing number of well-educated and productive workers, which drives up the demand for a range of local goods, including restaurants. While the restaurants contribute to the urban fabric, they are more a result of urban rebound than a cause.

Second, the data clearly show that the restaurant business continues to expand. If anything, nationally, we’re in the midst of a continuing and historic boom in eating out. In 2014, for the first time, the total amount of money that Americans spent on food consumed away from home exceeded the amount that they spent on food for consumption at home. There may come a time when Americans cut back and spend less on eating out, but that time is not now at hand: According to Census Bureau data, through January 2017, restaurant sales were up a robust 5.6 percent over a year earlier.

Ehrenhalt’s data about the decline in independent restaurants is apparently drawn from private estimates compiled by the consulting firm NPD, which last spring reported a decline of 3 percent in independent restaurants, from 341,000 units to 331,000 units in the prior year. NPD’s data actually compared 2014 and 2015 counts of restaurants. But the NPD estimates aren’t borne out by data gathered by the Census Bureau and Bureau of Labor Statistics, which show the number of restaurants steadily increasing. The counts from the BLS show the number of restaurants in the US increasing by about 2 percent in 2016, an acceleration in growth from the year earlier.



At City Observatory, we’ve seen a steady stream of articles lamenting the demise of popular restaurants in different cities, each replete with tales from chefs of financial woe and burdensome regulation. (The reason never seems to be that the restaurant was poorly run, served bad food, had weak service, or simply couldn’t compete.) The truth is that failures are commonplace in the restaurant business. No one should be surprised that an industry that puts such a premium on novelty has a high rate of turnover. Government data show that something like 75,000 to 90,000 restaurants close each year, which means the mortality rate, even in good years, is around 15 percent. The striking fact about the closure data is that the trend has been steadily downward for most of the past decade.
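Those closure and mortality figures imply a rough size for the restaurant universe, which is easy to sanity-check with back-of-the-envelope arithmetic (the 75,000–90,000 closures and the roughly 15 percent rate come from the paragraph above; the division itself is just illustrative):

```python
# If 75,000 to 90,000 restaurants close each year, and that represents a
# mortality rate of roughly 15 percent, the implied total number of
# restaurants is closures divided by the rate.
closures_low, closures_high = 75_000, 90_000
mortality_rate = 0.15

implied_low = closures_low / mortality_rate    # 500,000 establishments
implied_high = closures_high / mortality_rate  # 600,000 establishments
print(implied_low, implied_high)
```

An implied universe of 500,000 to 600,000 establishments is consistent with a large, high-churn industry–which is the point: double-digit annual closure rates are normal, not a sign of collapse.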

So nationally, here’s what we know about the restaurant industry:

  • Americans are spending more at restaurants now than ever before, and now spend more eating out than eating at home
  • The number of restaurants is at an all-time high, having increased by a net 40,000 over the past five years.
  • Restaurant closings are common, but declining.

None of this is to say that Ehrenhalt isn’t right about the restaurant scene in his neighborhood. The fortunes of neighborhoods, like restaurants themselves, wax and wane. But even in Ehrenhalt’s upscale Virginia suburb, which is part of Arlington County, government data show no evidence of a widespread restaurant collapse. Data from the Bureau of Labor Statistics show that there’s been a sustained increase in the number of restaurants in Arlington County. Arlington County now has 580 restaurants, an increase of about 10 percent from its pre-recession peak.


It appears that we’re still moving in the direction of what some have called an “experience economy.” And there are few more basic (or enjoyable) experiences than a good meal. One of the economic advantages of cities is the variety and convenience of dining choices. While individual establishments will come and go, the demand for urban eating seems to be steadily increasing. So, far from being a portent of economic decline, we think cafe urbanism will be with us–and continue to grow–for some time.

What Travis Kalanick’s meltdown tells us about Uber

As has been well chronicled in the media, it’s been a tough month for Uber. The company’s CEO, Travis Kalanick, was vilified in the press for the company’s tolerance of sexual harassment of its female employees, and derided for his participation in President Trump’s business advisory council (from which he resigned after an estimated 200,000 people cancelled their accounts with Uber). Finally, he was recorded in a shouting match with a San Francisco Uber driver, who claimed to have lost $7,000 because of Kalanick’s changes to Uber’s reimbursement policies.

Kalanick is shown telling the driver, Fawzi Kamel, to take responsibility for his own “s***”, and storming out of the car.  Kalanick has since apologized.

But tirade and tempers aside, the conversation between driver Kamel and CEO Kalanick is actually very revealing about Uber’s financial predicament. Kamel complains that while Uber started as a premium service and paid drivers relatively high rates, over time the company has been cutting back on the amount it pays drivers.

Kalanick bristles at this criticism (arguing that Uber still pays higher rates for its premium “black” service), but also concedes that he’s been pushed to lower rates to meet the competition provided by Lyft and other transportation network companies. Bloomberg Businessweek has transcribed their conversation:

Then Kamel says what every driver has been dying to tell Kalanick: “You’re raising the standards, and you’re dropping the prices.”

Kalanick: “We’re not dropping the prices on black.”

Kamel: “But in general the whole price is—”

Kalanick: “We have to; we have competitors; otherwise, we’d go out of business.”

Kamel: “Competitors? Man, you had the business model in your hands. You could have the prices you want, but you choose to buy everybody a ride.”

Kalanick: “No, no no. You misunderstand me. We started high-end. We didn’t go low-end because we wanted to. We went low-end because we had to because we’d be out of business.”

This, in a nutshell, is Uber’s problem: It’s losing money, and its competition is forcing it to lose even more money in order to stay in business. In an effort to stay afloat, Uber’s passing its pain on to drivers, inventing a raft of lower-priced services (UberX, UberPool) and offering lower reimbursements to its drivers. Kalanick’s admission that competition is putting a cap on Uber’s prices–and profits–suggests that Uber’s $69 billion valuation may be excessive and that Uber’s critics may be right about the viability of its business model. The most strident critics maintain that the company will likely implode from its growing losses. Jalopnik’s Ryan Felton has been unstinting in his criticism of the company. Leaked financial reports from the company, analyzed by Hubert Horan at Naked Capitalism, make a strong case that the company’s investors are subsidizing something like 59 percent of the cost of rides.

(Flickr: Kaysha)

Two Questions for Uber

It remains to be seen whether the ride-sharing model is really economically viable, especially in the face of competition. Our view at City Observatory has been that promoting competition among providers is a good thing, as a way of lowering prices and encouraging innovation: ‘Let a thousand Ubers bloom,‘ we said. And ultimately competition will help determine whether this business model actually makes any sense. To date, the companies have been propped up by the influx of money from venture capitalists, and, arguably, the willingness of driver/contractors to work for modest (and perhaps exploitative) wages. Ultimately, investors will have to ask themselves two questions:

Question 1:  What happens if you have dominant market share in a money-losing industry?

Answer:   You lose more money than your competitors.

Question 2:  What happens when demand for your product increases in a money-losing industry?

Answer:  You lose even more money, faster.

In theory, you can make the argument that paying independent contractor drivers is just a short-term strategy for Uber until it perfects self-driving cars, at which point it will be spared the expense of paying (and also arguing with) Mr. Kamel and several hundred thousand other drivers. The success of that strategy depends on Uber overcoming yet another group of competitors, including other technology companies and auto makers to build and operate fleets of self-driving cars. Of course, the latest bit of news is that Google has accused Uber of stealing intellectual property relating to autonomous vehicles.

There’s no question that ride-sharing and transportation network companies are “disruptive technologies.” But how disruptive they are depends directly on the prices they charge. The growth of Uber and Lyft is significantly due to the fact that their fares are lower than taxis and their service is better than taxis or transit. Earlier this week, a study of New York traffic trends attributed the rise in transportation network companies to the relatively low price of their service. The impact, and ultimately the success, of these companies depends on what fares their customers are willing to pay. If Uber’s fares were, say, to double, it’s likely that its growth would decelerate significantly, and its mode share might actually decline.


Twilight of the NIMBYs? LA’s Measure S Fails

La-La Land voters deal a crushing defeat to a “NIMBYism on steroids” ballot measure

The latest returns show Los Angeles’ Measure S–the self-styled “Neighborhood Integrity Initiative”–failing by a 31 percent “Yes” to 69 percent “No” margin. Had it passed, Measure S was predicted to bring new housing development in Los Angeles to a screeching halt for the next two years, and probably longer.

This was a landslide vote against one of the most NIMBY ballot measures we’ve ever seen. And strikingly, it came under a set of circumstances that should have greatly favored the NIMBY cause. The proponents of the ballot measure cleverly chose to place it on the March local election ballot, rather than on last November’s general election ballot.  Not only is turnout in local elections much lower (it was an estimated 13 percent of registered voters in Los Angeles yesterday*, compared to about 74 percent in the Presidential election), but the demographics of the local election voters skew much more heavily to older, whiter voters, and importantly to homeowners.

One of the few. (Twitter: Nowayendi)


For a long time, the “homevoter” theory has held that restrictive local zoning regulations represent profit-maximizing behavior by local homeowners. Restricting the supply of new housing in your neighborhood, the argument goes, not only means that there are more free curb parking spaces, but that the value of your home increases.

While that logic holds strongly at the neighborhood level, the defeat of Measure S signals that at a larger level (and Los Angeles is a city of 3.9 million residents) it’s possible–just possible–for people to recognize that policies that might make a neighborhood more livable or valuable only result in higher rents and displacement when applied to a larger geography. Opponents of Measure S argued that it would have blocked new development, aggravating the region’s housing shortage and further driving up rents. Apparently their arguments were successful, even with this much smaller segment of the electorate, who ought to have been more predisposed to accept the NIMBY arguments. Like the 2015 rejection of a proposal in Boulder, Colorado, to allow a neighborhood-level veto of new development, it suggests there’s growing public support for policies that enable housing supply to increase. This also tends to confirm the logic that if we make land use decisions at larger geographic levels (city-wide, rather than by neighborhood), we tend to get results that are more inclusive.

It’s too early to call this a turning point, but the strong rejection of this measure in a setting that should have maximized the chances of NIMBY success is a hopeful sign that Americans are recognizing that we have a shortage of cities, and that our affordability problems are a manifestation of the need to accommodate the growing demand for urban living. Who knows, maybe we can actually talk about “supply and demand” in the context of housing markets, and voters will respond.

What Measure S would have done

While nominally aimed at correcting the supposed abuses of the city’s re-zoning process–it’s common for many new developments to have to seek rezoning to move forward–the effect of Measure S provisions would have been much more sweeping. The proponents of Measure S made a lot of political hay by pointing out how outdated and broken the city’s land use plans have become. Most of the city’s neighborhood plans are hopelessly out of date, and as a result, most sizable new developments require separate city council approval of plan amendments to move forward. To be sure, this is a political process, where the City Council gets to flex its muscles, and look out for the interests of citizens and constituents on a case-by-case basis. And when new development does move forward, there’s always the implication that developers curried political favor to get approval. The Yes on Measure S campaign made this a key talking point:

“Yes on Measure S released today a special report of official city information that reveals how L.A. City Hall works behind closed doors, on behalf of developers and usually without the knowledge of the public, to get around the city’s zoning rules. Most developers donate to L.A. elected leaders throughout the backroom process”


Oh. Was there an election yesterday? (Variety)


Nominally, Measure S would have called a time out–in the form of a two-year moratorium on most spot-zoning type plan amendments–until such time as full neighborhood plans are updated. But the measure would have done more than that. Others have already offered up keen analyses of the flaws in Measure S. Planetizen’s Reuben Duarte explains that the measure would have required general plan amendments to cover no less than 15 acres, and barred them from allowing an increase in overall density:

More ominously, and less discussed than the moratorium, Measure S amends Los Angeles’ Charter to require all future potential general plan amendments meet a minimum threshold of 15 acres, and require all future plan updates to limit increases in density based on the existing average density of the planning area.

An impressive group of Los Angeles based academicians excoriated Measure S as likely to lead to increased sprawl and traffic, to worsen the city’s affordability problems, and make it harder for the children and grandchildren of today’s Angelenos to live in the city.

Measure S wouldn’t have made Los Angeles any less desirable as a place to live, but it surely would have made it much harder to build new housing. As we’ve seen in cities around the country, the combination of rising demand and fixed supply has the fully predictable effect of driving up rents and making housing less affordable. Had it passed, it seems like a near certainty that Measure S would have made the plight of Los Angeles renters even worse.

California is one state where ballot box zoning has become increasingly common. In San Francisco, voters have decided height limits for individual projects. Voters in Davis cast up or down votes on subdivisions. These initiatives have made it difficult, expensive and highly uncertain for new development to move forward, limiting the growth in housing supplies and driving up rents.

Who votes in local elections

The defeat of Measure S is all the more surprising because of the sponsors’ decision to place it on the March municipal election ballot, rather than last fall’s general election ballot. Not only was turnout in yesterday’s municipal election dramatically lower, it skewed heavily toward older, whiter voters and homeowners, the heart of the NIMBY constituency. As our friends at Portland State University have shown, the electorate in local elections is, on average, a generation older than those voting in general elections. Los Angeles has just shy of 2 million registered voters. In last November’s general election, about 58 percent of them (1.1 million) made it far enough down the ballot to vote on city measure JJJ, an affordable housing measure (it passed 65 percent to 35 percent). But in yesterday’s election, it looks like total voter turnout will be something like 13 percent.

While we won’t have final data on turnout for a few more days, the data on the characteristics of early voters clearly suggested this election was headed in an ominously NIMBY-leaning direction. In Los Angeles, more than 40 percent of registered voters received their ballots in advance. As of March 6, about 117,000 ballots had been returned. The election consulting firm Political Data, Inc., tracks these ballot returns. They reported that about 47 percent of the ballots were cast by those 65 or older (who represent about 20 percent of registered voters), and only 13 percent were cast by those 34 and younger (who constitute 34 percent of those registered). The firm also estimates turnout by homeowners and renters. While homeowners make up about 48 percent of those sent ballots in advance, they represented fully 60 percent of the ballots returned as of March 6. (Hat tip to Jordan Fraade for pointing us to this excellent data.)

Ballots returned, by age of voter, City of Los Angeles, as of March 6. (Political Data, Inc).



* – As of Wednesday March 8, about 250,000 ballots had been tabulated from about 2 million registered Los Angeles voters. In California, voters who receive vote-by-mail ballots must have them postmarked no later than election day, and received by the election office no later than three days after the election. Some additional vote-by-mail ballots will come in during the next few days and increase the turnout slightly.

Getting to critical mass in Detroit

Last month, we took exception to critics of Detroit’s economic rebound who argued that it was a failure because the job and population growth that the city has enjoyed has only reached a few neighborhoods, chiefly those in and around the downtown. A key part of our position was that successful development needs to achieve critical mass in a few locations because there are positive spillover effects at the neighborhood level. One additional house in each of 50 scattered neighborhoods will not have the mutually reinforcing effect of building 50 houses in one neighborhood. Similarly, building new housing, a grocery store, and offices in a single neighborhood makes them all more successful than they would be if they were spread out among different neighborhoods. What appears to some as “unequal” development is actually the only way that revitalization is likely to take hold in a disinvested city like Detroit.  That’s why we wrote:

. . . development and city economies are highly dependent on spatial spillovers. Neighborhoods rebound by reaching a critical mass of residents, stores, job opportunities and amenities. The synergy of these actions in specific places is mutually self-reinforcing and leads to further growth. If growth were evenly spread over the entire city, no neighborhood would benefit from these spillovers. And make no mistake, this kind of spillover or interaction is fundamental to urban economics; it is what unifies the stories of city success from Jane Jacobs to Ed Glaeser. Without a minimum amount of density in specific places, the urban economy can’t flourish. Detroit’s rebound will happen by recording some small successes in some places and then building outward and upward from these, not gradually raising the level of every part of the city.

While this idea of agglomeration economies is implicit in much of urban economics, and while the principle is well understood, it’s sometimes difficult to see how it plays out in particular places. A new research paper prepared by economists Raymond Owens and Pierre-Daniel Sarte of the Federal Reserve Bank of Richmond and Esteban Rossi-Hansberg of Princeton University tries to explore exactly this issue in the city of Detroit. If you don’t want to read the entire paper, CityLab’s Tanvi Misra has a nice non-technical synopsis of the article here.

The important economic insight here is the issue of externalities: In this case, the success of any person’s investment in a new house or business depends not just on what they do, but whether other households and businesses invest in the same area. If a critical mass of people all build or fix up new houses in a particular neighborhood (and/or start businesses) they’ll benefit from the spillover effects of their neighbors. If they invest–and others don’t–they won’t get the benefit of these spillovers.

Analytically this produces some important indeterminacy in possible outcomes. Multiple different equilibria are possible depending on whether enough people, businesses, developers and investment all “leap” into a neighborhood at a particular time. So whether and how fast redevelopment occurs is likely to be a coordination problem.
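The multiple-equilibrium logic can be sketched as a toy two-player investment game (the payoff numbers here are purely hypothetical, chosen only to illustrate the coordination structure, not taken from the paper):

```python
from itertools import product

# Strategies: 0 = don't invest, 1 = invest.
# Hypothetical payoffs: investing pays off only if the neighbor invests too;
# a lone investor loses money, so both-invest and neither-invest are stable.
payoff = {
    (0, 0): (0, 0),    # neither invests: status quo
    (0, 1): (0, -5),   # player 2 invests alone and loses
    (1, 0): (-5, 0),   # player 1 invests alone and loses
    (1, 1): (10, 10),  # both invest: spillovers reward both
}

def is_nash(a, b):
    """Neither player can gain by unilaterally switching strategies."""
    ua, ub = payoff[(a, b)]
    return ua >= payoff[(1 - a, b)][0] and ub >= payoff[(a, 1 - b)][1]

equilibria = [s for s in product((0, 1), repeat=2) if is_nash(*s)]
print(equilibria)  # [(0, 0), (1, 1)]
```

A guarantee changes the game by putting a floor under the lone investor’s payoff: if investing alone no longer produces a loss, investing becomes the safe choice, the high-investment equilibrium gets selected, and the guarantee need never actually be paid.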

Without coordination among developers and residents, Owens, Rossi-Hansberg and Sarte argue, some neighborhoods that arguably have the necessary fundamentals to rebound won’t take off. Immediately adjacent to downtown Detroit, for example, there are hundreds of acres of vacant land that offer greater proximity to downtown jobs and amenities than other places. Why, the authors ask, “do residential developers not move into these areas and develop residential communities where downtown workers can live?”

To answer that question, the NBER paper builds a very complex economic model that represents these spillover effects, and estimates the potential for each neighborhood to add value if it can move from its current underdevelopment equilibrium. In this map illustrating their findings, the neighborhoods with the darkest colors have the highest potential value if development takes place.

The authors measure the potential for future growth by estimating the total increase in rents associated with additional housing development and population growth in each neighborhood. Some neighborhoods are well-positioned for development to take off, and would show the biggest gains in activity, if the coordination problem could be overcome. That coordination problem is apparent in neighborhoods near downtown Detroit: even though it would make sense to invest, no one wants to be the first investor, for fear that others won’t invest. So Owens, Rossi-Hansberg and Sarte suggest this obstacle might be overcome if we could create a kind of “investment insurance”–if you invest in this neighborhood, then we’ll guarantee a return on your home or business.

As a thought experiment, the authors estimate the size of the development guarantee that would be needed to trigger the minimum level of investment required to get a neighborhood moving toward rebuilding. In theory, offering developers a financial guarantee that their development would be successful could get them to invest in places they wouldn’t choose to invest today. That investment, in turn, would trigger a kind of positive feedback effect that would generate additional development, and the neighborhood would break out of its low-development equilibrium. If the authors’ estimates are correct, it’s unlikely that the guarantees would actually need to be paid.

While this concept appears sound in theory, much depends on getting the estimates right, and also on figuring out how to construct a system of guarantees that doesn’t create its own incentive problems. In effect, however, this paper should lend some support to those in Detroit who are attempting to make intensive, coordinated investments in a few neighborhoods.

More broadly, this paper reminds us of the salience of stigma to neighborhood development. Once a neighborhood acquires a reputation in the collective local consciousness for being a place that is risky, declining, crime-ridden or unattractive, it may be difficult or impossible to get a first-mover to make the investment that could turn things around. The collective action problem is that no one individual will move ahead with investment because they fear (rationally) that others won’t, based on an area’s reputation. A big part of overcoming this is some action that changes a neighborhood’s reputation and people’s expectations, so that they’re willing to undertake investment, which then becomes a self-fulfilling prophecy. While economists tend to think that the only important guarantees are financial, there are other ways that city leaders could actively work to change a neighborhood’s reputation and outlook, and give potential residents and investors some assurance that they won’t be alone if they are among the first to move. New investments, for example, like the city’s light rail system, may represent a signal that risks are now lower in the areas it serves than they have been.

The implications of shrinking offices

The amount of office space allotted to each worker is shrinking. What does that mean for cities?

Last week a new report from real estate analytics firm REIS caught our eye. Called “The Shrinking Office Footprint,” this white paper looks at changes in the demand for office space over the last couple of business cycles. The full report is available free (with registration) from REIS.

An increasing share of jobs in the US economy are in the kinds of industries and occupations that are housed in leased office buildings. Knowledge-based industries like finance, software, business and management consulting services, marketing and communications, and a range of similar businesses house most of their employees in commercial offices. Of course, investors in the real estate business keenly follow data on office lease rates and vacancy trends to see where it is most profitable to buy or build new office buildings. And the leasing of commercial offices is a useful indicator of changes in economic activity.

Cube farm (Flickr: Steve)

The REIS report offers up a number of interesting findings. Overall, their data (which stretch back to 1999) illustrate the depth and severity of the Great Recession. When the economy nose-dived in 2008, businesses laid off employees, and lots of office space went begging. And while vacancy rates shot up, they actually understated the extent of the impact on real estate. Many firms had five-, ten- or even fifteen-year leases on their office space, and were stuck with “shadow” space.

As a result, as the economy began to recover, there was lots of room (literally) for companies to expand their payrolls without expanding their real estate footprint. Consequently, there’s a clearly cyclical pattern to the relationship between hiring and new office space leasing. Early on in a recovery, when firms are filling up un-used or under-used shadow space, they consume relatively small amounts of additional office space per new employee. As the recovery grows, more firms reach or outgrow their capacity, and then lease additional space. (You see this pattern clearly in the REIS data: square feet absorbed per new employee rises through the business cycle.)

What’s more interesting, though, is how the amount of office space per employee has been steadily declining in each successive business cycle. The metric to pay attention to here is “net absorption” per added office employee. Net absorption is the difference between the amount of office space newly leased and the amount that becomes vacant. In the expansion of the late 1990s, REIS reports that the average additional employee was associated with about 175 additional square feet of office space. In the late 2000s, up until the Great Recession, the typical employee was associated with about 125 square feet of additional space. During this decade, each added employee has been associated with only about 50 square feet of additional office space. Nationally, we’ve added about 3.5 million office workers, and leased about 180 million additional net square feet of office space. (See the red lines on the following chart.)

Declining space per employee (REIS)
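The roughly 50-square-foot figure follows directly from the national totals cited above (3.5 million added office workers and 180 million net square feet absorbed); the division itself is ours:

```python
# National totals cited above for this decade.
new_office_workers = 3_500_000
net_sq_ft_absorbed = 180_000_000

sq_ft_per_worker = net_sq_ft_absorbed / new_office_workers
print(round(sq_ft_per_worker))  # about 51 square feet per added worker
```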

What this reflects is a number of things: the companies that are growing may be those that make the most efficient use of space. While space-intensive industries like manufacturing have been growing slowly or declining, space-efficient industries like software have been growing. We also know that hotelling and remote work–arrangements where employees share space, rather than having dedicated offices or cubicles–enable firms to accommodate more workers in any given amount of space. While some of the relatively low rate of absorption per worker represents a hangover from the recession’s “shadow space,” REIS believes that much of the decline in space per worker is permanent: they conclude that “lower net absorption is likely a lasting trend.”

What does this mean for city economies?  While it may mean that fewer office buildings get built than would have been the case if the old space-per-worker ratios had held, it also suggests that the current building stock has more capacity to accommodate additional jobs than it did in the past. Even without building new offices, cities can expand employment. Greater space efficiency also means that companies will have to pay to rent fewer square feet per employee, meaning that the cost of office space is a relatively less important factor in driving business costs. Commercial real estate brokerage CBRE estimates that for a typical 500-employee software firm, office expenses represent just 6 percent of costs, compared to 94 percent for employee labor.

And notice that this analysis estimates that firms have an average of about 150 square feet per employee (75,000 / 500). If occupancy rates are as low as 50 to 100 square feet per employee, as the new REIS analysis suggests, office costs may be an even smaller fraction of total costs. More space-efficient businesses are more likely to locate in urban centers, where accessibility and proximity to a wide range of services and activities are an advantage in attracting and retaining talent.
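To see how a shrinking footprint flows through to cost shares, here’s an illustrative calculation (the 6 percent/94 percent split and 150 square feet per employee come from the CBRE figures above; the halved footprint is our hypothetical):

```python
# Start from a cost structure of 6% office rent and 94% labor at
# 150 square feet per employee.
office_share, labor_share = 0.06, 0.94
sq_ft_old, sq_ft_new = 150, 75  # hypothetical: footprint cut in half

# Holding rent per square foot and labor costs constant, the office
# line item scales with the footprint.
office_cost = office_share * (sq_ft_new / sq_ft_old)
new_office_share = office_cost / (office_cost + labor_share)
print(round(new_office_share * 100, 1))  # about 3.1 percent of total costs
```

The point of the sketch: even a dramatic change in space per worker moves the office line item by only a few percentage points of total costs, because labor dominates the budget.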

Ultimately, this should also make a significant difference to how we plan and build our cities. Many land use plans assume fixed, or even growing, ratios of space per worker. That’s part of what prompts cities to plan for growth at the “extensive margin”–i.e. by setting aside more land for further commercial and industrial uses, often at the urban fringe. But the shrinking footprint per worker suggests that we’ll need a lot less land for this kind of extensive growth because we can accommodate more employment by using our existing lands and buildings more efficiently, i.e. growing at the intensive margin.


What we know about rent control

Today, partly as a public service, we’re going to dig into the academic literature on an arcane policy topic: rent control. We also have a parochial interest in the subject: the Oregon Legislature is considering legislation that would lift the state’s ban on cities imposing rent control. The legislation is being proposed by Oregon House Speaker Tina Kotek.

Indeed. But how to fix it is still a question. (Flickr: Tiger Pixel)

For decades, one of the few topics on which nearly all economists agree is that rent control is a bad thing: it discourages new investment in housing and in housing maintenance; it tends to reduce household mobility; it encourages the conversion of apartments into condominiums (removing them from the rental housing supply); and it leads to the misallocation of housing over time.

In response to the economists’ objections, one of the arguments that rent control advocates make is to draw a distinction between “bad, old first generation rent control” and “new, improved second generation rent control.” Yes, these advocates concede, back in the day there were poorly designed systems that involved rent freezes and had the effect of reducing housing supply. But today, we are told, rent controllers are smarter, and have developed new types of rent control that are supposedly free of these negative effects. As Speaker Kotek described it to The Portland Oregonian:

What you’re hearing from landlords about rent control is they have an idea of it that’s very much the model that began right after World War II where properties had hard, fast caps on rents. That’s not the kind of rent control we’re talking about. We’re talking about second-generation rent stabilization where there’s a process for managing rent increases that protects investors and tenants.

And to bolster their point, rent control advocates will sometimes quote from two economists–Tony Downs and Richard Arnott–who’ve explored the differences between first and second generation rent controls. They point out–correctly–that Downs and Arnott have identified some important differences in rent control regimes. But neither of them actually endorses rent control, especially of the kind that’s likely to be on offer under the proposed Oregon legislation.

First Generation and Second Generation Rent Control

The first article that some point to is a 1988 Urban Institute report authored by Brookings Institution economist Anthony Downs: Residential Rent Controls: An Evaluation. Downs distinguishes between “strict” and “temperate” rent control regimes. But Downs is clear that there’s a wide continuum and that many different features of rent control affect its stringency, including the share of the housing stock that is covered, whether there is vacancy decontrol, whether the ordinance allows automatic rent increases and generous allowances for increases to cover the cost of maintenance or improvements. Stringent rent control has the worst effects, temperate rent control the least.

As Michael Lewyn notes, the reason that “temperate” rent control regimes haven’t been shown to have much of an adverse effect on supply is because they don’t control rents.  Let’s take a close look at what Anthony Downs had to say about differences in rent control regimes in different cities.  New York’s stringent rent control holds rents for controlled apartments about 57 percent below market rates. In contrast, the more temperate controls in Los Angeles reduced rents by only about 3.5 percent.

Similarly, advocates of rent control sometimes point to the work of Richard Arnott, who like Downs, distinguishes between different levels of stringency in rent control regimes. In 1995, Arnott wrote an article for the Journal of Economic Perspectives asking “Time for revisionism on rent control?” Arnott has argued that systems of rent control that include vacancy decontrol (i.e. that let landlords raise rents on vacant apartments to whatever level they like) would be unlikely to have the same kind of negative effects as first generation rent control schemes.

Many people seized on Arnott’s article as an endorsement of second-generation strategies. So much so that in 2003, Arnott went to the trouble to specifically deny any support for such strategies, writing an article entitled “Tenancy Rent Control” in The Swedish Economic Policy Review. While advocates imply that Arnott therefore supports rent control, Arnott himself made it clear he does not, stating that “most second generation rent control programs .  .  . have been on balance harmful.”

Far from endorsing rent control, both Downs and Arnott make it clear that rent control regimes that actually have the effect of lowering rents significantly below what the market would otherwise provide would have negative economic consequences.

Proposed Legislation: Promising Benign, Enabling Malignant

More to the point, in the legislation on offer in Oregon, Speaker Kotek’s HB 2001 and HB 2004, there are actually no provisions that preclude cities from enacting the damaging kind of strict or first generation rent control. The first bill, HB 2001, begins with a moratorium on rent increases of more than five percent. And nothing in the legislation precludes Oregon cities from adopting rent control with the most demonstrably damaging features, including applying rent controls to new construction. The other bill, HB 2004, simply repeals the state ban on city- and county-imposed rent controls altogether.

Plainly there are better and worse forms of rent control: those that do not apply to many (or most) units, that allow landlords to raise rents regularly in line with inflation, and that fully decontrol apartments when they become vacant arguably have fewer negative effects than more stringent measures. But these two proposed bills allow “bad” rent control just as much as they allow “less bad” rent control. Consider just one feature of rent control:  “vacancy decontrol.” Under vacancy decontrol, an incumbent tenant has protection against rent increases above some level, but when a unit becomes vacant landlords are free to raise the rent to whatever level they want. Downs views this as an essential element of temperate regimes; Arnott makes it the centerpiece of his definition of “second generation” rent control. Nothing in HB 2001 or 2004 requires vacancy decontrol; as a result, it’s simply not accurate to cite either Downs or Arnott’s research as supporting such legislation.

So here’s the takeaway: Neither Downs nor Arnott endorses second generation or less stringent rent controls. Both agree that measures that effectively limit rents have negative consequences and are, in Arnott’s words, “on balance harmful.”  But even if there were some forms of rent control that had fewer negative effects, there’s nothing in the legislation that’s so far been proposed in Oregon that would preclude cities or counties from adopting some of the worst, most disruptive forms of rent control.


The Week Observed, March 17, 2017

What City Observatory did this week

1. Are restaurants dying and taking city economies with them? In a column at Governing, Alan Ehrenhalt raises the alarm that a city economic revival predicated on what he calls “cafe urbanism” is at risk if there’s a collapse in the restaurant sector. Apparently, a number of restaurants in his local Virginia suburb have failed. We look at the national data on restaurants and find that overall, they’re still increasing robustly. In fact, Americans now spend more on food consumed outside the home than they do on groceries. This sector of the economy appears to be doing quite well, at least so far.

2. Affordability beyond the median.  The standard yardstick for judging housing affordability is to look at the median level of rents or home prices. As we all remember from statistics, the median is the observation in the middle of the distribution. And while for many purposes, it’s a reliable indicator of typical prices, in some neighborhoods, particularly those with a mix of expensive and cheap housing, the median is actually a weak indicator of affordability. We illustrate with some real world examples how this problem arises, and discuss how to look past the median to better understand neighborhood affordability.
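The point about the median masking a neighborhood's actual affordability can be made concrete with a small sketch. The rents below are purely hypothetical, illustrative numbers (not drawn from our study): two neighborhoods with identical medians but very different stocks of housing a moderate-income renter could actually afford.

```python
# Two hypothetical neighborhoods with the SAME median rent but very
# different distributions. Illustrative numbers only.
import statistics

rents_uniform = [1480, 1490, 1500, 1510, 1520]   # rents cluster at the middle
rents_mixed = [700, 750, 1500, 2600, 2700]       # mix of cheap and expensive units

median_uniform = statistics.median(rents_uniform)  # 1500
median_mixed = statistics.median(rents_mixed)      # 1500 -- identical medians

# But count the units a household able to pay at most $1,000/month can rent:
budget = 1000
affordable_uniform = sum(r <= budget for r in rents_uniform)  # 0 units
affordable_mixed = sum(r <= budget for r in rents_mixed)      # 2 units

print(median_uniform, median_mixed, affordable_uniform, affordable_mixed)
```

Same median, very different affordability: looking only at the middle observation hides the cheap (and expensive) tails of a mixed neighborhood's rent distribution.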

3. How fast are America’s metropolitan areas? It often seems like transportation policy is obsessed with how fast we move. But which metropolitan areas have the fastest travel speeds? Using data from the US DOT’s national transportation survey, and adjusting for typical trip lengths, the University of California’s Victor Couture and co-authors have estimated the average speed of travel in each of the nation’s largest metros. We have the complete ranking: Louisville tops the list, traveling on average 22 percent faster than the typical metro area, while Miami travels the most slowly, about 12 percent slower.

4. Going faster doesn’t make us happier. We combined Victor Couture’s estimates of the relative speed of travel in different metropolitan areas, with survey data on the level of satisfaction local residents report in their transportation system.  We found that places with faster travel times didn’t have higher levels of satisfaction. What they did have, however, was longer trips, as measured by vehicle miles traveled (VMT) per person per day.  Simply put: faster speeds are correlated with more driving. And interestingly, VMT has a strong negative correlation with satisfaction with the transportation system. People who live in metros with the highest levels of VMT per capita are generally the least satisfied with their transportation systems.

Must read

1. The Trump Budget and Transit. The Trump Administration’s budget for the coming fiscal year would effectively wipe out funding for new transit projects under the New Starts and Small Starts programs that have been the linchpin for funding most of the nation’s rail transit, bus rapid transit and streetcar projects. Yonah Freemark lists the projects now in the pipeline that are in jeopardy. Of course, Congress will have to agree to these cuts, but it’s an ominous sign from the new administration. Further bad news for cities: big cuts at the Department of Housing and Urban Development, including zeroing out the community development block grant program, which is a major funder for neighborhood redevelopment and housing improvement.

2. Mini-Ubers are blooming in Austin.  Last year, Lyft and Uber both pulled out of Austin after voters there turned down an initiative that would have exempted transportation network companies from the city’s existing driver fingerprinting requirements and other regulations. At the time, we wrote that the city should view this as an opportunity to encourage the growth of home-grown alternatives. “Let a thousand Ubers bloom,” we said. And according to a new article in CNNTech, that’s just what seems to be happening.  As we suggested, it was probably a bad idea to pull out of a city full of independent-minded customers, smart programmers and serial entrepreneurs.

3. The rundown on Uber.  Uber has been in the news so much lately that it’s hard to keep track of what’s being said. For a great synopsis of all of these events, read CityLab’s rundown, with copious links to media coverage. Laura Bliss has covered the waterfront, from the company’s financial problems to charges of sexual harassment.

4. Are tiny houses an answer to homelessness? In Portland, Multnomah County leaders have proposed an innovative program to buy tiny houses–those super compact, space efficient living units–and give them to local homeowners, provided they place them in their yards and allow an otherwise homeless family to live in them for at least five years. The county calculates that housing people in tiny houses is less expensive than the cost of shelters, and new units could be put in place much more rapidly than with new construction. Already, the city of Portland has permitted one innovative project to develop a cluster of tiny houses to house the homeless.

Flickr: Bill Dickinson

New research

The importance of economic integration to academic success. A new paper in the Journal of Urban Economics from George Galster and three co-authors looks at the results of a natural experiment in Denver. (Hat tip to the Chicago Policy Review for flagging this work.) The Denver Housing Authority had a program that quasi-randomly assigned eligible households to housing in a variety of neighborhoods. Galster and his co-authors looked at data on the school performance of children in these families, and examined the correlations between neighborhood characteristics and educational performance. This avoids the problem of “selection bias” that plagues many studies of such relationships, i.e., that families that move to higher opportunity neighborhoods self-select and may be different in unobservable ways from other families. They find that “neighborhoods having less social vulnerability, higher occupational prestige and lower percentages of African American residents robustly predict superior secondary educational performance.”  It’s yet more evidence that economically integrated neighborhoods are a key to overcoming the disadvantages of poverty.



The Week Observed, March 31, 2017

What City Observatory did this week

1. 13 propositions about autonomous vehicles. Despite occasional setbacks–like last week’s crash of an Uber self-driving car in Phoenix–it looks increasingly likely that autonomous vehicles will play an important role in urban transportation in the not-too-distant future.  There’s a lot to ponder about what effects they might have on cities, on land use patterns, on transportation and public finance.  To help fuel your thinking, we outline 13 propositions about the character of travel demand, the costs of new technology, and the kinds of public policies that will shape its implementation. There’s a much longer conversation to be had here, but we invite your reactions to our initial thoughts.

2. Breaking Bad: Why breaking up big cities would hurt America.  New York Times columnist Ross Douthat thinks that big, liberal cities have garnered an unfair monopoly on the nation’s economic assets (like universities and productive industries) and that we’d be better off if we dispersed these resources to the rest of the country. In our view, this fundamentally misunderstands the economic role cities play; they actually foster the entrepreneurship, knowledge creation, and productivity that drive the US economy. Making cities smaller would actually reduce national income and productivity.

3. The cost of segregation. It’s widely acknowledged that racial and economic segregation are unfair, unwise and at odds with our democratic values, but a new report from the Urban Institute shows that segregation imposes its own significant economic costs. This statistical study shows that higher rates of segregation are associated with lower earnings for African Americans, lower educational attainment and higher crime.  In just one city–Chicago–simply reducing segregation from its current high level to the level found in the typical metro area would be expected to increase the average income of African Americans by $3,000 annually and cut homicides by nearly a third.

4. Revisiting the cappuccino congestion index. Looking to get the jump on April Fool’s Day, we’ve dusted off our cappuccino congestion index, which turns a traffic engineer’s eye on those annoying lines we have to wait in to get our daily coffee fix. If you apply the same tools and assumptions to assess the “cost of congestion” at your neighborhood coffee house that are commonly asserted to apply to roads, you could conclude that we waste $4 billion annually waiting in line for coffee. But the fact that Starbucks (and all its competitors) don’t build enough stores, buy enough espresso machines, and hire enough baristas so that we never have to wait in line is a really good indication that the cost of added capacity is much larger than the supposed cost of time lost. The same conclusion applies to roads.

Must read

Trump and Clinton Counties compared. Brookings demographer Bill Frey has a terrific statistical profile comparing the differences in age, education, race, ethnicity, and place of birth between counties that voted for Hillary Clinton as opposed to Donald Trump in the 2016 election. Trump won more counties, but Clinton won the more populous ones, and the population of Clinton Counties (177 million) was vastly greater than Trump counties (146 million). Counties which voted for Trump were older, and whiter. Counties that voted for Clinton were better educated and had more immigrants. Just imagine if we apportioned electoral votes according to county population, rather than by state.

A road to nowhere. Building more roads and bridges is a favorite old-timey elixir for rural economic development, and there’s a born-again enthusiasm for thinking that one more road will somehow turn around a struggling town. But as Tony Dutzik patiently explains at The Frontier Group, highways can’t heal what ails the rustbelt. Not only do they fail to address the root problems of economic decline, they also add their own negative effects (often undercutting the viability of existing city centers), and necessarily consume a huge amount of resources that could be better used elsewhere.

Making roads more dangerous to keep us safe. It’s a perennial battle: traffic engineers and fire chiefs are telling us that wider roads will make us safer. Wider travel lanes mean more space between cars and (theoretically) should reduce collisions. Wider roads mean giant fire engines can roar faster to emergencies, with that time savings saving lives. This drama is playing out, once again, in Celebration, Florida, the new urbanist model town. The trouble is, as Steve Mouzon reports at CNU, wider roads encourage cars to travel faster, making roads more dangerous. That, and lopping off trees and removing on-street parking, makes pedestrians feel even more vulnerable, so they walk less. Less walking and faster cars turn out to more than erase the supposed health and safety advantages of wider roads.

New ideas

Segregation.  See this week’s post on the Urban Institute’s new report, the Cost of Segregation, (above).  It’s a comprehensive statistical analysis of economic and racial/ethnic segregation in the nation’s 100 largest metro areas, with detailed data for 1990, 2000 and 2010.

City Observatory in the news

In an article entitled “This is the start of the retail store reckoning,” Forbes cites our recent commentary looking at which metropolitan areas are most over-stored and are likely to see a decline in retail employment as that sector continues to restructure.

The Milwaukee Journal Sentinel featured research findings from our study of concentrated poverty, Lost in Place, in their article “An intractable problem,” looking at economic restructuring and the persistence of poverty in Milwaukee.



The Week Observed, March 24, 2017

What City Observatory did this week

1. The US retail industry is getting marked down in a big way, with hundreds of stores operated by well-established chains including Macy’s, J. C. Penney, and the Gap, as well as others, closing or slated for closing in the next few months. By global standards the U.S. is “over-stored” with far more square feet of retail space per person than similarly rich countries. There’s also a big variation in retail square footage among U.S. metro areas. We review the data for the top 50 metro areas and handicap the cities with the biggest space overhang which may be in line to experience the biggest declines as retailing retrenches.

2. Critical mass and neighborhood revitalization. A few weeks ago, we pushed back on those who criticized Detroit’s nascent recovery as being too unequal, with only a few neighborhoods showing improvement, while others continue to languish or decline. We argued that that city’s rebound, when it comes, will only take off in places where there’s a critical mass of new residents, businesses and investments. A new economic study tries to more precisely estimate the nature and size of these agglomeration economies, and predict which neighborhoods in Detroit have the greatest potential to trigger a positive feedback loop that will drive development forward.

3. How transit influences home values. For a long time, we’ve been highlighting research that shows that walkability is positively correlated with home values. A new study from real estate mavens at Redfin also shows that there’s a positive link between that company’s “Transit Score” measure (which looks at the proximity of transit stops and the frequency of bus and train service) and home values. They find that in the typical city, each additional point on the transit score measure increases a home’s value by about $2,000. One caveat, in our view: the value gains associated with transit score at least partly reflect the walkability of the most transit-served neighborhoods; the underlying source of the gain in both cases is the improved accessibility via all modes of transportation.

4. Big city metros are increasingly driving the US economy. In the recovery from the Great Recession, US job growth has been led chiefly by the nation’s largest metro areas. We show that since the economy last peaked in December 2007, about 87 percent of all net new jobs have been created in metro areas with a population of one million or more. Smaller metro areas have grown only about half as fast as larger metros, and non-metro/rural America is still more than 2 percent below the level of employment it had a decade ago. This is further evidence of how city-centered US economic growth has become.

Must read

1. Building codes that ban small tiny apartments aggravate housing affordability and homelessness. Writing at the Sightline Institute, David Nieman relates the latest chapter in the continuing saga of how building code regulations in otherwise progressive Seattle are making that city’s affordability problems worse. His article–”How Seattle Killed Micro-Housing, Again“–tells how the city has made most forms of congregate housing (where residents may share a bathroom or cooking facilities with neighbors) simply illegal. It’s tried, largely in vain, to legalize 225 square foot efficiency apartments (which must have kitchens, full plumbing and closets), but the city’s specific requirements have steadily (and stealthily) inflated the size and budget of such units.  It’s a tragic self-inflicted wound.

2. Boring tunnels won’t solve traffic congestion: only road pricing will. UCLA’s Herbie Huff, writing in the Los Angeles Times, schools Elon Musk on the idea that building more roads (whether under or above ground) holds any prospect of ameliorating traffic congestion. This is a nice, well-argued, and non-technical summary of why road pricing can work, and why adding more capacity, by whatever means, will simply be swamped by induced demand unless consumers are asked to pay.  Huff pushes back on the usual arguments offered against tolls, noting that they’re both fairer and more efficient than our current system of taxing everyone, and letting queueing ration use.

3. La Crosse doesn’t want more highway lanes, thank you. Next City’s Rachel Kaufman tells of La Crosse, Wisconsin’s efforts to drag that state’s highway department kicking and screaming into the 21st Century. Like many cities, it has witnessed a big increase in pedestrian and bicycle travel, and the city’s population growth has exceeded that of surrounding suburbs. But state highway planners, still relying on pre-recession forecasts of ever increasing car travel, are calling for more highway capacity through the city.

New ideas

More evidence for the “time-budget” hypothesis. One theory of transportation and commuting behavior holds that humans have a rough optimum daily time budget for travel for regular activities like work. (This is often called Marchetti’s constant, and is based on the observation that as transportation technologies have changed, from walking to horse-drawn streetcars, to electric streetcars, and then to automobiles and mass transportation, most commute trips have continued to average about 30 minutes each direction per day.) A new study of travel behavior in Hungary between 1988 and 2010 by Tamas Fleischer and Melinda Tir finds more evidence of this effect. Even though the total share of travel by private car more than doubled during this period (displacing slower public transit, cycling and walking), total daily travel time stayed almost constant, between 60 and 65 minutes. This suggests that travelers used improved technology (the car) not to save time, but to travel further (to more distant homes or jobs) than they did previously.  A relatively fixed time budget means that transportation investments change land use patterns rather than save time.
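The logic of a fixed time budget is easy to work through: if daily travel time is roughly constant, faster modes buy distance, not time. Here's a minimal sketch; the mode speeds are rough illustrative assumptions, not figures from the Fleischer and Tir study.

```python
# Under a fixed daily travel-time budget (Marchetti's constant), a faster
# mode translates into a longer daily travel distance, not a shorter day.
# Speeds are illustrative assumptions only.
TIME_BUDGET_MIN = 60.0  # roughly constant daily travel time, in minutes

modes_kmh = {"walking": 5.0, "streetcar": 15.0, "car": 40.0}

# Daily kilometers traveled within the same time budget, by mode.
daily_km = {mode: speed * TIME_BUDGET_MIN / 60.0
            for mode, speed in modes_kmh.items()}

for mode, km in daily_km.items():
    print(f"{mode}: {km:.0f} km/day within the same {TIME_BUDGET_MIN:.0f}-minute budget")
```

Switching from walking to driving in this sketch multiplies daily range eightfold while leaving travel time untouched, which is exactly why the study finds land use patterns, not time savings, absorbing the gains from faster travel.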



The Week Observed, March 3, 2017

What City Observatory did this week

1. More flawed congestion rankings. Traffic analysis firm Inrix released yet another report purporting to estimate the dollar cost of congestion and ranking the world’s cities from most to least traffic burdened. Our review shows that the report suffers from many of the same problems that plagued its predecessors. Chief among them: congestion indices ignore differences in commute distances among cities, understating the travel time advantages enjoyed by residents of more compact metro areas. The new Inrix report differs just enough from previous versions that it’s not possible to do time series analysis, which undercuts the practical value of the data. Finally, despite claims that congestion costs commuters billions, the report never identifies any feasible set of investments that would provide enough capacity to alleviate congestion at a cost lower than the supposed cost of congestion, and without triggering additional induced traffic.

2. Uber & Lyft swamping New York streets. For a while, the growth of transportation network companies, aka “ride-sharing,” mostly took market share away from yellow cabs. But as they’ve continued to grow, they’re actually increasing the total volume of vehicles on New York City streets, pushing the city toward gridlock, according to a new report from Bruce Schaller, a former city transportation official. Ride-sharing companies have added 600 million vehicle miles of travel to city streets, and their growth has reversed a 24-year trend in which most additional trips were taken by transit.

3. The real welfare cadillacs have 18 wheels. Truck freight movement gets a subsidy of between $57 and $128 billion annually in the form of uncompensated social costs, over and above what trucks pay in taxes, according to the Congressional Budget Office. If trucking companies paid the full costs associated with moving truck freight, we’d have less road damage and congestion, fewer crashes, and more funding to pay for the transportation system.

4. What the meltdown by Uber’s CEO tells us about the company’s prospects. Between sexual harassment charges, a powerful #DeleteUber campaign, and defending claims that it has stolen another company’s self-driving car technology, it’s been a tough month for Uber. Tellingly, the video of a blow-up between CEO Travis Kalanick and one of the company’s drivers reveals that competition is having a material effect on the company’s strategy and profits. Kalanick’s admission that they’ve had to cut rates to meet competition seems to confirm many of the doubts that analysts have expressed about the long-term financial viability of the firm’s business model. By some estimates, Uber’s investors are underwriting 59 cents of every dollar it costs to provide a ride.


Must read

1. Another take on Detroit. Pete Saunders, who blogs at the Corner Side Yard, weighs in with his views on last week’s brouhaha over whether to view Detroit’s economic recovery as a glass half-full or half-empty. Pete’s clearly in the “half-full” camp: “Detroit’s recovery? O Yeah, its Real Alright.” By his reckoning, in its decline, Detroit’s fallen further than just about any other city in the US. And that’s what makes its rebound all the more remarkable. As one measure of the city’s challenges, Saunders charts the change in the city’s white population compared to other older industrial cities. While other cities such as Cleveland, St. Louis and Buffalo all experienced white flight, none experienced anything like Detroit, where each decade after 1970, the white population fell by half, for a cumulative 96 percent decline in the white population from 1950 through 2010. Given this backdrop, the modest gains recorded in recent years are a clear reversal of a well-established downward trend.


2. Why aren’t we building more middle income housing? At Rooflines, Rick Jacobus tries to answer this perennial question about housing markets. The framing goes like this: while the private market mostly builds housing for upper income households, and we have public and subsidized housing for low income households, who’s building housing for those in the middle? Part of the answer, Jacobus points out, is that the opportunities to build new units are numerically constrained by local zoning laws. If the market is constrained to a certain number of houses, builders find it most profitable to build for the high end of the market–unlike, say, the market for automobiles, where car makers, who don’t face a numeric limit on their output, build different models for different segments of the market. It’s a bit more complicated than that, as Jacobus acknowledges, because houses are much longer-lived than cars, and also because there are important spillover effects (your neighbor’s house has a much bigger impact on your house than your neighbor’s car). While this is an important question to contemplate, Jacobus doesn’t really offer up any answers, and one suggestion–that planners spend more time talking explicitly about the class identity of neighborhoods–seems fraught with controversy.

New research

1. Immigrants and crime. Despite the President’s claims to the contrary, immigrants to the United States are less likely than native born Americans to commit crimes. Writing in the Journal of Ethnicity in Criminal Justice, Robert Adelman and co-authors look at 40 years of data on crime rates and immigrant status by metropolitan area.  They find that immigration is linked to lower levels of violent crimes and property crimes. “The results show that immigration does not increase assaults and, in fact, robberies, burglaries, larceny, and murder are lower in places where immigration levels are higher.”

2. Who are your peer cities? The Federal Reserve Bank of Chicago has developed an interesting new tool that lets you see which cities score similarly to yours on a series of indicators, including equity, resilience, economic outlook and housing. In each of these four areas, the tool identifies cities that are roughly similar as measured by selected data in that category.  Housing comparisons, for example, are based on the share of pre-1980 housing, the vacancy rate, the home price to income ratio, the homeownership rate and the share of rent-burdened households.  Comparisons are just for cities (i.e., using municipal boundaries), not metropolitan areas, and are therefore subject to our usual warning about the highly variable nature of city geographies. The housing peers of Los Angeles include a number of other California cities, and, somewhat surprisingly, Lawrence, Massachusetts.

City Observatory in the news

In an Associated Press feature published in the Detroit News and other metropolitan dailies, City Observatory’s research on the “Young and Restless” was featured. “The Plight of company towns” looked at the growing trend of companies moving headquarters operations from smaller cities and suburban locations to urban centers in order to be able to easily hire talented young workers.



The Week Observed, March 10, 2017

What City Observatory did this week

1. Shrinking offices: What it means for cities. It’s not just your imagination: offices are becoming less common and smaller, and a variety of space-sharing and space-saving practices are taking hold in businesses around the nation. The number of square feet of office space leased per new office employee has fallen from about 175 square feet in the late 1990s, to only about 50 square feet in the past six years. This greater space efficiency coincides with the growth of city-focused industries like software and professional services, and suggests we can accommodate a good deal of job growth within the existing footprint of the nation’s cities.
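The implications of that shrinking footprint are easy to see in a back-of-the-envelope sketch using the figures from the post (175 square feet per new worker in the late 1990s versus about 50 today); the million-square-foot block of space is a hypothetical stand-in for a city's existing office stock.

```python
# How many additional office workers does a fixed block of existing space
# accommodate under the old vs. new space-per-worker norms?
SQFT_PER_WORKER_LATE_1990S = 175.0  # from the post: leased sq ft per new worker
SQFT_PER_WORKER_RECENT = 50.0       # from the post: past six years

existing_space_sqft = 1_000_000  # hypothetical block of existing office space

workers_old_norm = existing_space_sqft / SQFT_PER_WORKER_LATE_1990S  # ~5,714
workers_new_norm = existing_space_sqft / SQFT_PER_WORKER_RECENT      # 20,000

# Ratio of capacity under the new norm to the old norm.
ratio = workers_new_norm / workers_old_norm  # 3.5

print(f"{workers_old_norm:.0f} vs {workers_new_norm:.0f} workers ({ratio:.1f}x)")
```

The same square footage absorbs three and a half times as many new workers, which is why job growth at the "intensive margin" can substitute for setting aside new land at the fringe.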

2. What economists say about rent control. Rising rents around the country are bringing back demands for a long-discredited policy: rent control. While economists have been almost unanimously opposed to rent control schemes, some advocates have pointed to more nuanced analyses which concede that “temperate” or “second-generation,” rent controls have fewer adverse effects. A close reading of key studies by economists Tony Downs and Richard Arnott shows that actually neither author endorses these new age forms of rent control, and in fact the absence of negative effects from some schemes is mostly because they don’t actually limit rents much below market rates.

3. LaLaLand: The Triumph/Twilight of NIMBYism? Los Angeles voters resoundingly rejected Measure S, an initiative that would have imposed a two-year moratorium on most plan changes, and which would have made it much harder to increase density in the city. The measure’s proponents campaigned on a range of issues, including traditional NIMBY concerns about the effects of new development and a city planning process seen as highly ad hoc and politicized. Nonetheless voters rejected the measure with 69 percent voting “No.” The election’s outcome is even more surprising given the proponents’ maneuver to have the measure voted on in this March local election, rather than in the November general election. Turnout in March was not only much lighter, but heavily skewed toward older voters and homeowners, much more of a NIMBY-friendly demographic. One key to fighting NIMBYism seems to be to tackle planning issues at the broadest possible geographic scale, rather than development by development or neighborhood by neighborhood.

Must read

So much to read this week: There must be something in the water.

It’s the prices, stupid.  Land Use and Vehicle Miles Traveled.  The past week has been dominated by discussions of a usually arcane, chiefly academic topic: how does urban form, including density, influence how much people travel. The debate played out initially in a meta-analysis of studies by Mark Stevens showing the connection between urban form and driving was less strong than earlier claimed.

It was challenged by excellent commentaries by Chris Nelson, Susan Handy and Reid Ewing, all pushing back on the thesis in one way or another.  All are well-argued and worth reading.  But to our way of thinking, Michael Manville’s paper drops a scimitar through the Gordian knot here.  Manville points out that all of the guessing about how aspects of the built environment influence driving (density, design, mixed use, etc.) ignores the central point, which he neatly summarizes in just a few words. It’s short enough to make a haiku or a plausible tattoo:

Governments give drivers free land; people as a result drive more than they otherwise would.

That’s it.

The rest is commentary.

We give away road space for free, both for travel and private car storage. We also require most new stores and houses to build and maintain free parking at private expense. These policies–which are routinely ignored in the land use/VMT studies–trump all of the other variations in policy and form in incentivizing driving. Manville’s observation squares with the evidence we presented earlier about the strong connection between parking prices and transit ridership. Build a city as dense and well-designed, as walk-friendly and transit-served as you like: so long as you subsidize road space and car storage, you’ll get lots of driving.

Tactics, strategy and vision: In a blog post at the Frontier Group neatly reviewing this whole contretemps, Tony Dutzik agrees with the logic of Manville, but raises doubts about the political feasibility of implementing road pricing–at least any time soon. The underlying problem, as Dutzik eloquently explains, is that too much of our thinking about planning and transportation is predicated on the unrealistic assumption that we can regain a largely fictional world of cheap (heavily subsidized) housing, short commutes and negligible traffic. Until we develop more realistic expectations of what’s possible, we’re just courting disappointment and increasing distrust, which will make political agreement about solutions even harder.

Exaggerating the economic benefits of highway projects. Our friend Chuck Marohn of StrongTowns takes out the trash on the inflated claims that are made about the economic benefits of building highways. Like every big highway project, Shreveport’s I-49 connector has an accompanying economic impact analysis which claims that the project will produce millions in benefits from time savings and greater productivity. The productivity claim is based on a very expansive reading of a French economic study called “Size, speed and sprawl.” Chuck shows how highway advocates have twisted and exaggerated this academic work to gin up big numbers to support their project.

Self-driving cars can’t cure congestion, but pricing roads can. The New York Times Upshot has a nice synthesis of the recent discussion of the impact of transportation network companies (and someday, maybe soon, autonomous vehicles) on urban traffic. They’ve presented to the mass audience several of the key points we’ve discussed here at City Observatory: notably, the recent Schaller study showing that Uber, Lyft and other ride-sharing services are responsible for growing traffic and slower travel times in New York. Conor Dougherty concludes that self-driving vehicles likely won’t solve this problem, but as many economists have argued, pricing the roads could.

Glaeser: Cities are the lifeblood of our nation’s economy. Jennifer Rubin interviews Ed Glaeser for the Washington Post. She gets Glaeser to dive deep into the critical role that cities play, not just in attracting and aggregating talent, but in making us smarter. Cities are forges for human capital, and the longer workers spend in cities, the more productive and better paid they become, as they acquire more skills and find their way to the institutions and endeavors that best match their strengths. It’s the dynamic, self-reinforcing aspect of talent in cities that makes them especially important for national economic success.

New research

Economic development incentives. The Upjohn Institute’s Tim Bartik, perhaps the nation’s leading scholar on economic development, has new research and a publicly available database on the number and extent of state business incentives.  Over at CityLab, Richard Florida summarizes Bartik’s research in a short essay. Bottom line: in 2015 tax breaks cost more than $45 billion, and were, overall, ineffective.  Given the variation in state tax systems, the complexity of different businesses and the arcane nature of tax breaks, this has been a prodigious undertaking. The full database, which covers 33 states and two decades, is available through the Upjohn Institute.



Speed: Fast cities

Which cities move the fastest? Does it matter?

The raison d’etre of the highway engineer is making cars go faster. That’s reflected in chronic complaints about traffic congestion, and codified in often misleading studies, like those produced by the Texas Transportation Institute.

The latest contribution to the literature on inter-metropolitan differences in transportation system performance is titled “Speed.”  This new paper from Victor Couture, Gilles Duranton, and Matthew Turner presents a more systematic set of estimates for comparing travel speeds in different metro areas. The names Duranton and Turner should be familiar to City Observatory readers: they’re the co-authors of “The Fundamental Law of Road Congestion,” which persuasively shows how additional road capacity leads to longer trips and more traffic.  One of the complicating factors of speed estimation is that speeds vary by length of trip, time of day, and trip purpose.  In general, shorter trips involve lower speed travel. That makes sense:  if you’re just traveling a mile or two, especially between your home and some other destination, it’s likely you’ll travel mostly on local streets, frequently encountering stop signs and traffic signals. But for longer trips, it makes more sense, even if it’s not the shortest distance, to travel part way on higher speed arterials or limited access freeways. Couture and his co-authors use detailed micro-data on trip taking in the National Household Transportation Survey to estimate variations in speed across metropolitan areas, after controlling for differences in trip distances and other demographic factors.

Some metros have faster roads than others, but speed isn’t everything.

After crunching all the data, they come up with their estimates of which metropolitan areas have the fastest highways, and which have the slowest.  Their estimates are expressed as a relative travel speed, indexed to the average speed for the 50 large metropolitan areas in their study.  Values greater than one represent faster than average speeds (traffic in Louisville travels 22 percent faster than in the typical metropolitan area). Values less than one represent slower than average speeds (traffic in Miami moves, on average, about 12 percent slower than in the typical metro).  Here are the largest metros, ranked from fastest to slowest.
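An index of this kind is simply each metro’s average speed divided by the all-metro average. The sketch below illustrates the calculation; the underlying speeds are hypothetical values we invented so that the resulting index matches the Louisville (1.22) and Miami (0.88) figures cited above, not data from the paper.

```python
# Illustrative only: relative speed index = metro average speed / benchmark speed.
# Speeds below are invented; only the resulting index values come from the text.
avg_speeds_mph = {
    "Louisville": 36.6,      # hypothetical average trip speed
    "Typical metro": 30.0,   # hypothetical benchmark (all-metro average)
    "Miami": 26.4,           # hypothetical average trip speed
}

benchmark = avg_speeds_mph["Typical metro"]
index = {metro: speed / benchmark for metro, speed in avg_speeds_mph.items()}

for metro, value in sorted(index.items(), key=lambda kv: -kv[1]):
    print(f"{metro}: {value:.2f}")
# Louisville comes out at 1.22 (22 percent faster than average),
# Miami at 0.88 (12 percent slower than average).
```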

A couple of observations are in order about these data.  First, it’s worth noting the dispersion between the fastest and slowest metropolitan areas.  Speeds in the fastest metropolitan area (Grand Rapids) are about 22 percent greater than the median; speeds in the slowest metro area (Miami) are about 12 percent less than the median.  Second, the largest, densest and most economically vibrant metropolitan areas have among the lowest speeds.  The three largest (New York, Los Angeles and Chicago) are among the six slowest.  Seattle and San Francisco, famous for livability and technology clusters, are also slower.  Third, the fastest speeds tend to be found in a combination of smaller sunbelt metropolitan areas (Raleigh, Oklahoma City, Nashville), and slower growing smaller cities in the Northeast/Midwest: Grand Rapids, Buffalo, Rochester.  Couture and his co-authors were also able to look at differences in travel speeds as they relate to demographic, as well as geographic, characteristics. Earlier, we reported their striking finding that African-American drivers travel on average about 8 percent more slowly than white drivers, which strikes us as strong evidence that they’re fastidiously trying to avoid being pulled over for driving while black.

Speeds seem highly correlated with measured traffic congestion. You may recall the recent estimates from Inrix on how congested roads were in major metropolitan areas. We’ve plotted the Couture et al estimates of average speeds against the Inrix estimates of metro area congestion. Keep in mind that the Inrix data are for 2016, while the Couture speed data are from 2008.  Despite the eight-year difference in the data estimates, there’s a fairly strong correlation. Traffic moves fastest in the cities that Inrix reports have the least congestion (Louisville, Kansas City) and slower in cities that Inrix says are more congested (Miami, Los Angeles).  But as the line on the chart suggests, the relationship between average speed and this particular congestion index is non-linear:  the big differences in average speeds are among cities with relatively low levels of congestion; as the congestion index rises (across cities) the speed index falls, but more slowly. This suggests that unless you get a very large reduction in congestion, you don’t see much of an increase in measured speeds.

Just like Keanu Reeves and Sandra Bullock, everyone seems deathly afraid of going slower. But for cities, and their inhabitants, it’s far from clear that being the fastest actually gets you anywhere. We’ll take a closer look at the implications of speed tomorrow.

Editors note: In the original version of this post, we got Victor Couture’s surname wrong. We apologize.


Affordability beyond the median

For a long time, we’ve been critical of the way we commonly talk about housing affordability.  We’ve published a three-part series about why the way we measure housing affordability is all wrong. In particular, we objected to using the 30 percent ratio of housing prices to income as the benchmark of “affordable,” basically because depending on income and other necessary expenses, a given household might actually be able to spend way more than 30 percent of their money on housing—or way less.

But now we have another nit to pick. To wit, the nit: measuring housing prices by only looking at the median. Recall from math class that in any given area, the median-priced home is the one for which an equal number of homes are more and less expensive. But most of the time—San Francisco and New York and their peers in housing market dysfunction notwithstanding—we’re not mostly concerned with the median housing purchaser; rather, affordability problems will be concentrated in the lower part of the earnings scale, and so what really matters is the lower end of the home price scale.

For an illustration of this problem, imagine two neighborhoods. In both places, the median home costs $300,000. But in the first neighborhood, every home costs exactly $300,000, while in the second, there are a range of homes from $100,000 to $500,000. Although both neighborhoods have the same median home price, the second neighborhood has some homes affordable to low-income people, while the first neighborhood does not.
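The two hypothetical neighborhoods can be checked numerically. In this minimal sketch, the prices are invented to match the illustration above: one list where every home costs $300,000, and one with an even spread from $100,000 to $500,000. The medians are identical, but the 25th percentile prices diverge sharply.

```python
# Two invented neighborhoods with the same median home price but very
# different 25th percentile prices, matching the illustration in the text.
import statistics

uniform = [300_000] * 9                                    # every home $300k
varied = [100_000, 150_000, 200_000, 250_000, 300_000,
          350_000, 400_000, 450_000, 500_000]              # $100k-$500k spread

for name, prices in [("uniform", uniform), ("varied", varied)]:
    median = statistics.median(prices)
    p25 = statistics.quantiles(prices, n=4)[0]             # 25th percentile
    print(f"{name}: median ${median:,.0f}, 25th percentile ${p25:,.0f}")
```

Both neighborhoods report a $300,000 median, but only the varied one has homes near the bottom of the price range, which is exactly what the median conceals.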

So rather than the median, or 50th percentile, home price, we really care about something like the 25th percentile: the home for which 75 percent of homes are more expensive.

Now, to be fair, the price of a place’s 50th percentile home strongly predicts the price of the 25th percentile home. If the median housing price hits a million dollars—as it does in, say, San Francisco—you can be quite confident that the 25th percentile home is also far, far too expensive.

But in more normal markets, there is enough variation in 25th percentile prices in places with the same 50th percentile price to make a meaningful difference in affordability for lower-income people. Let’s take Chicago, for example. Happily, the Census has home price estimates for both the 25th and 50th percentile homes (and 75th—the home for which just a quarter of all homes are more expensive—but we’ll leave that to another post). If you plot these two prices by ZIP code in Cook County, which includes Chicago and its inner-ring suburbs, this is what you get:

As we said, the median home price of a given ZIP code very strongly predicts its 25th percentile price. But let’s zoom in a bit—to, say, anywhere with a median price within $25,000 of $250,000.


Now we can see a substantial amount of variation. Right on the $250,000 median price line, for example, ZIP code 60645 has a 25th percentile home price of about $149,000, while ZIP code 60422 has a 25th percentile home price of $187,000. In the simplest terms, then, a low-price home in 60422 is a full 25 percent more expensive than a low-price home in 60645—even though an average-price home costs almost exactly the same in both places.

How does that happen? Well, zooming in on what these two ZIP codes actually look like on the ground, we can hazard a guess. ZIP code 60422 is in suburban Flossmoor, which has a small, walkable, “illegal neighborhood”-type downtown around a commuter rail station, but otherwise is almost entirely made up of single family homes, ranging from medium-sized to large.


ZIP code 60645 is in the West Ridge neighborhood of Chicago’s far North Side. It has its share of medium-to-large single family homes; but it also has a large number of multi-family buildings, from a large tower-in-a-park development to copious numbers of two-flats, three-flats, and larger lowrise apartment buildings.


The contrast is actually a brilliant illustration of one of our favorite topics, which is the way that traditional, “illegal neighborhoods” can use a diversity of buildings to create a diversity of people. West Ridge simply has many more kinds of homes than Flossmoor: there are single family houses for parents and their children who can afford a $250,000 (or more) home; there are relatively large apartment and condo units in lowrise buildings; and there are smaller apartment and condo units in lowrise and highrise buildings that provide a stock of affordable housing without subsidies.

It helps, too, that West Ridge has buildings from a variety of time periods, dating back to the early 1900s. As we’ve written before, new construction is almost never affordable to people of low or moderate incomes—but often homes that are built for the upper middle class become much more affordable after a generation or two. Much of the low-priced housing in West Ridge fits this story.

All these differences add up to very different neighborhood demographics. West Ridge has a median age, income, and poverty level that are all fairly close to those of the city of Chicago or metropolitan area averages. In Flossmoor, by contrast, the median age is 46—more than a decade older than the metro area median—in part because there are few homes suitable for young people who might not yet have a stable middle-class income. Moreover, just 2.9 percent of residents in ZIP code 60422 live below the poverty line. As we’ve covered before, while that might sound like a good thing, in a region where nearly 14 percent of people live below the poverty line, such a low level in an individual city isn’t a sign of economic success—it’s a sign that Flossmoor has (in part) used its built environment to exclude low-income people from one of the most opportunity-rich parts of Chicago’s south suburbs. Economically integrated West Ridge, by contrast, manages to keep a strong middle class while remaining inclusive of people whose incomes are much lower.

But none of these differences can be explained by median home price—because West Ridge and Flossmoor have almost identical median prices. Instead, we need to look at the price of a typical low-priced home to understand what’s going on.

Big city metros are driving the national economy

The nation’s largest city-centered metro areas are powering national economic growth.

2017 will mark a decade since the peak of the last economic cycle (which, according to the National Bureau of Economic Research, was December 2007).  Since then, we’ve experienced the Great Recession (the biggest economic downturn in eight decades), and a long and arduous recovery.

We’ve always maintained that the word “recovery” is a misleading term, because it seems to imply that we get back exactly the same economy, industries and jobs that we lost to the recession. In fact, that’s not true:  the jobs created since the bottom of the recession in 2009 are in different firms, in different industries, and importantly, in different places than the jobs we lost to the recession.

It’s illuminating to look to see where the jobs that have been created in this recovery are actually located. There’s no question that large metros are important to the economy. These 51 metros with a million or more population are home to 168 million Americans, and account for about 65 percent of the nation’s gross domestic product. But the big question is, how important are they to national growth? It turns out that in this particular recovery, big city metros, those with a population of 1 million or more, have dramatically outperformed the rest of the nation’s economy.

Today we’re revisiting a data series that has been compiled and tracked by our friend Josh Lehner, an economist in the Oregon Office of Economic Analysis. Josh uses Bureau of Labor Statistics employment data to track employment growth by size of metropolitan area. His analysis divides the nation into four groups: the 51 metropolitan areas of 1 million population or more, two groups of mid-sized and smaller metropolitan areas, and nonmetropolitan America.

The latest data show that, as a group, large metropolitan areas have dramatically out-performed the rest of the country in the last economic cycle (dating from the peak of the economy in December 2007). In the aggregate, metros with 1 million or more population have fully recovered the employment lost in the Great Recession, and grown to 6.6 percent above their pre-recession peak. As of September, 2016, smaller and mid-sized areas, collectively were about 2-3 percent above their 2007 peak level of employment. And non-metropolitan America is still 2.3 percent below where it was in 2007.

Here’s another way to think about these same data.  As of September 2016, total U.S. employment was up about 5.3 million jobs from the previous peak recorded in December 2007 (133.03 million jobs in 2007; 138.37 million jobs in 2016).  The fifty-one largest metropolitan areas recorded an increase of 4.66 million jobs between 2007 and 2016. Collectively, these big city metros accounted for about 87 percent of the net job growth nationally.
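The 87 percent share follows directly from the employment figures cited above, as this quick check shows (figures in millions of jobs, taken from the text):

```python
# Verifying the large-metro share of net national job growth, 2007-2016.
us_2007, us_2016 = 133.03, 138.37     # total U.S. employment, Dec 2007 / Sep 2016
big_metro_gain = 4.66                 # job gain in the 51 largest metros

net_us_gain = us_2016 - us_2007       # about 5.3 million net new jobs
share = big_metro_gain / net_us_gain  # large metros' share of that gain

print(f"Net U.S. gain: {net_us_gain:.2f} million jobs")
print(f"Large-metro share: {share:.0%}")   # about 87 percent
```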

This time is different

What’s new and different here is that big city metros haven’t been the ones that have driven U.S. economic growth in previous cycles.  It’s usually the case that small and mid-sized metros, as a group, have grown faster than big city metros.  Using Josh’s data, we prepared a second chart, showing the growth in employment for large, middle-sized and small metros and non-metros, for the period 2002 through 2007. During that growth period, small and mid-sized metros decidedly out-performed larger metros in job growth.  The smallest metros grew their employment by 7.5 percent, mid-sized metros grew about 6.5 percent, and large metros grew about 5.5 percent.


As we’ve pointed out in our report, Surging City Center Job Growth, the last few years have witnessed a historic reversal in the patterns of job growth within large metropolitan areas. After decades of steady decentralization, employment growth in urban centers substantially outpaced that in more peripheral locations from 2007 through 2011. We think there’s strong evidence that this process is driven by employers looking to tap the growing labor market in city centers, which itself is a product of the movement of well-educated young adults back to cities (as we documented in Young and Restless).

All this evidence points to one thing: City centers are the big drivers of national economic growth. Big metros are significantly out-performing smaller metros, which in turn are out-performing rural areas. Within large metros, the decades-long pattern of job decentralization has reversed—and jobs are growing faster in city centers than in the metropolitan periphery. In this economic expansion, the nation’s economic growth is tied to the performance of its large metros and their robust city centers.

Many thanks to Josh Lehner for compiling this data and sharing it. Be sure to visit the Office of Economic Analysis blog for more detail, including a national map showing patterns of county-level job growth since 2007.

The real welfare Cadillacs have 18 wheels

  • Truck freight movement gets a subsidy of between $57 and $128 billion annually in the form of uncompensated social costs, over and above what trucks pay in taxes, according to the Congressional Budget Office.
  • If trucking companies paid the full costs associated with moving truck freight, we’d have less road damage and congestion, fewer crashes, and more funding to pay for the transportation system.



What with all the speculation about a possible trillion dollar spending package for infrastructure, we’ve been hearing a lot about crumbling bridges, structurally deficient roads, and the need for more highway capacity.

It’s clear that our transportation finance system is broken. To make up the deficit, politicians frequently call for increased user fees – through increased taxes on gasoline, vehicle miles traveled, or even bikes. All the while, one of the biggest users of the transportation network – the trucking industry – has been rolling down the highway fueled by billions in federal subsidies.

A 2015 report from the Congressional Budget Office estimates that truck freight causes between $57 and $128 billion annually in damages and social costs in the form of wear and tear on the roads, crashes, congestion and pollution – an amount well above and beyond what trucking companies currently pay in taxes.

CBO doesn’t report that headline number, instead computing that the external social costs of truck freight on a “cents per ton mile basis” range between 2.62 and 5.86 cents per ton mile. For the average heavy truck, they estimate that the cost works out to about 21 to 46 cents per mile traveled.

That might not sound like a lot, but the nation’s 10.6 million trucks generate an estimated 2.2 trillion ton miles of travel per year (Table A-1, page 32). When you multiply the per ton mile cost of 2.62 to 5.86 cents times 2.2 trillion ton-miles, you get an annual cost of between $57 and $128 billion per year.
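The multiplication is straightforward to reproduce from the CBO figures cited above:

```python
# Back-of-the-envelope total external cost of truck freight, from the CBO
# per-ton-mile figures cited in the text.
ton_miles = 2.2e12                   # annual truck ton-miles
low_cents, high_cents = 2.62, 5.86   # external cost, cents per ton-mile

low_total = ton_miles * low_cents / 100     # dollars per year
high_total = ton_miles * high_cents / 100

print(f"${low_total/1e9:.1f} to ${high_total/1e9:.1f} billion per year")
# roughly $57 billion to $128 billion annually
```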

Unfortunately, trucking companies don’t pay these costs. They are passed along to the rest of us in the form of damaged roads, crash costs, increased congestion and air pollution. Because they don’t pay the costs of these negative externalities, the firms that send goods by truck don’t have to consider them when deciding how and where to ship goods. This translates into a huge subsidy for the trucking industry of between 21 and 46 cents per mile.

For comparison, CBO looked at the social costs associated with moving freight by rail. Railroads have much lower social costs, for two reasons: first, rail transport is much more energy efficient and less polluting per ton mile of travel; second, because railroads are built and maintained by their private owners, most of the cost of wear and tear is borne by private users, not the public. Railroad freight does produce social costs associated with pollution and crashes, but the social costs of moving freight by rail are about one-seventh those of truck movements: about 0.5 to 0.8 cents per ton mile, compared to 2.62 to 5.86 cents per ton mile for trucks.


As we always point out, getting the prices right – whether for parking or road use – is critically important to creating an efficient transportation system. When particular transportation system users don’t pay their full costs, demand is too high, and supply is too low.  In this case, large federal subsidies for trucking encourage too much freight to be moved by truck, worsening congestion, pollution and road wear, while the fees and taxes paid by trucking companies aren’t enough to cover these costs. The classic solution for these currently unpriced external costs is to impose an offsetting tax on trucks that makes truck freight bear the full cost associated with road use, crashes and environmental damage. The CBO report considers a number of policies that could “internalize” these external costs associated with trucking – including higher diesel taxes, a tax on truck VMT, and even a higher tax on truck tires.

The revenues produced would be considerable: a VMT tax that internalized social costs of trucking would generate an estimated $68 billion per year. To put that number in context, consider that in 2014, total public spending – federal, state and local – on roads and highways was $165 billion. In addition, the higher tax would reduce freight moving by road – mostly by shifting cargo to rail – and lead to benefits of lower pollution, less congestion and less wear and tear on roads. We’d also save energy: net diesel fuel consumption for freight transportation would fall by 670 million gallons per year – a savings of about $2 billion annually at current prices.
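The $2 billion fuel-savings figure implies an average diesel price of roughly $3 per gallon; that price is our assumption, inferred from the numbers above rather than stated in the CBO report:

```python
# Rough check on the diesel savings claim. The ~$3/gallon price is our own
# assumption, implied by the "$2 billion at current prices" figure in the text.
gallons_saved = 670e6        # gallons of diesel saved per year
price_per_gallon = 3.00      # assumed average diesel price, dollars

savings = gallons_saved * price_per_gallon
print(f"${savings/1e9:.1f} billion per year")  # about $2 billion
```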

There are good reasons to believe that the CBO report is conservative, and if anything, understates the social costs associated with trucking. For example, the report estimates the social costs associated with carbon emissions at somewhere between $5 and $45 per ton. Other credible estimates – from British economist Nicholas Stern – suggest that the cost today is about $32 to $103 per ton, rising to $82 to $260 per ton over the next two decades.

The external social costs of truck and rail freight, per ton mile, are estimated as follows:

[Table: external social costs of truck and rail freight, cents per ton-mile]
Source:  Congressional Budget Office, 2015


Such a tax would make truck freight more expensive, but other costs – now borne by the rest of us – would go down by a comparable amount. And there would be important savings in costs for freight either moved by other modes (especially rail, which is about two-thirds cheaper), or sourced from closer locations.

There’s a clear lesson here: It may seem like we have a shortage of infrastructure, or lack the funding to pay for the transportation system, but the fact that truck freight is so heavily subsidized means that there’s a lot more demand (and congestion) on the roads than there would be if trucks actually paid their way. On top of that, there’d be a lot more money to cover the cost of the system we already have.

So the next time someone laments the sad state of the road system, or wonders why we can’t afford more investment, you might want to point out the 18-wheelers that are getting one heck of a free ride, at everyone’s expense.

View the full report: “Pricing Freight Transport to Account for External Costs: Working Paper 2015-03”

The Cappuccino Congestion Index

April First falls on Saturday, and that’s a good reason to revisit an old favorite, the Cappuccino Congestion Index

We’re continually told that congestion is a grievous threat to urban well-being. It’s annoying to queue up for anything, but traffic congestion has spawned a cottage industry that gins up reports transforming our annoyance with waiting in lines into an imagined economic calamity. Using the same logic and methodology that underpin these traffic studies, it’s possible to demonstrate another insidious threat to the nation’s economic productivity: costly and growing coffee congestion.


Yes, there’s another black fluid that’s even more important than oil to the functioning of the U.S. economy: coffee. Because an estimated 100 million American workers can’t begin a productive work day without an early morning jolt of caffeine, and because one-third of these coffee drinkers regularly consume espresso drinks, lattes and cappuccinos, there is significant and growing congestion in coffee lines around the country. That’s costing us a lot of money. Consider these facts:

  • Delays waiting in line at the coffee shop for your daily latte, cappuccino or mocha cost U.S. consumers $4 billion every year in lost time;
  • The typical coffee drinker loses more time waiting in line at Starbucks than in traffic congestion;
  • Delays in getting your coffee are likely to increase because our coffee delivery infrastructure isn’t increasing as fast as coffee consumption.

Access to caffeine is provided by the nation’s growing corps of baristas and coffee bars. The largest of these, Starbucks, operates some 12,000 locations in the U.S. alone. Any delay in getting this vital beverage is going to impact a worker’s start time–and perhaps their day’s productivity. It’s true that sometimes, you can walk right up and get the triple espresso you need. Other times, however, you have to wait behind a phalanx of customers ordering double, no-whip mochas with a pump of three different syrups, or an orange-mocha frappuccino. These delays in the coffee line are costly.

To figure out exactly how costly, we’ve applied the “travel time index” created by the Texas Transportation Institute to measure the economic impact of this delay on American coffee drinkers. For more than three decades TTI has used this index to calculate the dollar cost of traffic delays–here we use the same technique to figure the value of “coffee delays.”

The travel time index is the difference in time required for a rush hour commute compared to the same trip in non-congested conditions. According to Inrix, the travel tracking firm, the travel time index for the United States in July 2014 was 7.6 percent, meaning that a commute trip that took 20 minutes in off-peak times would take an additional 91 seconds at the peak hour.
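The 91-second figure follows directly from applying a 7.6 percent index to a 20-minute off-peak trip:

```python
# Peak-hour delay implied by a 7.6 percent travel time index on a 20-minute trip.
off_peak_minutes = 20
index = 0.076                # Inrix travel time index: 7.6 percent extra time

extra_seconds = off_peak_minutes * 60 * index
print(f"{extra_seconds:.0f} seconds of added peak-hour delay")  # about 91
```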

We constructed data on the relationship between customer volume and average service times for a series of Portland area coffee shops.  We used the 95th percentile time of 15 seconds as our estimate of “free flow” ordering conditions—how long it takes to enter the shop and place an order.  In our data-gathering, as the shop became more crowded, customers had to queue up. The time to place orders rose from an average of 30 to 40 seconds, to two to three minutes in “congested” conditions. The following chart shows our estimate of the relationship between customer volume and average wait times.


Following the TTI methodology, we treat any additional time that customers have to spend waiting to place their order beyond what would be required in free flow times (i.e. more than 15 seconds) as delay attributable to coffee congestion.

Based on our observations of typical coffee shops and other data, we were able to estimate the approximate flow of customers over the course of a day. We regard a typical coffee shop as one that has about 650 transactions daily. While most transactions are for a single consumer, some are for two or more consumers, so we use a consumer-per-transaction factor of 1.2. This means the typical coffee shop provides beverages (and other items) for about 780 consumers. We estimate the distribution of customers per hour over the course of the day based on overall patterns of hourly traffic, with the busiest times in the morning, and volume tapering off in the afternoon.

We then apply our speed/volume relationship (chart above) to our estimates of hourly volume to estimate the amount of delay experienced by customers in each hour.  When you scale these estimates up to reflect the millions of Americans waiting in line for their needed caffeine each day, the total value of time lost to cappuccino congestion costs consumers more than $4 billion annually. (Math below).


This is—of course—our April First commentary, and savvy readers will recognize it is tongue in cheek, but only partly so.  (The data are real, by the way!) The real April Fools' joke here is the application of this same tortured thinking to the description and diagnosis of the nation's traffic problems.

The Texas Transportation Institute's best estimate is that travel delays cost the average American between one and two minutes on their typical commute trip. While it's possible–as we've done here–to apply a wage rate to that time and multiply by the total number of Americans to get an impressively large total, it's not clear that the few odd minutes here and there have real value. This is why, for years, we and others have debunked the TTI report. (The clumping of reported average commute times in the American Community Survey around values ending in "0" and "5" shows Americans don't have that precise a sense of their average travel time anyhow.)

The "billions and billions" argument used by TTI to describe the cost of traffic congestion is a rhetorical device to generate alarm. The trouble is, when applied to transportation planning it leads to some misleading conclusions. Advocates argue regularly that the "costs of congestion" justify spending added billions in scarce public resources on expanding highways, supposedly to reduce time lost to congestion. There's just no evidence this works–induced demand from new capacity causes traffic to expand until travel times are as bad as ever: Los Angeles just spent a whopping billion dollars to widen Interstate 405, with no measurable impact on congestion or traffic delays.

No one would expect Starbucks to build enough locations—and hire enough baristas—so that everyone could enjoy the 15-second order times you can experience when there's a lull. Consumers are smart enough to understand that if you want a coffee at the same time as everyone else, you're probably going to have to queue up for a few minutes.

But strangely, when it comes to highways, we don't recognize the trivially small scale of the expected time savings (a minute or two per person), and we don't apply the kind of careful cost-benefit analysis that would tell us that very few transportation projects actually generate the kinds of sustained travel time savings that would make them economically worthwhile.

Ponder that as you wait in line for your cappuccino.  We’ll be just ahead of you ordering a double-espresso macchiato (and holding a stopwatch).

Want to know more?

Here's the math:  We estimate that at peak times (around 10am) the typical Starbucks makes about 100 transactions, representing about 120 customers.  The average wait time is about two and one-half minutes–of which about two minutes and 15 seconds represents delay, compared to free flow conditions.  We make a similar computation for each hour of the day (customers are fewer and delays shorter at other hours).  Collectively, customers at a typical store experience about 21 person-hours of delay per day (an average of a little over 90 seconds per customer).  We monetize the value of this delay at $15 per hour, and multiply it by 365 days and 12,000 Starbucks stores.  Since Starbucks represents about 35 percent of all coffee shops in the US, we scale this up to get a total value of time lost to coffee service delays of slightly more than $4 billion.
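That back-of-the-envelope calculation can be reproduced in a few lines. The figures are the ones quoted above; because they are all rounded, the result lands near, rather than exactly at, $4 billion:

```python
# Sketch of the back-of-the-envelope totals described above, using the
# rounded figures quoted in the text.
HOURS_DELAY_PER_STORE_PER_DAY = 21      # person-hours of waiting, typical store
VALUE_OF_TIME = 15                      # dollars per hour
DAYS_PER_YEAR = 365
STARBUCKS_STORES = 12_000
STARBUCKS_SHARE = 0.35                  # Starbucks' share of US coffee shops

starbucks_cost = (HOURS_DELAY_PER_STORE_PER_DAY * VALUE_OF_TIME
                  * DAYS_PER_YEAR * STARBUCKS_STORES)
total_cost = starbucks_cost / STARBUCKS_SHARE   # scale up to all coffee shops
print(f"${total_cost / 1e9:.1f} billion")       # roughly $4 billion a year
```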

How much could US retail shrink? And where?

The first quarter of 2017 has marked a parade of announced store closures. The long awaited axe has fallen on 68 more Macy’s stores around the country. J.C. Penney has announced it will close another 138 stores. Other major national retail chains, including The Limited, Gap, Walgreens, Aeropostale and Chico’s, have also announced similarly large closures.  These are just the latest moves in a shifting, mostly shrinking retail landscape in the United States.

One retailer, Target, is not just downsizing its store count, it's shifting to smaller urban stores–Target Express. Other retailers like Walmart and Office Depot have also been developing smaller stores. The days of big boxes and power centers seem to be giving way to more urban-centered and smaller-footprint retailing, undermining the economics of larger-scale retailing. It's estimated that there are over 1,200 dead or dying malls in the U.S. It appears that we're way overbuilt for retail space. Finding productive uses for these disused spaces is now a major undertaking for communities around the nation.

Several factors seem to be driving the tectonic shifts in retailing. Part of the problem is that retail, like housing, was overbuilt during the bubble: commercial developers typically followed new housing development, and as the housing stock sprawled in the last decade, so too did the expansion of retail space.

Another important factor is technological change in the form of growing e-commerce. More and more, we're purchasing goods and services via the Internet and mobile devices. Census Bureau data on retail sales show that e-commerce continues to increase its market share.  Excluding restaurant sales, and sales of vehicles and gasoline, e-commerce now accounts for about 12 percent of all retailing, a figure that has effectively doubled in the past six years.

There’s a bit of irony to the technological displacement at work here: big box stores only became economically feasible thanks to earlier technologies, like universal product codes, computerized inventory management, real-time ordering, and global data networks. These same technologies now help enable smaller stores (tailoring inventory to localized demand) and empower consumers to order online at home and via pervasive mobile devices.

The shifting retail environment will have impacts on the transportation system as well. The latest transportation data show a decline in the number and length of shopping trips (which decreases transport intensity of retailing), but this is at least partially offset by more travel by commercial delivery vehicles (like UPS and Fedex). It’s an open question as to how this will play out: will these shifts encourage (more) fleets of smaller transit trucks, or will increasing e-commerce retail sales and smaller urban stores mean larger trucks on urban roads? There’s some evidence that Internet delivery will mean less car travel, as the decline in shopping travel will more than offset the increased vehicle travel associated with deliveries. And delivery efficiency actually increases as volumes increase.

To judge who’s most likely to be affected by these trends, we compiled some metropolitan level data on the amount of retail space per capita. The data come from Co-Star, a private firm that tracks retail space leasing throughout the nation. (They helpfully make their market reports available here). These data are for 2007 and we’ve computed retail space per capita in each market by dividing total square footage by each metropolitan area’s 2007 population.
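The per-capita figure is just total leased square footage divided by metro population. A trivial sketch; the sample numbers below are illustrative, not Co-Star's actual figures for any market:

```python
# Retail space per capita: total leased square footage / metro population.
def retail_sqft_per_capita(total_sqft: float, population: float) -> float:
    return total_sqft / population

# e.g. a hypothetical metro with 100 million sq ft of retail
# and 2.2 million residents
print(round(retail_sqft_per_capita(100e6, 2.2e6), 1))  # → 45.5
```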

The national average is about 46 square feet of retail space per capita, with most metropolitan areas having between 40 and 55 square feet per capita. There are a number of outliers, however.

Milwaukee/Madison has the highest amount of retail space per capita, and many southern, sprawled metros rank higher on this metric as well. These are the places most likely to struggle with a dwindling appetite for retail space, and the economic consequences that follow, be it in fewer retail jobs, large swathes of unused space, or transportation costs. At the other end of the spectrum, some metropolitan areas have far more space-efficient retailing: Portland has just 30 square feet of retail space per capita, fully one-third less than the national average.

By global standards, the U.S. has much more space devoted to retailing than anyone else: comparable estimates for other countries include 23 square feet per capita in the United Kingdom, 13 square feet per capita in Canada, and 6.5 square feet per capita in Australia. If the experience of these countries is any indication, it's a good bet that there's still lots of room for downsizing in the U.S. retail sector. However, despite these trends, Miami apparently isn't concerned.