Equity and Homelessness

What’s equitable about spending six times as much per homeless person in the suburbs as in the city?

The “equity” standard that’s guiding the division of revenue for Metro’s housing initiative is based on politics, not need.

Portland’s regional government, Metro, is rapidly moving ahead with a proposed $250 million per year program to fight homelessness.

It’s plainly motivated by the fact that homelessness ranks as a top public concern in Portland: a third of Portland residents now identify it as their #1 concern, up from just 1 percent nine years ago. Public concern with homelessness is so great that Metro fears it will have trouble winning widespread support for its $4 billion transportation initiative if something isn’t done to address homelessness.

So after saying the homeless measure could wait until after the transportation measure is put to the voters (in November), the Metro Council is now moving on a hurried schedule to craft a homelessness measure for the region’s May 2020 primary election ballot.

It’s plainly a rush job: the proposed ordinance creating the program was first made public on February 4, had its first public hearings on February 14, and would need to be adopted by the Council by February 27 in order to qualify for the May ballot. The measure is bare bones: it provides only the loosest of definitions of who is eligible to receive assistance and what the money can be spent on.

The thing it is clear about, however, is how the money will be allocated. Each of Portland’s three counties (Clackamas, Multnomah and Washington) will split 95 percent of the funds raised region-wide in proportion to each county’s share of the Metro district’s population. And counties will be in the driver’s seat for deciding how funds are spent within their borders. Aside from 5 percent reserved for regional use (including administration), Metro is acting solely as the banker for homeless services.

SECTION 7. Allocation of Revenue  Metro will annually allocate at least 95 percent of the allocable Supportive Housing Services Revenue within each county based on each county’s Metro boundary population percentage relative to the other counties.

DRAFT EXHIBIT A TO RESOLUTION NO. 20-5083
WS 2-18-2020

Concern about homelessness is, perforce, an equity issue. Those who are living on the street or in shelters are plainly among the worst off among us, and dedicating additional public resources to alleviate their suffering and provide them shelter seems like an intrinsically equitable endeavor.

Metro’s proposed adopting resolution is outspoken in framing the entire effort as the alleviation of vast and historic wrongs. It finds:

WHEREAS, communities of color have been directly impacted by a long list of systemic inequities and discriminatory policies that have caused higher rates of housing instability and homelessness among people of color and they are disproportionately represented in the housing affordability and homelessness crisis

(Draft Resolution No. 20-5083 WS 2/18/20)

That principle is followed up in Metro’s proposed enacting ordinance, which would require each of the counties receiving funds to adhere closely to Metro’s own statements about what constitutes equitable planning processes. Specifically, Metro mandates that counties allocate funds in a way that redresses inequities.

A local implementation plan must include the following:

…..
2. A description of how the key objectives of Metro’s Strategic Plan to Advance Racial Equity, Diversity, and Inclusion have been incorporated. This should include a thorough racial equity analysis and strategy that includes: (1) an analysis of the racial disparities among people experiencing homelessness and the priority service population; (2) disparities in access and outcomes in current services for people experiencing homelessness and the priority service population; (3) clearly defined service strategies and resource allocations intended to remedy existing disparities and ensure equitable access to funds; and (4) an articulation of how perspectives of communities of color and culturally specific groups were considered and incorporated.

DRAFT EXHIBIT A TO RESOLUTION NO. 20-5083 WS 2-18-2020

Counties have to “remedy existing disparities and ensure equitable access to funds” within their counties. That policy doesn’t apply, however, to Metro’s own allocation of funds within the metropolitan area. That’s because 95 percent of the funds raised by Metro (after deducting its administrative costs) are to be allocated to counties based solely on population. But by every imaginable definition of homelessness, the homeless are not distributed in proportion to the overall population. Homelessness in all of its forms, and particularly in its most serious forms–the unsheltered living on the streets and the chronically homeless, who’ve been without a home for a year or more–is dramatically more concentrated in Multnomah County than in the two suburban counties (Clackamas and Washington). In addition, three-fourths of the region’s African American homeless and seven of eight of its Latino homeless live in Multnomah County. The county with the largest burden of caring for homeless persons of color gets far fewer resources, per homeless person, than the surrounding suburbs.

Where the homeless live in Metro Portland

Here we explore the data gathered in the 2019 “Point-in-Time” surveys of the homeless population in Clackamas, Multnomah and Washington Counties.  The Point-in-Time data collection effort probably understates the magnitude of the homelessness problem, but provides the best data on its location within the region, and the clearest picture of the race and ethnicity of the homeless.

We focus on two data points from the Point-in-Time survey: the unsheltered population (people living on the streets) and the total homeless population, which includes the sheltered homeless (those in shelters, missions, or temporary accommodation). The following table shows the latest data on the populations of the three counties, and the number of persons counted as homeless in the latest (2019) Point-in-Time survey. The first panel of the table shows the actual counts; the second panel shows the percentage distribution by county.

Multnomah County constitutes 44 percent of the region’s population, but is home to about 77 percent of the region’s unsheltered homeless and about 70 percent of the region’s total homeless population. Unsurprisingly, homelessness in Portland, as in most of the United States, is concentrated in urban centers.

Notice that the Metro ordinance provides that funds will be allocated not according to the homeless population in each county, but according to the total population in each county. What this means in practice is that some counties will get much more than others relative to the size of their homeless populations (and, by implication, their homelessness problems). Multnomah County also has a higher proportion of homeless people who are “unsheltered” than Clackamas or Washington Counties, and it accounts for three-quarters of those in the three counties who are classified as “chronically homeless.”

We’ve computed the estimated allocation of a $250 million per year program based on overall county population, as shown in the following table. Each county gets the same share of the $250 million total as its share of total population: Multnomah County gets 44 percent, or about $110 million, and the other counties get proportionate amounts. We’ve also computed the amount that each county gets per homeless person and per unsheltered person. We use the relative size of the unsheltered and homeless populations in each county to index need, and show how much is available in each county relative to that need.

These data show that, on a per homeless person basis, Washington County gets about five to six times as much as Multnomah County, and Clackamas County gets about two to three times as much as Multnomah County. Per unsheltered homeless person, Multnomah County gets $54,000, while Washington County gets more than six times as much: $355,000.

To be clear, this table uses a “per homeless person” measure not as an absolute indicator of spending per person. The scope of the homelessness problem is larger than what the point-in-time survey captures, and the measure is intended to fund a broader range of services, such as rent subsidies to keep households from becoming homeless. But we take this figure as a robust indicator of the relative need in each county, and of the resources each county would have relative to that need, as identified by the most severe and acute aspects of the homelessness problem.

(Note that using county totals doesn’t correspond exactly to the provisions of the Metro ordinance.  Only the population within the Metro boundary (which excludes outlying cities in each county) constitutes the basis for the distribution.  Excluding these areas would affect both our estimates of the allocation of funds and our estimates of the homeless population in each county.  For this analysis, we have relied on published and available county totals.)
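For readers who want to replicate the arithmetic, here is a minimal sketch of the calculation. The $250 million total, the 95 percent county share, and Multnomah County’s 44 percent population share come from the figures above; the other population shares and all of the homeless counts are hypothetical placeholders, to be replaced with Metro-boundary population shares and the published 2019 Point-in-Time totals.

```python
# Sketch of the allocation arithmetic described above.
# Except for the $250M total, the 95% county share, and Multnomah's 44%
# population share, every figure below is a HYPOTHETICAL placeholder --
# substitute actual Metro-boundary population shares and published 2019
# Point-in-Time homeless counts.

TOTAL_REVENUE = 250_000_000      # proposed annual program size
COUNTY_SHARE = 0.95              # share distributed to counties by population

population_share = {
    "Multnomah": 0.44,           # cited in the text
    "Washington": 0.34,          # placeholder
    "Clackamas": 0.22,           # placeholder
}

homeless_count = {               # placeholder Point-in-Time counts
    "Multnomah": 4000,
    "Washington": 700,
    "Clackamas": 1000,
}

for county, share in population_share.items():
    allocation = TOTAL_REVENUE * COUNTY_SHARE * share
    per_homeless = allocation / homeless_count[county]
    print(f"{county}: ${allocation:,.0f} allocated, ${per_homeless:,.0f} per homeless person")
```

The mechanics are the point: dollars scale with each county’s population share while need scales with its homeless count, so a county with proportionally fewer homeless residents ends up with far more money per homeless person.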

This system of allocation works to the disadvantage of persons of color. Multnomah County accounts for more than three-quarters of the homeless persons in the region who are Black or Latino. According to the three counties’ Point-in-Time surveys for 2019, Multnomah County accounted for 373 of the region’s 582 Black homeless persons (76 percent) and 648 of the region’s 743 Latino homeless persons (88 percent). What Metro’s measure does is replicate, if not amplify, the racial and ethnic inequities faced by the homeless in Multnomah County, by providing vastly fewer resources per homeless person there than it provides in the suburban counties.

If this effort is all about political coalition building, and if each county is viewed as a separate fiefdom, then slicing the revenue pie in proportion to population makes sense. But if homelessness is really a shared regional concern, and if Metro is really a regional government–rather than just a revenue-raising and pie-slicing middleman for the counties–then it ought to embrace the logic of its own rhetoric about equity, and about impact.

If three-quarters of the homeless are in Multnomah County, then Metro ought to spend most of the region’s resources there. There’s no plausible regional argument for spending five or six times as much on a homeless person in Washington County as on an otherwise similar homeless person in Multnomah County. If Metro is serious about overcoming decades of discrimination against communities of color who’ve been disadvantaged in their access to resources, then it should devote at least as much, per homeless person, to those communities as to others.

Funding in search of policy and results

Everyone recognizes that homelessness is a complex, gnarly problem. But aside from sketching the outer boundaries of what’s permissible and asking counties to report what they spent the money on, there’s nothing in this measure that spells out a strategy, expected results, or any real accountability. The “outcome-based” portion of the ordinance actually says nothing about outcomes. It basically just says “counties, do what you think best.”

SECTION 11. Outcome-Based Implementation Policy
Metro recognizes that each county may approach program implementation differently depending
on the unique needs of its residents and communities. Therefore, it is the policy of the Metro Council that there be sufficient flexibility in implementation to best serve the needs of residents, communities, and those receiving Supportive Housing Services from program funding.

What this tells us about equity

It’s become increasingly fashionable to talk about the equity implications of public policy. For many governments, and Metro in particular, that takes the form of long lamentations about past injustices and ornately worded but nebulous commitments to ensure engagement and participation and to hear the voices of those who’ve been disadvantaged by past policies. But the clearest way to judge the equity of any proposal is to look past the rhetoric and see where the money is going. And in the case of this measure, which is pitched as dealing with a problem that disproportionately affects the region’s central county, a disproportionate share of resources ends up going to the suburban counties.

Equity has to be about allocating resources to areas of need and insisting on measurable results, not just performative virtue signaling and ritual incantations of historic grievances. When you look at the numbers associated with this measure, you see that real equity considerations take a back seat to the lofty rhetoric.

If you’re homeless in the Portland area, it shouldn’t matter what county you live in.  But Metro’s proposed allocation system would provide vastly fewer resources per homeless person in Multnomah County. That’s particularly ironic in light of the fact that most of the persons of color who are homeless in the region live in Multnomah County.  If you’re looking to redress the resource inequities that have worked to the disadvantage of these groups, this approach doesn’t do that.

Editor’s Note: As of this writing, it’s unclear how much money the voters will be asked to provide. As Oregon Public Broadcasting reported:

Metro was hoping to raise $250 million to $300 million per year, but it appears the tax it proposed might raise only about half that amount.  Latest estimates are it would provide $135,000,000 per year.

This post has been revised to correct formatting errors.

Why Atlanta’s anti-gentrification moratorium will backfire

Blocking new development will only accelerate demand for existing homes

The moratorium makes flipping houses even more lucrative

Atlanta’s making a major investment in Westside Park at Bellwood Quarry, not far from the Beltline that has triggered a wave of redevelopment around the city. It’s going to be a gem. When completed, Westside Park will be the city’s largest, at 280 acres, and will transform a former quarry into a spectacular water feature. The city’s first-stage investment of more than $25 million is expected to lead to other investment, both in and around the park. The concern with the improved park, as with the Beltline, is that these improvements will trigger gentrification and lead to the displacement of existing residents, who are disproportionately low income and Black.

In response, Mayor Keisha Lance Bottoms has issued a moratorium on new building permits, zone changes and similar land use actions in the area around the park. The moratorium is scheduled to last six months, and serve as a kind of “time out” to consider what might be done.

If the Mayor’s objective is to keep existing residents in their homes and discourage speculation, blocking the construction of new housing in and around Westside Park is actually likely to backfire. The problem is a moratorium does nothing to address the growing demand for urban living, which we’re seeing not just in this neighborhood, but in many places in Atlanta and in cities throughout the US.

Americans, particularly well-educated young adults, are looking for neighborhoods that offer diversity, walkability, density and other urban amenities like great parks. The improvements to Westside Park make it a powerful draw for new residents.

That’s true whether you build new houses and apartments for them or not. And by not building new houses, you intensify the competition for the existing housing stock. If people who want to live in the neighborhood can’t buy a new condo or rent a newly built apartment in the area, they’ll look to rent or buy an existing home instead. It’s a good bet that lots of people will be looking for “fixer-uppers,” whether they’re planning to become long-term neighborhood residents, flip the property for a profit, or rent it out for income. If you are a flipper, knowing that there won’t be any nice, new apartments or condos down the street means you’ll be able to charge even more when you finish upgrading the property. In all these cases, the moratorium has the effect of focusing the demand from investors, homebuyers and prospective renters entirely on existing homes.

A moratorium on new construction just makes existing homes a juicier target for flippers. (HGTV)

Writing in The Atlantic, the Brookings Institution’s Jenny Schuetz explained how just this dynamic plays out in the market for apartments. Many cities make it difficult to build new apartments, which actually makes affordability problems worse because investors choose to fix up old apartments (and raise their rents) rather than building new ones:

In places where regulation limits new apartment construction, acquiring existing buildings is less risky than trying to build new rental housing. There are stronger financial incentives to maintain and upgrade old apartments in tightly regulated markets, because they face less competition from new, high-amenity buildings. This process of upward “filtering” among existing apartments is particularly harmful to housing affordability because it results in higher rents without expanding the number of homes available.

Blocking one path (new construction) doesn’t ease gentrification pressures; it intensifies them. In all likelihood, the moratorium will lead to more property price appreciation and more displacement than if new construction were allowed to go forward. Studies by the Anti-Displacement Project at the University of California show that the construction of two new market-rate dwellings has the same effect in reducing displacement as building a single unit of affordable housing. Given the city’s limited resources for affordable housing, relying on private developers to construct new homes to accommodate new residents is a more effective and cheaper way of reducing displacement.

A moratorium is a flashy way of exhibiting concern about gentrification, but it isn’t a solution. If anything, it’s likely to make all of the negative aspects of change worse. A more constructive approach would focus on ways to acquire or build more affordable housing in the neighborhood. One promising practice is to capture the tax increment from rising property values (and new construction) to create a fund to help subsidize affordable housing.

Mapping Walkable Density

Walkable density mapped for the nation’s largest metropolitan areas

by DW Rowlands

Editor’s Note:  We’re pleased to offer this guest commentary by DW Rowlands.  DW Rowlands is a human geography grad student at the University of Maryland, Baltimore County.  Her current research focuses on characterizing neighborhoods based on their amenability to public transit and on the relationship between race and the distribution of grocery stores in the DC area. She also writes on DC transportation, history, and demographic issues for Greater Greater Washington and the DC Policy Center. Follow DW on twitter at @82_Streetcar or contact her by email at d.w.rowlands<at>gmail.com.

In a companion commentary, DW Rowlands describes a technique for adjusting density measurements to account for the connectedness of local street networks. This measurement shows actual walkable density (how many people live nearby based on how far one can walk) rather than straight-line or “ideal” density (how many people are nearby based on a measure that considers only straight-line distances). The difference between the two measures (walkable density vs. ideal density) is an indicator of how well connected a neighborhood is for people walking. This page shows maps of each of the nation’s largest metropolitan areas, with census tracts shaded based on how closely each neighborhood’s actual walkable density approaches its ideal (straight-line) density. Areas shaded dark blue are those where realized walkable density comes closest to ideal density; areas shaded light blue are those where the disconnectedness of the street network means that actual walkable density is dramatically less than ideal density.

Atlanta

Baltimore

Boston

Chicago

Cleveland

Denver

Detroit

Houston

Los Angeles

Memphis

New York

Philadelphia

Phoenix

Portland

San Francisco

Seattle

 

Understanding Walkable Density

A new way of measuring urban density that explicitly considers walkability

by DW Rowlands

Editor’s Note:  We’re pleased to offer this guest commentary by DW Rowlands.  DW Rowlands is a human geography grad student at the University of Maryland, Baltimore County.  Her current research focuses on characterizing neighborhoods based on their amenability to public transit and on the relationship between race and the distribution of grocery stores in the DC area.  She also writes on DC transportation, history, and demographic issues for Greater Greater Washington and the DC Policy Center. Follow DW on twitter at @82_Streetcar or contact her by email at d.w.rowlands<at>gmail.com.

Conventional density measures versus walkable density measures

Density is one of the most fundamental properties of urban areas: what makes a city different from a suburb, and suburbs different from rural areas is chiefly how many people there are, and how close they are to each other.  The fact that people in cities live and work near each other is both economically important—it makes it easier for specialized jobs and workers and stores and customers to find each other—and culturally important—it exposes people to those different from them while also making it easier to find others who share their interests, needs, and cultural traditions.

Maps of Walkable Density vs. Ideal Density for US Metro Areas

Density is also very important for transportation.  Most people are unwilling to walk much more than half a mile on a regular basis, which means that destinations—jobs, stores, transit stops, and so on—are only within “walking distance” of people within a half mile of them.  However, while it’s common to calculate the density of a city, or a metro area, by dividing the total population by the total area, this method isn’t always informative. If 80% of the people in a city live in 20% of the city’s area, then the average person is experiencing much higher population density than this average value would suggest.

A more-useful way to measure the density that people actually experience is to calculate the “population-weighted average” of population density.  The population density of each Census tract or block group is calculated individually, and the densities are averaged together, weighted by their population.  A population-weighted average shows the level of density experienced by the typical resident in his or her census tract. This means that the densities of block groups with more residents have a bigger influence on the overall value.
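As a quick illustration of the mechanics (the block-group figures here are made up, not drawn from any actual metro area):

```python
import numpy as np

# Hypothetical block groups: population and density (people per square mile)
population = np.array([1200, 800, 3000, 500])
density = np.array([4000, 1500, 12000, 300])

simple_average = density.mean()                             # treats every block group equally
weighted_average = np.average(density, weights=population)  # sum(p*d) / sum(p)

print(simple_average, weighted_average)
# The weighted figure is pulled toward 12,000 because most residents in this
# toy example live in the densest block group; it better reflects the density
# the typical resident actually experiences.
```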

This measure has limitations as well.  A particularly significant one is the fact that not everywhere within a certain straight-line distance of a location can be reached by walking that distance: in cities, pedestrians are generally limited to following the street network.  In a city with a traditional street grid, this doesn’t make that big a difference, though it means you have to walk further to get somewhere diagonal to the street grid than somewhere you can get to by following a single street.

Most neighborhoods in American metro areas don’t have ideal street grids, however: winding roads and cul-de-sacs force pedestrians to take indirect trips, and bodies of water, hills, freeways, industrial areas, and superblocks often pose barriers. This reduces the number of destinations that can be reached by walking a given distance. To take this into account, I’ve developed a statistic called “Percent Ideal Walkshed” to measure the fraction of locations within a half mile of the center of a block group that are actually within a half-mile walk of it.

Calculating percent ideal walkshed as a measure of walkable density

To calculate percent ideal walkshed, I began by finding the center of every Census block group in a metropolitan area.  (Here, I’ve selected four block groups in downtown Denver, outlined in green, to use as examples.) If the street grid didn’t constrain where one could walk, everywhere in the beige half-mile-radius circles would be within a half-mile walk of the centers of the block groups.

However, since pedestrians can’t fly, they do have to walk along the street grid.  The streets within a half-mile walk of the centers of the block groups are highlighted in red.  In an area with a good street grid, like the left-most block group, this “walkshed” forms a diamond shape with its tips nearly at the edge of the half-mile circle.  If the street grid is broken up by freeways, railroad tracks, or other obstacles, though, the walkshed may cover much less of the circle.

I then created buffers around these streets to convert the one-dimensional walkshed of streets into a two-dimensional area—here shown in brown—that is a half-mile walk from the center of the block group.  The better-connected the street grid is in the region around a block group, the closer the area of this brown buffer will be to that of the beige half-mile circle.

Finally, I color-coded each block group by the ratio of the area of the brown walkshed buffer to the area of the beige half-mile circle.  I call this value “Percent Ideal Walkshed,” because it measures the fraction of area within a half mile of the center of the block group—the ideal walkshed—that is within the true walkshed of the center of the block group.
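Rowlands’ actual walksheds were built with QGIS’s service-area tool and processed in R with the sf package (see the technical notes below). Purely to illustrate the logic of the ratio, here is a rough Python sketch, assuming you already have the street network as a graph whose edges carry lengths (in meters) and projected shapely geometries:

```python
import networkx as nx
from shapely.geometry import Point
from shapely.ops import unary_union

def percent_ideal_walkshed(G, origin_node, origin_point, walk_dist=800.0, buffer_m=80.0):
    """Rough sketch: fraction of the ideal (straight-line) walking circle that is
    covered by the street-network walkshed around origin_node.

    Assumes G is a networkx graph whose edges carry a 'length' attribute in meters
    and a 'geometry' attribute (a shapely LineString), and that origin_point is a
    shapely Point in the same projected, meter-based coordinate system.
    """
    # Nodes reachable within the walking distance along the network
    reachable = nx.single_source_dijkstra_path_length(
        G, origin_node, cutoff=walk_dist, weight="length")

    # Keep edges whose endpoints are both reachable (a simplification: the QGIS
    # service-area tool also clips edges that are only partially reachable)
    edges = [data["geometry"] for u, v, data in G.edges(data=True)
             if u in reachable and v in reachable]

    walkshed = unary_union(edges).buffer(buffer_m)  # the buffered ("brown") walkshed area
    ideal = origin_point.buffer(walk_dist)          # the straight-line ("beige") circle
    return walkshed.intersection(ideal).area / ideal.area
```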

When this analysis is performed over the whole metropolitan area, it shows that the areas with the most-ideal walksheds are mostly concentrated in the core—which one would expect to be densest—but that clusters with high percent ideal walkshed values are found elsewhere, too, indicating suburban areas with good, well-connected local street grids.  We’ve mapped walkable density in 17 of the nation’s largest metropolitan areas; you can see those maps here.

One very consistent pattern in these maps is that urban cores have much higher values of percent ideal walkshed than their suburbs do.  This means that core areas—which tend to be the densest parts of a metro area—have walkable densities closer to their conventionally-measured densities than suburbs do: conventional density measures understate the walkable density of urban areas relative to suburban areas.

Metro areas ranked by walkable density

Maps of percent ideal walkshed give a convenient way to tell how much the structure of the street network reduces effective “lived” density on a local scale. However, they don’t give a very useful sense of how good or bad a metropolitan area’s street network is overall. In calculating an average value of percent ideal walkshed, we run into the same issue we saw with average population density: what really matters is the quality of the walkshed where the average person lives.

To determine the quality of the walkshed experienced by the average resident, I calculated the population-weighted average population densities—the average population densities of block groups, weighted by the populations of the block group—and walkshed-adjusted population-weighted average population densities—the same calculation, but with each block group’s density multiplied by its percent ideal walkshed—for each of the 25 largest metropolitan statistical areas (MSAs) in the United States.


In addition to the population-weighted figures, I performed the same calculations with job-weighted job density, finding the connectedness of the street network where the average job in a metro area is located. Unsurprisingly, since jobs tend to be more concentrated in downtowns with good street grids than population is, these values are generally higher than the population-weighted figures.

The ratio of the walkshed-adjusted to non-walkshed-adjusted density for a metro area is equivalent to the population-weighted average of the percent ideal walkshed for the metro area. Unsurprisingly, the metro areas with the highest ratios are New York, Philadelphia, Chicago, and San Francisco: all cities with dense urban cores and good street grids. Next on the list are less-dense metro areas in the Midwest and West whose regular grid networks extend even into their low-density suburbs, as well as Boston, which has a dense urban core with a well-connected, but non-grid, street network. The cities that do the worst by this measure—Charlotte, Orlando, and Atlanta—are all low-density cities in the Southeast that lack significant street grid networks.


The pattern here is fairly similar, but there are some differences.  Washington does significantly better in the job-weighted rankings, presumably because its urban core, where most jobs are, has a regular grid, while most of the MSA’s residents live in post-World War II suburbs with much less well-connected street grids.  (Unlike other major Northeastern and Midwestern cities, most of the Washington MSA’s population growth has happened in the automobile era, due to the expansion of the Federal government during the New Deal and World War II.)

Notably, only two metro areas have worse-connected street grids where the average job is located than where the average resident is located: Detroit and Riverside.  In Detroit, this is likely because the urban core, which has a regular street grid, has lost the vast majority of its jobs, while employment is decentralized to suburban locations.  While the urban core has also lost significant population, its job loss has been worse, and many of the residential suburban areas still have regular street grids.  

In Riverside, the effect is more likely related to the fact that the Riverside MSA is, to a significant degree, a suburb of the Los Angeles MSA. The Riverside metro area has a disproportionately large, low-density distribution industry and a large share of retail and other jobs that serve residents; such jobs are more likely to be found in car-oriented suburban areas with disconnected street grids (for example, near freeway interchanges) than are jobs in metro areas with strong downtown commercial cores.

Both the population and the job density data tell roughly the same story, however.  Just as conventional density measures understate the walkable density of urban cores compared to suburbs, they understate the walkable density of older traditional metro areas compared to newer sunbelt ones.

Appendix: Technical Notes

Walksheds were calculated with the “service area” tool in QGIS 3; all other data processing was done in R, primarily using the sf package for data analysis and the tigris, tidycensus, and lehdr packages to download data.  All calculations were done using the local state plane that contained the metro area (or the CBD of the metro area, for metro areas that extend across state plane boundaries) as calculations in a US-wide planar projection caused too much distortion to get accurate and consistent area values.

Walksheds were calculated from the point on the street network nearest to the centroid of the land portion of each block group.  All walksheds were based on a travel distance of 800 m and a 10-m tolerance for geometry gaps. Although this is narrower than the width of many roadways, it was found that using a larger tolerance caused issues in certain areas—particularly in Portland—where block sizes were very small and a larger tolerance could cause the QGIS service area tool to randomly jump across blocks.  An 80-m buffer around the walkshed line features was used to calculate the area of the walkshed.

Data Sources

My walksheds were calculated using US Census 2018 TIGER/Line street networks with roads marked as freeways and freeway ramps removed.  Unfortunately, this data does not include information on the presence or absence of sidewalks.

Population and job data were also taken from US Census products: the 2018 5-year American Community Survey estimates for population and the 2015 Longitudinal Employer-Household Dynamics (LEHD) data for jobs.  This jobs data includes private, state government, and Federal employees, but excludes active-duty military personnel and some civilian Federal employees in defense and national security jobs.

Formulas

Population-weighted average population density was calculated with the formula

$$\bar{d}_{\text{pop}} = \frac{\sum_{k} p_k d_k}{P_{\text{tot}}}$$

where $p_k$ and $d_k$ are the population and population density of the $k$th block group and $P_{\text{tot}}$ is the total population of the metro area.

Walkshed-adjusted population-weighted average population density was calculated with the formula

$$\bar{d}_{\text{walk}} = \frac{\sum_{k} w_k p_k d_k}{P_{\text{tot}}}$$

where $w_k$, $p_k$, and $d_k$ are the percent ideal walkshed, population, and population density of the $k$th block group and $P_{\text{tot}}$ is the total population of the metro area.  The formulas for job-weighted average job density and walkshed-adjusted job-weighted average job density were the same, except that job densities and numbers of jobs were used in place of population densities and populations.

Local flavor: Cities with the most independent restaurants

Which US cities have the most independent restaurants?

One of the chief advantages of cities is the range of consumption choices they afford to their residents. In general, larger cities offer more choices than smaller ones.

One of the things that makes a city special and distinctive is its food and culture.  Too much of modern life is indistinguishable from place to place: one McDonald’s or Starbucks or Applebee’s offers, by design, virtually the same experience as every other.  Whether it’s in Tacoma or Tucson or Tupelo or Tampa, a chain is a chain is a chain is a chain.  One of the things that makes a place special is its own local cuisines and locally owned restaurants, which are unlike those you find anywhere else. These distinctive differences are economically important; as Jane Jacobs once wrote: “The greatest asset a city can have is something that is different from every other place.”

We love our independent restaurants

The Internet has enabled and encouraged us to share our opinions about all kinds of businesses (and other experiences), and that provides a stronger statistical means of measuring consumer sentiment about chains and independent businesses. Yelp’s database of millions of restaurant reviews nationally shows that consumers rate independent restaurants more highly than chains, and the gap has been growing.

The divergence in ratings between independent and chain restaurants, by market, from Yelp.

Over the past five years, aggregate ratings for chain restaurants (the blue line on the chart) have fallen, while ratings for independent restaurants (the red line) have risen.  There’s no market in the nation where independent restaurants don’t command higher ratings, on average, than their chain counterparts. Yelp quotes industry expert Dave Henkes, Senior Principal at the food industry research firm Technomic:

“Consumers are embracing local in all aspects of their lives, and this includes the restaurants they visit,” Henkes said. “Consumers tell us that smaller, independent restaurants are more authentic, offer better and more unique menu items, align more closely to consumer needs, and provide better value than their chain counterparts.”

But in addition to sheer numbers, the quality of choices matters as well. When it comes to dining, roughly a quarter of the restaurants in most large US metro areas are chain restaurants, according to data compiled by Yelp.

Metros with the most independents

But the market share of chain restaurants varies widely across US metro areas. Chain restaurants are about three times more prevalent in Louisville, Kentucky (where they make up more than 35 percent of all restaurants, according to Yelp) than they are in New York, where only about 13 percent of all restaurants are part of chains.

In general, big cities and cities in the Northeast and West Coast tend to have the highest fraction of independent restaurants.  The top ten ranked cities include New York, Boston, Providence and Buffalo, and San Francisco, Seattle and Portland. Conversely, cities in the heartland and the sunbelt tend to have a higher fraction of chain restaurants, according to the Yelp data. The top ten for chain market share includes Orlando, Dallas, Nashville, and Cincinnati.

Fewer chains, more choices

One indicator we’ve long looked at for understanding a city’s culinary standing is the number of restaurants per capita. Again, there’s wide variation across cities: New York has about 22 restaurants per 10,000 population; at the other end of the spectrum, Phoenix, Tucson and Memphis have only about 14 restaurants per 10,000 population.

These two indicators–independent market share and restaurants per capita–are related. Places that tend to have a higher fraction of independent restaurants tend to have more restaurants per capita. We’ve illustrated the relationship here. The vertical axis shows the number of restaurants per capita (estimated from Census business patterns data) and the horizontal axis shows the share of chain restaurants in each metro area. (We’ve reversed the values in the horizontal axis so that as you move from left to right, the share of chains is decreasing and the share of independent restaurants is increasing). Each dot corresponds to the values for a single metro area.

In general, this chart shows that as the independent restaurant share increases (and the chain percentage decreases), the number of restaurants per capita increases.  In the upper right-hand corner of our chart we find cities with lots of restaurants relative to their population and a very high share of independents (New York, San Francisco, Portland, Seattle, Providence).  In the bottom left, we find places with relatively few restaurants per capita and much higher fractions of chain restaurants (Cincinnati, Louisville, Memphis, Phoenix).  There are a couple of interesting outliers: Milwaukee has lots of independents, but fewer restaurants per capita than one would expect, while Nashville is the opposite: more restaurants per capita but relatively fewer independent restaurants than one would expect.  For the most part, though, the regression line plotted on the chart does a pretty good job of illustrating the strong connection between the number of restaurants in a metro area and the share that are independent.

It’s unclear what the nature of the causal relationship is here. It could be that food-oriented metro areas have both more restaurants and, therefore, more opportunities for independent restaurants. The diversity of tastes in a city like New York, for example, supports more restaurants per capita, and that, in turn, creates more space for independents. It could also be that independent restaurants are generally smaller than chains, and so metros with more independents have more restaurants.

Our data on chain and independent restaurants were graciously provided by Yelp’s Carl Bialik. They provided data for most, but not all large metropolitan areas; data for a handful of large metros, including Detroit, Indianapolis, Kansas City and Oklahoma City are not part of our sample.

We really like the clever “Eat Local” image available on t-shirts from begoodmonster.com.  City Observatory is not affiliated with, and frank admiration aside, does not have any relationship with begoodmonster.

 

How driving ruins local flavor

Car-dependent metros have fewer independent restaurants

Chain restaurants and cars go together

Yesterday, we used data compiled by Yelp on chain and independent restaurants to compute the market share of chains in the nation’s largest metro areas. Overall, about a quarter of all restaurants are part of a chain, but that fraction varies widely across metro areas. We think that a high market share of independent restaurants is likely a good indicator of a thriving and diverse food scene, and is a strong amenity for many metro areas.

We also noted that there’s a strong correlation between places that have more restaurants per capita and places that have a larger market share of independent restaurants. Consumers in metros with a higher share of independent restaurants have both more varied dining choices and more total dining choices, on average, than consumers living in metros with a higher share of chain restaurants.

We’re not sure what the driving factors are that contribute to the more robust health of independent restaurants relative to chains in some metros, but we had a hunch that we’re investigating today.

Metro areas vary widely in their level of car dependence, something that we’ve explored in a number of ways at City Observatory (through our report on the Sprawl Tax) for example. One key marker of a metro area’s car dependence is the average number of miles driven per person per day. In the typical large metro area, that figure is about 25 miles per person per day, but is far lower in compact, transit served metros, and noticeably higher in sprawling, car-dependent metros.

So we took US DOT estimates of the number of miles driven per person per day in large metro areas and compared it with Yelp’s data on the market share of chain restaurants in those same metros.

The data show a strong positive relationship between miles driven and chain restaurant market share. Metros where people drive more have a higher fraction of chain restaurants. For example, New York, Portland and New Orleans all have a very low share of chain restaurants (less than 20 percent), and also have very low rates of driving per capita. Places where people drive a lot (Atlanta, Charlotte and Orlando) tend to have very high proportions of chain restaurants (more than 30 percent).  Overall, each additional mile driven per day is associated with a 0.6 percentage point increase in the share of chain restaurants in a metropolitan area.
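A coefficient like that comes from a simple bivariate regression of chain share on daily miles driven across metro areas. Here is a minimal sketch of how such a slope is estimated; the data points are made up (chosen only so the fitted slope lands near the 0.6 figure cited above) and stand in for the actual Yelp and US DOT observations:

```python
from scipy import stats

# Hypothetical metro-level observations (stand-ins for the Yelp / US DOT data):
# daily vehicle miles traveled per person, and chain share of restaurants (percent)
vmt_per_capita = [17, 20, 23, 25, 27, 30, 33]
chain_share = [18, 20, 22, 23, 24, 26, 28]

fit = stats.linregress(vmt_per_capita, chain_share)
print(f"slope: {fit.slope:.2f} percentage points of chain share per daily mile driven")
print(f"r-squared: {fit.rvalue ** 2:.2f}")
```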

We can conjecture why this might be. If people travel more by car, then it may be more important for restaurants to be visible and accessible by car, whether located along highways or in strip malls. National brands and advertising may be relatively more important to gaining consumer awareness than they are in cities where people drive less. If people spend less time driving because their cities are more compact, or more accessible by transit, biking and walking, that may provide more niches for smaller-scale independent restaurants, compared to formula-driven chains. Traffic volumes on streets and arterials are often location criteria for national chains: unless a site has some threshold number of cars passing per day, they won’t consider opening a restaurant. That kind of rationale may lead to more chain restaurants in places where people drive more.

Cars seem to be bad for independent restaurants (Champaign, IL, WCIA-TV)

Car travel poses an information problem for people choosing restaurants.  This is clear when we think about our behavior traveling on a road trip, versus going out to dinner in our own neighborhood.  Behind the wheel of a car, the only information you have about restaurants in a town (unless you’ve done a Yelp or Google search in advance) comes from a quick glance at a restaurant’s sign (or, if you’re on the freeway, a tight cluster of tiny logos).  Chances are you’re much more likely to visit a chain restaurant in an unfamiliar place because you know what you’re going to get, even if it isn’t great. At home, you’ve got a lot of local knowledge (yours, and a network of friends) to tell you what’s good and what isn’t, and it’s less likely that you’ll choose a restaurant because of its signs or advertising. What is true of us as individuals on the road is likely also true of our behavior across communities: people who drive a lot probably choose their restaurants more based on what they can see through the windshield than people in places where more people are walking, a dynamic that works to the advantage of national chains as opposed to local independents.

This could also be more evidence for our Green Dividend:  People who drive less spend less money on cars and gasoline, and have more money to spend on food, including supporting their local independent restaurants.

Regardless of the exact reasons, the strength of this finding is striking. It suggests that if you want to have more consumer choice and more independent entrepreneurship in your local restaurant scene, you want to have a less car-dependent transportation system. Our auto dependency may be one of the things fueling the banal sameness typically associated with chains.

Lying about safety to sell freeway widening

ODOT’s lies about safety at the Rose Quarter are so blatant they can be seen 400 miles away.

Freeway widening isn’t about deaths or injuries, but about “motorist inconvenience,” according to this safety expert, making this $800 million project an egregious waste of funds

Traffic safety is a real issue, and by any objective measure, Oregon is failing badly. Between 2013 and 2016 traffic deaths in the state increased by more than 50 percent.  Car crashes kill hundreds of Oregonians each year.

Those grim statistics rightly make safety a primary concern in transportation planning. But they’ve also led the Oregon Department of Transportation to hijack that concern, perverting it to support spending hundreds of millions of dollars on a project that ultimately has nothing to do with safety.

This case is so egregious that it’s visible a full state away, in Boise, Idaho.

Don Kostelec is one of the sharpest voices for transportation safety in the US. A traffic engineer, he consistently points out the flaws in our current system of designing roads to optimize vehicle speeds and throughput. He researches, practices and writes tirelessly on the subject; his “Twelve Days of Safety Myths,” published by Strong Towns, is an insightful exposure of the biases of many current industry practices.

Last March, Kostelec was an expert presenter for Boise’s citizen planning academy, a training course that helps citizens become more knowledgeable about a range of land use, transportation and housing issues. In his presentation, “Is Congestion Really the Problem?” Kostelec singled out Portland’s Rose Quarter Freeway project as a classic example of a state Department of Transportation using lies about safety to sell a project designed to make cars move faster.

As we’ve noted at City Observatory, the Oregon Department of Transportation has loudly (and dishonestly) claimed that I-5 at the Rose Quarter is the state’s “#1 crash location.”  As Kostelec notes, that sounds pretty scary to the public and to most officials:

The Oregon DOT . . . says that part of the project need to spend $450 million to add some amplified on and off ramps  is due to safety issues on I-5, and it says that it has the highest crash rate in the state. And you’re a policy-maker, you’re a legislator, you’re a planning and zoning commissioner, and you hear the state DOT saying we have the highest crash rate on the state system–we have to do that, right? We’ve got to do something?

Well, you’ve got to go to the Appendix . . .

And Kostelec takes his audience to the Appendix, which looks like this:

A slide from Don Kostelec’s March 2019 Presentation.

As Kostelec points out, according to ODOT’s own data for the most recent five-year period, none of the crashes in the project area have been fatal, or even serious.  Nearly all of the crashes are non-injury fender benders.

Kostelec digs deeper:  Despite ODOT’s claims that peak hour traffic is slowed by these fender-benders, ODOT’s data shows that most crashes actually happen during non-peak hours.

“. . . when you look at when crashes are occurring, for the most part, they’re occurring mid-day,  not at the time of day that the traffic models and things are trying to address.”

Because most of the crashes happen when the freeway is not jammed, widening the freeway is unlikely to do anything to reduce the number of crashes. If anything, as Kostelec argues, faster traffic is likely to increase the severity of the crashes that do occur.

The key takeaway here is that we ought to care about lives lost and injuries sustained, not the raw number of crashes. As a result, the I-5 Rose Quarter project isn’t about safety; it’s really about motorist convenience. Kostelec:

“What are we trying to do here?  We’re trying to prevent a bunch of no-injury crashes.  Now nobody wants to be in a fender bender.  I have a minor ding.  But that’s what I call a motorists inconvenience crash . . . and we’re proposing to put $500 million into this . . . good for you Oregon DOT.  You’ve done a hell of a job with safety, . . .”

Designing our roads–and indeed, re-engineering our entire urban environment–for the speed and convenience of drivers is what has produced our current lethal transportation system. It’s shocking and perverse that highway engineers at the Oregon Department of Transportation can use the mantle of “safety” to peddle an $800 million project that addresses no real safety need, at the same time the agency routinely pleads poverty when asked to fix the many multi-lane arterial streets in the Portland area that routinely kill and maim our citizens.

Editor’s note:  $450 million to $500 million was the range of ODOT estimates of project costs when Kostelec gave his talk in March 2019; to almost no one’s surprise, that has gone up, and ODOT now estimates the project could cost between $700 million and $800 million. Buildable freeway covers that some community leaders say are essential to the project could cost another $200 to $400 million.

 

 

Climate failure and denial at the Oregon Department of Transportation

Oregon is utterly failing to reduce transportation greenhouse gas emissions

Instead of being down 10 percent by 2020, transportation greenhouse gas emissions are up more than 20 percent

Oregon will miss its 2020 GHG goal by 6.5 million tons per year

ODOT’s so-called “strategy” is really technocratic climate denial

In 2007, Oregon boldly adopted the policy of reducing its greenhouse gas emissions by 75 percent from their 1990 levels by 2050. The Legislature also set an interim goal of reducing greenhouse gases by 10 percent by 2020.  Since transportation emissions are the single largest source of greenhouse gases in the state, much of the responsibility for meeting this goal necessarily depends on transportation policy.

The 2010 Legislature directed the Oregon Department of Transportation to fashion a strategy for meeting that goal.  It labored, and came forth with a 130-page document in 2012, called the STS, or Statewide Transportation Strategy. That document was presented to–but significantly, not adopted by–the Oregon Transportation Commission. In 2018, the department published an update report presenting some emissions data and describing implementation efforts. On paper, at least, we’re led to believe that ODOT has an ongoing plan for addressing climate change. But independent data on emissions tell a different story.

The tale of the tape:  Rising transportation greenhouse gases

The tragic fact is that the state’s own greenhouse gas monitoring shows that the strategy is failing–utterly.  In order to meet the state’s goal of a 75 percent reduction from 1990 levels, greenhouse gases from transportation–which were about 25 million tons in 1990–would have to fall to about 5 million tons per year by 2050.

In the past four years, according to data published by the State Department of Environmental Quality, greenhouse gases from transportation have increased by more than 20 percent statewide, from 21.0 million tons per year to 25.3 million tons per year.  (Transportation emissions include sources other than cars, such as trucks, trains and aircraft, but automobiles and light trucks are the largest component of transportation emissions).

ODOT’s 2018 monitoring report concedes obliquely that it won’t come close to meeting the state’s climate reduction goals, but claims (falsely) that the state is making progress. As the report says:

The chart below (Figure 2) shows an estimate of GHG emissions projected from current plans and trends, compared to the STS vision. The chart shows an uptick in emissions following the recession and projected reductions in the long term. In the long term it is assumed that vehicles get more efficient, which helps to bring the curve down. While the overall trend line is moving in the right direction, it falls short of the levels called for in the STS vision.

ODOT STS Monitoring report (emphasis added).

One key fact is missing from the 2018 monitoring report: the actual volume of greenhouse gases emitted by transportation.  Figure 2 shows, very crudely and without much detail, an increase in greenhouse gases from 2010 to some unspecified year between 2010 and 2020: the “plans and trends” line (light green) rises, while the “STS vision” line (dark green) declines.

Transportation GHG:  Going rapidly in the wrong direction

Let’s take a close look at the actual annual data for Oregon transportation greenhouse gas emissions, as tabulated by the Oregon Department of Environmental Quality.  They show a steady year-by-year increase in transportation greenhouse gases after 2012.

A more accurate view of recent transportation greenhouse gas trends in Oregon. Data from Oregon Department of Environmental Quality.

Between 2013 and 2017, according to DEQ estimates, transportation GHGs in Oregon increased by more than 20 percent.  Contrary to what ODOT describes as an “uptick” and an “overall trend line moving in the right direction,” Oregon is going rapidly in the wrong direction.

How much so?

Oregon will miss, by a wide margin, its goal of reducing greenhouse gases 10 percent below 1990 levels by 2020, at least as applied to transportation.  Instead of declining by 10 percent, transportation greenhouse gases will be up more than 20 percent (barring a dramatic reversal of the last four years’ trend).  And, if the current trend continues, Oregon’s transportation greenhouse gas emissions will be 25 percent higher than in 1990, missing the goal by 35 percentage points.

Since the STS was written in 2012, Oregon has gone dramatically in the wrong direction.  Focusing on percentage point changes tends to obscure the enormous size of the emissions increase:  Oregon’s transportation greenhouse gas emissions have increased by 4 million tons per year.  If emissions manage to stay flat from 2017 through 2020, Oregon will miss its greenhouse gas reduction target by 6.5 million tons per year.

The problem looks even more daunting going forward.  The following chart shows the level of transportation greenhouse gases in Oregon in 1990, 2013, and 2017, and the level of greenhouse gases implied by the state’s goal of reducing greenhouse gas emissions by 75 percent from 1990 levels.  Through 2013, we had made essentially no net progress compared to 1990.  Over the next four years, as noted above, we went dramatically in the wrong direction.  Today, to meet our 2050 goal, we have to reduce greenhouse gases from transportation by 20 million tons per year, from 25 million today, to just 5 million in 2050.

Going in the wrong direction makes the task going forward vastly harder.  In 2007, when the greenhouse gas policy was adopted, Oregon had 43 years to manage a 15 million ton reduction in transportation greenhouse gases; the state had to figure out how to reduce transportation emissions on an annual basis by about 350,000 tons per year each year to meet its 2050 goal. In 2020, with average emissions of about 25 million tons, we have just 30 years to make a now larger 20 million ton reduction, which works out to reducing transportation GHGs by 660,000 tons per year each year.  The road has become almost twice as steep, and becomes more so each year we fail to make progress.
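The arithmetic behind those required annual rates is easy to check directly:

```python
# Required average annual reductions implied by the figures above.

# As of 2007: a 15 million ton cut needed over the 43 years to 2050.
print(15_000_000 / (2050 - 2007))   # about 349,000 tons per year (the ~350,000 cited above)

# As of 2020: a 20 million ton cut (from roughly 25 million down to 5 million)
# needed over the remaining 30 years.
print(20_000_000 / (2050 - 2020))   # about 667,000 tons per year (the ~660,000 cited above)
```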

While ODOT acknowledges it may reduce transportation greenhouse gases by only a fraction of the amount mandated by state law, its report still characterizes its efforts as “on track” in its conclusion:

With current efforts and our newer plans, Oregon is on track to reduce GHG emissions by 15-20 percent below 1990 levels by 2050, which falls far short of the STS vision.

The only sense in which this is “on-track” is that it is on track to fail. And this claim of a 15-20 percent reduction is based on no evidence showing how ODOT expects to reverse the more than 20 percent increase in transportation greenhouse gases in the past four years, much less generate additional, continued reductions in emissions.

In theory, the huge miss on the 2020 goal and the big increase in transportation greenhouse gases in the past four years ought to prompt additional, much more aggressive action.  But that doesn’t appear in the 2018 update.

The plan is failing, and ODOT is doing nothing to respond

Even though the strategy calls for updates and bold action, the latest report, while conceding that the state’s efforts are inadequate, proposes no new actions.  The original STS report promised regular adjustments, based on performance:

. . . the STS will be monitored and adjusted over time, as needed. While there are challenges and unknowns ahead that will require continuous adaptation and development of additional creative solutions, the groundwork established in the STS provides a firm base from which to build. Strategies already underway will receive increased support and new, effective and aggressive strategies that require enhanced levels of collaboration between the private and public sectors and across all transportation markets will be implemented.

(Oregon Statewide Transportation Strategy, 2012, page 108)

Significantly, while the STS was submitted to the Oregon Transportation Commission, it was “accepted” by that body, rather than adopted as a statement of policy. As a matter of law, the STS has no regulatory force, nor has the commission done anything other than say that it has read it. The STS is more an implausible conjecture than an actual plan:  a laundry list of largely speculative actions, all of which would need to be implemented by someone, but with no timetable or accountability for their actual implementation.

Even though it hasn’t been adopted and has no force of law, the STS functions as a kind of talismanic fig leaf for ODOT when it comes to climate concerns.  Pressed by activists to acknowledge the seriousness of the climate crisis, OTC commissioners and department managers can point to the STS as their “plan” for responding.

The STS isn’t so much a plan as a carefully rehearsed list of excuses for failure

The trouble is, the plan does next to nothing to reduce greenhouse gas emissions.

The bulk of the STS consists of projections of the age, weight and fuel efficiency of the future vehicle fleet, and the carbon content of fuels. ODOT counted on a combination of tough federal fuel efficiency standards, a decline in consumer purchases of trucks and SUVs, growing adoption of electric vehicles, faster replacement of older, dirtier cars, and reductions in the carbon content of fossil fuels.  As the 2018 STS update makes clear, every one of these projections has been wrong: fuel efficiency standards have been gutted, truck and SUV sales are nearly double those of cars, consumers are keeping cars longer than ever, electric vehicle adoption is far slower than forecast, and fuels are dirtier.

Little surprise, then, that emissions have increased rapidly–ODOT isn’t undertaking any actions that would plausibly lead to their reduction. Its few token efforts include a handful of vehicle charging stations and the OReGO road pricing demonstration, which after four years of operation has only 611 of the 5,000 users it was seeking; most of those who signed up for the program have dropped out.  And, perversely, OReGO’s flat per-mile fee imposes higher costs on more fuel-efficient, low-emission vehicles. The STS also makes the unsubstantiated claim that so-called “intelligent transportation systems” (electronic road signs and ramp meters) will produce reductions in greenhouse gases (something ODOT has been caught lying about before).

In effect, the STS isn’t a “strategy” at all:  it’s an elaborate ruse, designed to deflect responsibility and create an illusion of action while allowing the agency to continue business as usual.

ODOT’s advisory group includes climate deniers

It shouldn’t be a surprise that this plan is so feeble.  In constructing the original plan, the Transportation Department relied heavily on the advice of those who either opposed taking strong action to reduce transportation emissions or simply didn’t believe in climate change. As the update report makes clear, the Oregon Department of Transportation selected a group of “stakeholders” that includes climate denialists.  As the ODOT 2018 STS Monitoring Report says:

Stakeholder opinions varied from adamant support of the effort to fundamental disbelief in climate change and the need to reduce GHG emissions.

Oregon STS Monitoring Report, 2018, page 1

Stakeholders chosen by ODOT to develop the strategy included the American Automobile Association, the Oregon Trucking Association, and other groups with an interest in road-building. In short, the state’s climate reduction strategy was authored by a group that favors meeting the statutorily prescribed objective only if it’s not deemed too inconvenient. As the strategy document concedes:

The stakeholder groups guiding the process recognized the need to understand impacts to other outcomes beyond GHG emissions, such as health and equity. For example, changes to household costs was one of the primary outcomes the stakeholder groups looked to in assessing how hard to push on certain strategies, such as pricing.

And here we have the classic double standard:  The Oregon Transportation Commission effectively tells climate activists that it has utterly no discretion to consider the climate impacts of its proposed $800 million freeway widening project, because the Legislature has instructed it to “git ‘er done” and build the project.  But when it comes to a definitive, adopted requirement to reduce greenhouse gas emissions by 75 percent, ODOT defers to the undocumented opinions of a few hand-picked stakeholders who think (without evidence) that a strategy may be costly or inconvenient. Moreover, the claims that it would be too costly or inequitable to achieve greenhouse gas reductions aren’t documented anywhere in the STS report.

If anything, reducing driving would reduce the costs incurred by Oregon households.  We have estimated that the lower rate of driving in the Portland metropolitan area (compared to large metropolitan areas in the US) saves the region’s consumers more than $1 billion per year in spending on vehicles and fuel, compared to what they would pay if Portland were as sprawling and auto-dependent as the typical metro area. Further reductions in vehicle miles traveled would lower household costs still more.

With each passing day, the enormity of the climate crisis becomes more clear. In Oregon–as in most of the US–the biggest source of greenhouse gas emissions is transportation. In theory, Oregon’s statutory commitment to reduce greenhouse gases should motivate real action, but as state data show, we’re rapidly heading in the wrong direction. The ODOT STS has failed utterly to produce any progress–much stronger steps will be needed if Oregon is to do anything meaningful to address the climate crisis.

 

Memo to the Oregon Transportation Commission: Don’t Dodge

Climate change? Not our job.  We’re just following orders.

The Oregon Transportation Commission is on the firing line for its plans to build an $800 million I-5 Rose Quarter freeway widening project in Northeast Portland. There’s been a tremendous outpouring of community opposition to the project:  more than 90 percent of the 2,000 comments on the project’s Environmental Assessment have been in opposition (a fact conveniently omitted from ODOT staff’s recitation of its outreach process).

The commission has lately had to endure an onslaught of commentary from young climate activists who–quite correctly–recognize that freeway widening is exactly the opposite of the kind of transportation investment we need to make if we’re to do anything to reduce greenhouse gas emissions.  Oregon’s global warming commission has pointed out that transportation emissions are increasing, and are now the largest single source of greenhouse gases in the state.  And it’s a demonstrated fact that wider freeways induce more traffic, longer trips and more emissions, increasing greenhouse gases.  (It’s also worth noting that in just the past four years, per capita greenhouse gas emissions from transportation in the Portland area have increased by 1,000 pounds per person–so whatever we’re doing now is taking us rapidly in the wrong direction.)  The people testifying at the Commission’s December 2018 and January 2019 meetings have offered passionate, informed testimony on the folly of the freeway widening project.

The commission has sat, mostly silently, through these pointed critiques.

In the end, after listening to the testimony, they shrugged, and basically said:  “It’s not our decision–the Legislature told us to build this road, so we will.”  Here’s OTC Chairman Bob Van Brocklin:

We have been mandated by the Legislature to design and build a project, so these are decisions the Legislature and we are not in a position to override the Legislature’s decision on that.  So we have a Legislative mandate to design and build this project. We now, today, are forwarding to the Legislature, at their request, in the bill, HB 2017,  a Cost to Complete estimate that is substantially higher than the amount of money they have dedicated to the project . . .

We’re just following orders.

The claim that the decision is out of their hands rhetorically allows the OTC to absolve itself of responsibility for the climate implications of its actions.  The commission simply isn’t going to listen to climate concerns, or other objections, because (it says) the Legislature has already decided to build the project.  The commissioners are in a tough spot: they lack a plausible direct response to either the emotional or rational aspects of the climate critique of the freeway widening project. So instead of answering these questions, the plan is to dodge them, by claiming the decision is out of their hands.  In their view, climate change concerns, no matter how serious, simply aren’t an admissible argument.

Time and again, through history, this “helpless subordinate” act has been proffered by people looking to avoid taking responsibility for the moral implications of their actions.

If the transportation commission were a row of five junior clerks, they might have a plausible argument. But they’re not.  In Oregon’s citizen commission form of government, Transportation Commissioners are supposed to represent the public’s interest.

Nothing in the statute creating the commission or the law authorizing funding for this particular project prevents the Commission from reporting back to the Legislature that the project is no longer in the public’s interest, for any one of many reasons: it’s going to cost vastly more than the Legislature was told, it’s going to further damage the neighborhood severed by the freeway decades ago, it’s going to unacceptably worsen pollution, and it’s going to undercut the state’s adopted legal mandate to reduce greenhouse gas emissions.  Hell, if they were being honest, they’d also tell the Legislature that the freeway widening project isn’t going to work to reduce congestion either.

That’s what commissions are for:  to dig deeper into the details than legislators have time to do in the press of a legislative session. The commission can bring additional information and advice back to the Legislature. Arguably this is exactly what was anticipated in the passage of House Bill 2017, which specifically mandated a legislative check-in on complicated, controversial and expensive mega-projects like the Rose Quarter freeway widening.  In addition, there’s another legislative mandate, one that predates the authorization for this project, which calls for the state to reduce its greenhouse gas emissions by 80 percent by 2050; that counts as legislative direction as well.  And when legislative mandates are in conflict, and when new information becomes available, it’s incumbent on commissioners to advise the Legislature accordingly, not to blindly follow outdated, unwise and contradictory orders.

Rather than plugging their ears and ignoring the outpouring of citizen opposition, and pretending to be helpless minions of the Legislature, the citizen commissioners of the Oregon Transportation Commission should do their jobs, and reflect back to the Legislature the serious problems that have been identified with this project.

Bags, bottles and cans: Pricing works

Oregon’s new mandatory bag fee harnesses market forces to promote environmental objectives

Now do it for greenhouse gases

On January 1, a new law went into effect in Oregon, which mostly bans single use plastic grocery bags, and requires grocery stores to charge a 5 cent fee for every paper bag they provide to customers. The law harnesses economic incentives to encourage consumers to remember to bring their own re-usable shopping bags to the store.

Oregon actually has long experience with this kind of smart economic nudge. The bag fee comes on the heels–well, a half-century after–Oregon’s enactment of the nation’s first bottle bill, in 1971. Requiring a mandatory deposit on beer and soda bottles and cans produced a dramatic reduction in litter, and led to widespread recycling. The state has a smart, performance-based system for setting deposits:  when recycling rates failed to meet established goals, the bottle deposit was raised from a nickel to a dime.

There are a lot of reasons to dislike single-use grocery bags: they are an obvious form of waste, and the bags, especially the plastic ones, are a highly annoying and visible source of litter. Plastic bags also jam up the sorting machines in recycling plants; Portland’s Metro asked consumers to put bags in the trash, rather than recycle them.  If the experience of other places that have implemented bag fees (like Chicago and London) is any indication, Oregon can expect a pretty dramatic reduction in the number of single-use bags shoppers take. England’s 5 pence fee on plastic bags reduced the number of bags by almost 90 percent in a few years.   It’s a small but potent example of how getting prices right can quickly, and mostly painlessly, help us achieve an environmental objective.  It’s a parable to embrace.

Tokenism and Cognitive Bias

But while the bag fee is a homely, teachable moment for environmental economics, it also carries a deeper and darker message about the challenges we face in tackling our larger environmental problems.  The key question is:  if we can so easily adopt economic incentives for paper bags, why can’t we apply the same tool to the really big problems, like climate change?

The answer has a lot to do with our cognitive biases:  we are aware of, and pay attention to, some things, and are oblivious to others.  Almost everyone shops for groceries regularly.  Everyone encounters litter (it is visible and annoying).  Everyone throws away (or recycles) single-use bags, cans and bottles.

Small scale recycling efforts are a kind of environmental ablution:  we go through a ritual to demonstrate to ourselves (and others) our moral commitment to saving the planet. These strategies have real, but small environmental benefits.  Trash is ugly and annoying, and often a hazard to wildlife, but it is not the reason the polar ice caps are melting.

Our cognitive bias stems from the fact that carbon emissions, particularly from cars, are invisible, odorless and tasteless; they do not accumulate locally, but rather disperse globally.  If cars deposited a charcoal briquet every hundred meters or so (which is roughly the amount of carbon they emit), the vast piles of carbon clogging our streets would immediately prompt us to clean them up and ban internal combustion engines (just as we no longer tolerate horse manure in the streets, and expect people to clean up after their dogs). But because carbon is invisible and dispersed globally, we don’t care.

So, ironically, we’re very good at small-scale tokenism, like the ban on plastic straws in San Francisco.  It offers a simulacrum of sacrifice, but makes almost no difference to the larger environmental catastrophe we face. Yet it gives us as individuals, and the leaders who enact such policies, the impression that we’re doing something. Too often, we engage in Neiman Marcus environmentalism:  conspicuous displays of “green” consumerism. That may be great for assuaging personal guilt, but it does little or nothing to resolve the larger social problem of climate change.

The bag fee is 10 to 20 times higher than a carbon tax

Oregon’s 5 cent bag fee works out to a little less than 1 cent per 10 grams.  A kraft paper bag weighs about 55 grams. That means that consumers are paying a fee of about 40 cents per pound for their paper bag (454 grams * 0.09 cents per gram).  On a metric ton basis, that means consumers are paying a fee of about $900 per ton.

To put that in context, most of the commonly forwarded ideas for a carbon tax suggest that a carbon fee of $50 to $100 per ton would lead to a dramatic reduction in greenhouse gas emissions.

Oregon’s bag fee asks consumers to pay a fee that is by any reckoning about ten to twenty times higher than the fee we ought to be charging for carbon pollution.
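For readers who want to check the math, here’s a back-of-the-envelope sketch of that comparison, using the figures assumed above (a 5 cent fee, a roughly 55 gram kraft bag, and proposed carbon prices of $50 to $100 per metric ton):

```python
# A back-of-envelope sketch of the comparison above.
# Assumed figures from this article: a 5-cent paper bag fee, a ~55 gram kraft bag,
# and commonly proposed carbon prices of $50-$100 per metric ton.

BAG_FEE_DOLLARS = 0.05
BAG_WEIGHT_GRAMS = 55
GRAMS_PER_METRIC_TON = 1_000_000

fee_per_gram = BAG_FEE_DOLLARS / BAG_WEIGHT_GRAMS       # ~$0.0009 per gram
fee_per_ton = fee_per_gram * GRAMS_PER_METRIC_TON       # ~$900 per metric ton

print(f"Implied bag fee: about ${fee_per_ton:,.0f} per metric ton")
for carbon_price in (50, 100):
    print(f"  ...roughly {fee_per_ton / carbon_price:.0f} times a ${carbon_price}/ton carbon price")
```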

If we can charge a bag fee, we can implement some form of carbon pricing. If we do, we’ll discover that consumers, producers and the overall economy can adapt quickly.

 

Climate crisis: Cities are the solution

A new report shows how cities are central to any strategy to fight climate change

Cities have the “3 Cs”: Clean, compact, connected

National government policies need to support cities

Let’s describe a low carbon future in positive, aspirational terms

Will the future be brighter or darker than today? That’s a central question in the climate debates. In an effort to focus attention on the severity of the challenge, much of the discussion is inherently pessimistic.  Advocates on one side talk about the dire consequences of inaction; those on the other side emphasize (and in our view exaggerate) the difficulties of adjusting to lower-carbon living. We’re pleased to see a new report–Climate Emergency, Urban Opportunity–from the Coalition for an Urban Transition, which frames the climate challenge in a fundamentally more optimistic way, with better cities at its center.

For all of the apocalyptic and “eat your peas” rhetoric that surrounds discussions of how to tackle the climate crisis, this report takes the important step of framing the role cities can play in positive, concrete terms. Great green cities are (and will be) wonderful, enjoyable places to live. We’ll be closer to the things we want; we’ll spend less time and money traveling and stuck in traffic; we’ll have more options for how to get around.  (Just as with the Sightline Institute’s guide to the rhetoric of housing affordability, how we talk about things matters.)

A report released late last year is a guidepost to making this kind of positive case.  Entitled Climate Emergency, Urban Opportunity, the report spells out how cities that are clean, compact and connected will not only have lower carbon emissions, but will be better, more enjoyable places to live, work and play. The core of the report is its “3 Cs” framework, and the narrative explains how each of the elements is mutually reinforcing, and how, together, they form a coherent vision of the role cities can play in making our world more just, sustainable and equitable.

As we’ve long noted at City Observatory, the market is already moving in this direction.  People, especially young adults, are increasingly moving to cities — and if that’s producing affordability problems, it’s a sign that we aren’t building great urban spaces fast enough to meet the growing demand.  We’ve also noted the rising premium that housing in walkable neighborhoods commands; more evidence that there’s market demand, and a shortage of places that are compact and connected.

It’s important to paint a picture of how low-carbon urban living can enrich our daily lives, and “Climate Emergency, Urban Opportunity” does just that.  Instead of sprawling, low density car-dependent development, we could have cities that offer more.

Reversing this trend by pursuing more compact urban development could deliver better living standards and more vibrant cities. People could enjoy easier access to jobs, services and amenities. Public services could be cheaper, as they could be delivered more efficiently. More time in shared spaces could help to connect people across class and cultural lines. Higher densities could support a greater variety of shops, restaurants and public spaces within neighbourhoods.

The report does a nice job of using graphics and examples to illustrate the livability benefits of low carbon cities. Around the world, cities like Stockholm, Windhoek and Seoul are building the kind of urban neighborhoods that exemplify this inclusive, interesting and low carbon ideal.

Building great cities requires national government support

It’s become fashionable to tout “the New Localism“–the idea that, in the face of national government intransigence (or outright denial) about the need to tackle climate change, mayors and cities can tackle this big global problem.  While some cities are making strides, ultimately, achieving this vision of compact, clean, connected cities will require the full support of national governments.  National policies on transportation, housing, and energy all set the context for local efforts to promote urban development, and for too long have penalized urbanism and subsidized decentralization and sprawl.  For example, the report specifically endorses carbon pricing (something only national governments can accomplish) and the reallocation of national transportation budgets from road building to transit and active transportation. National governments also need to empower cities and local governments to implement road pricing. This report is valuable because it makes a strong case that strengthening cities is essential to achieving important national objectives, starting with climate, but including economic and social progress.

Too often, climate change is framed in apocalyptic terms.  But it’s better to view it as an opportunity to fix many of the things we got wrong when we built human habitation around an expensive, unsustainable and in many ways anti-social car-dependent system. Our best days are ahead of us, if we only have the imagination to build the world we want to live in.

Coalition for an Urban Transition, Climate Emergency, Urban Opportunity: How national governments can secure economic prosperity and avert climate catastrophe by transforming cities,  September 19, 2019.

The Week Observed, February 28, 2020

What City Observatory this week

1. The inequity built into Metro’s proposed homeless strategy. Portland’s Metro is rushing forward with a plan asking voters to approve $250 million per year in income taxes to fight homelessness and promote affordability in Metro Portland. It’s pitched as redressing the inequities of the past: homelessness disproportionately affects communities of color.  But the political deal that’s been crafted allocates the money to counties not based on need, but on population. The result is one suburban county gets five to six times as much funding per homeless person as does Multnomah County (which includes the central city, Portland).

2. Gentrification:  The case of the missing counterfactual. How do we know whether gentrification makes a neighborhood’s residents worse off or better off? The answer that comes from sociologists tends to be based on ethnography:  interviewing long-time neighborhood residents to get their opinions about change. That’s a good starting point, but as a rule, these ethnographic studies don’t look at a control group of otherwise similar neighborhoods that don’t gentrify. So, while residents of gentrifying neighborhoods may remember that things were better in the past, and resent change, that doesn’t necessarily mean that residents of otherwise similar low income neighborhoods that didn’t gentrify would feel better about their situation. What’s missing from these studies is the “counter-factual”–i.e. a look at the opinions of people in low income neighborhoods that didn’t gentrify.  The assumption is, that in the absence of gentrification, everything stays the same. But we know that’s not true: low income neighborhoods that didn’t gentrify tended to lose about 40 percent of their population over four decades. This “displacement by decline” is far more common, and likely even more devastating to long-time residents than gentrification. But you’d be hard pressed to know this by reading the typical sociological study of neighborhood change.

3. Why Atlanta’s building moratorium won’t stop gentrification, but will definitely accelerate “flipping.” Atlanta is investing $25 million to turn an abandoned quarry into the city’s largest park, and even before it’s done, the new park is generating huge interest in surrounding neighborhoods. So much so that, fearing gentrification, Atlanta Mayor Keisha Lance Bottoms has announced a six-month moratorium on new building permits in the area. While the Mayor is hoping to forestall gentrification, in our view, the moratorium is likely to backfire. Demand for housing in the area won’t go away; instead, it will simply be focused on the existing housing stock. If anything, the moratorium makes the attraction for house flippers even more irresistible, because when they put their revamped houses on the market, prospective buyers and renters won’t have as many choices.

Must read

1. What to do about gentrification?  Writing in Governing, guest commentator Jabari Simama reflects on his experiences with neighborhood change in Atlanta. Change creates friction and discomfort, but if we’re starting from a place that’s highly segregated by race and income, we’ve got to figure out how to make change productive, rather than simply blocking it. As Simama points out:

Gentrification is neither good nor bad, but we must manage it better. Clearly we want to encourage residents to live wherever they please. Mixed-income and multiracial neighborhoods are good for our cities. The question is what needs to be done to make them inclusive — racially and economically — and balanced with legacy and new residents.

It’s a useful message: too often in the face of rapid change and palpable pressure to do something, political leaders find themselves confronted with limited and fruitless choices.

2. Jenny Schuetz:  Corporations make a convenient scapegoat for housing, but that’s not the problem. Brookings Institution scholar Jenny Schuetz takes on one of the most popular myths in housing debates:  that the increasing–but still very small–corporate ownership of houses and apartments is responsible for housing unaffordability. At best, they’re a symptom, and it turns out that corporations are actually finely tuned to taking advantage of supply constraints to make shrewd investments.  If local regulations block the construction of new homes and apartments, then demand shifts to the existing housing stock. Instead of older homes and apartments filtering “down” as they age and depreciate, they get rehabbed and “filter up” rising in price.  As Schuetz explains:

Local regulations also play a role in the buy-and-rehab strategy employed by private-equity firms. In places where regulation limits new apartment construction, acquiring existing buildings is less risky than trying to build new rental housing. There are stronger financial incentives to maintain and upgrade old apartments in tightly regulated markets, because they face less competition from new, high-amenity buildings. This process of upward “filtering” among existing apartments is particularly harmful to housing affordability because it results in higher rents without expanding the number of homes available.

3. More road-widening madness, Los Angeles edition.  Magnolia Boulevard is a major thoroughfare in North Hollywood, and with recent investments in improved transit and new apartment construction, it’s become an increasingly walkable area. It’s baffling, then, that the LA Department of Transportation is proposing to widen the roadway, in apparent contravention of the city’s adopted 2015 Mobility Plan, and contrary to Mayor Eric Garcetti’s recent proclamation that the city will work to reduce greenhouse gas emissions by lowering the number of vehicle miles traveled.  LA Streetsblog explains that when it comes to the manifest destiny of road-widening, highway engineers aren’t going to let newly adopted policies or contributing to planetary destruction get in their way.  Widening the roadway threatens to undo much of the improved livability and walkability of the area:

Since 1999, there’s been a subway, Bus Rapid Transit, dense housing, great walkability, thriving sidewalk cafes, bike lanes, bike paths, bike-share, theaters, galleries, and much more. In two decades NoHo has emerged as the most walkable, most transit-oriented place in the San Fernando Valley, and indeed one of the more walkable neighborhoods in the entire city. Ignoring the current neighborhood opposition and holding steadfastly to outdated plans threatens to cram more cars into the area disrupting what makes it work for the people that live there.

New Knowledge

How density and distance from downtown shape political affiliations. We’ve long known that the red/blue divide cleaves along the urban/rural axis.  Almost everywhere big cities and dense urban neighborhoods are more reliably blue, and non-metropolitan areas and small towns are predictably red. At least some of this has to do with the sorting that’s going on along demographic lines, with younger, better educated (and more generally blue) people moving to metro areas and city centers, while the population of rural areas remains older and less educated.

A new study from University of Maryland political scientist James Gimpel and co-authors, published at the Daily Yonder, looks to disentangle the relative effects of neighborhood urbanity and local demographics in explaining this red/blue divide.  It finds that even after controlling for the partisan differences imparted by different levels of age, income, education and the like, denser urban areas and neighborhoods closer to the urban center tend to have a higher fraction of Democrats.  The following chart shows their findings for distance from a large city and partisan affiliation.

The median Democrat lives within about 12 miles of the center of a city of 100,000, while the median Republican lives about 20 miles away.  Democratic affiliation peaks at about 5 miles from the center and declines the further one goes outward; Republican affiliation peaks at about 60 miles from the city.

Demography still matters, of course, but geography plays a significant part even after controlling for the effects of income, age, education and so on. As the authors explain:

The effects of place are significant. For example, two voters with otherwise similar backgrounds, one living in a city, the other living 165 miles outside the city, will differ in the probability of expressing Republican loyalty by about 9 points. Density, similarly will alter the propensity to identify with the major parties, with those in the densest settlements 15 points more likely to be Democrats than those living in the least dense settlements.

Hat tip to our friend Bill Bishop, publisher of the Yonder.

In the News

The Milwaukee Business Times quoted our ranking of the share of independent restaurants in the nation’s largest metro areas in its story: “Milwaukee among U.S. metros with most independent restaurants.”

The Portland Tribune highlighted our analysis of the mismatch between the allocation of funds and the incidence of homelessness in Metro’s proposed $250 million per year housing measure.


The Week Observed, February 21, 2020

What City Observatory this week

1. Local flavor:  Which cities have the most independent restaurants.  Local eateries are one of the most visibly distinctive elements of any city. As Jane Jacobs said, the most important asset a city can have is something that is different from every other place.  Independent restaurants are a great indicator of local distinctiveness. We use data from Yelp to rank the market share of independent and chain restaurants in the nation’s largest metro areas, and find which cities have the most and fewest independent restaurants. The cities with the smallest chain restaurant market shares included New York, San Francisco, and Providence.

We also note that there’s a strong relationship between the market share of independent restaurants and the number of restaurants per capita (a good indicator of consumer choice).  Cities with a higher independent market share have more restaurants per capita than cities with a higher fraction of chains.  Independent restaurants equate to more different choices and more total choices, adding to local flavor in a measurable way.

2. Why cars are bad for independent restaurants. We followed up on our analysis of Yelp’s data on chain vs. independent restaurant market shares by comparing it to US Department of Transportation data on the number of vehicle miles traveled in different metropolitan areas.  It turns out there’s a pretty strong relationship between how much people drive in a metro area and the fraction of chain restaurants:  more driving is correlated with a higher market share for chain restaurants.

We’re not sure of the reason for this correlation, but we have some hunches. Selecting restaurants by looking through the windshield of a fast-moving car is going to bias one’s choices in favor of familiar, well-advertised chains. Also, cities where people drive less enjoy a green dividend:  they spend less on cars and fuel and therefore have more money to spend on food, including at independent restaurants.

3. Understanding Walkable Density.  We’re pleased to publish a guest commentary from DW Rowlands, a graduate student at the University of Maryland at Baltimore.  Her research looks into the lived density of particular neighborhoods, adjusting conventional measures of density (which ignore the connectedness and walkability of local streets) to produce a new measure of walkable density that effectively captures how easily people can interact in real-world environments. Her work shows that some cities and some neighborhoods come much closer to achieving an “ideal” level of walkability:  chiefly older cities and core urban neighborhoods that have traditional street grids. Newer metros and outlying areas, with serpentine roadways and cul-de-sacs, tend to have even lower “walkable” densities than ordinary density measures would suggest.

 

4. Mapping Walkable Density.  DW Rowlands has mapped walkable density in 17 of the nation’s largest metropolitan areas.  Her maps compare the actual walkable density of census tracts with their theoretical ideal density (i.e. how many people one would live near, ignoring the nature of the street network). These maps show which neighborhoods come closest to realizing their “ideal” density, with dark colors indicating places with relatively high levels of walkable density (relative to their ideal) and lighter colors showing places where actual density falls furthest short of the ideal.  Here’s a map for Boston:

5. Climate Failure and Denial at the Oregon Department of Transportation.  As is now true nationally, transportation is the largest source of greenhouse gas emissions in Oregon.  On paper, the state has a response, in the form of a “Statewide Transportation Strategy.” In reality, it amounts to an excuse for inaction, mostly hoping that other actors (federal regulators, car manufacturers, and car owners) replace current cars with clean vehicles and fuels. The trouble is, it’s not working: Oregon’s plan called for a 10 percent reduction below 1990 emission levels by this year; the state’s transportation emissions are now up 20 percent above 1990 levels–and are going in the wrong direction.

And despite its failure to make any progress, ODOT isn’t proposing any additional actions to reduce greenhouse gases–instead, it’s planning to spend billions widening freeways. It shouldn’t be any surprise that the strategy isn’t working: the department’s “stakeholder” group included climate skeptics and deniers, who didn’t think the state mandate to reduce greenhouse gases made sense (or was worth incurring costs to meet).

Must read

1. Rent control works best when rent control works least.  The District of Columbia has had a rent stabilization law since the mid-1980s, and more than a third of the District’s apartments are covered by the law. The usual economic concern about rent control is that it tends to stifle upkeep and maintenance, encourages condominium conversions and discourages new construction. If DC’s ordinance hasn’t had all those effects, it’s probably because it imposes relatively modest limits on landlords: the stabilization provisions limit rent increases to the cost of living plus two percent per year. According to the DC Policy Center’s Yesim Sayin Taylor, the reason the program works so well, and why there’s been relatively less shrinkage in the rental housing stock in DC than in San Francisco (which has tougher rent control), is that it only aims to stabilize, not restrict, rents:

D.C.’s stock of rent-stabilized units has remained so steady in part because the law prioritizes rent stabilization over strict price controls. The Rental Housing Act’s goal is not to create or preserve affordable housing, but to protect tenants from rapid, unreasonable increases in their rents.

More stringent rent controls, Taylor argues, would likely prompt landlords to convert units to condominiums, and would discourage new construction, leading to a reduction in the total number of rent controlled units.

2. More right of way for transit in Portland. As Smart Growth America notes, Portland’s City Council has unanimously adopted a plan for expanded “Rose Lanes,” a network of dedicated lanes and transit signal priority on 29 streets around the city. Building on the success of some existing exclusive bus lanes, the project is avowedly experimental, with the idea of trying different treatments, and adjusting them as needed, to speed bus operations.

Most of the projects are designed to be relatively inexpensive and quick to implement, and they are expected to roll out citywide this year and next. Unlike huge capital projects (light rail), these measures can provide widespread benefits, quickly, to many transit users and at low cost.  As Smart Growth America concludes: “political will, more than money, can become the limiting factor in whether or not transit is truly prioritized on the street.”

3. Public opinion is shifting toward the environment and climate change. The Pew Research Center has a new survey out showing that with the waning of the Great Recession, concern about environmental issues and climate change has increased substantially.  According to Pew, a majority of Americans now say that both the environment generally and climate change in particular should be a top priority for the President and Congress.

For the first time in Pew Research Center surveys dating back nearly two decades, nearly as many Americans say protecting the environment should be a top policy priority (64%) as say this about strengthening the economy (67%).

The downside:  there’s a stark partisan divide on both issues.  Nowhere are Democrats and Republicans more divided than on the importance of dealing with climate change:  some 78 percent of Democrats think it’s an important issue; only 21 percent of Republicans do.

In the news

The Stranger quoted City Observatory’s analysis of induced demand on Houston’s Katy Freeway in an article entitled: “Washington Democrats are damn right to remove congestion relief from state transportation goals.”

 

The Week Observed, February 7, 2020

What City Observatory this week

1. Talent drives economic development. We know the single most important factor determining metropolitan economic success:  the education level of the population. The latest data on educational attainment and per capita incomes show that two-thirds of the variation in income levels among large metro areas is explained by the fraction of the adult population with a four-year degree.

Each one percentage point increase in the adult college attainment rate is associated with a $1,500 increase in metro average per capita income.

2. Jealous billionaires and cash prizes for bad corporate citizenship. The big economic development prize of the past several years was Amazon’s HQ2, which the e-commerce giant purposely structured as a competitive extravaganza, baldly asking for subsidies from cities across North America.  Bloomberg Business reports that Amazon’s quest was fueled by CEO Jeff Bezos’s jealousy over the generous subsidy package Nevada provided Elon Musk for a Tesla battery factory.

3. It works for bags, bottles and cans, why not try it for carbon?  A newly enacted Oregon law requires grocery stores to charge customers a nickel for each paper grocery bag they take. Echoing Oregon’s half-century-old bottle bill, and similar bag fees in other places, this provides a gentle economic nudge toward more ecologically sustainable behavior. And the evidence is that it works: in England, single-use bags are down almost 90 percent. We ought to apply the same straightforward logic to carbon pollution; ironically, the bag fee, on a weight basis, is ten to twenty times higher than the charge most experts recommend for carbon pricing.

4. Dodging responsibility for climate change. Oregon’s Transportation Commission, a five-member citizen body, is on the firing line in the battle on climate change because of its plans to spend $800 million to widen a Portland freeway. It has largely turned a deaf ear to testimony about the freeway’s negative effects, essentially asserting that the decision to build the freeway has already been made by the Legislature. They’re just following orders, apparently.

In our view, that’s far too narrow a view of the Commission’s role and responsibility:  they have an obligation to hear citizen concerns, and if the project is unsustainable, uneconomic, and won’t work, they have a duty to tell the Legislature.

Must read

1. Fighting housing segregation in Baltimore. CityLab has a great retrospective on the struggles in Baltimore to break down the segregation of public housing. As in many US cities, for decades public housing (which disproportionately serves people of color) has been built in low income neighborhoods, which has only served to perpetuate and intensify racial and income segregation. CityLab chronicles the role of Barbara Samuels, an ACLU lawyer who challenged this practice and won a court victory 25 years ago.  This court case, Thompson v. HUD, gave some real teeth to the Fair Housing Act’s provisions requiring governments to affirmatively further fair housing. Ultimately, the case got housing officials to implement a system of housing vouchers that gives public housing recipients the opportunity to live in middle income and higher opportunity neighborhoods.

2. Little Women’s lessons for housing policy. It takes a real wonk to find deep housing policy lessons woven through an Academy Award-nominated film, and Brookings Institution scholar Jenny Schuetz is up to the task. Little Women features a combination of mixed income neighborhoods, relaxed building codes, and co-housing (boarding houses), all of which facilitate greater housing affordability–and, thanks to social mixing, romance. As Schuetz explains:

Less zoning equals more social equity and more romance. While some details of the world depicted in “Little Women” would not appeal to modern audiences—corsets and the lack of modern health care come to mind—a more laissez-faire approach to housing regulation seems worth revisiting. Can we imagine a return to communities where mansions, middle-class homes, boarding houses, and low-income housing can co-exist without legal restrictions or social prejudice? Where households with diverse incomes and family structures can interact with casual, everyday intimacy?

In spite of 21st-century injunctions about lower-income housing lowering home values and the apparent desirability of ubiquitous homeownership, the lifestyles (and policies) of 19th-century America promoted housing affordability and social mobility, things we could use more of today.

3. Boston’s under-occupied large homes. We often talk about a shortage of housing, but in an important sense, the problem is less a shortage than a maldistribution. A new study from the Metropolitan Area Planning Council, Boston’s regional planning agency, looks at the occupancy of large dwellings (those with three or more bedrooms). It finds that most are under-occupied–that is, these housing units have fewer occupants than they do bedrooms. This phenomenon is driven by history, homeownership, and demographics: a quarter of all three-bedroom units are occupied by just one or two persons aged 55 or older.  Inertia, the transaction costs associated with selling a home, and a probable lack of smaller ownership opportunities for those who want to age in place (or very nearby) likely contribute to older homeowners staying in these larger houses, rather than selling them to younger and larger households.  The mismatch between household sizes and housing units suggests that part of the solution to our perceived housing shortage would be to develop incentives for older homeowners to downsize more quickly.

New Knowledge

Integration and civic engagement. A growing body of evidence points to the importance of mixed income neighborhoods to the lifetime economic prospects of kids from low income families. A new study shows that growing up in a mixed income neighborhood also seems to encourage greater civic participation. Eric Chyn, who previously looked at economic outcomes for kids from families given vouchers to enable them to move out of low income neighborhoods in Chicago, has a new study (with Kareem Haggag) looking at effects on voter participation. His key finding:  kids who grew up in these more mixed income neighborhoods tended to have higher voting rates as adults than otherwise similar peers who grew up in lower income neighborhoods. On average, kids growing up in mixed income neighborhoods were about 12 percent more likely to vote as adults than their peers.

Eric Chyn & Kareem Haggag, Moved to Vote:  The Long Run Effects of Neighborhoods on Civic Participation, University of Chicago, Human Capital and Economic Opportunity Global Working Group, Working Paper #2019-079, 2019.


Gentrification: the case of the missing counter-factual

Why are there so few studies charting displacement and cultural decline in non-gentrifying neighborhoods?

The implicit assumption in most gentrification research is that if a neighborhood doesn’t gentrify, it stays the same–but almost no one examines that assumption

Displacement by decline is much more common, and more harmful than displacement due to gentrification.

As we reported a few months back, a recent longitudinal study of neighborhood change presents a number of conclusions about gentrification that plainly challenge the conventional wisdom. In a working paper published by the Federal Reserve Bank of Philadelphia, Davin Reed and Quentin Brummet find that there’s only an extremely small difference in outmigration of vulnerable populations between gentrifying and non-gentrifying neighborhoods, that long-time, less-educated renters experience little if any increase in rents in gentrifying neighborhoods, and that the patterns of change observed in gentrifying neighborhoods are consistent with increased intergenerational economic mobility for kids growing up in those neighborhoods.

The website Next City dutifully reported these results in an article entitled, “Study suggests gentrification has an upside. Housing advocates aren’t yet convinced.”  As the title suggests, Next City gave prominent play to doubters. They interviewed PolicyLink’s Sarah Truehaft and the Urban Displacement Project’s Karen Chapple, neither of whom challenged the statistical analysis presented in the Reed and Brummet paper.

Both instead fell back on the fact that those in communities experiencing change regarded the process as negative.  Truehaft said:

“We cannot think of gentrification as good when we know it leads to increased displacement of lower-wealth residents and the erosion of cultural diversity and vitality.”

Chapple added:

But even if displacement happens to only a few, communities perceive it as a violent process, and for some it begins a downward spiral. With such powerful negatives, it remains hard for communities to accept even in the face of net benefit. It is hard to be rational.

How do Truehaft and Chapple know that gentrification “erodes cultural diversity and vitality” and is perceived as “a violent process” that begins a downward spiral?  Well, there is a wealth of literature relating the dissatisfaction that long-time residents of gentrifying neighborhoods have expressed about change. (And note that Reed and Brummet’s research shows that gentrification leads to only a very small increase in displacement.)

In theory, you might test attitudes about gentrification by asking simple, open-ended opinion research questions about how people feel about neighborhood change.  But that’s seldom the approach employed in this field.  Rather, many scholars start with the assumption that gentrification is bad, and then marshal anecdotes and opinions that reflect this view.

In reality, the changes wrought by gentrification are a mix of good and bad, and are perceived differently by different people in the community.  Lance Freeman calls the notion that “long term residents hate gentrification” one of the five myths of gentrification, noting that “In neighborhoods with severe disinvestment, lacking many retail services that most people take for granted, one may find long-term residents who appreciate gentrification.”  One of the assumptions of gentrification research seems to be that lower income people are unhappy when higher income people move into their neighborhoods. But research shows that pretty much the opposite is true: low income people living in neighborhoods with more high income neighbors report higher levels of satisfaction.

Individuals can have mixed emotions, acknowledging what is lost but on balance preferring change. For example, even after reciting a fairly standard litany of complaints about the loss of businesses and the disorienting aspects of change, one long-time resident quoted in a New York Times story about the gentrifying Chelsea neighborhood concludes:

“I’d rather have Chelsea as it is today…. There’s more people. It’s brighter, it’s beautiful, it’s more inviting than it used to be. We’re very lucky to be able to stay in housing that hopefully will not disappear.”

The sociology literature on gentrification has a well-established methodology for charting the ills of gentrification:  you go into a gentrifying neighborhood and talk to a sample of remaining long-time residents about what they’re unhappy about.  A typical dissertation looked at Cincinnati’s Over-the-Rhine neighborhood, and is based on interviews with four dozen subjects, most chosen by non-random snowball sampling. With little prompting, residents will tell you what they don’t like about change and what they’ve lost.  That’s not to say that interview studies like these won’t accurately capture and reflect the views of people who have genuine grievances about gentrification, but it’s reasonable to ask whether their results are representative.

Arguably, there’s some inherent selection bias in this approach, which is a widely acknowledged problem in anthropological research.  In addition, people, especially long-time residents, may be nostalgic about the good old days, and that may color their narratives.  Consider this recent tweet from Portland’s Joan by Bike, noting the tendency of writers who have recently left a city to lament its decline, and invariably to date the community’s high-water mark to just about the time they arrived:

Is it possible for a writer to move from a city and not write an article lamenting the loss of the place they loved, that was perfect coincidentally at the exact moment they moved there but has since become terrible?

It’s also possible that such qualitative studies may succumb to confirmation bias.  Researchers show up having selected a neighborhood that is gentrifying, seeking out interviewees through non-random networking efforts, and posing leading questions.  Many, if not most researchers investigating gentrification start out with the hypothesis that gentrification is a bad thing, and unsurprisingly, seek out, find, and listen to people who tell them what they expect to hear.

Clearly some people will always express dissatisfaction with neighborhood change. The interesting question from a policy and scientific perspective is whether, on balance, more people are satisfied than dissatisfied, and also whether the level of dissatisfaction in neighborhoods that gentrify is greater or less than in neighborhoods that don’t.

The counterfactual: What if we asked the same questions about poor, but non-gentrifying neighborhoods?

What’s missing from qualitative gentrification studies is a parallel analysis of attitudes and observations in otherwise similar but non-gentrifying neighborhoods. Rigorous paired comparisons of gentrifying and non-gentrifying neighborhoods would tell us whether the problem is gentrification or, more broadly, the sense of disempowerment and disadvantage that seems to plague low income communities. For example, one could presumably ask the same sets of questions about how long-time residents perceive change in gentrifying and non-gentrifying neighborhoods, and see whether there are significant differences. That step is generally omitted, and as a result the studies implicitly assume that non-gentrifying low income neighborhoods are just fine:  that they’re not experiencing decline, out-migration or the loss of cultural assets, and that neighborhood residents are happy.

There are very good reasons to believe that’s not the case.  Consider recent community-led analyses of neighborhood change in Chicago’s Greater Chatham.  Historically, this has been one of the most prosperous and well-established middle-class black neighborhoods in the United States.  It was the heart of Chicagoland’s African-American middle class, primarily as a set of amenity-rich bedroom communities, but also as the home of many leading African American-owned businesses and prominent figures in arts and culture.  But in the past decade, in the wake of deindustrialization and the collapse of the housing market, the neighborhood has experienced an exodus of middle-class families, and seen significant increases in crime and poverty.  The neighborhood has lost population. Greater Chatham is changing, but not due to gentrification; if anything, it’s just the opposite:  the area is underrepresented in the 25-39 age group, relative to the City of Chicago. It’s very likely that a cultural anthropologist would find that the area has lost vitality and is caught in a downward spiral driven by a violent process.

A well-executed qualitative study of long-time resident attitudes in Greater Chatham would probably surface many of the same kinds of complaints that people regularly report about gentrifying neighborhoods:  there’s been a steady erosion of the businesses and cultural institutions that have historically underpinned community well-being; things used to be so much better; the neighborhood has seen a decline in its civic assets and cohesion; long-time residents don’t feel like they belong in the neighborhood any more.

Change is always hard and always disruptive, but gentrification is not the only kind of change, and neighborhoods that don’t gentrify don’t stay the same. As the work of Alan Mallach makes clear, the story of declining middle income neighborhoods like Chatham is far more common.  Displacement by decline, as Akron’s Jason Segedy says, is vastly greater than displacement by gentrification, and its effects are unambiguously more pernicious. Available quantitative data show that residents of non-gentrifying poor neighborhoods have lots of negative change to complain about as well. Our research and that of Reed and Brummet confirm that low income neighborhoods that don’t gentrify hemorrhage population.  Over four decades, we found that low income neighborhoods that didn’t gentrify lost 40 percent of their population on average; Reed and Brummet found that since 2000, non-gentrifying low income neighborhoods experienced population declines of 8 percent. The exodus from non-gentrifying low income neighborhoods is a testament to the dissatisfaction residents feel when their neighborhoods don’t attract additional investment and new (if wealthier) residents.

That kind of comparison, looking at neighborhoods that gentrify and at otherwise similar neighborhoods that don’t, would lead to clearer insights about the nature and uniqueness of residents’ views of long-term change than looking at gentrifying neighborhoods alone. We’ll be on the lookout for this kind of research.


Jealous billionaires: The story behind Amazon’s HQ2

Cash prizes for bad corporate citizenship: When we incentivize anti-social behavior by big corporations, we get more of it

Bloomberg Business has a behind-the-scenes post-mortem of the Great Amazon HQ2 sweepstakes, “Behind Amazon’s HQ2 Fiasco: Jeff Bezos Was Jealous of Elon Musk.” In the fall of 2017, Amazon announced a high-stakes competition for cities across North America to become the site of the company’s second headquarters. As Spencer Soper, Matt Day and Henry Goldman chronicle, the competition was kicked off by Jeff Bezos’s growing jealousy over rival billionaire Elon Musk’s $1.3 billion subsidy deal for Tesla’s battery plant in Nevada.

When Elon Musk secured $1.3 billion from Nevada in 2014 to open a gigantic battery plant, Jeff Bezos noticed. In meetings, the Amazon.com Inc. chief expressed envy for how Musk had pitted five Western states against one another in a bidding war for thousands of manufacturing jobs; he wondered why Amazon was okay with accepting comparatively trifling incentives. It was a theme Bezos returned to often, according to four people privy to his thinking. Then in 2017, an Amazon executive sent around a congratulatory email lauding his team for landing $40 million in government incentives to build a $1.5 billion air hub near Cincinnati. The paltry sum irked Bezos, the people say, and made him even more determined to try something new.

And so, when Amazon launched a bakeoff for a second headquarters in September 2017, the company made plain that it was looking for government handouts in exchange for a pledge to invest $5 billion and hire 50,000 people.

We don’t find this story surprising in the least. When the competition was announced in September 2017, we called it the inevitable result of a system of cash prizes for bad corporate citizenship. When companies see their rivals getting something for nothing, they feel like suckers if they, too, don’t ask for as much as they can get.

And that is truly the perverse effect of economic incentives: cities and states feel they have to offer incentives to be competitive, and once incentives are in place, corporations have every interest in asking for them. And it’s hard to say no: when a Governor or Mayor is courting a company, she or he wants to show evidence of a strong commitment to the company. Because most investments last for decades, a location decision is less about today’s policy environment than about having a sense that the company will be treated well over the long term. If a Governor or Mayor declines to provide a tax break or write a subsidy check that is within their power to offer, it’s a pretty good indication that they’re not committed. So once incentives are on the books, they tend to get asked for, and given. Even though incentives don’t work most of the time, as Tim Bartik has shown, the political dynamic (where leaders want to posture as pro-jobs, and can shift costs off to their successors) leads to their perpetuation and growth.

And when a prize as large and public as Amazon becomes available, cities and states around the country feel compelled to compete, even if they have no reasonable chance of winning.  Just to be seen as playing is important politically.  But this leads to a prodigious waste of time for nearly all of the “competitors.” Again, Bloomberg Business:

City and state officials privately complained to their Amazon contacts that the exercise was a tremendous waste of public resources, according to a person who received the complaints. Mayors and governors said they had other businesses genuinely interested in their cities and states and lamented that Amazon was stringing the entire continent along, the person said.

In reality, nearly 90 percent of the 238 cities that submitted bids had no chance. According to Bloomberg Business, 20 of the 25 cities that made the list of finalists in January 2018 were cities Amazon had identified as priority candidates prior to launching the competition. And ultimately, the decision boiled down to the availability of talent, just as outside observers (including City Observatory) said all along. In September 2017, we wrote:

While there’s a lot of detail here, the factor that’s going to make the most difference is the availability of talent. When you’re hiring upwards of 50,000 highly trained workers, as we’ve said before, the location decision is going to be made by the HR department. A city has to have a substantial base of talent–especially software engineers–and be a place that can easily attract and accommodate more. Beyond the availability of talent, it’s likely that analysts are reading too much into the criteria laid out in the RfP. The request for proposals was not drawn up to reveal Amazon’s decision criteria. It was drawn up to solicit the maximum number of credible incentive packages.

Ultimately, it wasn’t a surprise when Amazon selected New York and Washington, DC as its two preferred locations (for the record, we also predicted that Amazon would choose multiple locations). These two cities offered the deepest and richest talent pools. As the Bloomberg article chronicles, Amazon’s cynical and ham-handed approach to seeking tax breaks in New York backfired, generating local opposition that eventually sank the deal for that location. But the critical takeaway of this long, drawn-out contest is that Amazon chose two cities with vital centers and strong talent bases. Those features, not tax breaks, are what drive location decisions.

This isn’t a new story: Talent & urban amenities trump tax breaks

Consider, for example, the recent case of General Electric’s relocation to Boston. The after-the-fact statements of the company’s CFO make it abundantly clear that GE was strongly attracted to Boston by the region’s urban amenities and culture of innovation, attributes that could hardly be replicated at any price in most alternative locations. Asked by the Wall Street Journal why GE chose an urban location (Boston’s Seaport District), CFO Jeffrey Bornstein emphasized these characteristics.

Yes. From the get-go we knew we wanted to be in a place that was vibrant and entrepreneurial, where you could walk out your door enriched by your environment and your ecosystem. I can walk out my door and visit four startups. In Fairfield, I couldn’t even walk out my door and get a sandwich.

We knew we wanted to be in a more urban environment where we could actually participate in the ecosystem and be smarter and more aware as a result.

Even though its chief financial officer acknowledged that talent and an entrepreneurial environment were critical, GE was handsomely rewarded with $145 million in incentives for going the Kabuki theatre route of pretending to weigh lots of competing offers. If anything, Amazon was more audacious at this game than GE. Greg LeRoy of Good Jobs First has pointed out that Amazon had already managed to extract a quarter of a billion dollars in incentives to support the construction of distribution centers around the country. The whole point of the HQ2 exercise was to up this game to a new and more lucrative set of subsidies.

Corporations have choices. They could go about their business, simply choose the best location, the one that makes the greatest business sense, and invest accordingly. Or they can, as Amazon, GE, and dozens of others have done, go through the ritual of pretending to entertain a wide range of proposals, and use the leverage of competing bids to sweat the best possible deal out of their preferred location. The net result of our current approach is to provide giant cash rewards to those who engage in the most cynical behavior. As a result, while Amazon may reap some short-term financial benefits, they may come at the cost of considerable corporate goodwill, and at the cost of fiscally impoverishing the cities where it chooses to locate. The other losers will be all the businesses against which Amazon competes, which are too small to have the leverage to insist on a comparable level of public subsidy for their similar operations.


Talent: The key to metro economic success

Educational attainment explains two-thirds of the variation in economic success among metropolitan areas.

Each additional percentage point of four-year college attainment is associated with about $1,500 more in metro per capita income

We’re increasingly living in a globalized, knowledge-based economy. In that world, the single most important factor determining a region’s economic success is the education and skills of its population. If you’re concerned about urban economic development, the one thing you should focus on, laser-like, is educational attainment. Raising a metro area’s educational attainment is the key to raising productivity, living standards and incomes. Our core metric for assessing the importance of a well-educated population is the relationship between per capita income and the four-year college attainment rate, a relationship we call the “Talent Dividend.”

We’ve been tracking these data, and today we’re updating them to reflect the latest information from the Census Bureau’s just-released 2018 American Community Survey.  We’ve paired this information with the Bureau of Economic Analysis’ estimates of per capita income in each metropolitan area for 2018. The following chart plots the relationship between per capita personal income (on the vertical axis) and the fraction of the adult population who have completed at least a four-year college degree (on the horizontal axis).  Each dot on the chart represents one of the nation’s metropolitan areas with at least 1 million population (53 of them).  You can mouse-over a dot to see the corresponding metropolitan area and its educational attainment rate and per capita income.

There’s a strong, positive correlation between educational attainment and per capita income. The metro areas with the highest levels of education have the highest levels of per capita personal income. Cities like San Francisco, Boston and Washington have the highest levels of per capita income and the best-educated populations. Cities like Riverside and Las Vegas have low levels of educational attainment and correspondingly lower levels of per capita income. The coefficient of determination of the two variables–a statistical measure of the strength of the relationship–is .65, which means we can explain about two-thirds of the variation in per capita personal income among metropolitan areas simply by knowing what fraction of their adult population has a four-year degree. Most cities lie close to the regression line; a few outliers have plausible explanations for their over- or under-performance. San Francisco and San Jose lie far above the regression line; they are super-charged (and expensive) high performers. Raleigh and Austin have incomes lower than their educational attainment would predict, but they also have populations that skew very young, and younger workers earn less.
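For readers who want to see the calculation behind the chart, here’s a minimal sketch of the simple ordinary least squares regression involved. The file and column names (pct_ba for the four-year attainment rate, pci for per capita income) are hypothetical placeholders, not our actual dataset; with the 2018 figures described above, this kind of fit yields the .65 coefficient of determination and the roughly $1,500 slope discussed below.

```python
# Minimal sketch (hypothetical data): regress metro per capita income on the
# four-year college attainment rate and report R-squared and the slope.
import pandas as pd
import statsmodels.api as sm

metros = pd.read_csv("metro_talent_2018.csv")  # hypothetical file

# pct_ba: share of adults 25+ with a four-year degree (percentage points)
# pci:    per capita personal income (dollars)
X = sm.add_constant(metros["pct_ba"])   # add the intercept term
model = sm.OLS(metros["pci"], X).fit()

print(round(model.rsquared, 2))         # coefficient of determination
print(round(model.params["pct_ba"]))    # dollars of income per percentage point
```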

This chart tells you the most important thing you need to know about urban economic development in the 21st century: if you want a successful economy, you have to have a talented population. Cities with low levels of educational attainment will find it difficult to achieve higher incomes; cities with higher levels of educational attainment can expect greater prosperity. As Ed Glaeser succinctly puts it: “At the local level fundamentally the most important economic development strategy is to attract and train smart people.” And critically, because smart people are the most mobile, building the kind of city that people want to live in is key to anchoring talent in place. Importantly, the economic research shows that the benefits of higher educational attainment don’t accrue only to those with more education: people with modest education levels have higher incomes and lower unemployment rates if they live in metro areas with higher average levels of education.

The data presented here imply that a one percentage point increase in the four-year college attainment rate is associated with about a $1,500 per year increase in average per capita income in a metropolitan area, an increment we refer to as the Talent Dividend. This cross-sectional relationship suggests that if a metropolitan area improved its educational attainment by one percentage point on a sustained basis, it would see a significant increase in its income.
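The arithmetic of the Talent Dividend is straightforward. The snippet below is purely illustrative, applying the roughly $1,500-per-point slope to a hypothetical metro of one million residents; the population figure is an assumption chosen only to show the scale of the aggregate effect.

```python
# Back-of-the-envelope Talent Dividend arithmetic using the ~$1,500 slope above.
slope = 1_500                 # dollars of per capita income per percentage point
delta_attainment = 1.0        # a sustained one percentage point gain in attainment
metro_population = 1_000_000  # illustrative metro size (hypothetical)

gain_per_capita = slope * delta_attainment
aggregate_gain = gain_per_capita * metro_population

print(f"${gain_per_capita:,.0f} per resident per year")       # ~$1,500
print(f"${aggregate_gain:,.0f} in aggregate income per year")  # ~$1.5 billion
```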

Over time, the strength of this relationship, and the size of the talent dividend effect, has been increasing. When we computed the relationship using 2010 data, the correlation coefficient was .60 and the size of the talent dividend was $860 (in current dollars). These data suggest that educational attainment has become an even more powerful determinant of economic success than it was just a few years ago.

Education is a stronger predictor of economic success today than ever before. That’s true for individuals, for private businesses, for communities, and for metropolitan economies. The better educated you are, the more likely you are to prosper in a knowledge-based economy. Not only do well-paid and fast-growing technology jobs go disproportionately to the better educated, but better-educated workers tend to be more adaptable and more innovative, which better prepares them to cope with a changing economy. The policy lessons for city leaders are clear: a successful economy depends on doing a great job of educating your population, starting with your children, and on building a community that smart people will choose to live in.