On Baltimore: Concentrated Poverty, Segregation, and Inequality

Yet again, a black citizen dies at the hands of the police. This event and the ensuing riots in Baltimore are a painful reminder of the deep divisions that cleave our cities. There's little we can add to this debate, except perhaps to say that there's strong evidence for a point made by Richard Florida:

The real problem in Baltimore is race & class division – persistent concentrated poverty.

We've chronicled the persistence and spread of concentrated poverty in our recent reports and blog posts at City Observatory. Our Lost in Place report tracked the change in neighborhoods of concentrated poverty in the nation's largest metro areas over the past four decades. Our dashboard for Baltimore shows that the number of high-poverty neighborhoods in the city increased from 38 in 1970 to 55 in 2010, and that those high-poverty neighborhoods have hemorrhaged population. Only one census tract in Baltimore saw its poverty rate fall from above 30 percent in 1970 to less than 15 percent in 2010.

[Map: concentrated poverty in Baltimore]

And as our map shows, Baltimore has experienced persistent and growing concentrated poverty in many of its urban neighborhoods. Concentrated poverty remains rooted in the neighborhoods adjacent to the central business district, and has spread outward in the decades since 1970.

[Map: the spread of concentrated poverty in Baltimore since 1970]

Earlier this month, we highlighted the connection between racial segregation and black-white income disparities in the nation's cities. The places with the greatest levels of segregation also tended to have the biggest differences in incomes between black and white households. Segregation appears to be an important contributor to racial income disparities. These data show that Baltimore is somewhat more segregated than the typical large US metro, with a black-white dissimilarity index of 64, ranking about 20th highest (most segregated) among the largest metropolitan areas in the country. And on average, black incomes in Baltimore were about 28 percent lower than white incomes, a slightly greater disparity than in the typical large metropolitan area. So while somewhat more severe than average, the levels of racial segregation and income disparity in Baltimore are hardly unusual among large metro areas.

Sadly, concentrated poverty is a problem which only becomes visible to many Americans when it erupts in the violence we’ve seen in the past few days in Baltimore.  We hope the data provided here give everyone a sense of the depth and seriousness of the problem.

Peaks, valleys, and donuts: a great new way to see American cities

In my inaugural post, I claimed that county-level population data is bad at telling us much of anything about cities and housing preferences. Counties just contain too many multitudes – of built environments, of types of neighborhoods, of zoning regimes – and vary too much from place to place to be very useful in cross-metro comparisons.

But happily, as of February, we have a much better way of turning the millions of moving parts in a given metropolitan area into a coherent story. That’s when Luke Juday of the University of Virginia’s Cooper Center for Public Service published “The Changing Shape of American Cities.”

“The Changing Shape” includes a traditional PDF report, which emphasizes the emergence of what writer Aaron Renn has called “the new donut”: a wealthy core, surrounded by a ring of relatively low-income outer city neighborhoods and inner suburbs, surrounded by wealthy outer suburbs. That, of course, is very much worth reading.

But what really makes our hearts go pitter-patter is this:

[Screenshot: the interactive "Changing Shape of American Cities" presentation]

If you click over, you’ll see an interactive data presentation for over sixty metropolitan areas – as well as a handful of regional groupings – that shows how each region’s demographics change as you move further from the city center. As you move from left to right on the graphs, you literally travel through the city, beginning downtown and then following the trends mile by mile. The data illustrate the “new donut” phenomenon as well as anything I’ve seen, showing, for example, a steep peak in residents’ college attainment at the city center, then a deep trough, and a second peak several miles out. Here, for example, is the graph for college attainment, aggregated over eight Rust Belt cities – places you wouldn’t necessarily expect to be seeing lots of privileged people moving downtown:

[Chart: college attainment by distance from the city center, eight Rust Belt metros, 1990 and 2012]

The purple line shows the data from 2012; just as dramatically, the orange line shows the data from 1990, back when American cities followed the “old donut” model: a poor inner city and wealthy suburbs.

What's so valuable about this presentation of metropolitan data – as opposed to county-based, or even municipality-based, analysis – is that it doesn't require the historical accidents that are government boundaries to correspond with subtle and ever-changing social and economic geography. By simply showing what happens to, say, the proportion of residents living under the poverty line as we move mile by mile through a metropolitan area, we get a much better sense of a region's shape than we do by drawing a handful of sharp lines and measuring how many people fall on one side, and how many on the other.
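To make the mechanics concrete, here's a minimal sketch of the mile-by-mile profile this kind of chart is built on. The tract data below are made up for illustration, and this is our reconstruction of the idea, not Juday's actual code:

```python
# Bin census tracts into one-mile rings around the city center, then
# compute a population-weighted statistic (here, the poverty rate) per ring.
from collections import defaultdict

# (distance_from_center_in_miles, population, residents_below_poverty_line)
tracts = [(0.4, 3000, 900), (1.2, 5000, 2000), (1.8, 4000, 1700),
          (2.5, 6000, 1500), (3.1, 7000, 900)]

rings = defaultdict(lambda: [0, 0])  # ring index -> [population, poor]
for dist, pop, poor in tracts:
    ring = int(dist)  # 0 = 0-1 miles out, 1 = 1-2 miles out, ...
    rings[ring][0] += pop
    rings[ring][1] += poor

for ring in sorted(rings):
    pop, poor = rings[ring]
    print(f"{ring}-{ring + 1} miles: {poor / pop:.0%} below the poverty line")
```

Plotting those ring-by-ring shares, with distance on the horizontal axis, gives exactly the kind of profile shown in the charts above.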

Juday's work even gets past some of the issues of more sophisticated approaches to urban categorization. For example: Trulia's Chief Economist, Jed Kolko, recently made an urban/suburban distinction based on whether most housing units in a given neighborhood were in multi-family or single family buildings. In America's biggest, densest cities, that makes a lot of sense: New York- or Chicago-area neighborhoods where most people live in single family homes are probably not considered very "urban" by local standards.

But in a lot of other places, especially medium-sized cities away from the East Coast, that standard doesn't necessarily apply. The Midtown district of Memphis, where I lived for a year, generally looks like this:

[Photo: a typical residential street in Midtown Memphis]

In other words, it's mostly single family homes. But it's also centrally located, and, in the Memphis area, is popular specifically for its centrality, relative density and walkability, and other "urban" amenities. Kolko's criteria, though they sound perfectly reasonable at first blush, mischaracterize the role Midtown plays in Memphis – and the role that many other similar neighborhoods play in their regions across the country. Juday's charts, on the other hand, easily register Midtown's popularity among the young and well-educated as a close-in neighborhood.

Of course, no form of analysis is perfect. One thing that you can’t see in these visualizations is the phenomenon of the “favored quarter.” That is, demographic patterns tend not to form perfect concentric rings around a city center: more often than not, they’re a composite of rings and wedges, beginning downtown and moving out in one direction. That, too, has been well-presented by Radical Cartography, among others.

This map from Radical Cartography shows per capita income in Atlanta. The wealthy (pink) favored quarter is clearly visible to the north.

But that is a relatively small issue. “The Changing Shape of American Cities” is an excellent approach to urban demographics, and the fact that the data are publicly available to play around with in an interactive display means there should be many, many more insights to come from Juday’s work.

Young People are Buying Fewer Cars

[Chart: new car purchases per 1,000 persons, by generation]

Will somebody teach the Atlantic and Bloomberg how to do long division?

In this post, we take down more breathless contrarian reporting about how Millennials are just as suburban and car-obsessed as previous generations. Following several stories that drew questionable inferences from flawed migration data to claim that Millennials are disproportionately choosing the suburbs (they're not), two articles have appeared in quick succession from Bloomberg and the Atlantic, purporting to show the Millennials' newfound love of automobiles.

Bloomberg wrote “Millennials Embrace Cars, Defying Predictions of Sales Implosion.” Hot on its heels came a piece from Derek Thompson at the Atlantic (alternately titled “The Great Millennial Car Comeback” and “Millenials not so cheap after all”) recanting an earlier column that predicted Millennials would be less likely than previous generations to own cars.

The Atlantic and Bloomberg stories are both based on new estimates of auto sales produced by JD Power and Associates. The data for this report are shown below. We also examined a press release JD Power issued last summer making broadly similar claims; we relied on it to better understand their methodology and definitions.

The headline finding is that in 2014, Millennials (the so-called Gen Y) bought about 3.7 million cars, while their older GenX peers bought only 3.3 million.  (We extracted these numbers from the charts in the Atlantic story).  Superficially, that seems to be evidence that Millennials are in fact buying more cars.

But there's a huge problem with this interpretation: there are way, way more people in the so-called "GenY" than there are in "GenX." Part of the reason is that the GenY group, also often called the "echo boom," was born in years when far more children were born in the US. The bigger, and less obvious, problem is the arbitrary and varying periods used to define "generations." According to the definitions used by JD Power, GenY includes people born from 1977 to 1994 (an 18-year cohort), while GenX includes those born between 1965 and 1976, just a 12-year cohort. As a result, these definitions put nearly 78 million people in GenY and about 49 million in GenX: there are nearly 29 million more GenYers than GenXers.* It is hardly surprising, and not at all meaningful, that this very much larger group buys about 10 percent more cars than the very much smaller group.

This is where long division comes in. Let's look at the rate of car buying on a per-person basis for each of these two groups. By normalizing the data to account for the different number of people in each group, we get a much more accurate picture of the behavioral differences of individuals in each group; this is dead simple, standard fare in statistical analysis. The 78 million GenYers bought about 3.7 million cars, or about 47.5 cars per 1,000 persons in the generation. Meanwhile, 49 million GenXers bought 3.3 million cars, or about 67.1 cars per 1,000. Rather than being just as likely or more likely than GenX to buy cars, the typical member of GenY is actually 29 percent less likely to buy a car than a member of the previous generation.

Characteristic               Gen Y        Gen X        Boomers
Birth Years                  1977-1994    1965-1976    1946-1964
Age in 2013                  19-36        37-48        49-67
Birth Years in Cohort        18           12           19
Persons, 2013                77,970,996   49,211,709   75,900,696
Cars Bought, 2014            3,700,000    3,300,000    5,100,000
Market Share                 27%          24%          38%
Cars Purchased per 1,000     47.5         67.1         67.2
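For the record, the whole normalization takes only a few lines of code. Here's a minimal sketch in Python, using the population and sales figures from the table above:

```python
# Cars purchased per 1,000 persons, by generation (figures from the table above).
population = {"GenY": 77_970_996, "GenX": 49_211_709, "Boomers": 75_900_696}
cars_bought = {"GenY": 3_700_000, "GenX": 3_300_000, "Boomers": 5_100_000}

for gen in population:
    rate = 1000 * cars_bought[gen] / population[gen]
    print(f"{gen}: {rate:.1f} cars per 1,000 persons")

# How much less likely is the typical GenYer to buy a car than the typical GenXer?
geny = cars_bought["GenY"] / population["GenY"]
genx = cars_bought["GenX"] / population["GenX"]
print(f"GenY is {1 - geny / genx:.0%} less likely to buy a car than GenX")
```

Running this reproduces the bottom row of the table (47.5, 67.1, and 67.2 cars per 1,000) and the 29 percent figure in the text.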

Once you go to the trouble of normalizing the sales data to reflect the very different sizes of these "generations," you get results that are pretty much exactly the opposite of what's claimed in both the Bloomberg and Atlantic stories. Today, Millennials are buying new cars at a rate far lower than older generations. That's consistent with other data showing that Millennials are less likely to get driver's licenses, and, when they do, drive fewer miles per year than previous generations.

To be fair, a really good answer to this question would require a bit more data sleuthing: because automobile purchasing patterns vary over a person's life cycle, you can't accurately gauge the generational change in buying habits by comparing the current-year buying habits of Millennials (average age, late 20s) with GenX (average age, early 40s). The more interesting question is whether the average 25-year-old Millennial is more or less likely to purchase a vehicle today than someone who was 25 in 2005, 1995, or 1985. Unfortunately, we don't have access to that data. However, if the folks at JD Power would be willing to dip into their considerable archives, we'd gladly do the computations.

No doubt this kind of story generates lots of clicks and tweets—witness the Natural Resources Defense Council's panicky "Uh-oh" retweet of this story. Clearly that is the coin of the realm in journalism these days, but it's just plain irresponsible to make an utterly phony claim based on data that hasn't been adjusted to reflect the sizes of the different groups in question. As Paul Krugman said in a simpler time, "don't be making claims that can be disproved with a copy of the statistical abstract and a pocket calculator." There's even less excuse for that today.

A couple of technical notes: our estimates of population by birth year are from the Census Bureau's "Annual Estimates of the Resident Population by Sex, Single Year of Age, Race, and Hispanic Origin for the United States: April 1, 2010 to July 1, 2013." The car sales data are from JD Power for 2014, as reflected in the charts shown in the Atlantic article and confirmed by data provided by JD Power. Our table above omits data for sales to "pre-boomers," which make up approximately 10 percent of car sales; this explains why the total market share doesn't add to 100%. We use the terms "GenY" and "Millennials" interchangeably in this post.

_____________

* – Towards the end of his article, Derek Thompson acknowledges the big discrepancy in the sizes of GenX and GenY, allowing that there are "15 to 20 million" more Millennials than GenXers. Not only is the actual difference almost 29 million, it raises the question of why Thompson didn't find the time to do the very basic long-division normalization that would have given a much more reasonable, and much different, answer to the question posed by his article.

Revised and Corrected April 23.

We've corrected and updated this post. Our original version had a math error which understated the number of persons in GenX: I inadvertently assigned those born in 1965 to the Baby Boom generation rather than GenX. The correct number of persons in GenX (born between 1965 and 1976) is 49.2 million, not the 44.8 million I originally reported. This changes the number of cars purchased per 1,000 persons by this generation from the 73.7 I originally reported to the correct 67.1, meaning that GenY was about 29% less likely than GenX to have purchased a car in 2014. We've revised the text to reflect these corrections. My apologies for this error.

Also, JD Power and Associates graciously provided the data that served as the basis for the Bloomberg story. It is shown below.

                         2010    2011    2012    2013    2014    2015 YTD
Percent of Retail Sales
  Y                      18%     21%     23%     25%     27%     28%
  X                      23%     24%     24%     24%     25%     24%
  Baby Boomer            43%     41%     40%     39%     37%     37%
  Pre Baby Boomer        16%     14%     13%     12%     11%     11%

Retail Sales (MM)
  Y                      1.7     2.2     2.7     3.2     3.7     0.9
  X                      2.1     2.5     2.9     3.1     3.3     0.8
  Baby Boomer            3.9     4.2     4.7     5.0     5.1     1.2
  Pre Baby Boomer        1.5     1.4     1.5     1.5     1.4     0.3

Our six month anniversary!

It’s spring in the city

On October 20 of last year, just six months ago, we launched City Observatory, a website and think tank devoted to data-driven analysis of cities and the policies that shape them. We are delighted to have participated in ongoing national discussions about a number of important policy issues facing cities. It’s been a whirlwind–and here’s what we’ve been up to:

To date, we’ve released three national reports.

The Young and Restless detailed the migration patterns of educated 25- to 34-year-olds to the close-in neighborhoods of the nation’s large metropolitan areas and compared how cities across the country were faring in attracting them.

Lost in Place tracked the persistence and spread of concentrated poverty, and showed how poverty—not gentrification—is our biggest urban challenge.

Surging City Center Job Growth showed how urban populations are growing faster than suburban ones, and how jobs are coming back to city centers along with them.

Over on our blog, we’ve been continuing to provide commentary about a variety of subjects, from biotech to McMansions to the looming threat that “Cappuccino Congestion” poses to the nation’s economic productivity. We’re also weighing in with our views on the important issues confronting the nation’s cities.  Learn why we think that, contrary to some assertions in the media, young adults are increasingly moving to the nation’s urban centers, and how some of the measures of gentrification are misleading and wrong.  And be sure to take a look at our latest post showing the close connection between segregation and the racial income gap.

We’re pleased with the reception that City Observatory’s work has gotten.  In addition to those who’ve visited our website, we’ve gotten terrific coverage in the media, including the New York Times, Washington Post, The Economist, and USA Today.

Our aim is to be open-source and data-driven, which is why you'll find all the detailed data behind each of our analyses freely available on our website: our data page provides data downloads and spells out our methodology. In addition, we've constructed a series of dashboards that let you check how your city is performing in attracting talented young workers, addressing concentrated poverty, and growing city center jobs.

This month, we welcome a new face to the City Observatory staff: Daniel Kay Hertz. You may already have come across Daniel’s insightful writing on his blog City Notes, or you may be already following him on Twitter, but feel free to hop on over to our blog and check out his contributions. We’re thrilled to have Daniel on board.

We're grateful to the John S. and James L. Knight Foundation for supporting our work, and we're especially grateful to those who follow and comment on the discussions here at City Observatory. Our work is only as good as the commentary and discussion we provoke. Please comment on our blog, connect with us on Twitter or Facebook, or just email us to tell us what you think. Your continued interest, thoughts, and feedback push the conversation forward and make our work worth doing.

More evidence of surging city job growth

In February, we released our latest CityReport Surging City Center Job Growth, presenting evidence showing employment growing faster in the city centers of the nation’s largest metros since 2007. Another set of analysts has, independent of our work, produced findings that point to renewed job growth in the nation’s inner city neighborhoods.

A new report issued by the Federal Reserve Bank of Cleveland, using similar data but different definitions, reaches many of the same conclusions. The analysis, prepared by Fed economist Daniel Hartley with Nikhil Kaza and T. William Lester of the University of North Carolina, is entitled Are America's Inner Cities Competitive? Evidence from the 2000s. The Fed study divides each metropolitan area into three parts: the central business district (CBD), a set of tracts that form the core of the commercial area in each metro's largest city; the inner city, tracts within a principal city but outside the CBD; and the suburbs, the remainder of the metro area.

Hartley, Kaza and Lester report that inner cities added 1.8 million jobs between 2002 and 2011. They also echo one of our key findings: that job growth in city centers was stronger in the post-recession period than it was earlier in the decade. In the aggregate, inner cities recorded job growth over the past decade (up 6.1% between 2002 and 2011) nearly as fast as the suburbs (6.9%), and since the end of the recession (i.e., 2009) they have recorded faster job growth (3.6%) than either suburbs (3.0%) or central business districts (2.6%).

To get a sense of how the geography of job growth has shifted over the past decade, it's useful to divide the data roughly in half, comparing growth trends in the 2002-07 period (the height of the housing bubble) with growth from 2007-11 (the collapse of the bubble, the impact of the Great Recession, and the first years of recovery). These were the time periods used in our Surging City Center Job Growth report, and we've recalculated the Fed data to make it directly comparable to our analysis. The chart below shows the data from the Fed report and computes the average annual growth rate of jobs for central business districts, inner cities, and suburbs for these two time periods.

These data show that in the earlier time period, suburbs were outperforming cities; inner cities were growing about half as fast as suburbs and CBD employment was actually declining.  From 2002 to 2007, the further you were from the center, the faster you grew.  This relationship reversed in the latter 2007-11 period.  Cities outperformed suburbs–suburbs saw a net decline in employment–and job growth was actually somewhat faster in the CBD than in inner cities.  Despite the recession, CBD job growth was much stronger in the 2007-11 period (+0.3%) than it was in the earlier 2002-07 period (-0.7%).  (Note that percentage figures in the following graph represent annualized growth rates.)
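The recalculation is simple compound-growth arithmetic. As a minimal sketch, assuming the Fed report gives cumulative growth over each period, converting a period total into an annualized rate looks like this:

```python
# Convert cumulative job growth over a multi-year period into an
# average annual (compound) growth rate.
def annualized(cumulative_growth: float, years: int) -> float:
    return (1 + cumulative_growth) ** (1 / years) - 1

# The 2002-2011 totals quoted above:
print(f"Inner cities: {annualized(0.061, 9):.2%} per year")  # ~0.66%
print(f"Suburbs:      {annualized(0.069, 9):.2%} per year")  # ~0.74%
```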

[Chart: annualized job growth rates for CBDs, inner cities, and suburbs, 2002-07 and 2007-11]

Both studies are based on geographically detailed employment data from the Census Bureau's Longitudinal Employer-Household Dynamics (LEHD) program, but there are some key differences between the Fed study and our recent City Observatory report. Our definition of "city center" included all businesses within three miles of the center of the central business district. And the new Fed study reports data for 281 US metropolitan areas, while our report looked at 41 of the largest metropolitan areas.

The authors conclude that while it is too soon to term this an urban renaissance, it's a noticeable change from the long-term trend of employment decentralization. Though not universal, the pattern of strong inner city growth is widespread, with two-fifths of metros (120 out of 281) recording gains in both overall employment and the inner city's share of employment. The traditional decentralizing pattern of employment still holds for some metropolitan areas, like Houston and Dallas, but inner cities are flourishing in some unlikely places, like heavily suburbanized Los Angeles and San Antonio.

As we did in our report, the authors of the Federal Reserve study examine the industrial dimensions of job change. Manufacturing jobs continue to suburbanize, while inner cities have been relatively more competitive for jobs in "eds and meds" (education services and health care). The authors also identify a key role for the consumer city and population-led theories of urban growth. Within inner cities, job growth is positively associated with transit access and distance to the CBD, and seems to be driven more by population-serving businesses (like restaurants) than by businesses dependent on infrastructure (manufacturing and distribution).

The full report has many more details, and identifies the metros with competitive inner cities (i.e. those places where inner city areas gained share of total metro employment between 2002 and 2011).

We're expecting to get data for 2012 and 2013, to be able to judge whether these trends persisted as the US economy continued to recover. If you're keenly interested in urban economies, as we are, you'll be eagerly awaiting the new numbers. In the meantime, the Cleveland Fed study is a "must read."

Hartley, Daniel A., Nikhil Kaza, and T. William Lester, 2015. "Are America's Inner Cities Competitive? Evidence from the 2000s," Federal Reserve Bank of Cleveland, Working Paper No. 15-03. https://www.clevelandfed.org/en/Newsroom%20and%20Events/Publications/Working%20Papers/2015%20Working%20Papers/WP%2015-03%20Are%20Americas-Inner-Cities-Competitive-Evidence-from-the-2000s.aspx

 

City Observatory Welcomes Daniel Kay Hertz

We're delighted to announce that Daniel Kay Hertz is joining City Observatory as our new Senior Fellow. It's likely that if you've been following discussions on a wide range of urban issues in the past year or so, you've become familiar with his views from his own blog City Notes, from a range of social media forums, and even from the editorial pages of the Washington Post. Daniel is finishing up his graduate studies at the University of Chicago's Harris School of Public Policy, but is already contributing to City Observatory. Let me pass the microphone to Daniel so he can tell you why he's here.

There’s a joke that became famous – in certain circles, at least – because David Foster Wallace included it in his widely-read commencement address at Kenyon College a decade ago. A simple version of the joke goes: There are two fish. One fish says to the other fish: “Hey, how’s the water?” The second fish says: “What the hell is water?”

The point, of course, is that most of our environment – the systems, rules, and physical objects that determine the shape of our lives – is, from our perspective, so all-encompassing as to be invisible.

For most Americans, cities (which I’m using here in the “built-up area” sense, including suburbs) are our water. The kinds of homes and streets that predominate in our neighborhoods determine whether we walk, ride a bus, or drive to work – which, in turn, determines whether our monthly transportation budget is $25, $100, or $500. The geography of our city’s social networks determines who our friends and neighbors are, what kind of job openings we’ll hear about, and where we would consider moving. The economic relationship of our neighborhoods to the rest of the metropolitan area – and to the rest of the country, and the world – determines what kinds of jobs are available to us, how much they will pay, and how long our commutes will be. Our cities’ political architecture determines, to a large extent, the quality of our children’s schools, how much we trust the police, and whether we bother to vote.

Of course, it’s not news to most people that where you live matters. But the particular ways in which cities and neighborhoods create opportunity – or, conversely, reproduce inequality – remain mostly vague and up for debate. How much, and in what directions, we can or should use public policy and private initiative to push them towards opportunity and equality – and for whom – is even less settled.

The beauty of cities, for me, is that they contain so much. In a meaningful way, they are life for most people in 21st century America. (I’ll leave it to some hiker in Denver to tell me about all that I’m missing beyond the last subdivision.) Friendship, history, the arts, the thrilling awe of modern feats of engineering and the comfort of the familiar in your home: all of these are, for most people, particularly urban phenomena.

But what makes an organization like City Observatory so necessary right now is the urgency of hashing out what role cities play – and what role they should play – in a country facing profound civic and economic challenges. The abstract racial and class fault lines that go a long way towards defining our lives don’t just physically rearrange our neighborhoods; our neighborhoods can also rearrange them, for good or bad. A changing climate has important implications for how we live in cities, but the reverse is also true.

Which is why I’m very excited to be a small part of the conversation towards solutions on both of those fronts, and more, here at City Observatory. If you’re interested, some of my previous writing – and a little more about my previous experiences – is at my personal website. Soon I’ll have more up here, though, and I’m looking forward to it.

More evidence on city center job growth

In February, we released our latest CityReport documenting a remarkable turnaround in the pattern of job growth within metropolitan areas. After decades of steady job decentralization, the period 2007-2011 marked the first time that city centers in the nation's largest metropolitan areas recorded faster job growth than their surrounding peripheries. Much of that rebound seemed to be associated with the movement of talented young workers back to cities, and with the industrial composition of urban growth, as high-skilled service and software firms chose urban locations.

Perhaps nowhere is this trend more in evidence than in San Francisco. The city is in the midst of a boom in both population and employment.

All the controversy about the "Google Bus" and other corporate shuttles that ferry San Francisco residents to jobs in Silicon Valley, an hour or so to the south, misses the burgeoning growth of high tech firms in the city itself. The growing desire of young, well-educated workers to live in cities is making a central city location much more advantageous for tech firms, relative to the traditional Silicon Valley office parks, than in decades past. As a result, in the past several years, technology firms have increasingly started, expanded, or relocated in San Francisco.

A recent report from the City of San Francisco’s Planning Office chronicles the growth of tech jobs–in software, telecommunications, information services and related sectors of the economy– in San Francisco.  Over just the past four years, employment in the city’s tech sector has increased about 90 percent, from 19,700 jobs in 2009 to 37,600 in 2013.
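That "about 90 percent" figure is easy to verify, and annualizing it gives a sense of the pace; a quick sketch using the figures quoted above:

```python
# San Francisco tech employment, from the Planning Office report cited above.
start, end, years = 19_700, 37_600, 4  # 2009 -> 2013

total_growth = end / start - 1
annual_rate = (end / start) ** (1 / years) - 1
print(f"Total growth: {total_growth:.0%}")   # ~91%
print(f"Annualized:   {annual_rate:.1%}")    # ~17.5% per year
```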

[Chart: tech sector employment in San Francisco, 2009-2013]

The tech industry’s growth has been highly concentrated in the city’s fast-changing South of Market area.   CM Commercial Real Estate has mapped the significant leasing deals by tech firms over the past three years.  You can see the size and timing of these developments on their animated map (click on the image below to visit their website).

[Animated map: significant tech leasing deals in San Francisco over the past three years]

Data from CM Commercial Real Estate

The concentration of talented workers in San Francisco and the tight clustering of tech firms is a reminder of the real power of agglomeration effects in our knowledge-based economy. Urban amenities attract and retain well-educated workers with choices, and that talent base in turn leads firms to gain economic advantage by locating nearby. San Francisco is now at a point where these two trends are mutually reinforcing: the base of talent attracts more firms; the abundance of employment opportunities attracts more workers. The key limiting factor going forward is the supply of housing in San Francisco. As we've argued before, what this really illustrates is our shortage of great urban spaces. As more Americans seek urban living, and as the firms that need to employ talented workers cluster nearby, the demand for housing in cities surges; unless housing supply keeps pace, rising prices and affordability problems will likely worsen.

Want to close the Black/White Income Gap? Work to Reduce Segregation.

 

Nationally, the average black household has an income 42 percent lower than the average white household. But that figure masks huge differences from one metropolitan area to another. And though any number of factors may influence the size of a place's racial income gap, just one of them, residential segregation, allows you to predict as much as 60 percent of the variation in the income gap from city to city. Although income gaps between whites and blacks are large and persistent across the country, they are much smaller in more integrated metropolitan areas and larger in more segregated ones. The strength of this relationship suggests that reducing the income gap will likely require reducing racial segregation.

To get a picture of this relationship, we’ve assembled data on segregation and the black/white earnings gap for the largest U.S. metropolitan areas. The following chart shows the relationship between the black/white earnings disparity (on the vertical axis), and the degree of black/white segregation (on the horizontal axis).   Here, segregation is measured with something called the dissimilarity index, which essentially measures what percent of each group would have to move to create a completely integrated region. (Higher numbers therefore indicate more segregated places.) To measure the black-white income gap, we first calculated per capita black income as a percentage of per capita white income, and then took the difference from 100. (A metropolitan area where black income was 100% of white income would have no racial income gap, and would receive a score of zero; a metro area where black income was 90% of white income would receive a score of 10.)
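For concreteness, here's a minimal sketch of how both measures are computed. The numbers are hypothetical, standing in for the actual tract and income data:

```python
# Black/white dissimilarity index: the share of either group that would
# have to move to produce a completely integrated region (0-100 scale).
def dissimilarity(black_by_tract, white_by_tract):
    B, W = sum(black_by_tract), sum(white_by_tract)
    return 50 * sum(abs(b / B - w / W)
                    for b, w in zip(black_by_tract, white_by_tract))

# Income gap score: 0 = parity; 10 = black income is 90% of white income.
def income_gap(black_per_capita, white_per_capita):
    return 100 - 100 * black_per_capita / white_per_capita

# A hypothetical three-tract metro:
print(f"{dissimilarity([800, 150, 50], [100, 500, 400]):.1f}")  # 70.0
print(f"{income_gap(22_000, 31_000):.1f}")                      # ~29.0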

The positive slope of the line indicates that as segregation increases, black incomes fall relative to white incomes, and the racial income gap grows. On average, each five-percentage-point decline in the dissimilarity index is associated with a three-percentage-point decline in the racial income gap. (The r-squared for this relationship is .59, suggesting a close relationship between relative income and segregation.)
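The underlying fit is an ordinary least-squares regression of the income-gap score on the dissimilarity index. A minimal sketch, again with hypothetical metro-level observations standing in for our actual data file:

```python
import numpy as np

# Hypothetical metro-level observations (dissimilarity, income gap score).
dissimilarity = np.array([41, 55, 60, 64, 65, 80])
income_gap = np.array([17, 22, 28, 28, 29, 40])

slope, intercept = np.polyfit(dissimilarity, income_gap, 1)
r = np.corrcoef(dissimilarity, income_gap)[0, 1]
print(f"slope: {slope:.2f} gap points per dissimilarity point")
print(f"r-squared: {r ** 2:.2f}")
```

With data like these, the slope comes out near 0.6, which is the roughly three-points-per-five-points relationship described above.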

What’s less clear is which way the causality goes, or in what proportions. That is to say: there are good reasons to believe that high levels of segregation impair the relative economic opportunities available to black Americans. Segregation may have the effect of limiting an individual’s social networks, lowering the quality of public services, decreasing access to good schools, and increasing risk of exposure to crime, all of which may limit or reduce economic success.  This is especially true in neighborhoods of concentrated poverty, which tend to be disproportionately neighborhoods of color.

But there are also good reasons to believe that in places where black residents have relatively fewer economic opportunities, they will end up more segregated than in places where there are more opportunities. Relatively less income means less buying power when it comes to real estate, and less access to the wealthier neighborhoods that, in a metropolitan area with a large racial income gap, will be disproportionately white. A large difference between white and black earnings may also suggest related problems – like a particularly hostile white population – that would also lead to more segregation.

The data shown here are consistent with both earlier and more recent research on the negative effects of segregation. Cutler and Glaeser found that higher levels of segregation were correlated with worse economic outcomes for blacks. Likewise, racial and income segregation was one of several factors that Raj Chetty and his colleagues found to be strongly correlated with lower levels of inter-generational economic mobility at the metropolitan level.

Implications

To get a sense of how this relationship plays out in particular places, consider the difference between two Southern metropolitan areas: Birmingham and Raleigh. Birmingham is more segregated (dissimilarity of 65) than Raleigh (dissimilarity of 41). The black-white income gap is significantly smaller in Raleigh (blacks earn 17 percent less than whites) than it is in Birmingham (blacks earn 29 percent less than whites).

The size and strength of this relationship point up the high stakes in continuing to make progress in reducing segregation as a means of reducing the racial income gap. If Detroit had the same level of segregation as the typical large metro (a dissimilarity index of 60, instead of 80), you would expect its racial income gap to be 12 percentage points smaller, which translates to $3,000 more in annual income for the average black resident.
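That back-of-the-envelope prediction just applies the slope from the chart; a sketch, assuming the roughly three-points-per-five-points relationship described above:

```python
# Predicted change in the racial income gap from a change in segregation,
# using the ~0.6 slope (3 gap points per 5 dissimilarity points) from the fit.
SLOPE = 3 / 5

def predicted_gap_reduction(current_dissimilarity, target_dissimilarity):
    return SLOPE * (current_dissimilarity - target_dissimilarity)

print(predicted_gap_reduction(80, 60))  # Detroit scenario: 12.0 points
```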

The data presented here and the other research cited are a strong reminder that if we're going to address the persistent racial gap in income, we'll most likely need to make further progress in reducing racial segregation in the nation's cities.

The correlations shown here don't dispose of the question of causality: this cross-sectional evidence doesn't prove that segregation causes a higher black-white income gap. It is entirely possible that the reverse is true: that places with smaller income gaps between blacks and whites have less segregation, in part because higher relative incomes for blacks afford them greater choices in metropolitan housing markets. It may be that causation runs in both directions. In the US, there are few examples of places that have stayed segregated and managed to close the income gap, and few places that have closed the income gap without experiencing dramatically lower levels of segregation. Increased racial integration appears to be at least a corollary, if not a cause, of reduced levels of income disparity between blacks and whites in US metropolitan areas.

If we’re concerned about the impacts of gentrification on the well-being of the nation’s African American population, we should recognize that anything that promotes greater racial integration in metropolitan areas is likely to be associated with a reduction in the black-white income gap; and conversely, maintaining segregation is likely to be an obstacle to diminishing this gap.

Though provocative, these data don't control for a host of other factors that we know are likely to influence the economic outcomes of individuals, including the local industrial base and educational attainment. It would be helpful to have a regression analysis that estimated the relationship between the black-white earnings gap and education: the smaller racial income gap in less segregated cities may be attributable to higher rates of black educational attainment in those cities. Similarly, the industry mix in Raleigh may produce smaller racial disparities in pay and employment than the mix of industries in Birmingham. But even industry mix may be influenced by the segregation patterns of cities; firms with more equitable practices may gravitate towards, or grow more rapidly in, communities with lower levels of segregation.

Brief Background on Racial Income Gaps and Segregation

Two enduring hallmarks of race in America are racial segregation and a persistent gap between the incomes of whites and blacks. In 2011, median household income for white, non-Hispanic households was $55,412; for black households, $32,366 (Census Bureau, Income, Poverty, and Health Insurance Coverage in the United States: 2011, Table A-1). For households, that is a racial income gap of 42 percent. Census Bureau data show that, on average, black men have per capita incomes about 64 percent those of non-Hispanic white men. This gap has narrowed only slightly over the past four decades: in the early 1980s, the income of black men was about 59 percent that of non-Hispanic white men.

Because the advantage of whites' higher annual incomes compounds over time, racial wealth disparities are even greater than disparities in earnings. Lifetime earnings for African-Americans are about 25 percent less than for similarly aged non-Hispanic white Americans. The Urban Institute estimated that the net present value of lifetime earnings for a non-Hispanic white person born in the late 1940s would be about $2 million, compared to just $1.5 million for an African-American born the same year.

In the past half century, segregation has declined significantly. Nationally, the black/non-black dissimilarity index has fallen from an all-time high of 80 in 1970 to 55 in 2010, according to Glaeser and Vigdor. The number of all-white census tracts has declined from one in five to one in 427. Since 1960, the share of African-Americans living in majority-non-black areas has increased from less than 30 percent to almost 60 percent. Still, as our chart shows, there are wide variations among metropolitan areas, many of which remain highly segregated.

Technical Notes

We measure the racial income gap by comparing the per capita income of blacks in each metropolitan area with the per capita income of whites in that same metropolitan area.  These data are from Brown University’s US 2010 project, and have been compiled from the 2005-09 American Community Survey.  The Brown researchers compiled this data separately for the metropolitan divisions that make up several large metropolitan areas (New York, Chicago, Miami, Philadelphia, San Francisco, Seattle, Dallas and others).  For these tabulations we report the segregation and racial income gaps reported for the most populous metropolitan division in each metropolitan area.

Travis County, TX is booming. Cook County, IL is shrinking. What does that tell us about cities? Not much.

For the last few years, counties at the center of their metropolitan areas have been growing faster than those at the edge. But late last month, the Washington Post‘s Emily Badger – citing analysis by demographer William Frey at the Brookings Institution – reported that the Census’ latest population estimates show that in 2014, the country returned to its pre-recession norm of faster growth in the exurbs.

This reversal, Badger and Frey argue, sheds some light on Americans’ housing preferences. In particular, it suggests that part of the much-heralded “return to the city” may just have been people delaying their moves to the suburbs while they held out for better economic conditions.

Badger is one of the best writers on urban affairs working today, and she takes pains to emphasize how limited and provisional that conclusion is. But I would take that one step further: even if 2014 is a return to the “old normal,” this particular data tells us very little about national housing preferences. Moreover, it necessarily misses important strengths and important weaknesses in urban cores.

The problem is that county-level data, on its own, is just not good at telling us anything about what kinds of neighborhoods Americans want. It’s as if the answer to the question we’ve asked has passed through a long game of telephone: by the time it gets to you, you have no idea whether what you’re hearing has anything to do with what you actually want to know.

There are at least two big reasons for this. The first is that counties at the center of their metropolitan areas aren’t a good proxy for “urban cores,” in the sense that Frey seems to mean. If Americans’ housing preferences have shifted towards the urban, then presumably that means not just neighborhoods that are closer to downtown: it means walkable streets, decent access to transit, perhaps a mix of housing stock that includes more apartments, and so on. But in the vast majority of American cities – and Brookings’ analysis includes all metropolitan areas with more than half a million people – those kinds of neighborhoods represent a minority of even the innermost county.

Take, for example, Travis County, Texas, which sits at the heart of the five-county Austin metropolitan area. Travis County's neighborhoods range from newly-minted skyscrapers within walking distance of the state capitol, to low-rise but recognizably urban communities like East Austin, to prototypical car-oriented suburbs like Pflugerville, to farmland. Travis County happens to be growing substantially – 26% in the last decennial Census – but without more detailed information, it's impossible to know whether that's because people are flocking to its urban core, or because places like Pflugerville are booming.

Downtown Austin:

[Photo: downtown Austin]

Pflugerville:

[Photo: a subdivision in Pflugerville]

Even in many of the largest, densest cities – ones that are definitely experiencing a continued boom in their urban core – using county-level data can be highly misleading. Chicago, for example, led the nation in the 2000s in population growth within two miles of its city hall, and has seen some of the most rapid gentrification of its urban neighborhoods of any city in the country. (It also showed up as one of the cities attracting the most “young and restless” – 25-to-34-year-olds with a four-year college degree – within three miles of downtown in our report late last year.) But Cook County, which contains all of the city of Chicago as well as its inner-ring suburbs, lost more than 3% of its population in the last Census. Again, county-level data completely erases the real growth in more recognizably “urban” neighborhoods.

The second reason is that a growing demand for city living might show up in two different numbers: one is population, but the other is housing prices. After all, in order for people to move to a city, there has to be somewhere for them to live. If the city doesn't allow new housing to be built – or allows less than what people would like to buy – then population might flatline, but prices will skyrocket. Badger mentions this possibility towards the end of the piece, but it's a much more serious problem for the data than is portrayed. It's well-established that virtually every economically healthy city in the country permits less housing to be built than would be built without zoning restrictions. That can lead to population stagnation, or even loss, in places that are clearly in extremely high demand: in the 2000s, Brooklyn's Park Slope saw its population increase by just one half of one percent – despite being one of the most desirable neighborhoods in one of the most desirable cities in the country. At the same time, according to the real estate website Trulia, the median home sales price in Park Slope rose by over 50%, or nearly $200,000.

And of course, rents and home prices are rising quite rapidly in many other urban neighborhoods, too. Moreover, as we’ve pointed out before at City Observatory, in some places, there’s evidence that prices are rising faster in central cities than in their suburbs. Taken with what we know about the restrictive power of local zoning codes, that’s overwhelming evidence that a large amount of the demand to live in urban cores translates not into population shifts, but higher housing prices.

The point here is not that, in a correct reading of the data, cities are “winning” in some simple sense. In fact, county-level data can hide some of the problems of characteristically urban neighborhoods, too. Many central counties contain lots of inner-ring suburbs, or outlying city neighborhoods, that may not be as dense as those right outside downtown, but still have urban-type grids, walkable retail districts, and much higher density than the younger suburbs further out. A generation or two ago, these places were bastions of the middle class; today, many are struggling, stuck between the newer, larger homes of the exurbs and the more dynamic, trendy urbanism of the core. By blending data from those areas with data from healthier city centers, we end up missing both.

Finally, county-level data ends up putting our focus on the few very recent years – 2011 to 2013 – when core-county population growth exceeded exurb-county growth. But that's a very misleading portrayal of the timeline of urban revival. Signs of growing interest in living in the urban core were evident in places like New York City, San Francisco, and Chicago in the 1970s. But the work done on urban economic geography by Sean Reardon and Kendra Bischoff shows that even beyond the usual suspects – in cities like Denver, Seattle, or Dallas – there were signs that people with options were beginning to move back to urban neighborhoods decades ago.

These maps, from the Stanford Center on Poverty and Inequality, show relatively wealthy neighborhoods in green and poorer neighborhoods in purple. The growing number of high-income households choosing to live in the center of Dallas is clear as early as 1980-1990.

 

And, for all the reasons outlined here, county population numbers can’t tell us whether those decades-long trends are slowing or picking up steam, or whether more or fewer people actually want to live in “urban” neighborhoods. The answer to that question is very difficult, and mixed up in the interpretation of any number of demographic and economic indicators – including (but not necessarily limited to) home prices, new construction, and population on a much more geographically detailed level. But if we really want to know what most people want in a neighborhood, that’s where we have to look.

Walkability rankings: One step forward, one step back

To begin, let's be clear about one thing: we're huge fans of Walk Score, the free Internet-based service that rates every residential address in the United States (and a growing list of other countries) on a scale of 0 to 100, based on its proximity to a series of common destinations. The concept and implementation of Walk Score are brilliant, transparent, and well-documented: not only can you see the score for your house or any other, Walk Score shows you which destinations were used in calculating that score. And did we mention, it's free.

The power of Walk Score is its market-moving value:  Americans are increasingly looking to live in vibrant, walkable communities, and Walk Score gives home buyers (and now apartment renters) a clear and simple tool for assessing the relative merits of different locations. (Which is undoubtedly why it was acquired by real estate website Redfin.com last year).  While there’s a lot more to walkability than just proximity to destinations–urban design, the quality of the built environment and pedestrian infrastructure matter too–Walk Score has substantially advanced the conversation about how to measure and make walkable places.  To their credit, the team at Walk Score has responded to criticism and continued to refine and extend their product, incorporating a Street Smart algorithm to track the street grid rather than relying on straight line distances, and adding measures for transit and bike access.  All this is exciting and useful: We think that giving consumers better information about their choices is pretty much an unalloyed public good.

And–full disclosure–in 2009 we got the cooperation of the team at Walk Score to provide data for a research project looking at the connection between walkability and real estate values.  Our research–done independently from Walk Score–showed that in 14 of 15 cities that we examined, walkability was positively correlated with home values, even after controlling for a host of other observable factors (like neighborhood income, numbers of bedrooms and bathrooms, home size, distance to jobs) that we know influence home values.  You can read the study “Walking the Walk” here.

Yesterday, Walk Score released its latest analysis rating the walkability of major US cities. Scores are produced by averaging the walk scores for different parts of the city, weighted by population. According to Walk Score, New York is the most walkable large city in the U.S., with an average walk score of 87.6, followed by San Francisco (83.9) and Boston (79.5). The complete rankings are here.
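The averaging itself is straightforward population weighting; a minimal sketch with hypothetical numbers (this is our illustration of the idea, not Walk Score's actual code):

```python
# City-wide score: the population-weighted average of area walk scores.
def city_walk_score(areas):
    """areas: list of (walk_score, population) pairs for one city."""
    total_pop = sum(pop for _, pop in areas)
    return sum(score * pop for score, pop in areas) / total_pop

# A hypothetical city of three areas:
print(city_walk_score([(95, 120_000), (70, 300_000), (40, 80_000)]))  # 71.2
```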

Because they've been gathering data for a number of years, Walk Score is now in a position to report the change in walkability at the city level. This should be a key indicator for mayors, planners, and citizens. Becoming more walkable is likely to be a proxy for an improving local economy, and suggests a city is becoming more accessible to its residents. Walk Score reports that several cities have notably better walkability than they did a few years ago: Miami's city-wide Walk Score increased by more than 3 points, Detroit saw an increase of 2.2 points, and New Orleans recorded an increase of 0.7 points. These particular results are a bit muddled by some changes to the Walk Score algorithm since 2011, but going forward, this promises to be an important tool for tracking progress at the city and neighborhood level. Kudos to Redfin and Walk Score for making this information available.

But enamored as we are of Walk Score, we’re compelled to point out one glaring flaw in their rankings: the use of municipal geographies to compute scores for ranking purposes.  Their methodology looks at the average level of walkability only for addresses located within the city limits of each city.  Because municipal boundaries are so varied from place to place, municipalities are a poor unit for comparison, particularly for this kind of spatial data.  Using municipal boundaries for comparative work inevitably ends up comparing apples to acorns, and produces rankings that are at best misleading, and at worst, arguably wrong.

Chicago and Miami provide a case in point. According to the Walk Score ranking, Miami is more walkable than Chicago: Miami's city-wide walk score is 75.6, edging out Chicago's 74.8, a finding that immediately struck our colleagues who have lived in the two cities as counter-intuitive, to put it mildly. But the problem isn't a flaw in Walk Score; it's that these two municipalities represent wildly different chunks of their respective metropolitan areas. The City of Miami encompasses only a small portion of the Miami-Ft. Lauderdale metropolitan area (the densest parts of downtown Miami and close-in urban neighborhoods); Chicago covers a much larger swath. The City of Miami is home to just the most densely housed 400,000 people in South Florida, while the City of Chicago is home to 2.7 million. There's little doubt that if we measured the walkability of the Chicago neighborhoods that were home to that region's 400,000 or so most densely housed residents, we'd find a much higher Walk Score.

It turns out that metropolitan areas are a much more sensible basis for making comparisons and presenting rankings. While municipal units may be a valid geography for some comparisons (related, say, to elections or public finance), they can easily be misleading or wrong for comparisons that involve economics and geography. Look for a future CityCommentary digging deeper into this problem, and outlining how to avoid it.

In the meantime, here's an unsolicited suggestion for the team at Walk Score: can you use your database to create a count of the number of persons living in homes and apartments with a Walk Score of 80 or higher ("very walkable" and "walker's paradise") in each metro area? This would be a much more compelling indicator of how metros stack up as walkable places than a single average score for a city, or for a metro.
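The computation we're asking for is simple once you have the address-level data; a minimal sketch of the proposed indicator, with made-up numbers:

```python
# Count metro residents living at addresses with a Walk Score of 80 or above.
def residents_in_walkable_areas(areas, threshold=80):
    """areas: list of (walk_score, population) pairs for one metro."""
    return sum(pop for score, pop in areas if score >= threshold)

metro = [(92, 150_000), (85, 220_000), (74, 500_000), (55, 900_000)]
print(residents_in_walkable_areas(metro))  # 370000
```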

So, in the end, it's one step forward (another year's worth of data and the promise of tracking changes in walkability over time) and one step back (using municipal boundaries for comparisons). This last glitch is easily fixed, and knowing the team at Walk Score, it certainly will be. In the meantime, their excellent and informative Walk Score data for individual properties is performing a vital public service and helping move markets.

Should your city build a headquarters hotel?

Around the nation, tourism officials are pushing the construction of publicly subsidized “headquarters” hotels to help fill publicly subsidized convention centers. One person who has tracked this industry carefully is University of Texas at San Antonio professor Heywood Sanders, author of the recent book, Convention Center Follies. In this commentary for City Observatory, Woody shares his insights on the lofty expectations and less than optimal outcomes that plague many of these economic development schemes.

by Heywood Sanders

Des Moines wants a convention center hotel. Arguing that its "convention space is not able to live up to its potential due to the lack of an attached hotel," Des Moines and Polk County officials are seeking a $39 million grant from the state to partially finance a $101 million, 330-room hotel. Des Moines officials are certain, based on a consultant study, that a new hotel will produce a boom in local convention business. In Sioux City, they want a convention center hotel too, one with 150 rooms, as part of a planned entertainment and retail district downtown. Armed with a 173-page consultant study, Sioux City wants a piece of the same state grant funds. In Muscatine, city officials are seeking state grant dollars for a 112-room hotel and conference center in the heart of their downtown.

Savannah, Georgia wants a convention center hotel too. They have a consultant study that proves it will bring an increase in convention business. Atlanta wants a really big one, circa 800 to 1,200 rooms. Ft. Lauderdale wants one, and so do Irving, Texas; Pittsburgh; Portland; Tacoma; and Salt Lake City. With a host of cities around the country, large and small, struggling to fill their convention centers in an overbuilt market, local officials have been told the same tale: in order to compete as a convention destination, in order to make your convention center perform, you need a big "headquarters hotel" next door. As one consultant recently put it, "Convention centers without one need one. Those without enough [hotel rooms] are planning more."

These cities would join a long list of other communities, from Baltimore and Washington, to Columbus and Cleveland, to Omaha and Denver, that have received the same consultant advice and have pursued their own efforts to develop a major new hotel. And when attempts at securing private investment in a convention center hotel have failed, these communities have gone into the hotel business themselves, financing a new hotel with tax-exempt municipal bonds usually backed by tax revenues. The promise from the expert consultants is that a new hotel will boost the city's convention business, neatly filling the new hotel rooms and making the project a sure bet as a public investment. Then there is the story of Phoenix.

The new, 1,000-room Phoenix Sheraton, financed by the city and backed by a city tax revenue stream, was supposed to be neatly filled by the more than 220,000 additional convention attendees brought to Phoenix each year by its $600 million convention center expansion. By 2014, the Sheraton was forecast to hit an occupancy rate of 69 percent, at an average daily rate of just under $200. Things didn't quite work out that way. Last year the hotel managed occupancy of just 57.5 percent, at a rate of $146.93. The hotel's bottom line is even grimmer. The new Sheraton was forecast to generate net revenues in 2014 of over $29 million, more than enough to cover the $22.2 million in debt service. But the hotel actually produced only $11.9 million in net revenues, forcing the city to use more than $12 million in other tax dollars to support the hotel.

Things in Phoenix went wrong for a series of reasons, including the Great Recession. But perhaps the most central (and pervasive) problem for Phoenix lay in a series of overly optimistic consultant studies on the performance of the expanded convention center and adjacent hotel.

A 1999 study by PricewaterhouseCoopers forecast that a bigger convention venue would boost annual convention attendance by 85 percent, giving Phoenix the ability to host two large conventions at one time. A March 2003 analysis by Ernst & Young promised that the bigger center would see convention attendance grow from the then-current 125,000 to a forecast 376,861 each year. Hotel consulting firm HVS then took that forecast of roughly 375,000 annual attendees to project 289,282 new hotel room nights in the downtown Phoenix hotel market annually. That new room demand would be more than enough to fill up the proposed 1,000-room Sheraton, and spill over to other downtown hotels.

Unfortunately, things did not work out quite as the consultants had assumed. The expanded Phoenix Convention Center held its grand opening in December 2008, two months after the new Sheraton opened for business. Just two years later, in December 2010, the Moody's bond rating agency downgraded the hotel bonds. The early portents for the center's performance had been good: it hosted the NBA All-Star Game's Jam Session in February, followed by the National Rifle Association's national convention in May. But for fiscal year 2009 (ending June 30), the center managed to attract 284,586 convention attendees, decidedly fewer than the forecast 375,000, though a sizable increase nonetheless. It would stand as a high water mark.

The center reported convention attendance of 229,097 in fiscal 2010, and 156,126 the next year. For 2013, the reported attendance was 165,370, with an estimated 173,000 in fiscal 2014. At a cost of $600 million, the expanded Phoenix Convention Center was producing about the same attendance levels Ernst & Young had reported for its predecessor in 1996 and 1997.

Without the boom in convention attendees to fill the Sheraton, the hotel has struggled, with occupancy rates in the 50 percent range since 2010. Even as the impact of the Great Recession has ebbed since 2009, the hotel’s average daily rate has not budged at all, with the 2014 rate of $146.93 still below the $163.90 it managed in its first year.

The Phoenix tale is not unique. A great many publicly financed hotels, including those in Baltimore, Myrtle Beach, Overland Park, KS, and Omaha, have seen faltering performance and downgrades by bond rating agencies. One hotel, the St. Louis Renaissance Grand, failed so spectacularly that it went into default and was ultimately sold for a fraction of its development cost, at a substantial loss to the bondholders.

The notion, whether in Des Moines or Pittsburgh or Boston, that a big new hotel is some kind of magic elixir to make a city more successful in the enormously competitive convention business, is based far more on hope than reality. Even with a consultant study.