Imagine that it's 2100, and your car sputters out in the middle of the Henry Hudson Bridge. You've reached the invisible border that separates Manhattan from the Bronx, a line enforced not by walls or armed guards but by order of an unseen digital authority. This techno-dystopian scenario is everyday life for many Brazilians, for whom Big Brother Uber denies travel to the kinds of places it has decreed are no good. This digital discrimination, tantamount to redlining, reinforces the deep rifts in Brazilian society, keeping the poor poor and the rich rich.
The rideshare giant has made news in the past for its patriarchal culture and lack of diversity in the workplace; Susan Fowler's shocking account of harassment and misogynistic behavior led to the 2017 ouster of founder and CEO Travis Kalanick. While new leaders have taken steps to improve Uber's culture and its reputation, the company's so-called "safety" policy in Brazil shows that the homogeneity of "toxic bro-culture" has pernicious consequences that extend far beyond Silicon Valley.
Now, Uber has been denying low-income Brazilians rideshare service by classifying favelas and other peripheral areas of Brazilian cities as "zones of risk." The classification is meant to protect riders and drivers from violent crime, after incidents in which drivers were shot at or even killed while on duty. The problem? Uber's zones of risk are defined by opaque, proprietary algorithms that sometimes black out poor neighborhoods for years on end, reducing mobility for populations that already have limited access to transportation and deepening the divides between rich and poor neighborhoods. In other words, Uber's algorithmic biases are driving inequality and classism in an already deeply unequal country.
Indeed, Uber's actions seem to belie its mission "to bring transportation—for everyone, everywhere." Yet as with many examples of what is sometimes called algorithmic bias, it is possible that Uber's programmers aren't even aware of the vast social repercussions of their software. The power to control where and how people travel, and where they are allowed to travel at all, gives Uber enormous influence over how the city is organized, and casts Uber executives in the role of ad hoc public security officials and urban planners. The variegated landscapes of risk and inequality in Brazilian cities underline the complexity of this role, and undercut the notion that algorithms are tidy solutions to thorny social issues.
In Brazil, most cities are home to several favelas, defined by the Brazilian government as "subnormal agglomerations," a term associated with low standards of living and a lack of access to services such as running water and paved roads. In reality, no two favelas are the same: living conditions vary greatly within and between favelas. Some communities, such as Rio de Janeiro's Rocinha, have existed for over 100 years and are officially recognized as neighborhoods by municipal governments, while others are more recently established and lack basic infrastructure.
Regardless of what life is like on the ground, favelas are almost universally identified with poverty and violence among upper-class Brazilians, owing to their informal history and association with militias and drug trafficking. Some scholars argue that favelas are united by cultural stigma rather than by measurable social, economic, or infrastructural characteristics, much as certain American neighborhoods are derided as "ghettos" even when no distinct, measurable traits mark them as such.
Favelas present problems for rideshare companies because they can sit in relatively central locations: Rocinha, for example, separates two of Rio de Janeiro's wealthiest neighborhoods. Companies therefore can't claim that these communities simply fall outside their service areas. Favelas are also home to millions of Brazilian citizens, up to 24 percent of the population in some cities, including those who live furthest from public transport.
Meanwhile, rideshare companies argue that favelas should not be treated as "normal" parts of the city after several violent incidents involving drivers in these communities, including a particularly gruesome case in the Northeastern state of Bahia in which an alleged gang member tortured and killed four rideshare drivers after an Uber driver canceled a ride for his ailing mother. Rideshare drivers must also contend with restrictions imposed by the militias or traffickers who control certain neighborhoods.
Recent reporting by Nexo Jornal shows that Uber's knee-jerk reaction to terrible incidents like these was to carve "zones of risk" out of its routes within the city, using an undisclosed machine-learning process that takes into account individual rider characteristics and data from public security institutions. São Paulo's two largest favelas were denied service entirely until last year, and the company pulled out of many Rio favelas in 2014. Depending on the day, residents who request a pickup are told that Uber is not available in their area, while those who ask to be dropped off at home are instead left miles away, at the algorithmic border of the "safe" city. Other services, such as Waze and Google Maps, have taken similar measures.
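Uber's actual model is undisclosed, so any reconstruction is speculation. But the blackout behavior residents describe requires nothing sophisticated: a purely illustrative sketch, using made-up coordinates and a hypothetical `RISK_ZONES` list, shows how a simple geofence test could refuse every pickup inside a flagged polygon.

```python
# Illustrative only: Uber's real risk system is undisclosed and reportedly
# involves machine learning, rider data, and public-security statistics.
# This sketch shows the simplest possible geofence blackout: refuse any
# ride request whose pickup point falls inside a polygon flagged as a
# "zone of risk".

def point_in_polygon(lat, lon, polygon):
    """Ray-casting test: is (lat, lon) inside polygon, a list of (lat, lon) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        y1, x1 = polygon[i]
        y2, x2 = polygon[(i + 1) % n]
        # Toggle on each polygon edge crossed by a horizontal ray from the point.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical rectangular zone; these are not the coordinates of any real favela.
RISK_ZONES = [
    [(-23.0, -46.7), (-23.0, -46.6), (-22.9, -46.6), (-22.9, -46.7)],
]

def pickup_available(lat, lon):
    """A pickup is refused whenever the point lies inside any flagged zone."""
    return not any(point_in_polygon(lat, lon, zone) for zone in RISK_ZONES)
```

The point is how blunt such a rule is: a single polygon boundary, drawn far away and rarely revisited, decides whether an entire neighborhood can hail a ride at all.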
While this policy makes sense from the perspective of executives and shareholders who want to minimize liability, it also enacts a form of digital redlining that holds entire populations responsible for isolated crimes. It is no coincidence that excluded populations consist of lower-income, predominantly black and brown Brazilians.
Harvard scholar Sheila Jasanoff has argued that all technological developments create new risks, but that the negative consequences of such risks are distributed unevenly, most often reinforcing existing inequalities and power imbalances. She argues that risk assessments and responses are not objective matters, but rather subjective judgments made according to the specific values of the groups or institutions that implement them.
Social distancing as a response to the coronavirus is one constructive example of how dominant social values shape responses to risk. The measure has been taken up en masse even though fatalities are heavily concentrated among older people. That world leaders have endorsed this response despite its negative social and economic consequences indicates that they value the lives of all citizens, including older individuals, over short-term economic gains.
In contrast, Uber's decision to solve its safety conundrum by excluding entire swathes of the city indicates that its vision for "safer communities" does not necessarily include marginalized populations. It also underlines Silicon Valley's enduring tendency to create "universal" technological solutions that in fact encode the value systems of those who create them, a group that is overwhelmingly composed of wealthy white men. Women make up just 12 percent of leading machine learning researchers worldwide, and little data exists on racial and ethnic minorities in the field. The "diversity crisis" in AI has led to several notorious failures, such as Amazon's systematically sexist hiring system, Google's problematic search results for terms related to Black and Latina women, and predictive policing algorithms that consistently recommend the overincarceration of minority offenders.
Uber's low levels of diversity, especially among engineers, are indicative of this crisis. The company's employment data shows that black and Latino/Latina workers combined make up less than 10 percent of their tech team. Hence, the engineers who control how space is defined and governed within the app share a relatively specific worldview.
This lack of diversity at the top matters. Automated decisions about transportation are primed to become ever more influential as smart mobility becomes ubiquitous and begins to shape the design of urban areas. The stakes become apparent in the history of transportation in the US: early decisions about highway design have had lasting effects on urban development, with minority and low-income populations bearing the brunt of the negative consequences. Decisions about rideshare routes are less materially permanent but even more complex, because they are made by private companies that have little incentive to put equality above profit.
Uber's Brazilian competitor, 99, has shown how contextual knowledge and human oversight can offer a better solution to rideshare companies' safety conundrum. 99 has also defined "zones of risk" on its ridehailing app, but with one important distinction: the company draws these zones through manual consultations with security specialists, in addition to statistics. It also lets drivers choose whether to operate in zones of risk rather than routing them around these areas, recognizing that drivers themselves may live in favelas or peripheral areas. In addition, 99 discounts prices by up to 30 percent in low-income areas. Certainly 99's approach does not eliminate the stigmatizing potential of labeling, but it does recognize the importance of local knowledge and contextual judgment in decisions about risk, insights that Uber overlooks in favor of distant, technical expertise.
These steps help keep humans "in the loop" of algorithmic decisions, rather than creating a system that forcefully reshapes the city in the image of corporate risk assessment. Musician and computer scientist Ge Wang argues that this approach to automation creates "a process that harnesses the efficiency of intelligent automation while remaining amenable to human feedback, all while retaining a greater sense of meaning."
However, humans can only be in the loop if companies like Uber are willing to make their algorithms more transparent, a move that also allows for more diverse input and greater societal oversight. If done right, these measures provide companies with the opportunity to understand and combat marginalization more effectively, making it possible to trace detailed patterns of exclusion without recklessly perpetuating them. However, this requires tech companies to let go of the delusion that they can create technology that is neutral, universal, and all-knowing from within the confines of Silicon Valley. Brazil's rideshare safety dilemma shows that the world in which tech companies operate is much more complex than many executives would like to believe, and that sometimes the answer to making technology work is to make it more human.