Smartphones are all-consuming vampires, sucking our mental energy and leaving city dwellers disoriented and alienated. But they’re here to stay — so how can urban designers use their immense power for good?
We are living in a time of unprecedented visual distraction. In the modern urban environment, our attention has to battle with myriad layers of signage and communication — some useful, some not — from billboard advertisements to traffic lights. At the same time, an even more pervasive source of visual pollution can be found in our own hands. The constant drip, drip, drip of digital diversions originating from our smartphones and other devices is reshaping how our minds behave and function, and how we perceive the world around us.
Does this matter? And if it does, how do we regain control of our sensory experience in urban environments? The effect that the built environment has on our brains is the subject of a growing field of study, combining insights from disciplines such as neuroscience, psychology, architecture and philosophy. By understanding how places influence our thoughts, feelings and behaviours, the theory goes, we’ll be able to better design cities that can make people healthier and happier. And while our screens may be part of the problem, those ubiquitous digital devices also offer solutions. For example, the Conscious Cities movement, founded by architect Itai Palti and neuroscientist Moshe Bar, seeks to shift the focus of urban design from efficiency to effectiveness. By drawing on advances in data analysis, artificial intelligence and behavioural science, a “conscious city” would be more user-centric, responding dynamically to occupants’ needs. Bridging city-building with neuroscience and technology, Palti contends, “presents an opportunity to raise the intelligence of our surroundings and improve our wellbeing”.
A cognitive machine
The relationship between humans and buildings is far more complex and deep-rooted than simply one of shelter, or even of home. In fact, some cognitive scientists have come to believe that the distinction between mind, body and environment is an arbitrary one. The philosophical concept of the “extended mind” holds that we recruit aspects of our environment to support cognitive function. Rather than our minds being limited to the boundaries of the individual person, they extend outwards to include manmade tools, technology, buildings, even entire cities — an idea that suggests that visual distractions could have wider implications than mere annoyance.
“The architecture that we create isn’t just an extension of one mind — it’s what allows multiple minds to come together,” says Alan Penn, dean of the Bartlett Faculty of the Built Environment at University College London. “It’s the way that we become more than just the individual to become a social group, and there’s a sort of social intelligence that emerges out of that.”
This may sound like a way-out idea but it’s backed by a growing body of literature. Penn’s research focuses on how the spatial characteristics of the built environment — the degree to which it brings people together or keeps them apart — influence patterns of social behaviour. By his account, the history of human civilization offers many clues as to how our social and urban systems have co-evolved. The earliest known built settlements were constructed around 10,000 years ago in the Anatolian plateau in Turkey, and Penn believes that they were the catalyst for a series of very rapid advances in society, such as writing and currency. His argument is that the invention of buildings and cities created a new layer of “cognitive machinery” beyond the individual brain and body, which allowed a collective intelligence to develop. This, in turn, informed how buildings and cities were constructed. It’s a positive feedback loop. Brain and building, building and brain. Or as Penn puts it, “The DNA of the social world is encoded within architecture.”
Moreover, he suggests that the degree to which we can see one another, and whether we’re constrained by whom we can see, has played a role in the evolution of empathy and imagination. “At its most fundamental level, empathy depends upon perception,” Penn argues in a 2018 paper. “We have to see, or possibly to hear, others in order to view things from their point of view. Building a wall constrains who can see whom, and so can constrain the potential for empathic relationships.”
Perception is a more complex process than simply seeing, as it also involves deriving meaning. What our attention is drawn to, both consciously and subconsciously, plays a role in interpretation. Our brains selectively process visual information, prioritizing particular areas in our visual field. This is known by cognitive neuroscientists as “visual spatial attention”.
So what happens when our senses are distracted and overloaded with extraneous information, as is the case in busy urban centres? When we encounter a typical scene in an urban setting — a crosswalk, an intersection, a doorway, a building facade — our visual spatial attention must work overtime. And in a modern city, it must also contend with intrusions that scream for notice, spewing out the cognitive equivalent of toxic smoke, or “visual pollution”.
What exactly constitutes visual pollution is difficult to define since it is largely dependent on personal preference, judgement and opinion. Even when it can be agreed that visual pollution is present, quantifying its magnitude is still difficult. But any city dweller will recognise the sensation of being bombarded by visual stimuli. Advertisements jostling for our attention are a ubiquitous, even defining, feature of urban life. And it’s getting worse. Revenues from print, radio and television ads have been in decline for years as our attention moves online, but billboards are bucking the trend. According to the Out of Home Advertising Association of America, billboard revenues in the US have hit a record high, with growth for 35 consecutive quarters. Their even more visually demanding digital versions now account for almost one-third of revenues.
Or rather, almost ubiquitous. In an attempt to regain control of urban sensory experience, some places have proactively reduced visual pollution by imposing restrictions on the location, size and number of advertisements and billboards. In 2006 São Paulo, Brazil — South America’s largest city — became the world’s first city to go ad-free. Its Clean City Law banned billboards, limited signs on storefronts and prohibited advertisements on taxis and buses. In the first year, 15,000 marketing billboards were taken down and businesses were given 90 days to remove 300,000 ostentatious shop signs or face a fine. Dozens of other cities and states, from Alaska to Hawaii to Moscow to Chennai, have also instituted some form of restriction on outdoor advertising.
Now consider that São Paulo’s ban on outdoor advertising was implemented a year before the first-generation iPhone was released to the market. For the last decade the technology industry has been ruled by smartphones and mobile apps with the single goal of capturing our attention. But screens haven’t just achieved this: they’ve completely conquered most of our cognitive capacity, leaving us more scatterbrained, unfocused and anxious than ever. The US is suffering from a “national attention deficit”, according to Richard Davidson, neuroscientist at the University of Wisconsin-Madison and founder of the Center for Healthy Minds. Humans are better able to voluntarily regulate their attention than other species — a key trait that sets us apart — but attention-grabbing devices subvert this.
A 2017 study of 800 smartphone users by the University of Texas at Austin found that their cognitive capacity was significantly reduced when their devices were within reach — even when they were switched off. “It’s not that participants were distracted because they were getting notifications on their phones,” said Adrian Ward, assistant professor at UT’s McCombs School of Business. “The mere presence of their smartphone was enough to reduce their cognitive capacity.” Whether the phone was on or off, lying face up or face down, if it was within sight or easy reach reduced the subjects’ ability to focus and perform tasks. “Your conscious mind isn’t thinking about your smartphone, but that process — the process of requiring yourself to not think about something — uses up some of your limited cognitive resources. It’s a brain drain.”
And there is plenty of evidence that smartphone dependency is changing our spatial cognitive abilities, especially as we outsource our intuitive navigation skills. The use of mapping apps encourages “head down” navigation behaviour and therefore less engagement in our immediate surroundings. A 2017 University College London study published in the journal Nature Communications scanned participants’ brains as they navigated through a film simulation of London’s Soho, either making their own decisions or being guided by a GPS. When they were passively following directions, crucial brain areas didn’t fire.
“If you are having a hard time navigating the mass of streets in a city, you are likely putting high demands on your hippocampus and prefrontal cortex,” explained senior author Dr Hugo Spiers, a reader in neuroscience in UCL’s Department of Experimental Psychology.
“Our results fit with models in which the hippocampus simulates journeys on future possible paths while the prefrontal cortex helps us to plan which ones will get us to our destination. When we have technology telling us which way to go, however, these parts of the brain simply don’t respond to the street network. In that sense, our brain has switched off its interest in the streets around us.”
From wayfinding to wayknowing
So what’s to be done? As a society we’re still trying to figure out how best to deal with the intrusion of technology into our daily lives. But we also need to acknowledge that these technologies aren’t going away, and find ways to turn their strengths to our advantage.
“Smartphones are one of the best — maybe the best — human-machine interfaces that we currently have,” says Jay Wratten, smart building strategist at WSP. Better interfaces will emerge in the near future, but for now, smartphones open up socially advantageous ways for people to interface with various systems and services — from entertainment to transportation to governance.
Many of the benefits that smartphones offer in stressful settings such as airports or busy city streets have nothing to do with the screen. Smartphones can be proxies for people within smart buildings, says Wratten, providing data that can be used to improve the user experience without measuring or surveying occupants themselves. So, for example, a building could identify where people are located, how long they spend in those areas and what the temperature is.
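The occupancy-proxy idea can be made concrete with a small sketch. This is a toy illustration, not a real building-management API: the ping format, zone names and numbers are all invented for the example. It turns anonymous presence pings from devices into per-zone dwell times of the kind a smart building could act on.

```python
# Toy sketch: smartphones as occupancy proxies. Anonymous presence pings
# (device, zone, timestamp) are aggregated into per-zone dwell times.
# The ping format and zone names are illustrative, not any vendor's API.

from collections import defaultdict

def dwell_times(pings):
    """Sum the time each device spends in each zone.

    pings: list of (device_id, zone, timestamp_seconds), ordered by time.
    Returns {zone: total_seconds} across all devices.
    """
    last_seen = {}                 # device -> (zone, timestamp of last ping)
    totals = defaultdict(float)
    for device, zone, ts in pings:
        if device in last_seen:
            prev_zone, prev_ts = last_seen[device]
            totals[prev_zone] += ts - prev_ts   # time since the last ping
        last_seen[device] = (zone, ts)
    return dict(totals)

# Two hypothetical devices moving through a lobby and a cafe.
pings = [
    ("d1", "lobby", 0), ("d1", "lobby", 60),
    ("d1", "cafe", 120), ("d2", "cafe", 30), ("d2", "cafe", 90),
]
print(dwell_times(pings))
```

Pairing these dwell times with zone temperature readings would give the building the three signals the paragraph above describes: where people are, how long they stay, and the conditions they experience.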
Data-driven design, often abbreviated to D3, uses analysis of building systems data alongside occupant feedback to make buildings more responsive and adaptive. WSP is currently working on a smart building retrofit of San Francisco International Airport using the D3 concept. One of the primary goals is to use traditionally neglected data sets to improve passenger experience. For example, can Uber and Lyft drop-off densities at the departure gate be used to help passengers predict waiting times at security? Can flight arrival and departure data be integrated into building management systems to improve thermal comfort?
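The drop-off question above can be sketched as a simple statistical exercise. The figures below are invented, not SFO data, and the linear model is only one plausible choice: the point is that a neglected data feed (ride-hail drop-off counts) can be fitted against an outcome passengers care about (queue minutes) to produce a live estimate.

```python
# A minimal sketch of the D3 idea: correlating a neglected data set
# (ride-hail drop-off counts per 15-minute window) with observed security
# wait times, then predicting from the live count. All numbers invented.

from statistics import mean

def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b; returns (a, b)."""
    mx, my = mean(xs), mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical history: drop-offs per window vs observed queue minutes.
dropoffs = [40, 80, 120, 160, 200]
wait_min = [5, 9, 14, 18, 23]

a, b = fit_linear(dropoffs, wait_min)

def predict_wait(current_dropoffs):
    """Estimate the security wait (minutes) from a live drop-off count."""
    return a * current_dropoffs + b

print(round(predict_wait(100), 1))
```

A production system would of course need richer features (time of day, flight schedules, staffing) and validation against real queues; the sketch only shows the shape of the pipeline.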
To better understand how digital technology can be harnessed to improve experience in the built environment, it helps to take a step back and consider something as commonplace and low-tech as wayfinding. At its most fundamental level, wayfinding can be thought of as a tool for spatial problem-solving: helping people understand where they are, where they want to go, and how best to get there. It’s important because spatial awareness is a key predictor for whether people have a positive or negative experience of a place. This is particularly the case in complex or high-stress environments — the very places that impose the greatest cognitive load — such as campuses, airports, city centres and hospitals.
The modern concept of wayfinding was first introduced in the 1960 book The Image of the City by American urban theorist Kevin Lynch. But it has only been in recent years that designers have had the ability to transcend traditional visual communication methods and explore smarter ways of helping people get from A to B. What’s emerging is a new approach to wayfinding that merges physical signage with data, using technologies such as smartphone apps, indoor positioning systems and wearables, to offer a more personalized, intuitive experience. It is an approach that integrates directions with real-time information about the conditions of the journey and the destination. Think less “wayfinding” and more “wayknowing”.
This isn’t just a matter of providing people with more information to look at on their smartphone. To accomplish a better kind of wayknowing, tech giants are starting to build a new digital landscape, one that relies less on the visual and more on the audible and tactile (or “haptic”). Emerging “conversational interfaces”, including voice assistants such as Apple’s Siri, in-ear technologies and wearables, can provide more of a heads-up experience of navigation. For example, with Google Maps on Apple Watch you can listen to turn-by-turn pedestrian directions. Google is also starting to add haptic feedback features (ie, vibrations) to let you know not only when to turn, but in which direction: three vibrations for a left turn, two vibrations for a right.
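The pulse convention described above is essentially a tiny protocol, and can be sketched in a few lines. The `vibrate()` function here is a stand-in for a real platform haptics engine, not an actual API; only the three-for-left, two-for-right mapping comes from the text.

```python
# Toy sketch of heads-up haptic navigation: turn directions mapped to
# vibration pulse counts (three pulses = left, two = right, as described
# above). vibrate() is a placeholder for a real OS haptics call.

import time

PULSES = {"left": 3, "right": 2}

def vibrate(duration_s=0.05):
    # Placeholder: a real app would trigger the device's haptic engine.
    time.sleep(duration_s)

def signal_turn(direction):
    """Emit the pulse pattern for a turn; returns the pulse count used."""
    count = PULSES[direction]
    for _ in range(count):
        vibrate()
    return count

signal_turn("left")   # buzzes three times: turn left, eyes up
```

The appeal of such a scheme is exactly what the paragraph argues: the walker never has to look down at a screen to know which way to go.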
This approach to wayknowing clearly offers benefits to people with special needs. For example, it could provide visually impaired people with the ability to navigate unfamiliar indoor environments with greater independence. Computer vision-based wayfinding aids such as the Blind Launcher app use algorithms that can detect maps and objects such as doors and elevators, as well as optical character recognition (OCR) software that recognizes text and signage. This information is then conveyed to users through a combination of auditory and tactile feedback.
Of course, not all solutions to wayfinding are technology-based. Some are aesthetic, and defiantly low-tech. When signage is well-integrated into the local vernacular, it can be a defining part of urban character — and serve as memorable visual markers. Take, for example, the ubiquitous hand-painted advertising found in Oaxaca, Mexico. Across Latin America, walls and buildings have become a canvas advertising everything from local events and small businesses to national brands and multinational corporations.
Oaxaca’s unique style of colourful graphics and bold typography exemplifies the best of this type of vernacular branding, elevating its streetscapes into an artform.
From this perspective, the goals of reducing visual distraction and making cities more legible could be in conflict. After all, one of the greatest annoyances for those who would rid our city streets of clutter is excessive signage. On the other hand, eliminating all visual distraction would result in lifeless cityscapes, devoid of character and impossible to navigate. As with any design criteria, the new insights provided by this emerging discipline need to be applied sensitively and intelligently. Creating places that are better for people doesn’t require blindly buying the latest consumer gadgets or haphazardly adopting the newest technologies. Equally, it can’t be achieved by rejecting all commercial interests or spurning innovation. It’s a balancing act that will present many fresh challenges and dilemmas for urban designers. Overall, though, it’s a very exciting problem to have.
Originally published in The Possible magazine.