The Dilemma of Regulating Tech

To anyone from 2007, the Senate hearing on April 10th, 2018 would’ve looked like preposterous fiction. Mark Zuckerberg, the boy-genius of Silicon Valley, received a five-hour tongue-lashing from a panel of indignant senators wanting to know how Facebook had become a venue for Russian operatives to target U.S. voters with misinformation. Their questions voiced the concerns of an angry electorate that wanted to see the billionaire squirm. As Senator Nelson of Florida put it, “Let me just cut to the chase. If you and other social media companies do not get your act in order, none of us are going to have any privacy anymore.”

Not long before, that scene would have been unthinkable. Just a decade earlier the world couldn’t get enough of Zuckerberg, the media darling behind the most popular app in memory. Instead of C-SPAN cameras catching closeups as he sweated through a suit, he had been grinning on magazine covers in a hoodie. He was the second coming of the tech-boy billionaires of the ’90s who promised to change the world and make their investors rich in the process. Only instead of operating systems and browsers, he was building a social network that claimed to connect the world. While we now malign Facebook as a threat to democracy and our mental health, back then it was the next big thing that we couldn’t get enough of.

This piece is not about Facebook, but about the challenge of trying to control new technology. The Facebook arc is a story we see played out over and over again: society embraces a new thing only to later face unintended consequences. The inherent dilemma lies in trying to monitor and shape technology capable of evolving from a fledgling novelty product into a society-wide phenomenon in just a few short years. Every year we see dozens of new technologies emerge and evolve, each with the potential to elevate or devastate society.

Breakthroughs in AI, energy technology, and biotech alone are occurring every day. So what, if anything, can we do to harness their benefits while making sure the innovations behind today’s magazine covers don’t end up as the focus of senate inquiries in 2035?

The Collingridge Dilemma

The first thing to recognize is that this dynamic isn’t new. The heart of the problem is that any risk posed by a technology typically becomes obvious only once the genie is out of the bottle, but by then it can be difficult to change anything. The best-known description of this phenomenon is the Collingridge Dilemma, articulated by David Collingridge in his 1980 book, The Social Control of Technology. His key insight is that “when change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult, and time-consuming.”

This makes sense. Think back to Facebook. In those early days, government could have easily pushed back against Facebook’s worst characteristics, and regulators could have cracked down on the business with relative ease. The problem was that during the window of time when change was easy, the need for change was not apparent. In fact, the opposite was true. Anyone calling for regulating Facebook would’ve seemed kooky as the democratic world celebrated its impact. Back then the company was seen as a benign force for good, a way for young people to connect. A decade later, there’s a laundry list of complaints, but by now change is much harder. Congress can drag Zuckerberg to Capitol Hill as many times as it wants, but it’s hard to regulate a multibillion-dollar, world-leading business with billions of users and a well-oiled army of the best lawyers and lobbyists money can buy.

It isn’t just financial interests that make it hard to change an established practice; it’s also political. Perhaps the bigger challenge is that once a technology is in use, it can be incredibly hard to force people to change. For example, Tesla is currently under scrutiny for its self-driving technology. Any attempt by regulators to eliminate or radically change the technology is much harder now, when hundreds of thousands of people deploy it every day, than it was when it first rolled out.

The other challenge is that many technologies do not appear to pose any meaningful risk when they first arrive. Take ChatGPT, which has become the fastest app ever to reach 100 million users. Right now the media is awash with thousands of examples of how marvelous the technology is and all the beneficial applications it offers (save one NYT piece). Any effort by regulators to establish limits on the technology’s use right now would appear premature. How can you regulate something when there’s no clear harm and people are still figuring out how to use it? Yet, given the technology’s rapid ascent, it seems obvious that it will become a source of societal conflict as more people use it.

So what can be done?

Option 1: Do No Harm

The first option for dealing with technology’s uncertainty is to work to avoid risks at all costs. It is best summed up in what is known as the precautionary principle: don’t embrace anything new until you can demonstrate the benefits outweigh the risks. You can see how this mentality appeals to government agencies, where there is often little incentive to take large risks and a real cost to increased liability. The precautionary principle also makes sense if you focus on the costs of past technological failures. Every past example where ‘moving fast and breaking things’ created problems (pick any example from the media ‘techlash’ of recent years) is reason to be more careful the next time around. Policymakers who embrace that narrative would argue it’s better to stop ‘the next Facebook’ before it begins.

There are two obvious problems with applying this principle: (1) it’s impossible to know a technology’s risks and benefits early on, and (2) bureaucracies are often poorly equipped to navigate and respond to risk. Because costs and benefits don’t emerge until a technology is fully deployed, any cautious attempt to hold a technology back until its value is guaranteed inevitably means stunting its benefits as well. The only way to eliminate risk is to throw the baby out with the bathwater.

This is a real problem because technological benefits are real. For example, one RAND report on autonomous vehicles estimated that waiting to deploy them could cost as many as 500,000 additional lives lost on the road to human drivers. More recently, technologies like mRNA vaccines clearly demonstrated the ability to save trillions of dollars and millions of lives. Thankfully, in the second case, the tangible cost of COVID created the political urgency to get mRNA vaccines into use. In Santa Monica, where the arrival of Bird scooters created a local political frenzy, the city estimated their initial scooter pilot eliminated more than a million car trips over a 15-month period.

The other challenge is that bureaucracies are generally ill-equipped to make the rapid changes technology requires. Few public servants are rewarded for taking risks. As one city manager I interviewed during my dissertation put it, “when I was an entrepreneur you sit there and hope for the phone to ring and for new opportunities to arrive. In government you sit and hope your phone never rings, as something new typically means more headaches.” Any innovative official is often working against the grain of how their job is designed. Heightened concerns about liability, public perception, and general risk aversion are baked into the incentives of most agencies. On top of that, agencies are beholden to politics, and negative perceptions of tech can shutter any ambitious innovation program.

Take San Francisco’s Office of Emerging Technology, which was created in the wake of Uber’s and Bird’s approach of launching in the public right of way without permission. The office’s designers intended it to be a streamlined, one-stop shop for getting a one-year pilot approved. Seeing as the office is housed within San Francisco’s Public Works Department, the same department behind the infamous $20,000 trash cans that took four years to develop, there’s a real chance it becomes a regulatory chokepoint instead of an advocate for bold ideas. Inflexible bureaucratic institutions are often poorly equipped to assess risk, and working only to avoid risk often means leaving a lot of public good on the table.

Option 2: Techno-Boosterism

The alternative is to swing to the opposite extreme: to see technology as a silver-bullet solution to all things. Instead of minimizing risk, this view holds that the inevitability of technological improvement means the only answer is to accept every new idea in order to maximize the potential good. It goes beyond simple optimism that technology can solve problems. It’s what happens when you start to believe there’s a technological solution to every societal problem.

One of the best articulations of this view comes from David Zipper, a must-read for anyone interested in transportation technology. He has described how technological FOMO can drive politicians to ignore valuable improvements in their pursuit of silver-bullet solutions that end up useless. After listing off flashy tech ideas that wooed politicians but yielded little public benefit, his admonition is to:

fight the temptation to amaze your peers with a new, sexy mobility technology—and instead embrace whatever solutions can bring the most benefit to the most people. It’s your best chance to leave a legacy of improved lives.

There are plenty of examples of tech hype getting ahead of itself. A recent WSJ exposé, “Elon Musk’s Boring Company Ghosts Cities Across America,” documents cities that believed autonomous tunnels would solve their transportation challenges only to be left high and dry. It reads straight out of The Simpsons’ monorail episode.

There are rational actors behind this behavior, just as there are behind the skeptics described in Option 1. Startups live and die by their ability to grow, so any company whose existence hinges on winning the approval of public officials will do whatever it takes to hype its technology. Similarly, elected politicians want a shiny accomplishment to point to as proof of their ability to act. What better than a sexy new innovation?

At a more fundamental level, this behavior can also reflect a genuine worldview of techno-optimism: that innovations really will save the world. At a macro level I agree that technological advancement is our best way to create a more abundant, prosperous society, and I’m glad the belief is strong enough to encourage entrepreneurs to take bold bets. The problem comes when leaders tasked with maximizing public good sip too much of the Kool-Aid and play angel investor with valuable public resources.

Option 3 (My Favorite): Embrace but Verify

There’s no way to resolve the Collingridge dilemma. That’s why it’s a dilemma. However, just because there isn’t a simple, pure solution doesn’t mean there isn’t a path forward. There need to be guiding principles, because technological change is inevitable: recurring waves of innovation are guaranteed to come. As a (mediocre) surfer I think about how to approach real waves in the water. Either extreme approach is a recipe for a bad time. Avoid all risk and you’re left floating beyond the break. Try to catch everything and you end up exhausted, missing good waves as you get tossed around. Success requires embracing uncertain opportunities and constantly adjusting to changing conditions.

We’ll never know the full outcome of a technology before it is used, but it’s irresponsible to never take a chance. Instead, leaders need to trial and test ideas where they can to maximize the potential public benefits. At a 30,000-foot level, the best way to do this involves (1) testing as much promising technology as you can politically sustain, (2) being clear in defining success, and (3) ruthlessly iterating based on those metrics.

  1. Test as much promising technology as you can politically sustain. This is a constantly moving target depending on the political climate and culture, but in general there’s no technological reward without trying new things. “Trying” new things can be both active and passive. In contexts like local transportation, any new idea typically requires active approval to change existing systems. Other innovations like new social networks don’t require approvals to get off the ground and are therefore passive trials from the government’s view. Political leaders should try and take on whatever risk they can tolerate that appears to offer tangible public benefits.

  2. Be clear in defining success. This is where a lot of attempts go wrong. If the criteria for success aren’t defined it’s impossible to know what’s working. Some metrics of public benefit are easier than others to define. For example transportation metrics (car trips, miles driven, vehicle emissions) are much more easily quantifiable than those of social networks (degree of connectivity? time spent on screen?). However, that doesn’t mean the task is impossible. Even in the social media example, there are metrics that can be managed and defined. For example, it’s becoming increasingly clear that extensive social media use in teens has negative mental health effects. That’s a clear form of harm that can be recognized as a current policy failure and merit reforms.

  3. Ruthlessly iterate based on those metrics. Once a technology is being tested and there’s an agreed-upon definition of success, the process requires iteration and evaluation on those criteria. Several cities in my dissertation were successful with this in their pilots of shared electric scooters. For example, Santa Monica was able to continually monitor its shared mobility fleet to maximize ridership while minimizing public safety concerns. The city wasn’t perfect in its approach, but its clear objectives and time frames for the pilot made it possible to benefit from an uncertain technology. Now it’s applying a similar approach with sidewalk delivery robots. Communicate your goals, and then scale up or limit society’s embrace of a technology based on that performance.
