Regulating an Earthquake: Crypto, Section 230 and AI
How America's Congressional Dysfunction Will Impact Innovation
When Cars Were Illegal
There are decades when nothing happens and then there are weeks when decades happen.
This past week has felt like a decade for the strained relationship between Washington DC and Silicon Valley. From SEC actions on crypto to Supreme Court cases on Section 230, it’s clear that the two coasts of the American empire are locked on a collision course. And that’s before we even consider how to regulate artificial intelligence.
Thanks for reading. If you liked this post, and want more content that blends history and internet drama, consider subscribing.
The relationship between DC and SF is painted by Elon, Thiel and their disciples as inherently antagonistic. How could it be anything but? Silicon Valley moves fast and breaks things. DC moves slowly and tries to clean things up – while maybe wrecking more things in the process.
But that simple fairy tale belies a more interesting reality. Government is not the enemy of innovation. Government basic research often provides the seeds that become disruptive companies. Good regulation - like externality pricing, infant industry protections and consumer transparency measures - can help markets operate efficiently. But bad regulation is well… bad.
The problem is that it’s often hard to tell which is which even in the easiest of circumstances. When it comes to new technologies that deliver seismic shocks to our social and economic order, well… it gets damn near impossible.
But that doesn’t stop us from trying. Take, for example, the history of the humble automobile.
On June 3, 1900, the New York Times reported on a little-known provision of state law: Penal Code 640. It provided that New Yorkers could be fined up to $500 (~$18,000 today) and imprisoned for a year for driving a car without a second “operator” waving a red flag at least ⅛ mile ahead of the vehicle.
The law had been passed before automobiles were really invented. It was intended to accommodate new steam engines that were beginning to travel off rails. But every lawyer the NY Times consulted agreed that it could be enforced against cars – if any regulator was brazen enough to try. None in New York did.
But in England, things were different. The UK had a similar Red Flag law - it required the red flag alert system, but also demanded that three people operate any vehicle and that no vehicle travel over 4 mph lest it scare livestock.
And the result was predictable. While America led the way in automobile manufacturing, England lagged behind until it eventually reformed its laws in the early 20th century.
One lesson of the Red Flag Laws is that vaguely worded old regulation can be a dangerous tool in the hands of imperious regulators with a savior complex.
But, again, it’s not so simple.
Regulation also saved the automobile industry from itself.
In the 1920s, cars were selling quickly. But fatal accidents were rising just as fast. Drunk driving, reckless driving, speeding – all of it was leading to a surge in traffic fatalities. The public was excited about cars, but also kind of terrified. Because say what you want about horse-drawn carriages, but the risk of dying in a horse crash is well… low.
So the industry worked with the government to create programs for licensing drivers, for eliminating drunk and reckless driving, and for instituting speed limits. The public’s faith in automobiles stabilized. And the American automobile went on to become a cultural and economic behemoth. By the 1970s, 1 in every 5 American jobs depended on the industry.
Bad regulation that fails to anticipate and evolve with technology is bad. Good regulation that creates trust, transparency and accountability accelerates progress.
So how do we find the right balance? How can we maximize the benefits of this really exciting era of progress while not creating our own dystopian hellhole?
We can start by remembering what regulation is intended to do.
Crafting Good Regulation
With all due respect to my socialist friends, the market is a pretty phenomenal way to make decisions. Rather than allowing any particular powerful actor to say what is “good” or “right” or “valuable”, we crowdsource the decision to individuals who - in a thousand purchase or not-purchase decisions - reveal what they actually value.
And yet – the market fails us all the time. It does so when our impulses lead us into a race to the bottom or when information and power imbalances cloud our judgments. The project of regulation, then, is not to substitute a bureaucrat’s judgment for decentralized decision-making, but to keep us from falling into these traps.
This is, of course, a subtle art. It usually requires a light touch. The best regulations are those that are easily understood, easily acted upon, and frankly - those that easily fade into the background.
That’s why the list of good reasons to regulate businesses is actually pretty small. Even as a self-described liberal, I’ve got only four.
The Moloch Problem. In certain cases, the incentives of the market force us to make choices that can seem optimal, but actually lead to everyone being worse off. This is what Scott Alexander calls the Moloch problem. Consider the current choices facing OpenAI, Google and Microsoft. Each would be better off - and users would, too - if they went slowly to work out the kinks of their AIs. But each also knows that there is a first-mover advantage - and can’t be sure that a competitor won’t release first. So now it’s a race - to release an unvetted AI that might damage their business or consumers, but might also let them hog the spotlight of launching first. Absent a regulator who can appropriately tax the harm created by a bad AI, the three companies will have no choice but to risk catastrophic harms in their pursuit of first-mover advantage.
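The race dynamic above is a classic prisoner’s dilemma, and it can be sketched in a few lines. The payoff numbers below are purely hypothetical - the point is the structure: “rush” beats “wait” no matter what the rival does, even though both firms would prefer the world where everyone waits.

```python
# A toy payoff matrix for the AI release race. The numbers are made up
# for illustration; only their ordering matters.
PAYOFFS = {
    # (firm_a_choice, firm_b_choice): (firm_a_payoff, firm_b_payoff)
    ("wait", "wait"): (3, 3),   # both vet their models: safe products, shared market
    ("rush", "wait"): (5, 0),   # the rusher grabs the first-mover advantage
    ("wait", "rush"): (0, 5),
    ("rush", "rush"): (1, 1),   # both ship unvetted: harms for everyone
}

def best_response(options, their_choice, me):
    """Pick the option that maximizes my payoff, holding the rival's choice fixed."""
    def payoff(mine):
        key = (mine, their_choice) if me == 0 else (their_choice, mine)
        return PAYOFFS[key][me]
    return max(options, key=payoff)

# Whatever the rival does, "rush" pays more -> rushing is a dominant strategy,
# even though (wait, wait) leaves both firms better off than (rush, rush).
for rival in ("wait", "rush"):
    assert best_response(("wait", "rush"), rival, me=0) == "rush"
assert PAYOFFS[("wait", "wait")][0] > PAYOFFS[("rush", "rush")][0]
```

That last assertion is the whole tragedy: the equilibrium everyone is driven to is strictly worse than the one a credible regulator could hold them to.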
The Externality Problem. This is the classic failure of liberal economics. Imagine a factory that makes widgets. It sells these widgets for $10, which covers its costs and a tidy profit. But to make the widgets, it also pollutes the public river, harming the health of the community and the fishing and tourism businesses that depend on clean water. Without regulation, there is no incentive for the business to pay to clean up its pollution or to stop polluting altogether. Absent consumers who suddenly care about a polluted river, the river will become toxic at roughly the same rate that the business thrives. That negative impact is an externality of the business. Good regulation forces a producer to account for externalities. It would require the polluter to pay for impacts that are not represented in a simple widget transaction.
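The widget arithmetic is worth making explicit. All figures below are invented for illustration; the point is that a tax equal to the externality makes the factory’s private ledger match society’s.

```python
# Toy numbers for the widget factory. Hypothetical figures, per widget.
price = 10.00            # what the buyer pays
private_cost = 7.00      # the factory's own production cost
pollution_cost = 4.00    # harm borne by the river community

private_profit = price - private_cost                   # what the factory sees: 3.00
social_surplus = price - private_cost - pollution_cost  # what society sees: -1.00

# Unregulated, the factory keeps producing (profit > 0) even though each
# widget destroys more value than it creates (social surplus < 0).
assert private_profit > 0 and social_surplus < 0

# A Pigouvian tax equal to the externality aligns the two ledgers:
# production now happens only when it is a net positive for everyone.
taxed_profit = price - private_cost - pollution_cost
assert taxed_profit == social_surplus
```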
The Asymmetry Problem. Decentralized decisions are usually better than a single person deciding what is right and wrong. But good decision making requires each decision maker to have accurate information. Imagine that a firm intends to perpetrate a Ponzi scheme, but the customer has no way to know whether the returns and investments the firm is claiming to make are legitimate. Or, imagine that you are deciding to buy cigarettes in the 1950s while the cigarette companies conceal from you that they might cause you cancer. Without good information, you can’t make good decisions. Without good decisions, the market is just a test of who can run the most effective fraud. Good regulation - that provides customers valuable information on which to base their decisions - is essential to a healthy market.
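This is Akerlof’s “market for lemons” in miniature, and a toy model (hypothetical values throughout) shows how asymmetry unravels a market: when buyers can’t tell honest funds from frauds, they only offer the average value, and the honest sellers walk away.

```python
# A toy lemons market: 8 honest funds worth 100, 2 frauds worth 0.
# All values are hypothetical; the mechanism is the point.
funds = [{"true_value": 100, "honest": True}] * 8 + \
        [{"true_value": 0, "honest": False}] * 2

def market_price(offered):
    """Buyers can't distinguish funds, so they pay the average true value."""
    return sum(f["true_value"] for f in offered) / len(offered)

price = market_price(funds)   # 80: every fund is discounted for fraud risk

# Honest funds worth 100 refuse to sell at 80; only frauds stay in the market.
remaining = [f for f in funds if f["true_value"] <= price]
assert all(not f["honest"] for f in remaining)
```

Disclosure rules attack exactly this failure: once buyers can verify value, honest sellers can command an honest price and the frauds lose their cover.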
The Power Imbalance Problem. A well-running market also depends on competition to regulate firms. If I can buy a widget from many people, none of them can overcharge me or sell a misleading, shitty widget. But when one firm dominates the market, and controls the supply of a critical good – say, electricity – then I have no choice but to pay whatever they demand and to accept whatever quality of service they offer. Let’s call this the “Comcast” problem.
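The arithmetic of the “Comcast” problem is simple enough to spell out. The numbers are invented; the structure is what matters: competition pushes price toward cost, while a monopoly on a must-have good prices at whatever the buyer can bear.

```python
# Toy figures for one household's utility bill. All numbers hypothetical.
cost = 20                  # provider's cost to serve the household
willingness_to_pay = 120   # what the household will pay before going without

competitive_price = cost + 5          # rivals undercut any markup much above cost
monopoly_price = willingness_to_pay   # no alternative, so pay up

# The household's surplus collapses when competition disappears.
surplus_competitive = willingness_to_pay - competitive_price   # 95
surplus_monopoly = willingness_to_pay - monopoly_price         # 0
assert surplus_monopoly < surplus_competitive
```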
So good regulation is designed to help markets do their job more effectively. But even the best intentions can cause terrible outcomes if that regulation is not implemented consistently and clearly. And, uh, the United States has some work to do on that front.
Consider the lingering controversy over Section 230.
Section 230: How America Got It Right and then Fell Asleep at the Wheel
On Tuesday, the Supreme Court heard arguments in Gonzalez v. Google to consider whether Google should be liable for its machine learning recommendations of terrorist content. This is a case of terrifying externalities. Google profits off of delivering relevant content to its users, but sometimes those users get radicalized and murder other people.
Which, y’know, is not ideal for everyone.
But, interestingly, the US Congress long ago passed a law that basically shields Google from liability. That law, sometimes called “the twenty-six words that created the internet,” is Section 230 of the Communications Decency Act, codified at 47 U.S.C. § 230:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
In this law, Congress weighed the potential of user-generated content on the internet and realized that if a platform were liable for what its users said, it could never hope to provide an open forum to millions of people. So Congress acted to protect platforms while simultaneously enumerating the exact types of content they still had to police - child pornography and copyright infringement chief among them.
This approach enabled the entire Web 2.0 revolution. Without it, there would be no messaging apps, no YouTube, no TikTok, no Facebook, no Instagram, no Substack, etc. This was a case of Congress seeing a new technology and giving it the regulatory clarity it needed to develop.
In some ways, Section 230 is a phenomenal example of what good regulation looks like. A democratically accountable legislature weighed benefits and harms, and chose a course that would maximize the former and minimize the latter. USA! USA!
But 230 also demonstrates the limitations of the US regulatory system. In an era of increasing partisan polarization, Congress has not been able to act to clarify the role of algorithms online. When the law was passed in 1996, no one understood the power of personalized ranking algorithms. But Congress, today, is too polarized to weigh in on the shape of our digital world.
Justice Kagan herself seemed bemused by this reality, noting that the justices are hardly the nine best experts on the internet (then again, Orrin Hatch asking Mark Zuckerberg how Facebook makes money suggests that Congress is hardly full of digital heavyweights either…).
To their credit, the Supreme Court seems uneasy with having to decide what is in-bounds and out-of-bounds on the internet. They are (in this circumstance) showing some respect for the limits of their authority. Unfortunately, Congress’s other way of avoiding responsibility for decisions is delegating powers to regulatory agencies. And regulators have no such qualms about the limits of their power.
The SEC vs. Crypto: When Agencies Take Over for Congress
Let’s say you’re the Chair of the SEC. Your duties are to maintain fair and orderly securities markets, to facilitate capital formation and to protect investors. After 90 years, you’ve got a pretty good handle on the capital markets. Sure - people complain about stock manipulation or fraud here-and-there, and no one likes that American companies have to spend ~$4B per year on compliance paperwork, but hey… things could be worse.
Then a new thing comes along. Blockchain technology seems to allow rapid formation of capital, and it’s theoretically transparent. That’s good! But it brings with it a whole host of new fraud and fairness concerns. That’s bad.
And what’s worse is that it seems to fall into a regulatory gray area.
You think you have a responsibility to regulate it. But many members of Congress disagree. And other federal agencies seem to think it falls under their jurisdiction, as well. That’s annoying, but as long as no one is getting hurt, the debate is largely theoretical. You’ve got a market to manage.
But then things get sketchy. A massive project collapses. Then another. And finally, a firm that you’ve been consulting with on the development of new rules and regulations gets wrapped up in some serious fraud.
Suddenly, the people you serve are mad. They want action. They want heads on pikes. And you have some free pikes.
So you start announcing your intention to pursue actions against whoever you please. You know that fighting your charges will be costly and risky for new entrepreneurs, so you bank on them settling with you. That will establish a precedent in the industry and solidify your claim to regulatory authority without ever having to consult a judge or an elected representative.
This is called regulation by enforcement. Rather than setting clear rules, you create clear examples. It’s also rule by fiat rather than rule of law. It’s in bad taste, even if you set out to protect people. But more than just being irritating - it’s bad regulation.
In fact, it actively harms the development of good, clean productive markets.
Good actors lack clarity on how to proceed, so they avoid the market entirely. This creates adverse selection - only the sketchy enter the market at all. That harms consumers and harms the cause of capital formation.
But it gets worse! To win settlements, regulators focus on technicalities rather than the spirit of the law. Blockchain projects have two main ways to achieve the consensus their technology depends on. One is called Proof-of-Work - the expensive, carbon-intensive process used by Bitcoin. The other is called Proof-of-Stake - used by pretty much every other blockchain, and far cleaner and cheaper. It is unequivocally better for consumers. But it also might run afoul of technicalities of securities law. So in enforcing the law, regulators create massive negative carbon externalities. Clean legislation could prevent this; regulation by enforcement can’t avoid it.
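For the curious, the energy gap between the two mechanisms falls directly out of how they work. Here’s a toy sketch - not any real chain’s implementation - of each: Proof-of-Work grinds through hashes until one meets a difficulty target, while Proof-of-Stake simply samples a validator weighted by stake.

```python
import hashlib
import random

def proof_of_work(block_data: bytes, difficulty: int = 3) -> int:
    """Grind nonces until the block hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1   # every failed guess is electricity spent on nothing

def pick_validator(stakes: dict, seed: int) -> str:
    """Proof-of-Stake in miniature: one weighted draw, no grinding at all."""
    rng = random.Random(seed)
    names, weights = zip(*stakes.items())
    return rng.choices(names, weights=weights, k=1)[0]

nonce = proof_of_work(b"block 1")                 # thousands of hashes, on average
validator = pick_validator({"alice": 60, "bob": 30, "carol": 10}, seed=42)
```

The hypothetical stake values and difficulty are chosen only to keep the sketch fast; real chains layer far more on top (slashing, finality, difficulty adjustment), but the energy asymmetry is visible even here.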
And if that weren’t bad enough, regulation by enforcement inherently exacerbates power inequalities in the market. Large firms with large legal teams or well-connected lobbying operations can operate freely, while small firms and new entrants are excluded. That’s the exact opposite of what good antitrust regulation achieves, and it is an inevitable consequence of an unclear regulation by enforcement regime.
In other words, the SEC has chosen the worst of all possible modes of regulation.
The 800-GPU Gorilla
Look - I get that you might hate crypto or Facebook. And so you look at these two areas, and you say, “Yeah, but so what?” Who really cares if we sell fewer monkey JPEGs or if Zuckerberg loses a few more zeroes from his net worth?
That’s fine. You’re entitled to that view. But what should scare you is that these look like the only two channels available for regulating AI. The single most important technology that humans have developed in 100 years will likely be regulated by… outdated laws interpreted in courts or bureaucrats fighting to assert control over new turf. That’s dangerous.
So the question of how to regulate effectively is important. And we need to solve it now. We need to develop an approach that provides a degree of safety for innovators and consumers, while also empowering our government to respond to emergent harms.
There is no shortage of good ideas. Frankly, many of them were proposed in previous crypto legislation. The challenge is actually implementing them. Here are a few to get us started:
Set clear policy goals. Technologies are going to change. Specific tactical details of regulation will need to change, too. The role of policy is to express intent and objectives clearly. We may not know the mechanism that AI will use to cause harm, but we know that we need to prevent it. Legislating the “spirit” of the law is far more useful than specific proclamations on LLMs or other technology.
Set lightweight registration requirements. The joke of last week’s enforcement action against Kraken was that the SEC claimed all that Kraken had to do was fill out a form on their website. The Kraken CEO and an SEC commissioner pointed out that no such form existed. Today’s reporting requirements – appropriate for a pre-digital and pre-blockchain world where disclosures were the only source of information – are often so onerous that they deter companies from going public or even getting started altogether. That’s a major problem.
Charter a Self-Regulatory Organization. An SRO enables industry operators to have a first pass at regulating their own industry, and in doing so, to avoid Moloch-traps, while allowing governments the right to intervene if industry self-regulation does not get the job done.
Create Sandboxes. Allow innovators to experiment freely, up to a certain scale, and thus, ensure that regulators can both encourage innovation and study potential harms before they reach critical scale.
Assign Clear Regulatory Authority. In the absence of clear ownership of regulatory authority, competitive bureaucratic creep will require entrepreneurs to kiss many rings to get their products launched. That’s a recipe for corruption and for stagnation.
Otherwise – well – we can always make someone wave a red flag on your screen before you interact with an AI.
If you liked this post, please consider subscribing. It’s free, but really goes a long way toward supporting my work.