Sunday Stories #4

This is reprinted from my Substack, which can be found here.

Will AI Make The Web Go Dark?

Over the past 30 years, literally billions of websites have been built to share content with the rest of the world, and new tools for displaying and consuming that content have emerged. It seems hard to believe, but there was a time when eBay and Yahoo—two sites with a fraction of the traffic that Google and Facebook have today—were the dominant players.

Sites, apps, and processes now have a shorter shelf life than ever, and in the next wave, that shelf life, as measured in traffic, could decline to zero. There is a real possibility that AI will simply learn everything on all those websites and present it to users without ever directing traffic to the original sources.

Chris Dixon details how this is happening already:

Chris Dixon: Over the last 20 years, an economic covenant has emerged between platforms—specifically social networks and search engines—and the people creating websites that those platforms link to. If you’re a travel website, a recipe site, or an artist with illustrations, there’s an implicit covenant with Google. You allow Google to crawl your content, index it, and show snippets in search results in exchange for traffic. That’s how the internet evolved.

David George: And it’s mutually beneficial.

Chris Dixon: Mutually beneficial. Occasionally, that covenant has been breached. Google has done something called one-boxing, where they take content and display it directly. I was on the board of Stack Overflow, and they did this—showing answers directly in search results instead of driving traffic to the site. They’ve done it with Wikipedia, lyrics sites, Yelp, and travel sites.

David George: Yeah, they did it with Yelp.

Chris Dixon: And people get upset. With Yelp, they promoted their content over others. These issues existed, but the system still worked. Now, in an AI-driven world, if chatbots can generate an illustration or a recipe directly, that may be a better user experience. I’m not against it—it’s probably better for users. But it breaks the covenant. These systems were trained on data that was put on the internet under the prior covenant.

He goes on to note that if there were just a few—like five—dominant AI systems, those systems would summarize everything else and supply the source of truth, just like back in the bad old days of three TV networks. To keep from falling into this trap, the internet needs new incentives.

At this point in the talk, Dixon describes how new technologies go through phases that are relatively predictable at first and then incredibly surprising. He calls this the Skeuomorphic vs. Native phase distinction. In the skeuomorphic phase, the new technology is doing the same thing the old technology did—but better. So, for example, think of the early websites that replicated a newspaper or a magazine online.

But then, after an adjustment period that could last a decade, come the applications that have no predecessor. These new uses are truly native, and that’s when new businesses get built on the next wave of tech advancement. The next wave builds on the capabilities enabled by previous waves.

Chris Dixon: Second-order effects matter. Bitcoin couldn’t have existed before social networking. Thirty years ago, if you told someone that gatekeepers would disappear and people would control their own media, they wouldn’t have predicted digital currencies emerging from that.

David George: There would have been no way to create communities around them.

Chris Dixon: Exactly. It would have been a New York Times article dismissing the idea, and that would have been the end of it. There would have been no space for people to congregate, discuss, and build momentum.

Dixon goes back in history to show that this phenomenon is not new—but that it has led to incredible advancements and new artistic forms. A major turning point in the world of visual imagery occurred when photography began to displace artists who worked with paint, brushes, and canvas.

Chris Dixon: When photography first emerged, cultural critics worried about its impact on art. Walter Benjamin’s famous essay, The Work of Art in the Age of Mechanical Reproduction, questioned what would happen to artists when anyone could take a photograph.

Today, similar concerns exist about generative AI. If AI can create entire movies, what happens to traditional filmmaking?

David George: We’re already seeing that with images.

Chris Dixon: Images are there, and video is probably coming soon. What happened with photography was twofold. Fine art moved toward abstraction, away from photography, leaning into what made it unique—giving rise to movements like Cubism. On the other side, photography enabled the rise of film. Someone recognized that while the machine could replace painting, it could also create a brand-new art form that never existed before. Animation had some of this, but film became a sophisticated new medium.

Film became the native media form of the age of mechanical reproduction.

Right now, AI is replacing search, customer service representatives, and some aspects of image production. But all of those functions have predecessors. What comes next—when AI becomes truly native—is unknowable. What is known, however, is that there will be resistance.

Chris Dixon: Exactly. Marc Andreessen wrote a great blog post asking, “How do I know they’re going to ban AI in medicine?” Because they already have. Many areas AI will impact are highly regulated.

Take the Hollywood generative AI example—adopting AI might require laying off unionized workers, which companies may resist. Maybe fresh upstarts in other countries will create AI-native movie studios, but that will take time.

The right approach is probably to integrate AI with existing Hollywood talent, not replace it. There are many highly skilled people in the industry. But how long will that cultural shift take? It may require an entirely new generation.

Dixon notes that, as of now, five—and only five—companies control most of the internet in one way or another, and they have effectively kicked away the ladder to new uses and emerging applications. There was a time when attracting users was the primary goal, even without a clear path to monetization. But eventually, the dominant companies—Google, Facebook, and others—figured out how to profit from attention, whether through subscriptions, sales, advertising, or other means. They captured the eyeballs and monetized them.

Will AI be a better, iterative version of that—or something entirely new?

The interview with Chris is here.

It’s Not My Imagination: Movies Do Suck More Now

I’ve used movies as educational tools for my kids, and they are culturally literate as a result. We’ve watched movies together, both old and new, and many of the films I saw earlier in life—commercial vehicles with no pretense of great art-making—have held their value well. I recently wrote another entry in my Sunday Morning at the Movies series about The Year of Living Dangerously, a film that holds up remarkably well.

I used to think my lack of interest in so many contemporary films was just a sign of getting older—of preferring things made when I was young, when my mind was more pliant and impressionable. It’s been suggested that our musical tastes become permanently anchored to what was popular when we were around 20 years old, and that’s at least partially true for me. I love the music from that period in my life.

But when it came to movies and TV shows, I couldn’t help noticing that maybe it wasn’t just me. I see films now that I think are just atrocious. Maybe movies really do suck now. If so, why? The New Yorker offers a theory.

The writer begins by quoting dialogue from recent movies—dialogue of the kind I was taught to recognize as far too “on the nose,” meaning obvious and over the top. It is simply bad writing. And then there’s this:

These scenes, from the recent movies “Gladiator II,” “Megalopolis,” and “The Apprentice,” respectively, are examples among many—so many!—of what I’ve started calling the New Literalism. This isn’t a new genre but a new style. Each of these films belongs to its own genre—action/adventure, sci-fi/drama, and drama/history, respectively—and none of them seems interested in the filmic tradition of documentary realism, not even the bio-pic.

When I say literalism, I don’t mean realistic or plainly literal. I mean literalist, as when we say something is on the nose or heavy-handed, that it hammers away at us or beats a dead horse. As these phrases imply, to re-state the screamingly obvious does a kind of violence to art. “A point is still a point!”

There is a meme going around from a “Family Guy” episode in which Peter, the animated comedy’s paterfamilias, confesses to his family that he never cared for “The Godfather.” Why not? “It insists upon itself,” he says with a shrug. A lot of recent productions deserve this scorn—literally. It’s gotten so bad that, lately, the highest compliment I can muster for even the best of them is: “Well, at least it’s a movie.”

OK, this is not much proof that the movies are worse, but I’m not the only one who has noticed. These movies don’t breathe; they don’t let the viewer fill in some of the gaps. There are no long shots of the actors not talking.

Here is a scene from The Year of Living Dangerously with no dialogue: just great music and images that move the story along.

I lay much of the blame for the degradation of movies on technology—including the SFX industry, which has made completely literal what older effects could only suggest. Anything that can be imagined can now be put on film, including things that would have been impossible with puppets, clay models, and green screens.

That the stories these effects serve are chaotic doesn’t seem to matter. It’s an effect, and the audience is encouraged to step out of the story and marvel at it, fully aware that it’s not real. Whole genres of movies—mostly of the superhero variety—are built on effects. It’s numbing and stupid. But even the “real” movies are overwhelmed by the obvious in this New Literalism.

The French theorist Roland Barthes coined the term studium for photographs that seemed to him to represent “a classical body of information”: human-interest stories, “political testimony or . . . good historical scenes” that produce in us “a kind of general, enthusiastic commitment.” This is useful, in its way, but it’s not art. Why do these recent movies insist on rehashing this studium, these familiar source materials, this aura of pastness? Are they trying to compete with the new popularity of documentary forms by absorbing them?

I think something else is going on. The point is not to be lifelike or fact-based but familiar and formulaic—in a word, predictable. Artists and audiences sometimes defend this legibility as democratic, a way to reach everyone. It is, in fact, condescending. Forget the degradation of art into content. Content has been demoted to concept. And concept has become a banner ad.

Saying the quiet part out loud has given way to a general loudness. This is as true in our cultural life as it is in our political life, which feels like a badly written finale, so in your face are the Ponzi schemes, Nazi salutes, and tech-bro cant of our latest overlords. That sense of unmistakable catastrophe may be why we keep returning to predigested cultural comfort food.

AI is going to take this to an unprecedented level. We are about to be awash in “content” of limited subtlety, and separating the art from the dreck will be difficult. Some will use these new tools to tell new and better stories. Cinema is an art form built on technology—technology that painters deemed an abomination when it first emerged. But that artistic evolution will take time. For now, the culture has stopped innovating—not because of the limits of the medium, but because of the limits of the audience. I hope to live to see its next iteration of genius.

The New Yorker story is here.

The New Mercenaries

I was horrified, intrigued, and entertained by the Zelensky/Trump interaction a few weeks ago—and Trump was right: it was great television. But it looks like others were watching too, and what they saw was an offer of security in exchange for access to resources in the ground they otherwise couldn’t reach.

The Democratic Republic of Congo (DRC), a nation rich in minerals yet beleaguered by conflict, has extended a bold proposition to President Trump: assist in expelling the M23 rebels, and in return, gain access to its vast mineral wealth.

Talk of a deal with the U.S.—which is also in discussions with Ukraine over a minerals pact—has circulated in Kinshasa for weeks.

“The United States is open to discussing partnerships in this sector that are aligned with the Trump Administration’s America First Agenda,” a State Department spokesperson said, noting that Congo held “a significant share of the world’s critical minerals required for advanced technologies.”

The U.S. has worked “to boost U.S. private sector investment in the DRC to develop mining resources in a responsible and transparent manner,” the spokesperson said.

The DRC’s mining sector, like much of industry in sub-Saharan Africa, has long been mired in “complexities”—from allegations of corruption to the dominance of foreign entities. Perhaps the U.S. is now contemplating intervention to prevent Chinese firms from monopolizing significant projects like the Manono lithium venture.

In essence, while the DRC’s proposal presents a tantalizing opportunity for the U.S. to secure critical minerals, one must ask: is this, finally, true colonialism raising its head again? I’ve said for years that the U.S. is the world’s stupidest empire—we bear all the burdens of empire without enjoying the perks. Fighting rebels in Africa to gain exclusive access to a key mineral source? Now that’s a real perk. Is this where we’re headed?

“I think it’s certainly something that will pique people’s interest in Washington, and I think it has attracted interest,” said Jason Stearns, a Congo expert at Canada’s Simon Fraser University, noting that Congo’s mineral supply chains are currently dominated by China.

But, he said, the U.S. does not have state-owned companies like China does, and no private American mining companies currently operate in Congo.

“So if the Congolese want to make this work, it will probably not be by offering a U.S. company a mining concession. They’ll have to look at more complicated ways of engaging the U.S.,” he added.

So, the U.S. would have to set up a sort of East India Company to access these minerals, and the U.S. military would have to establish bases. Back to the Future, indeed—with American forward operating bases and U.S. Marine jungle foot patrols guarding trucks headed for the coast, loaded with lithium.

This seems unlikely, but the world is changing rapidly. If these minerals are valuable enough, someone will bring in the muscle to get them. It might as well be the United States. As I’ve noted before, a frontier has always suited a strong country. The West helped shape the American character because it provided a place to send the troublemakers and malcontents. Danger sharpened the ill-disciplined mind. Fewer men confused themselves with women on a deadly frontier.

It could happen.

Reuters has the story here.