Drifting Toward the Status Quo

OpenAI has been on a tear lately, announcing new features and products that leverage its latest AI capabilities. On October 1, it launched Sora, a TikTok-like app exclusively populated by AI-generated video. On October 6, it hosted its annual DevDay event and announced that popular apps like Zillow, Spotify, and Canva are now integrated into the ChatGPT interface. On October 14, CEO Sam Altman posted on the app formerly known as Twitter that OpenAI was rolling back restrictions on “mature” content for age-verified users.

In stark contrast to these announcements is OpenAI’s mission: to ensure that artificial general intelligence (AGI) benefits all of humanity. Altman and other AI boosters talk a good game about the economic, cultural, and social blessings that will come from a god-like emergent artificial intelligence. And yet the company’s current strategy certainly seems pretty mundane. Are these companies innovating? Or are they simply finding more expensive, energy-sapping ways to get us to buy more mediocre stuff?

Look, I don’t know what’s happening behind the scenes at OpenAI or any other AI company. I follow AI news more closely than your average knowledge worker, but certainly less closely than your average tech bro. However, I’m very good at spotting patterns. And the one that OpenAI and the AI industry as a whole seem to have fallen into is a pattern that anyone who cares about challenging themselves, pursuing unconventional choices, and creating remarkable things (ahem, you) should be wary of.


Keep reading or listen on the What Works podcast.


High Standards

As a lifelong science fiction lover, I have pretty high expectations of something we call artificial intelligence. Some of those expectations are quite humane, like Star Trek’s Data or the robots of Becky Chambers’s Monk & Robot novellas. Others are quite the opposite, like the machines of The Matrix or HAL in 2001: A Space Odyssey. For good or for bad, these AI systems are powerful. 

OpenAI seems to have similar expectations for AGI. They believe that AGI could “elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.” Further, they lay out three overarching principles for AGI stewardship: (1) the development of AGI should “empower humanity to maximally flourish in the universe,” (2) its impact should be shared equitably, and (3) its developers should seek to “successfully navigate massive risks.”

So why is a company that believes in the near inevitability of this superhuman power devoting resources to making it so that I can create a mediocre slide deck or search for an unremarkable hotel room via chat?

The easy answer is money. And that’s not wrong. But I think this actually deserves a more complex—and useful—explanation, because we’ve all experienced something similar. If you’ve ever chosen an ambitious, unconventional, or deeply meaningful aim only to see your plan devolve into something far more run-of-the-mill, stick around.

Drift to Low Performance

Ever heard that cliché, “Shoot for the moon. Even if you miss, you’ll land among the stars”? This astronomically dubious truism suggests that pursuing an ambitious goal will get you further than pursuing a “realistic” goal. I’m sympathetic to this reasoning, especially when it comes to identifying your course of action. However, “landing among the stars,” in practice, tends to convince you that you should aim for the stars rather than the moon.

If you don’t hit the stars? Well, maybe you aim to land in the upper atmosphere? I don’t know; it’s a scientifically inaccurate metaphor.

What I’m trying to describe is a behavior pattern that systems analysts call “drift to low performance.” In this pattern, a goal is set and actions to meet that goal are taken. When those actions don’t lead to the desired outcome, a solution is identified and implemented. However, there’s a delay between implementing the solution and achieving the original goal. In the space of that delay, pressure builds to adjust the goal downward. 

For example, let’s say I set a goal of reading 100 books in a year. By July, I’ve only read 30—far behind my target. So I decide to increase the time I have set aside for reading. One month later, I check my book list and discover that I’ve still only read 38 books. That’s nearly double the rate I was reading before! But when compared to my benchmark for 100 books in a year, I’m still woefully behind. So the pressure mounts to lower my goal. Okay, now I’ll read 90 books in a year. Not only have I lowered my standard, but it’s become harder to justify keeping the extra reading time. So my reading slows down again, and now 90 books seems impossible. The cycle continues, and I drift toward the status quo.

The pressure to adjust the goal is a powerful point of leverage. In an individual context (e.g., a goal you set for yourself), that pressure generally takes the form of negative self-talk, lack of confidence, or even embarrassment. By addressing those psychological pressures, we can resist capitulating and adjust our actions instead of our goals. 

In more complex systems (e.g., when an organization sets a goal), the pressure can be difficult to resist. With more stakeholders and financial entanglements, the pressure to lower expectations and go for the “sure thing” is often overwhelming. 

The situation at OpenAI seems to fit this “drift to low performance” archetype. The organization launched with the goal of creating god-level AGI. Altman has said over and over again that they’re close—and even predicted its arrival by the end of this year. And yet, the odds are good that if you spend any time on ChatGPT or its kin, it will return “answers” that contain substantive mistakes. That’s not to say that the AI systems in development today aren’t powerful data processing tools, but it is to say that they in no way resemble god-level AGI.

Altman believes (or did believe) in his goal enough to raise nearly $60 billion in capital to pursue it. At least early on, OpenAI’s investors were quite patient. They’re still relatively patient by 2020s tech investment standards. But the pressure to adjust course (or at least develop a concurrent course) is mounting. Not only do investors want to see the potential for a payoff, but they’re also likely to get squeamish about the amount of resources OpenAI is sinking into non-revenue-oriented development. The result? Well, it’s all those product announcements we’ve seen over the last month.

Are we to believe that chatting our way through a Zillow search is a meaningful step toward AGI? Or are we seeing the evidence of a goal that’s drifting toward the proverbial stars?

Drift to the Default

Over time, erosion or drift in a system can fundamentally change its function. Systems theorists remind us that the function or purpose of a system is what it does rather than what we say it’s supposed to do. So the function or purpose of a company is what it does (e.g., the products it launches) rather than whatever it puts in its mission statement.

In the case of OpenAI, if AGI that benefits all of humanity is the goal and current efforts are falling short, there will be pressure to adjust the goal. If the goal, implicitly or explicitly, becomes “prove your insane valuation” rather than “create AGI,” the system will reconfigure to match that goal—and in the process, change what it does.

What we’re seeing with the AI industry and what we often run into in our own lives and work isn’t merely “low performance.” The goal and the actions required to achieve it haven’t just been adjusted downward; they’ve fundamentally changed. “Develop AGI and ensure it benefits all of humanity” is profoundly different from “prove your insane valuation.” The way an organization would deploy resources to work toward AGI is very different from the way it would deploy resources to develop profitable revenue streams.

Another way to make sense of this pattern is as a “drift to default expectations.” This is what happens when goals are ill-defined, strategy is lacking, and outside pressure fills the gaps they leave. Instead of building the capacity to create or do something remarkable, a company or individual builds the capacity to create or do something unremarkable—the economic, political, social, or cultural default expectation.

There are lots of reasons that a system might drift toward default goals, not least of which are material needs like paying your mortgage, caring for family, or proving to your billionaire friends that your company can actually make money. Other reasons are squishier—sticking with an unconventional goal requires persistence, creativity, and personal independence. We often don’t realize how much influence unspoken norms and explicit concern-trolling can have until we’re trying to pursue a novel path forward.

To get a little technical for a minute, the drift to low performance or drift to default expectations pattern consists of two balancing feedback loops joined by the goal of reducing the gap between current reality and desired reality. Each feedback loop represents one side of the challenge. 

In the case of OpenAI, we could say the goal is to close the gap between current artificial intelligence systems and AGI. One feedback loop holds the standard for what constitutes AGI (“AI systems that are generally smarter than humans”); the other holds the investment in developing it. To close the gap between the current and desired reality of AI, either development must continue at or beyond the current level of investment, or the standard for what counts as AGI needs to be lowered. That is, you can close the gap by simply making the goal less ambitious.
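
To make that structure a little more concrete, here’s a minimal sketch of the archetype in Python. The two loops share a single gap: one closes it slowly through corrective effort, the other closes it by quietly eroding the goal. Every number here (starting values, rates, time horizon) is an illustrative assumption, not a measurement of anything real.

# A toy model of the "drift to low performance" archetype:
# two balancing loops joined by one gap. All values are illustrative.

def simulate(steps=24, goal=100.0, performance=30.0,
             correction_rate=0.05, erosion_rate=0.15):
    history = []
    for month in range(steps):
        gap = goal - performance
        performance += correction_rate * gap  # loop 1: slow, delayed improvement
        goal -= erosion_rate * gap            # loop 2: pressure to "be realistic"
        history.append((month, performance, goal))
    return history

for month, performance, goal in simulate():
    print(f"month {month:2d}: performance = {performance:5.1f}, goal = {goal:5.1f}")

Because the erosion rate outpaces the correction rate, both numbers converge far below the original goal: performance improves a little, the goal gives up a lot, and the gap disappears without anything remarkable ever happening. Raise the correction rate (or refuse to touch the goal) and the drift slows or stops.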

The longer it takes for the investment in AGI development to pay off, the greater the pressure to dial back the definition of AGI. This has already happened. In the first episode of the OpenAI podcast (June 2025), Altman declares that the definition they had for AGI five years ago has already been “well surpassed” and that the goals they have today are even more ambitious. That might sound like their expectations are becoming more rigorous, but in reality, Altman has backcast and even retconned the standard, asking us to believe that the hallucination-prone calculators we have today would have been recognizable as AGI five years ago.

Star Trek exists, my guy. You’re not fooling me.

Revising the standard downward is classic “drift to low performance” system behavior. But with OpenAI’s mission—as is so often the case with our own ambitions—there’s a deeper layer of drift. The standard isn’t only eroding; its character is changing. There’s pressure to lower the standard, but also to make it more conventional (i.e., profit-driven). As the standard drifts from world-changing to banal, it changes how the company invests in its future: resources get reallocated from AGI development to revenue-stream development. In the process, OpenAI starts to look like any other massive tech company rather than something visionary.

OpenAI’s drift to the default seems to have them pursuing the utterly unremarkable path of creating products that make nominally convenient services nominally more convenient to use. These products promise to deliver us into a future in which we can all consume more with greater ease and less friction. Are we supposed to believe that this is the next step toward an AGI that can eliminate hunger or negotiate a cease-fire? Is this an incremental step on the path toward fewer working hours and more time with loved ones? In short, no.

You don’t get to pro-social goals through anti-social means. Maximally extractive, minimally beneficial consumer capitalism isn’t a mere detour on the route to a better future for us all. Building an agentic AI system that can help you spend your money in algorithmically determined ways isn’t a mile marker on the road to a better AI, let alone a better future.

Last Thing

My point here isn’t to rage about the current state of AI development or even write it off entirely. My point is that systems do what systems do—no matter how smart, visionary, or wealthy the people involved in them are. The status quo is a mighty tether with just enough play to let us believe we’ve escaped it. Without diligent safeguards and careful stewardship, we’ll get reeled back in. Unconventional objectives will degrade to conventional ones; creative ideas will erode into conservative ones; revolutionary missions will drift into incrementalist ones. 

I also don’t want to give the impression that objectives and goals should never change. They absolutely should! Sometimes there are very good reasons to lower one’s aims or even fall back on something that resembles the status quo. Constructive obstinacy is often a luxury that few of us can afford. 

Being able to spot the pattern, call it what it is, and intentionally choose how to proceed is how we exercise agency in situations with conflicting priorities. One of the things that’s so galling (at least to me) about the field of AI development and OpenAI in particular is the refusal to acknowledge the pattern and communicate why they’re doing what they’re doing. The inconsistency between their rhetoric and their output makes these companies difficult to trust. 

We can (and should) do better.

As long as we continue to aim for goals or commit to growth, we’ll deal with the tendency to drift. As long as we point our organizations toward meaningful objectives, our leadership will be tested by the forces that encourage us to settle for less. The pressure to acquiesce shouldn’t come as a surprise, so we should be prepared for it when it arrives.


Tara McMullin

Tara McMullin is a writer, podcaster, and critic who studies emerging forms of work and identity in the 21st-century economy. Bringing a rigorous critique of conventional wisdom to topics like success and productivity, she melds conceptual curiosity with practical application. Her work has been featured in Fast Company, Quartz, and The Muse.
