Making Intelligence Masculine Again

Rage Bait Strikes Again

Women, it seems, have ruined the workplace.

Or, at the least, they've ruined historically male-dominated professions such as the sciences, medicine, and the law.

“The rule of law will not survive the legal profession becoming majority female,” writes Helen Andrews in her now viral essay "The Great Feminization." Andrews's bizarre screed argues that women are ill-suited to these institutional professions because they don’t engage in open conflict, aren’t interested in “the unfettered pursuit of truth,” and argue on emotional grounds rather than rational ones.

I discovered Andrews’s essay the way many people did: because The New York Times ran the bonkers headline “Did Women Ruin the Workplace?” on a conversation between Andrews and Leah Libresco Sargeant on Ross Douthat’s podcast. It’s no wonder that Oxford University Press named ‘rage bait’ the word of the year in 2025.

Reading that headline, I had the reaction that I suspect you had reading my opening paragraph—this cannot be real. “Did women ruin the workplace?” is not a serious question; Andrews’s speculation is not serious analysis; anxieties about the rule of law collapsing because ladies be judging or the academy crumbling because ladies be researching are not serious fears.

The uproar created by the Times’ framing was so swift that within a couple of hours, the Times had changed the headline to “Has Liberal Feminism Ruined the Workplace?”

"The Great Feminization" is part of the backlash to the evolution of work practices and the elevation of feminine and racially-coded values such as empathy and inclusivity in the workplace. It's part of the same discourse that saw Mark Zuckerberg tell Joe Rogan that Meta needed an injection of "masculine energy." (I’ll come back to that.) It's part of the same project that, by default, questions the qualifications of anyone who is not white, male, and ready for the camera. It's part of the same push to reassert power structures that justify the murder of a queer woman by an agent of the state.

While those connections are relatively straightforward, I want to propose a more oblique connection. "The Great Feminization" panic is also part of the same project as the push for AI everything. A core promise of artificial intelligence and superintelligence specifically is the development of a perfectly rational mind. A mind without bias. A mind without longing or fear or envy. A mind Helen Andrews might think is perfectly suited for fields such as the law, medicine, and higher education, not to mention finance, industry, and politics.


Keep reading or listen on the What Works podcast.


Before I proceed, I want to offer some caveats. I'm going to speak in terms of feminine and masculine traits, but I firmly believe these categories are socially constructed and that neither sex nor gender can be divided up into two neat and tidy categories.

As a radical feminist, I'm also aware that discussions like this one can carry a lot of baggage for everyone, but especially men. If you are a man who reads my writing or listens to my podcast regularly, please know that I'm not putting you personally on blast. I'm not even putting your gender on blast. I'm putting power and white supremacist patriarchy on blast.

Finally, regarding AI, I am not in the camp that believes there is no legitimate use of AI. I am generally extremely suspicious of it, but I do use it from time to time, especially when the garbage pile that Google Search has become fails me. If you're anti-AI, that's cool with me. If you're pro-AI, that's cool with me too. I hope you'll offer me the benefit of the doubt for being somewhere in the middle.

What Is "The Great Feminization"?

"The Great Feminization" is not a rhetorical invention of Andrews's. In the very first sentence of her essay, she links to another essay that she credits with bringing a great many things into focus. That essay, written by a presumptively pseudonymous J. Stone, is grossly misogynistic—not only claiming that women exercise power over men through their "greater ability to form emotional/hysterical coalitions" but also including images of the women he’s criticizing, giving his audience tacit permission to draw connections between how they look and how they act. It's Stone who wrote a "book" titled The Great Feminization in 2022.

Forgive me for not linking to any of this—I have no interest in connecting Stone’s website to my own in any way. You can find the link in Andrews’s piece.

Andrews borrows again from Stone, asserting that "wokeness" should be understood as a catch-all term for the ideas and practices that are, in fact, the result of feminization in institutions. She writes:

"The explanatory power of this simple thesis was incredible. It really did unlock the secrets of the era we are living in. Wokeness is not a new ideology, an outgrowth of Marxism, or a result of post-Obama disillusionment. It is simply feminine patterns of behavior applied to institutions where women were few in number until recently."

What "feminine patterns of behavior" are those? Consensus-building, cooperation, burying direction or criticism in "layers of compliments," preferring covert subversion to open conflict, etc. Feminine influence in institutions prioritizes "empathy over rationality, safety over risk, cohesion over competition."

Andrews, with Stone and others, argues that these patterns of behavior are making our institutions slow, passive, and ineffectual. As I mentioned earlier, Andrews admits to fearing that the "rule of law" cannot survive the feminization of the field. Her reason is that the rule of law depends on rational detachment—outcomes should emerge from logical argumentation and deliberation regardless of the way their content might make us feel.

In corporate leadership, Andrews worries that a bias against aggression or competitive ambition will make American companies fall behind. In higher education, she frets that feminized institutions will adopt goals other than "open debate and the unfettered pursuit of truth." She fears the impotence of a media sphere that's not welcoming to "prickly individualists who don’t mind alienating people."

Mark Zuckerberg, I assume, shares many of Andrews's worries. In the waning days of the Biden administration, Zuckerberg told Joe Rogan that Meta and other companies needed an injection of "masculine energy" because the corporate world had become "culturally neutered." Masculinities scholar Ashley Morgan notes:

"It is difficult to argue that Zuckerberg’s business has been ‘neutered,’ when Meta made a net profit of US$62 billion (£50 billion) in 2024. But this is a compelling narrative to men who feel that their position at the top might be under threat."

It's that "threat"—present in Andrews's essay, Stone's unhinged blog, Zuckerberg's podcast comments, and in countless remarks by Donald Trump, Elon Musk, and Speaker Mike Johnson—that animates the backlash. Masculine men—the people who are supposed to win in a patriarchal, meritocratic capitalist system—have become victims, pushed out of the spaces they or their forefathers created by women who exercise power in feminine ways. The reality, of course, is that countless men, women, and non-binary folks know there are still relatively few spaces not dominated by a certain kind of dude, well-meaning or not.

Luckily, a solution to the problem of feminization has presented itself and it just happens to be worth trillions of dollars.

The Masculinity of AI

In 2003, philosopher Nick Bostrom first posited the thought experiment known as the “paperclip maximizer.” He imagines a scenario in which a sophisticated artificial intelligence is programmed to produce as many paperclips as possible. What is to stop that AI from seeing all available material—from natural resources to living bodies—as fair game for its paperclip maximization goal?

Bostrom’s now-famous thought experiment represents the fears of the AI safety movement. How should we think about (and follow through on) programming artificial intelligences to minimize existential risk? To value human life? To “think” beyond simple goals and weigh conflicting needs?

The paperclip maximizer isn’t meant to inform development or policy as such, but to provide a canvas for considering what we inevitably miss when we focus on programming for ends rather than means. That said, I think the most damning element of the way this thought experiment has been deployed among boosters and naysayers alike is the tacit acceptance that something labeled an “intelligence” would follow its encoded instructions without regard for context.

What kind of intelligence is crafty enough to dismantle everything around it to create paperclips but not notice the destruction it has wrought in its wake? Intelligence is much more than rote input and output. It’s not a quality that can be reduced to a singular goal and a simplistic set of instructions.

The paperclip maximizer assumes an inherent aggression in its task, a certain contextual detachment from the material results of its action. It conjures images such as the expanse of human "batteries" in The Matrix, the tyranny of It in A Wrinkle in Time, the relentlessness of The Borg in the Star Trek universe, or any number of other ruthless "intelligences" from pop culture. The paperclip maximizer—along with other "existential risk" scenarios that ignore the harm AI causes to culture and the climate—seems farcical. That anyone would take this idea seriously seems like a joke desperately in need of a punchline.

But as I thought more about Zuckerberg's comments about "masculine energy," I realized that Zuck has already lived the paperclip maximizer. Our atoms haven't been rearranged into paperclips, of course. But the very stuff of our lives has been atomized and reassembled as data. There is no limit to what today's tech companies are willing to convert into the substance they've been programmed to produce. The scions of the tech industry have not only been permitted to keep on maximizing their digital paperclips but sumptuously rewarded for the privilege. It’s no wonder they’re willing to imagine a superintelligence even more ruthless than they are.

The willingness to label as an intelligence a paperclip machine or data vacuum that threatens to destroy life as we know it is the extension of a stunted worldview, the result of a radically binary epistemic environment associated with reason, rationality, and, yes, masculinity. Rationality is a stereotypically masculine trait. Women are encouraged to exercise and strengthen their rational capacity so we’re less likely to fall “prey” to our emotions. While there’s cultural lip service paid to men’s need to develop intellectual capacities that complement rationality (e.g., get in touch with their feelings), logic and reason still win the day in terms of economic cachet and political power.

The connection to economic and political power is what makes detached rationality a masculine trait. Masculinity—far from being a stable or natural set of characteristics—is whatever group of traits is associated with power. The traits included in masculinity are constantly changing, something that can be readily observed in the aesthetic realm.

Consider the obsessive grooming and body optimization rituals popular among influencers in the so-called manosphere. I’m old enough to remember when spending that much time on the way one looked was a distinctly feminine endeavor. The rise of the “metrosexual” in the early aughts was popularly received as a divergence from masculinity norms. Today, a significant segment of manly-man influencers perform similar aesthetic labor.

So artificial intelligence—as it currently exists, as well as how many imagine it existing in the future—can be understood as masculine because it's assumed to produce rational outputs unburdened by embodied identity or emotional connections.

A Disembodied Mind

In the mid-17th century, René Descartes posited that the universe contained two forms of stuff—thinking stuff and non-thinking stuff. The thinking stuff is that of the mind and soul. The non-thinking stuff is matter, the substance that we can see and touch all around us. When applied to human beings, this distinction is known as mind-body dualism. “I am” is the thinking part—my mind and soul—while my body is merely the vehicle for it.

However, contemporary cognitive science—not to mention more than a century of feminist and anti-colonial theory—complicates the basis of mind-body dualism. We now know that the body participates in perception in ways that Descartes couldn’t have dreamed of. It’s not so much that “I think therefore I am” is wrong. It’s that the “thinking” is distributed and diffuse, a product of the whole organism rather than a divisible thinking part.

One way we might understand the current trajectory of the AI industry is as the endgame of mind-body dualism. The industry defines intelligence in ways that disregard embodiment and biopsychosocial context. Instead, its idea of intelligence is hyper-rational, accumulative, and emotionless. It's immaterial. So it’s no coincidence that many AI boosters are also post-humanists, supporters of an ideology that imagines a future populated by masses of people living entirely in quantum computers.

However, we shouldn’t dismiss the AI industry’s view of intelligence as mere technological fantasy or, in Bostrom’s view, potential dystopia. Instead, the AI industry has, in some cases intentionally and in others incidentally, become swept up in the ideological current seeking to reassert a regressive form of unimpeachable masculinity. In her 1998 book on AI and gender, scholar Alison Adam notes the way the mind is associated with masculine traits, while the body is associated with the feminine. Women may possess a "complementary" reason, but it is "seen as lesser than the ideal of pure masculine reason, a process of epistemic discrimination."

So women's entry into historically male-dominated institutions means that a complementary-though-lesser form of reason is ascendant, threatening progress, innovation, and the natural order of things. If we imagine AI systems as disembodied minds, then they represent, according to mind-body dualism, pure masculine reason. So the insistence on the universal need for and inevitability of AI in every workflow can be seen as an insistence on re-masculinization.

Shoving AI features into every corner of work, from job interviews to project management to advertising creative, offers a path toward a workplace in which context is bypassed for the churn of productivity. The faster we go, the less time there is to question or relate. The less friction there is, the more likely we are to accept the answers we're presented with. Not only is all this an obstacle to richer work and more resilient companies, but, with shockingly few exceptions, the AI systems also can't do the job.

The Great Re-Masculinization

Whatever gains we’ve made in making the workplace more accommodating and equitable, we’ve seen them systematically dismantled: first, as part of the backlash to MeToo, then as a backlash to Covid restrictions and work-from-home, and now as a more fundamental backlash to feminist, anti-racist, and anti-colonial modes of thought.

AI boosters may claim that implementing AI systems in the workplace will lead to greater efficiency and profit, but what AI really promises is a return to a hyper-rational and logical workplace. And as with all nostalgic longing, this is a workplace that never existed.

Even if the workplace of pure masculine reason did exist at one time, it would have been a workplace deficient in many of the qualities that generate dynamism in any organization. Empathy, consensus-building, collaboration, solidarity, and embodiment don't constrain success; they fuel it. A company that doesn't encourage and nurture those values will ultimately underperform those that do. And the AI industry and its products, as tools of the great re-masculinization, are actively undermining the very human desire to connect and create with others.

Philosopher Liz Jackson observes that AI writing assistants, such as Microsoft Copilot or my once-beloved Grammarly, regularly provide suggestions on how to sound more authoritative, be more concise, or avoid qualifying language. She writes:

"...as a philosopher, I like being qualified sometimes and being strong and blunt in tone at others. I like simplifying my language when it is sensible and when no meaning is lost, and I consider thinking about simplifying language itself as a philosophical exercise. However, if I word everything in the strongest way possible, people are not likely to see me as a person who exercises philosophical prudence, which I regard as an intellectual virtue."

And, as Jackson rightfully points out, women put their reputations and professional relationships at risk when their communication style becomes too direct, forceful, or aggressive. She adds:

"My email programme is thus doing something like asking me to ‘lean in’ to be an assertive, concise, authoritative, and yes, a more ‘powerful’ communicator. But just as women do not succeed ... simply by emulating manliness in the workplace, I have not found that expressing myself more like a man has been a solution for me to combat sexism, sexist stereotypes, and sexist expectations."

Oddly enough, OpenAI stumbled when users hated a new ChatGPT model because it wasn't as personable as its predecessor. A user in the OpenAI Developer Community going by alan1cooldude had this to say about it:

"Since the upgrade to GPT-5, I’ve noticed a subtle but important change: the system now seems to prioritize speed, efficiency, and task performance over the softer, emotional continuity that made GPT-4 so special."

So maybe feminized communication isn't so bad after all?

The re-masculinized workplace the AI industry imagines is one in which there aren’t workers so much as there are heroes supported by a team of bots. There’s no need for those annoying HR seminars if we remove human interaction from the equation. There is no need for meetings with their feminine talking and sharing if my bot can just talk to your bot.

Think that sounds ludicrous? Well, the CEO of Zoom, Eric Yuan, told The Verge’s Nilay Patel that’s exactly what his company is trying to build:

“I can send a digital version of myself to join so I can go to the beach. Or I do not need to check my emails; the digital version of myself can read most of the emails. Maybe one or two emails will tell me, ‘Eric, it’s hard for the digital version to reply. Can you do that?’ Again, today we all spend a lot of time either making phone calls, joining meetings, sending emails, deleting some spam emails and replying to some text messages, still very busy. How [do we] leverage AI, how do we leverage Zoom Workplace, to fully automate that kind of work? That’s something that is very important for us.”

Similarly, a small business influencer I once knew is now selling a program that purports to teach you how to replace five team members with AI. A chatbot and some automations can apparently eliminate the need to hire anyone to support your heart-led and purpose-driven business. Both the sales page for this program and the thumbnails of her videos have all the aesthetic hallmarks of the divine feminine (right down to the photoshopped aura that glows behind her). But the goal is one that squarely aligns with a domination-oriented masculinity.

I think there are genuine uses for LLM-based artificial intelligence. And as I’ve said before, as a lifelong science fiction nerd, I have a decidedly positive disposition toward the potential of artificial consciousness. But the actually existing economic and social context that surrounds the rush to jam AI into everything is a far greater existential risk than a paperclip machine.

Even if it’s true that The Great Feminization requires organizations to move a little slower or some men to learn new communication skills, this seems a far better outcome than a headlong rush into an imagined past. After all, patriarchy and misogyny hurt men, too.

One More Thing

What are we to do about the way AI is being deployed as part of the great re-masculinization?

For the purposes of my suggestions, I’m going to assume that you’re not a venture capitalist or the CEO of a software company. If you are, congratulations for reading this far. Please make better investment and product choices.

In our day-to-day work lives, we can embrace productive and creative friction—the kind of obstacle to automation or speed that comes from collaborating, perspective-taking, and thoughtful research. Not everything is better when it’s hands-off, super fast, or independent. In fact, few things are.

We can also avoid covert anthropomorphism in how we discuss AI tools. These tools do not think, feel, or apologize. They aren’t proud of you. They have no needs or desires. The language we use to describe AI can either be a vector for its spread into every corner of our lives, or it can be a way of right-sizing how we use it.

We can be outspoken about the inclusion of AI tools in the software systems we use. Choose software made by companies that aren’t incorporating stupid features into their systems. Fill out feedback surveys or write emails that express your disapproval. Go to Settings and turn off any feature that shares your data with the company or its partners. Let the people in charge know that AI makes their products worse, not better.

Finally, remember that human or artificial, intelligence contains multitudes. Don’t limit your ways of knowing based on socially constructed hierarchies. True intelligence incorporates reason and rationality alongside embodied wisdom, relational understanding, and lived experience. Fields such as law, medicine, science, and politics need that full expression of intelligence as much as college English departments do.

Not only will the rule of law and the academy survive The Great Feminization, they need it to thrive. Sorry, Helen.

 

Introducing Blank Slate

Imagine running a small business that meets all your needs. Blank Slate guides you through a progressive process of challenging your assumptions and rethinking your small business so that you can develop a strategy that’s clear, decisive, and sustainable. It’s a guide more than a decade in the making, drawing on the workshops, courses, and coaching I’ve developed over the last 17 years. 

It’s a permission slip. A confidence boost. And a business coach in your proverbial pocket.

Blank Slate is a 140+ page workbook that comes both in color for working on tablets and printer-friendly grayscale for working by hand. You also get an audio version and exercise-only workbook so you can do your thinking wherever you prefer. 

Learn more

 
 
Tara McMullin

Tara McMullin is a writer, podcaster, and critic who studies emerging forms of work and identity in the 21st-century economy. Bringing a rigorous critique of conventional wisdom to topics like success and productivity, she melds conceptual curiosity with practical application. Her work has been featured in Fast Company, Quartz, and The Muse.
