Black Box Thinking
How is artificial intelligence impacting the way we think and what we know to be true?
Just about every app I use has added “helpful” AI features in the last year.
Canva wants to wave a magic wand at my designs. Descript would like to select clips and suggest titles for my podcast. Zoom wants to summarize my meetings. Chrome would like to sort my tabs for me.
Sometimes, I find these features legitimately helpful. But more often than not, the proliferation of AI notifications and suggestions is just a distraction—an interruption in my flow.
I'm not anti-artificial intelligence. I'm not even sure I'm upset about developers using my publicly available works in the training data for their models. I am interested in how AI will affect the workplace and job market—both the positives and the negatives.
But right now, what I am thinking about most when it comes to AI is how it’s impacting the way we think. How could artificial intelligence limit our capacity for critical thinking? In what ways might it further erode trust in science or expertise? How will it shape our experience of truth in the years to come?
Today's AI systems are often described as black boxes.
Developers create a machine-learning algorithm, feed it training data, and end up with a model that others can use. But how exactly the model does its thing isn't well understood. AI developers focus on the inputs and outputs—and as long as they're getting what they want from the outputs, they let the machine do its work.
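To make that workflow concrete, here's a minimal sketch in Python, using scikit-learn and synthetic data as illustrative stand-ins (my choices, not anything a company like Google has published): a developer feeds data to a learning algorithm, gets a model back, and judges it only by its outputs.

```python
# Illustrative sketch only: scikit-learn and synthetic data stand in for the
# far larger, far more opaque systems this essay is about.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 1. Gather training data (here, 1,000 synthetic examples with 20 features).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Feed the data to a learning algorithm and get a model back.
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# 3. Judge the model by its outputs alone.
print("accuracy on held-out data:", model.score(X_test, y_test))

# The "reasoning" behind any single prediction is spread across thousands of
# learned weights -- numbers with no human-readable explanation attached.
print("learned weights:", sum(w.size for w in model.coefs_))
```

Nothing in that loop requires the developer to understand what the model actually learned. As long as the accuracy number looks good, the box stays closed.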
Here's how journalist Chloe Xiang put it in an article for Vice:
Rarely do we ever question the basic decisions we make in our everyday lives, but if we did, we might realize that we can’t pinpoint the exact reasons for our preferences, emotions, and desires at any given moment.
There's a similar problem in artificial intelligence: The people who develop AI are increasingly having problems explaining how it works and determining why it has the outputs it has.
To Xiang's point, our everyday decisions aren't often the subject of philosophical self-inquiry.1 We don't interrogate what we choose to make for dinner or why we selected one pair of shoes over another. And that's fine—most of the time.
As many of us have come to realize, our own black-box thinking often hides cultural baggage that doesn't align with our stated beliefs or worldview. What our subconscious mind believes and what our conscious mind thinks it believes are often at odds. We know that our productivity shouldn't determine our sense of self-worth and that our personal struggles are symptoms of systemic problems. But we still get down on ourselves when we don't measure up to the bar our cultural conditioning set decades ago.
Cultural conditioning shapes how we think and what we believe is true in ways that feel natural. We learn, “That’s just the way it is,” or, “That’s just how the world works,” to the point that we don’t question. Why would we? We take for granted that a rugged individualist disposition is a positive trait or that growth is good.
Maybe in this day and age, we need to think about technological conditioning, too.
Recently, Google launched AI Overviews as a new feature at the top of some search results. AI Overviews are similar to the "snippets" that have graced our more straightforward searches for years. But instead of pulling out (hopefully) relevant content from one source, the AI Overview summarizes multiple sources. For instance, when I googled "Google snippets history," an AI Overview gave me a timeline and a few other facts, with links to four sources. There's also a bit of fine print at the bottom to say that "Generative AI is experimental."
After Google rolled out AI Overviews, plenty of people started to put the new feature through its paces. In one instance that went viral, a search for how to get the cheese to stick to a pizza resulted in the suggestion of adding some non-toxic glue to the sauce. This Overview drew on a farcical Reddit post. Another, about the number of rocks one should eat each day, drew on some findings from none other than the satirical “news” site The Onion.
It's easy to say that any thinking person would dismiss these results as unfortunate errors—a glitch in an otherwise impressive matrix. But is the average (or sophisticated) searcher really a “thinking person” today? Do we approach search results with critical thinking or cognitive bias? How often do we accept the answers the Almighty Google delivers so that we can get back to the task at hand? Our technological conditioning naturalizes the credibility of search results—after all, that’s why search ads are so valuable.
Instead of instigating healthy skepticism and critical examination, the “black box” engenders an odd credibility.
We can’t possibly understand how Google (or any other app) does what it does, so it must be doing it right.
On the developer's side, the "black box" represents a lack of knowledge or understanding. At the very least, it connotes hidden knowledge. However, on the user's side, the black box lends authority to the information it spits out. The black box doesn't prompt us to question so much as it prompts us to accept in a hurry and move on to the next task. If the machine says it, it must be true.
That fine print warning at the bottom of the AI Overview, "Generative AI is experimental," isn't really a caution at all. It offers developers plausible deniability while lending the future-focused, optimistic aura of "experimental" to the tool's output. We want to believe in the experimental. We want to believe that scientific innovation can help us solve real-world problems. And after decades of filtering the vast majority of our online activity through Google, we want to trust it.
The black box creates the perfect conditions for what philosopher C. Thi Nguyen calls hostile epistemology, or the way "environmental features exploit our cognitive vulnerabilities."2 As artificial intelligence flirts with the unknowable, our mere organic intelligence must yield to sheer processing power. By occluding our view of the process by which an answer or suggestion is delivered, artificial intelligence features also occlude our prompts to think critically about the answers they provide.
As Nguyen puts it:
...we will only perform this error metabolism if we have access to the evidence of our errors. And that evidence can be hidden. If we are brought to trust and distrust wrongly, if we have been convinced to settle our minds in certain directions—then we can miss, or dismiss, the evidence of our error.
We can’t refute something we don’t understand. We can’t question the logic of an argument that doesn’t show its work.
We can easily perceive the black box environment and its experimental features as indicators of a superior intelligence—even a superior wisdom. We learn to defer to the black box because, certainly, it must know better than we do.
This black-box thinking isn't limited to our adoption of artificial intelligence. The Bezoses and Zuckerbergs of the world want us to trust their black-box corporations. Banks want us to trust their black-box money management. The Supreme Court wants us to trust its black-box deliberations and backroom dealings.
Black-box thinking also runs amok on social media. The posts we see are the result of hidden processes. We imagine what those processes must be—what's inside the black box—and attempt to recreate the output and feedback others receive.
Therefore, noticing and resisting black-box thinking must be one of our chief priorities as we forge ahead into the future.
We can't assume that what's unknown is credible, beneficent, or useful. We can't accept what we're told unless there's reasonable transparency in the process.
Resisting black-box thinking is a resource-intensive project. We're more vulnerable to epistemically hostile environments when we're trying to cut corners—mentally and temporally. While it's always been true that we need to be cautious about believing what we read on the internet, today, we must devote extra resources and care to the information and suggestions we come across.
In other words, we have reason—yet again—to slow down, make space, and rethink our assumptions.
New Workshop: Endurance Training for Work
When: Thursday, June 13 at 1pm EDT/10am PDT
Where: Crowdcast (This workshop will be recorded!)
Who: Both independent workers & traditionally employed people
How Much: Free for Premium Subscribers or à la carte sliding scale
Endurance Training for Work takes what I've learned about endurance training for running and applies it gently to planning your days, weeks, and months.
And you don’t even need to break a sweat!
You'll learn about:
Planning for small, incremental increases in work intensity
Structuring your day using heart rate training
Considering long-term 'health' and avoiding 'overuse injuries'
Incorporating cross-training to build strength and counter imbalances
Whether or not you've ever trained for a marathon, hiked a challenging trail, or—heck—dressed up for a Halloween fun run...
...this workshop will offer practical strategies for avoiding the pain of overwork and overwhelm.
This workshop is free for all Premium What Works subscribers. An à la carte, sliding scale workshop ticket is also available! Click here to register.
1. Although readers of this newsletter probably engage in philosophical self-inquiry more often than average!
2. I previously wrote about hostile epistemology as it relates to inflated prices in this essay.