What Artificial Intelligence Means for Men: A Serious Assessment
The economist Daron Acemoglu at MIT has spent his career studying the relationship between technology and labor, and his recent work on AI is worth attending to precisely because he is neither a utopian nor an alarmist. His 2024 paper “The Simple Macroeconomics of AI” concluded, after modeling the likely adoption curves and productivity effects of current AI systems, that the impact on GDP and employment over the next decade would be significantly more modest than either the most optimistic or most pessimistic predictions suggested — because the tasks AI can currently perform well are a subset of the tasks that actually drive economic value, and that subset is smaller than media coverage implies.
This is a useful starting point because it suggests that the question “what does AI mean for men?” requires more precision than most of the discourse about it has managed. Not all AI systems are the same. Not all jobs are equally exposed. Not all skills are equally replaceable. And the philosophical question of what AI changes about the meaning of human work is distinct from the economic question of what it does to human employment.
Both questions matter. This is an attempt to take both seriously.
The Economic Question: Which Men, Which Jobs
Men are heavily represented in the occupations that AI is most likely to automate — not because of any inherent male capacity for the relevant tasks, but because of the historical distribution of gender across job categories.
The Oxford study by Frey and Osborne (2013) — still the most cited analysis of automation risk by occupation — flagged routine white-collar work as highly exposed: paralegals, accountants and auditors, tax preparers, financial analysts. More recently, OpenAI’s own analysis of GPT-4 exposure by occupation (Eloundou et al., 2023) found that the most exposed roles — those where AI could meaningfully accelerate more than half of constituent tasks — were concentrated in knowledge work: lawyers, writers, programmers, mathematicians. Men constitute significant majorities in several of these fields, including programming, mathematics, and financial analysis.
This is a departure from previous waves of automation, which primarily displaced blue-collar and manufacturing work. The current wave is specifically targeting cognitive work — the work that the post-industrial economy moved men into as manufacturing declined.
Goldman Sachs’s 2023 global analysis estimated that 18% of global work could be automated by current AI systems, with higher exposure in developed economies where knowledge work concentrates. For men in their twenties and thirties who are building careers in the exposed sectors, this is not an abstract concern.
What Is Actually Hard to Automate
The research on AI capabilities is consistent on a few points. Current AI systems perform well at:
- Pattern recognition in large datasets
- Generation of text, code, and images that match specified patterns
- Translation and summarization
- Answering questions from established knowledge
They perform poorly at:
- Tasks requiring physical manipulation in unstructured environments (a fact that has led AI researchers to joke about Moravec’s Paradox — that tasks easy for humans, like picking up a glass, are hard for AI, while tasks hard for humans, like chess, are easy for AI)
- Tasks requiring genuine causal reasoning (as opposed to pattern matching)
- Tasks requiring ongoing relationship and trust over time
- Tasks requiring ethical judgment under genuinely novel conditions
- Tasks requiring the integration of embodied experience with abstract thought
This last category is particularly interesting for thinking about where men’s skills retain value. The surgeon who can feel resistance in tissue. The craftsman who can hear when a joint is right. The therapist who can read a room with their whole body. The construction manager who has walked fifty job sites and recognizes when something is wrong before he can articulate why. These forms of knowledge — what Michael Polanyi called “tacit knowledge” — are precisely what AI cannot replicate, because they are not stored in a form that can be codified and trained.
The Skills That Matter Now
The economists Erik Brynjolfsson and Andrew McAfee, in The Second Machine Age (2014), identified a set of skills — sometimes called “O-ring” skills, after Michael Kremer’s theory of production, itself named for the booster seal whose failure destroyed the Challenger shuttle — that become more rather than less valuable as AI handles routine tasks. These are the skills that determine whether the human element in a process succeeds or fails:
Judgment under uncertainty. AI systems optimize for average-case performance. The cases that matter most are often edge cases — the patient whose symptoms don’t fit the pattern, the legal situation that has no precedent, the engineering challenge that falls outside the training distribution. Human expertise in navigating these edge cases becomes more valuable as AI handles the standard cases.
Ethical leadership. The decisions that AI systems should not make — because they involve values, trade-offs between incommensurable goods, or accountability that must rest with a human — are precisely the decisions that will define organizations and careers in an AI-enabled economy.
Creative direction. AI systems generate outputs that are statistically likely given their training data. They are genuinely poor at the prior question: what should be made, and why? The creative director who knows what he wants and can use AI to produce it faster is not displaced by AI. He is amplified by it.
Relationship management. Trust, loyalty, the accumulated social capital of a long relationship — these are things AI systems cannot build. The man whose clients trust him personally, whose colleagues will go the extra mile for him because of who he is, is not replaceable by a system.
The Philosophical Question: What It Means to Be Good at Something
The economic analysis matters. But underneath it is a more fundamental question that men are already encountering: if AI can produce in seconds something that took me months to develop the skill to produce, what does it mean that I developed that skill?
This is not a new question. It was raised, in different form, by every previous automation wave. When mechanical looms replaced hand-weaving, the weavers’ skills did not become worthless in a human sense — their craftsmanship was still real, still beautiful, still took a human lifetime to develop — but the economic value of those skills collapsed. The question of what remains when economic value collapses is a philosophical question, and it is one that masculinity has historically been very poorly equipped to answer.
Because masculine identity in the modern era has been so heavily grounded in productive work — in being useful, in being skilled, in earning through competence — the question of what work means when machines can do it threatens the foundations of that identity in a direct way.
Richard Sennett’s The Craftsman (2008) offers the most sustained philosophical treatment of this question. Sennett argues that the process of developing a skill — the long apprenticeship, the gradual mastery, the embodied knowledge built through repetition and failure — has intrinsic value independent of its product. A man who has learned to play the piano at a high level has not simply acquired the ability to produce piano music. He has developed a form of attention, a quality of discipline, a relationship with difficulty, that is his regardless of whether a machine can produce equally good piano music more efficiently.
This argument is correct and important, but it is not complete. It leaves unaddressed the question of what happens to men whose sense of self is built around professional performance rather than craft mastery — men who have never drawn the distinction between the process of developing a skill and the economic value of that skill. For these men, who are the majority, the philosophical framework needs building from scratch.
AI as Mirror
There is a dimension of the AI moment that is more intellectually interesting than the employment question: what the existence of systems that can produce plausible human outputs tells us about the nature of human cognition.
When GPT-4 reportedly passes the bar exam at the 90th percentile, the correct response is not “AI is now a lawyer.” The correct response is to ask: what does it mean that the bar exam is largely a test of pattern-matching on established legal doctrine, and that much of what lawyers currently do is in fact pattern-matching? The AI doesn’t reveal something new about itself. It reveals something about what we had mistaken for uniquely human.
The philosopher John Searle’s Chinese Room argument — his thought experiment against strong AI claims, in which he imagined a person in a room following rules for manipulating Chinese symbols without understanding their meaning — has been contested since its publication in 1980. But its most useful interpretation is not about AI. It is about humans: how much of what we take to be genuine understanding is itself a form of sophisticated symbol manipulation? How much of expertise is pattern recognition dressed as judgment?
These are questions that AI’s existence makes urgent in a way they weren’t before. And they push men toward a clearer reckoning with what genuine understanding actually is — which is not something AI has, and which is worth cultivating deliberately.
The Male Response: Some Patterns Worth Watching
There are patterns in how men are responding to AI that are worth naming.
The power user. Men who have integrated AI tools aggressively into their work and are extracting genuine productivity gains. These men are not worried about AI. They have made it a leverage point. Their risk is different: overconfidence in AI outputs, and the potential atrophy of skills they are no longer using.
The denier. Men who are not engaging with AI tools and are not updating their mental models about what the technology can do. These men face the largest displacement risk, because they will find themselves at a significant productivity and capability disadvantage in employment markets that increasingly reward AI fluency.
The catastrophizer. Men who have concluded that AI will eliminate most human work and are reorienting their lives accordingly — often in directions that range from productive (developing skills outside the automation exposure zone) to counterproductive (withdrawal from economic participation).
The philosopher. Men who have engaged with the deeper questions that AI raises about human purpose and identity, and who are using those questions as leverage for a more examined life. This is the healthiest response, and the rarest.
A Practical Orientation
The most useful posture toward AI for men currently in the workforce combines three elements:
Develop AI fluency as a tool user. The men who will benefit from AI are those who understand what it can and cannot do, who can direct it precisely, who can evaluate its outputs critically. This is a skill set that compounds: the more you use AI tools, the better you become at using them. Start now.
Invest in the skills that AI makes more valuable, not less. Judgment, relationship, ethical leadership, physical craft, genuine creative direction — these become premium capabilities as AI handles more of the standard cognitive work. The man who has developed these is not displaced by AI. He is the person AI needs to be useful.
Take the philosophical question seriously. The question of what work means when machines can do it — what a man’s life is organized around when competence alone is insufficient — is not a question you can afford to defer. It is arriving regardless of your preparation, and the men who have thought about it in advance will navigate it better than those who encounter it unprepared.