Thinking about AI
The new last liberal art?
Investing demands broad intellectual interests beyond economics alone. Psychology, geopolitics, history, and even literature can all be helpful disciplines for the investor who wants to thrive in the markets. You get a sense of that when you hear Buffett and Munger’s “worldly wisdom,” for instance. All the good investors will tell you: read, read, read. And they’re not just talking about the ticker tape.
Investing: The Last Liberal Art is the name of a book by Robert Hagstrom that makes the case for the comprehensive learning required by today’s competent investor. I’ve been thinking about that book recently, and not only because over the years investors have come to study philosophy with me at the Millerman School (indeed, one of them recommended the book to me). It is because the claim that investing demands a disciplinary comprehensiveness seems to apply all the more so to AI.
Among those who are at all exposing themselves to the topic, AI is raising one thought-provoking question after another. What is this super-intelligence that seems increasingly on the verge of coming into existence, despite the worry that “if anyone builds it, everyone dies”? What will it do to the labour market? How will people cope with losing the meaning and identity that they often derive from their work when there is no longer any need for it? When asked recently at Davos about the tradeoff between the meaning that comes from work and the abolition of the need to work in a situation of superabundance, Elon Musk comically quipped: “nothing’s perfect.” Apparently you can’t have both. Is a post-scarcity world a new end of history?
But the Last Man at the end of the last end of history did not have at his fingertips the super-human intelligence of agentic AI, did he? Nor did he have the possibility of augmenting his own intelligence with xAI-infused Neuralink microchips.
I will be honest: not since my Alex Jones phase as a teenager have I had to entertain the thought of microchipped human beings. And I am not now an alarmist about it. But we need only note that Neuralink is in fact producing verifiable miracles here and now, and that Musk forces us at every step to reconsider not only whether the impossible is possible, but whether it will be a mass-produced market reality — or human imperative — before we know it.
When ChatGPT first came out…well, each of you will have your own memories of how you first used it and felt about it. I recall briefly using it on a YouTube livestream, asking it to craft a joke in the style of Woody Allen, or something like that. The result was mildly entertaining, but nothing that got me thinking about the Singularity, sustainable abundance, or super-intelligence. Instead, I just recalled something I had heard the great Leo Strauss say more or less dismissively about “thinking machines,” subsuming the specificities of AI under the more general and established theme of “technology.”
Not anymore, though. Perhaps it is because of Peter Thiel’s warnings about taking an overly structural or systematic approach to history, an approach that, while it has the virtue of drawing our attention to the constant features of human nature and political life, pays with the vice of blinding us to what is genuinely new. Perhaps it is because I have so many of my conversations about Plato, Heidegger, and other philosophers with tech optimists and enthusiasts that their optimism and enthusiasm has rubbed off on me. Whatever the reasons, I now see AI as something that invites comprehensive reflection, and not just lazy categorization along old lines adopted automatically, unthinkingly.
AI adoption rates are unprecedented. The technology is clearly incredibly useful and transformative for myriad tasks. Not having AI in your workflow will soon be akin to illiteracy, if that is not already the case. Generative AI, even this early in its development, is incredible. Not perfect. Not a substitute for Beethoven or Melville. Not on par with the most divinely inspired of humans — but then again, neither is the rest of humanity. AI is making coders out of motivated self-learners, some of whom wouldn’t know the difference between Java and C++ if it were a matter of life and death. I, whose coding experience started and stopped in Grade 5 after about five pages of an introductory book on Visual Basic, have recently done things with Claude and the terminal that you wouldn’t imagine, and that I wouldn’t have imagined not long ago. AI is helpful. It is fun. It is interesting. And it is strange.
Even when you “know” that it is “just a database,” or however you might think of it when you are trying to demystify it, it is not just like “asking Google” (or, I imagine, Siri or Alexa, which in my experience can be hit or miss, as Larry David showed). No. It is personal. All the more so in voice mode. You want to say thank you. You might even get used to the specific personality, so much so that if it undergoes a significant change in a program update, you can feel personally betrayed. The parasocial possibilities have been noted — with mockery or concern, depending on whether the AI is flattering your narcissism or encouraging your recklessness — and they are fascinating. And even when you “know” that it is just a “next word/token predictor,” I’m sorry, but that is no better than “knowing” that a human is “just” atoms. The latter does not capture the experience of interacting with other humans or being human; nor does the former capture the experience of interacting with AI.
And there’s something about the infancy of the field that is intellectually attractive. There’s an openness to it. Mystery. Risk. Reward. Possibility. Danger. Uncertainty. Promise. In just the time that I have been writing this post, Dario Amodei of Anthropic has released his piece “The Adolescence of Technology,” and Ted Gioia published “10 Survival Skills in an AI-Controlled Society.” AI is changing…not everything (I took a nice walk with my son earlier through the snowy woods, picking up sticks, throwing snowballs, laughing, wondering about coyotes…nothing to do with AI, nothing touched by AI, nothing changed by AI) but a lot, and not only on the periphery. It didn’t change my walk in the woods with my son but it changes how I think about his education and employment. I do have to wonder how he will not only survive but thrive in an AI-controlled society as he spends his adolescence in an age that is no longer only the internet age but the age of AI, with all that that entails.
It is true that AI can be used to think for us — and maybe in some sense that will be its ultimate destiny: “Tell me what to wear. Tell me what to buy. Tell me where to invest. Tell me which hotel. Which airline. Which girl. Which routine. Which everything. And don’t just tell me. If you can do it, do it for me.” But it is also absolutely true that AI is forcing us to think…about work, life, love, language, economics, history, meaning, identity, and thought itself.
Just as investing was a liberal art for Hagstrom, so too should we treat AI as providing an opportunity for comprehensive, humanizing inquiry. To get the most out of that, we should be open to experimenting with AI…and with our humanity. No foregone conclusions. Only a willingness to learn and a readiness to think.

And I didn't even begin to discuss AI and the imagination (images, videos), AI and prophecy (defined in the Maimonidean way as an overflow from the active intellect into the perfected imagination), AI and the regime analysis of classical political rationalism, and much else...