This Is Not AI Slop
A note from Michael
Nothing in what you’re about to read was written, revised, suggested, or seen by AI — well, I guess I can’t control who’s watching on the internet, so if Substack has agent eyes on this… you know what I mean. It’s all me, Michael.
That’s worth stating because, in fact, I have been using AI recently. I’ve used it to make philosophical quizzes and visualizers, I’ve used it internally to help me weigh various business alternatives, I study with it… I would say that I have the good kind of AI psychosis, like that divine madness beloved by the gods, not the madness, or the psychosis, of a stinky, creepy subway murderer.
The AI has been extremely fun to use and to discuss. You can see some of my interest in the topic from recent posts here. AI and classical political philosophy. AI and language mysticism. AI and the holy. My AI-assisted business moves have lately gained me thousands of new mailing list subscribers. And so on.
On the other hand, though…
There is a natural and understandable hostility toward AI in certain circles. Most of us — I mean, most of the people I interact with online — can smell AI writing from a mile away. You don’t need me to rehearse the tells. They’re obvious and annoying. And somehow they can be an instant turnoff, even if what the text is saying is not so bad. It makes me think a bit of some French author I read long ago (was it Baudrillard? I think not) who discussed parentheses and how our eyes often speed past parenthetical comments when we are reading. Yet for all we know those parenthetical comments may have merited our full attention nonetheless.
So I figured it is worth stopping briefly to consider why good use of AI is not slop. The best model I have heard is the Renaissance workshop model, where a Master has students, apprentices, and other workers help him in the production of various levels of commissioned works of art. Sometimes it is necessary for the Master to be directly involved with everything. At other times, his mere direction suffices. And there are works of art where it is unknown whether the Master or his apprentices played the larger role.
AI can serve as this collection of Workers in a Renaissance Workshop, acting under the guidance of the Master, with his instructions, in his spirit, marked by his maniere, his manner. Someone once said that Frank Sinatra cannot sing anonymously: everyone knows it’s him. He has his signature style. And so do some thinkers, creators, poets, writers, psychonauts, and old souls. If you can see AI co-production as analogous to apprentices helping work within the signature style of a Master, if you can leave the touch of a Master’s hand on the work you do with AI, then I think it is completely wrong to interpret the use of AI in these cases as “slop.”
Let me give one quick example of how I am trying to do this.
I have many essays, courses, and YouTube videos available online. A few months ago, I put them all into text form, did the specific computational steps required, and produced an “intelligence engine” drawing on my body of work.
I use that “intelligence engine” to fine-tune my AI products, so that they share my voice, my understanding of the authors and issues I teach, and the things I’ve actually written or said before. That gives my AI co-productions more gravitas than generic prompted slop, I believe. And I continue to try to make it better and better. When I recently made a quiz designed to be fun and also to onboard newcomers into basic philosophical schools and ideas, I made sure that the results of the quiz would be visible and informative even if nobody gave me their email address. So not only did I try to avoid slop, I also wanted to be generous and avoid the annoying marketing trap where you spend 20 minutes doing a test and only get the results if you give your email address. That seems unfair, so I don’t do it (if any of my many new quizzes happens to be built like that, let me know and I’ll fix it right away).
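For the curious, here is a toy sketch of the general idea behind an engine like that: index a body of writings, then pull the passage most relevant to a question into the AI’s context so its answers stay grounded in what the author actually wrote. To be clear, this is purely illustrative — the corpus, the function names, and the simple bag-of-words scoring are my stand-ins, not the actual pipeline:

```python
# Illustrative retrieval sketch: find the passage from a (made-up) corpus
# that best matches a question, using bag-of-words cosine similarity.
import math
import re
from collections import Counter

# A stand-in corpus; a real engine would index essays, courses, transcripts.
corpus = [
    "Plato's divided line separates opinion from knowledge.",
    "Aristotle treats friendship as central to the good life.",
    "Nietzsche asks what the value of truth itself might be.",
]

def vectorize(text):
    # Lowercase and count word occurrences, ignoring punctuation.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    # Standard cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs):
    # Return the document most similar to the query.
    q = vectorize(query)
    return max(docs, key=lambda d: cosine(q, vectorize(d)))

print(retrieve("what did Plato say about knowledge", corpus))
# -> Plato's divided line separates opinion from knowledge.
```

A production setup would use embeddings rather than word counts, but the shape is the same: the AI speaks with the author’s prior words close at hand.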
I don’t think “intellectuals” and others should have a strong aversion to the use of AI. You’re not obligated to use it, of course. But you can use it without producing slop. And you should.
You can find my quizzes here and my new series of visualizers here.


Good read. I like the Master / Apprentice model. I like using chat agents to assist with research and provide writing prompts. I also use them to check style and grammar. The agent remembers my previous sessions and makes suggestions for content, using my voice.
“I study with it.” This requires some amount of caution and preexisting knowledge of the topic under study. Warning: the knowledge gained *will* be incomplete and may cause you to make overstatements and/or come to false conclusions. Ask me how I know.
There are some efficiency gains from using AI, but I have not had even one session where several iterations weren’t required to produce meaningful output. Then that output required scrubbing to get rid of the slop. I’m not a professional writer, teacher, or philosopher, so the iterations, even when there are a lot of them, are faster than starting from scratch. I wonder if that’s true for someone who is a professional or academic.
About a year ago, I went through a period when I frequently asked Microsoft's AI to recognize obscure music if I gave it a musical phrase or topic. It also helped me pick out movies to watch on Prime Video. It led me to SNOWPIERCER and to DARK CITY (newer movies I would compare to the best of Bogart's); but then I got a MacBook and lost interest in it. Will it eventually create a language that only other computers understand and end up making all future political and economic decisions? I think so. The fact that a sad percentage of current college students use the tool for theft of a diploma is tragic; but it pales in comparison to a silicon-based life-force having all the carbon-based creatures under its control.