I.
The cookie crumbles as follows: causation feels equivalent to inevitability, and inevitability feels like it precludes meaningful human action. This would be far more devastating than it seems at first: it’s not just that “your life has no meaning” implies a spiritual void; rather, if nothing can be otherwise, then no communication is possible. Another way to put this: the old chestnut “it’s easier to imagine the end of the world than the end of capitalism” is vacuously true: it’s easier to imagine the end of the world than that you didn’t just read this.
Classical concerns about artificial intelligence (outside the Heideggerean skeptic fringe) had to do with the “paperclip optimizer”: a vast material power that could radically bend physical reality to satirical ends. These ends were satirical because said radical power was axiologically illiterate — it couldn’t understand our values and purposes. But the satire flowed from us, master designers of new masters; it was easier to imagine the end of the world than a machine that was more axio-literate than us.
“Shallow AI” (the generative computer models that have taken center stage now) also flows from and through satire. Sure enough, “artificial intelligence” stands ultimately for an effect of surprise; at some point, computer vision that knew good coffee grain from bad was magical — before it became reliable and useful and vanished into Zuhandenheit. We want to be awed by the behaviors of the machine; we also want to see it pratfall and jump Jim Crow. Yes, yes: if there were any humanity to ChatGPT, it would be cruel to put it onstage to try to calculate “56 + 14”.
Is it possible to even think of machinery that isn’t secretly a parody of our own turpitude? Yes, but we’re unable to stand aside in ec-stasy before it.
II.
To the extent that theory identifies with the distance in theoros, it can never merge. This is why normcore is par excellence an ultratheoretical move: it identifies the basic tactical moves leading up to the destruction of theory. In this way, normcore echoes Heidegger’s destruction of metaphysics:
The Geschick of Being: a child that plays… Why does it play, the great child of the world-play Heraclitus brought into view in the aiôn? It plays, because it plays. The “because” withers away in the play. The play is without “why.”
Normcore wants the freedom to be with anyone. You might not understand the rules of football, but you can still get a thrill from the roar of the crowd at the World Cup… Normcore capitalizes on the possibility of misinterpretation as an opportunity for connection, not as a threat to authenticity.
Yet: nominating normcore to the ultratheoretical role of destroying theory ablates its initial formulation as a trend and a cluster of existential tactics. The normcore PDF tells you “calm down, man, connect”. It expects to raise the ceiling of ordinary life by embracing randomness and running with it. But these notions and expectations are ablated by their ultratheoretical formulation. ChatGPT also capitalizes on the possibilities of connection through good will and misinterpretation rather than through accuracy and fitness for purpose. How could we have better weaponized normcore for general axiology? How could we have promoted universal misinterpretation without somehow triggering Kali Yuga ourselves?
III.
I’ve refused to correlate theory and philosophy more times than it’s worth counting. Maybe the best explanation here has to do with supply chains: philosophy arises out of love and reason, while theory is feverish with narcissism and often — as in asemic horizon — almost automatic. However many thoughts cloud my mind at any given moment, these texts have always appeared to write themselves.
Now, free association (my method being mostly a subset thereof) and token prediction have fundamental differences, but they both operate in this pas-d’hors-texte mode where they eschew linking to the outside for expediency and refer to themselves, building up meaning by recurrence (so “realization” means something technical because we’ve built it up over time). And to the extent that ChatGPT is unable to reproduce my own rich universe of discourse, that’s because some of this goes beyond doodling and meandering — it designates some fixed points as technical terms, so it folds inwards rather than endlessly unfolding.
This is also, of course, what makes this project almost perfectly opaque: it’s not open to the randomness of misinterpretation; it does not yield to the chilly, easy lotus of normcore. Normcore is the way, but we cannot follow it yet.