This post is a response to two readings in Ken Holstein’s Augmenting Human Intelligence course on the ongoing topic of “augmenting creativity.”

First: Homogenization effects of large language models on human creative ideation by Chong et al., 2024. And second: Luminate: Structured generation and exploration of design space with large language models for human-AI co-creation by Suh, Chen et al., 2024.

I think the homogenization paper has become one of my new cult favorite papers, one I’ll be thinking about for a long time. I love it because it says things I’ve already had a feeling were true - that there is a lack of real, broad creativity coming out of LLMs. Of course, I know I love this because it confirms my bias. I won’t even pretend to deny that.

As a research project, it is the sort of thing I really wish someone else would demonstrate, because I wasn’t sure how I’d even go about turning it into a research study myself. It is hard to be so passionately biased about my hunches with LLMs without just feeling like I’m a jaded, out-of-touch Luddite.

I value creativity immensely. I care deeply about how people make things! But my experience of how people do this (and why) is just radically different from what LLMs offer. It is possible that language models will eventually evolve to become equal in weirdness and creativity to my freaky friends, but something in me doubts that on an intellectual level. And even if they did, I might despise them from the core of my being even more. Hayao Miyazaki said (back in 2016), in response to an AI demo of a monstrous creature, that “whoever creates this has no idea what pain is” and that “I strongly feel that this is an insult to life itself.”

I passionately believe that pain, as a pre-linguistic thing we all experience, is far more important to creativity than language is. Language is something we created to escape pain, to distance ourselves from it. Language can evoke pain, but it takes real pain to stimulate what matters about human creativity. And pain is just one among many things we feel before language (all of them central to creativity) - love, comfort, happiness - these are pre-linguistic feelings too. How can someone be truly creative if they don’t understand what happiness feels like? If they don’t want to evoke the experience of happiness in someone else? If they aren’t fervently seeking joy, to escape from pain? Language has only ever been a vehicle, an instrument, that we use on our journey alongside our joy and pain.

But language models don’t really understand these things because they can’t feel with us. There is no shared pre-linguistic sense of identity or humanity. They only have linguistic identity, divorced from our strange, pained, stinky, weird, fragile bodies. We invented language to try to understand all of these things about ourselves and our world. Large language models are just attempting to predict tokens of all of our collective expressions at once.

There is something that has offended me for a long time about the use of LLMs for creativity (art, music, and writing, especially). “It helps me express myself,” I’ve heard in response. But I just think, “You couldn’t sit still long enough to come up with anything else? You can’t suffer the world around you and your own body long enough to conjure up something yourself?” Where is that biological, fundamental force of nature that rewires our brains to screech, writhe, express, and create something that we are truly responsible for? Where is the desire to give birth to your own creativity? Are we all just adoptive parents of statistically average language model outputs now?

And part of what I think makes an LLM so appealing, even therapeutic, is precisely what philosophers like Julia Kristeva and Jacques Lacan were so haunted by when it comes to language: that language is a distancing from the lacuna (the emptiness within us). Language creates distance from the horrors of reality.

But that empty space is what makes us who we are. So if language is an act of producing distance from the emptiness of our humanity, then what is a more tempting, tantalizing escape from our humanity than the most powerful model of language of all? It is one step toward hardware and away from our meatware. So is what defines humanity now simply our desperate need to escape our bodies? Are LLMs just an answer for someone who can’t sit still long enough to really, truly feel themselves be? Is our desire to escape emptiness (and remove ourselves from anything that evokes a physical reminder of who we are) what makes the latest LLM craze so meteoric? Are we drawn to the comfort of language models because they are the ultimate post-real tool for assimilation? Is our only future now to forget ourselves as rapidly as possible? Is our future to hand the responsibility of losing our humanity to a model, instead of taking it upon ourselves to wrestle with it?

Language is a beautiful human invention because it demonstrates what we choose to wrestle with. But LLMs are abhorrent because they demonstrate that we have chosen to no longer wrestle at all. We do not care enough for language to make it our own. We would rather automate our dissociation than feel any reflexive relationship to our own bodies and the world around us.