LLMs: From having thoughts to managing them
Large language models aren't just coming for our jobs; they're coming for our thoughts and vibes.
A caveat before I begin: I don’t know anything about cognitive science (and I don’t really know much about anything else either). But I definitely know the vibes I feel. And I want to talk about the vibes I feel when I work with big, complex systems and do analytical work. So this is a post about my own vibes, really.
So to start, I don’t get up in the morning with any excitement to work with large language models (LLMs). (This is something a friend of mine, Lukas, first said of themselves, and it’s become a reflection of my own feelings ever since.)
I simply love the feeling of developing an understanding of, and relationship with, a system at a lower level. I like the feeling of having some degree of manipulation and control over small details and deterministic functionality. I like clearly expressing myself in a logic that can be consistently interpreted every time. That consistent interpretation of my expressions helps me validate my own thoughts, find problems, and iterate.
Part of what I love about technical work is discovering myself as I reflexively work with technical materials. I have assumptions about how something should work and can then test those assumptions quickly and directly against a system that provides fast, interpretable feedback. I enjoy code because it helps me feel a sense of confidence in and understanding of my own thoughts.
But I recently read a provocation/post by Hullman exploring/poking at what sort of balance makes sense between human domain knowledge and machine assistance. In it, Hullman compares different types of analytical help as interventions from a rule-based assistant, a human stats consultant, or an LLM that works like the human consultant. A major point in the piece is that there is a gray area of problem-solving that we often offload onto computational systems (LLMs, logic-based agents, interfaces, or otherwise). The ideal balance between human knowledge and machine assistance is fuzzy.
Of course, asking a human for assistance and asking an LLM for assistance both set a stage where some delegation of thinking is going on. Some degree of control over and understanding of an analytical system is sacrificed to another agent in order to collaborate. And I agree with Hullman that in the context of problem-solving or work, the human and the LLM likely serve a relatively similar role here (setting aside ethics, accountability, and other socio-technical considerations).
We’ve had similar discussions in our own lab for the past two years (at least) about levels of abstraction in software. We don’t engineer electrical circuitry by hand, let alone move electricity ourselves from one place to another. We rely on the existing computational “stack,” where I can write something in TypeScript, which is transpiled into JavaScript, which is parsed by a browser, which then passes everything down through assembly to our physical machine in order to perform a vast number of electrical operations in mere milliseconds. So does an LLM, as a new layer of abstraction, really matter? Why or why not? (This isn’t really a settled debate in the lab, either.)
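To make that layering concrete, here is a trivial sketch (my own toy example, not anything from the lab discussion): a typed function as I might write it, and roughly the plain JavaScript a transpiler like tsc emits before the browser’s engine carries it the rest of the way down the stack.

```typescript
// What I write: the types exist for me and the compiler, not the machine.
function mean(values: number[]): number {
  const total = values.reduce((sum, v) => sum + v, 0);
  return total / values.length;
}

console.log(mean([2, 4, 6])); // 4

// Roughly what the transpiler emits: the types are stripped away, and the
// browser's engine takes it from here, down toward machine code and hardware.
//
// function mean(values) {
//   const total = values.reduce((sum, v) => sum + v, 0);
//   return total / values.length;
// }
```

Every layer below that little function is something I trust without ever inspecting it, which is exactly the kind of delegation the lab debate is about.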
Scripting (non-compiled) languages in particular faced strong resistance when they first came about. But JavaScript and Python, as just two examples, have utterly ruled the world we live in for the past 15 years. So is present-day resistance to LLMs really just another example of lower-level engineers and workers getting frustrated that their level of abstraction is being replaced by something higher?
Perhaps. And that isn’t necessarily a bad critique to have, either. If solving problems quickly and at scale is the only moral good, then perhaps that frustration is bad. But wanting job security or a sense of enjoyment in your work is actually quite important.
But if I’m going down in history as a Luddite (or perhaps, at best, just irrational about where I think “good” automation sits in computational interaction), I suppose I should at least explore my thoughts on this.
And I like how thinking about something feels. To me, that’s why I get up in the morning. I love tricky problems, big systems, and hard questions. I don’t like them because solving them is satisfying but because working through them is. I actually understand myself (and feel a sense of satisfaction with who I am) when I can reflexively work with technical systems. I have a hard time really understanding and trusting my own thoughts. Interaction with low-level details is how I come to witness my own values and priorities form.
And unlike automation I write myself to alleviate tedium in my own tasks (or automation I’ve never thought about, like however assembly actually works when I write JavaScript), using an LLM to do certain work for me feels like a lost opportunity to try to understand something. And this, by extension, becomes a lost chance to know more about myself, too. I miss out on the experience of feeling my own thoughts forming.
This brings me to the same reason I don’t like generative models that produce art of any kind (images, videos, writing, etc.). At a very personal level, I’ve been writing fiction pretty much my whole life. I first started forming characters and stories in my head when I was 9, developed a little world when I was 12, and then made the first version (of now 6+ versions) of a tabletop RPG where I could explore this world with my friends.
And it would be obscene to me to ask a model to create anything within this world for me. The joy of creating and forming my thoughts is one of the most precious experiences in my life. And the second most precious is sharing this world and these characters I crafted with my closest friends.
I recently played in a D&D game that someone else hosted, but for the first time I decided to use a generative model to help me visually illustrate my character. Then we showed up to the first session and the host had hand-drawn all of our characters (and little tokens for us) by herself. And I realized how sickly and inhuman it was to use a model to accomplish something that could have instead connected me more closely to my friends and to the character whose story I was trying to craft with them. Sure, the style of the character I made with a model was aesthetically interesting. But then I also considered how the model really just extracted some other (actual) artist’s style in order to produce my character.
So I don’t like LLMs for analytical and systems work for the same reason I don’t like the idea of generative models taking art from me. I wouldn’t ask a robot to dance for me if I wanted to learn how to dance, because I want to know how dancing feels, even while I am stumbling about and learning. Part of that stumbling (when you learn to dance) involves developing a better sense of feeling your own body. Asking a machine to dance takes away the opportunity to know your own body, in the same way that asking a machine to think takes away my sense of self-understanding. I certainly wouldn’t want an AI to go on a first date for me; I’d want to do that myself. I want to know what I think and how I feel about someone. Even though it is hard, it’s very important for me to do. And when I analyze data, build an interface, or engineer some kind of system, I also want to feel a sense of understanding form in my mind and then figure out, technically, how to make that understanding a reality.
I often feel that my “vibes” argument isn’t popular with computer and cognitive/behavioral scientists. Computer scientists are in it for speed and scale. And LLMs help with speed and scale, so vibes are a hard argument to make with them. And I am far too ignorant to explain myself meaningfully to a cognitive or behavioral scientist, so I don’t even try. So I understand that a lot of other people simply won’t agree or relate at all to this post, which is fine. Other people don’t gain a sense of themselves through their work, I guess? I have no idea how other people form thoughts and know what they value.
But tech bros in particular I understand intimately. And they love the managerial vibes of LLMs. Tech bros understand a “vibes-based” approach; they just fundamentally disagree with my conclusion. They want the vibes that power offers them when they have an agent perform work for them. LLMs (borrowing Hullman’s framing) help tech bros solve problems through a sort of delegation of tasks. There’s some black-box effect and trust required, but working with models largely converts any and all forms of difficult, analytical work into a sort of management work instead.
But I don’t like management. And LLMs don’t just convert work into management, they convert art into management. They convert thinking anything at all into thinking about management. They convert a love of a craft into managing a machine. They boil down all the passion and frustration and struggle and joy of creation into tasks that a perfectly vibeless worker completes on your behalf. It’s the singularity that capitalism fantasizes about: everyone becomes their own boss of a labor force that never complains. The only thing that matters is whether the outcome of the process works or not.
I see three major sides in this game: folks like me (who dislike LLMs for whatever reason), folks who are relatively ambivalent towards them or use them as just another tool, and the ones who are responsible for the everything-AI craze we are now experiencing.
My issue lies with that last group. This feral, meteoric rise in popularity for LLMs and generative models is really due to an underlying, massively repressed cultural bedrock beneath a handful of people in the tech world. These dreamers believe in their bones that they should have already become billionaires but simply never got the break they deserved. And no billionaire ever made billions off of their own labor. Being a billionaire is only possible by profiting off of someone else’s work. And LLMs offer to deliver the fantasy that so many petite capitalists have delusions about. LLMs are popular because of a wider cultural obsession with capitalistic vibes.
But to me the cost is far too great. I will, of course, still write and love and work and do all the hard but wonderful things that give me a sense of who I am. But I’ll have fewer and fewer chances to do these things while still making a living. And so many others won’t have the same opportunities I had (for example, to learn that they even liked working on hard problems and discovering their own thoughts). I can’t imagine being a high schooler today and ever caring about writing in a journal if an LLM is just going to write everything else in my life for me. I don’t just worry about an atrophy of skills, but an atrophy of the social and personal understanding that those skills made possible for me.
Looking to the future, I think it’s likely bleak. If LLMs don’t make it and the bubble bursts, far too many people will have been laid off and burnt out as a result. The road will be paved in blood either way. And if LLMs do survive this bubble and all their promises come true? Then LLMs aren’t just coming for our jobs, but for our thoughts and vibes.