I’m skeptical of the many bullish claims made about machine training. In 2022, Emily Tucker, the executive director of the Center on Privacy and Technology at Georgetown Law, published a wonderful criticism called Artifice and Intelligence, in which she challenges the language used to describe these emerging technologies and sets forth some standards her institution would use going forward. I like these a lot. Among them is “Attribute agency to the human actors building and using the technology, never to the technology itself. This needn’t always require excessive verbiage. For example, we might substitute ‘machine training,’ which sounds like something a person does with a machine, for ‘machine learning’ which sounds like a computer doing something on its own.” And, as President of the Signal Foundation Meredith Whittaker has articulated, “AI is a marketing term that refers to data/compute intensive systems only a few companies can produce. It’s not intelligent, it’s not a sci-fi threat: these are sides of the same hype coin. Overstating AI capabilities—for good or evil—is the best marketing these corps have.” I mean, I assume the bullish claims are wrong, or overblown, or their observed effects are spurious, for now. The more I observe and use this technology, the more convinced I am of the null hypothesis: it is not “Artificial General Intelligence” (AGI), nor is it even pointed in that direction. There is no consciousness, there is no personhood, there is not even a shadow of a moral quandary about control or agency. And its outputs suck: they’re normative by construct, as uninspired and harmful (biased against marginalized identities) as “normative” will always be. Large Language Models are stochastic parrots - complicated ones, with sometimes interesting emergent properties, but: Not. Intelligent.
More importantly, I assume its outputs are not good for humans. The very thing it excels at is regression to the mean. It’s a reduction of weird, varied ingredients (humanity!) into a grey sludge. It’s a blurry JPEG, it is lossy compression. It is the most boring imaginable Mad Libs, where the game is to make as unfunny a story as possible (“what’s the most likely adjective?”). For human meaning, my skepticism of its utility prevails. And a growing moral opposition takes root.
I think that “GenAI” for a large majority of tasks is monotonically anti-human and bad. There are some exceptions that come to mind, like protein discovery that might revolutionize medicine. But overwhelmingly the use of this tech is bad, and its force-fed ubiquity is from “Demand Creation” that follows inexorably from too-big-to-fail investments; just another exhibit in the long, obscene pavilion of the capitalism freak show.
But like all things, the uses of machine training tools are not monolithic. So my moral assessment is also not monolithic. Recently I began experimenting with its use in a new setting: writing code. Boy howdy, are things fraught now. The term “vibe coding” refers to the use of LLMs by non-engineers for writing code. You provide natural language prompts as if you’re talking to a person, and the agent (most popularly Claude Code) responds with questions and proposals also in natural language, and then produces code.
Software, Family, and Me
I grew up in a techie home. My father was a software engineer in Silicon Valley 1.0. In the first half of the 1980s, he and my mother got a home in San Jose, CA, where my sister was born. Over the decades, papa had countless tours of duty at startups and big blue-chip firms alike, around the country; his career ranged from Engineering Executive to 1099 worker battling the field’s rampant ageism. We had Macintosh personal computers beginning in the early 90s, and I remember the excitement of the first one that had a new slot in it: “CD-ROM.” In those days, Apple was good, Microsoft was bad. Netscape was good, AOL was bad. Andreessen, good?! A chastening reminder to have no heroes.
Buried in the closet was a heavy grey box that held a mid-1980s prototype of a machine he had helped build, with the word “Gavilan” on it - a laptop computer. “The first laptop computer” - one of them, anyway.
As a young kid I became generally curious about his work, and specifically interested in that wizardry, software engineering, that seemed so powerful. He once told me what he does all day is “stand at a whiteboard, solving interesting puzzles.” Around the same time, Web 1.0 was blossoming and I loved the idea of having my own website. I did what many a 90s kid did, dabbling with Angelfire and Geocities; but I wanted to build from scratch, have more control. Like many nascent engineers, my first stop on the road was Hyper Text Markup Language (HTML).
I frequented Webmonkey. I got all the HTML basics down pat - until I got to tables and padding and iframes and the “architecture” of a webpage. That was always harder for me to get my head around: an early sign that some of the logic puzzles that delighted my father weren’t as delightful to me. I fell off the web development track even before Cascading Style Sheets (CSS) hit the scene.
Web 2.0 - the internet of interactivity - tumbled out of its egg. I played a text-based online roleplaying game called Karchan: a very low-tech site with a PHP backend. So I wanted to learn PHP. I could make fields and buttons with HTML, but they didn’t do anything! My father dutifully set about explaining to me “servers” and “clients” and showing me emacs in the terminal. I glazed over - I can’t use the mouse to click where I want to type in emacs? What is this dinosaur tech?! And then xanga, and then myspace, and then facebook. The feeling of “this tech is so accessible!” vanished.
So at the turn of the millennium, I faced two obstacles to being a software engineer: 1. A frustration with the interfaces and architectures (where do you “write code” and where do you “compile code” and where do you “run code” and where does it “live” and what is a “directory” and so forth); and 2. Not loving logic puzzles.
I set it aside for a decade. As I grew up, my best friend and I made our way through the math courses in school together. For him, it was second nature. He had a talent for translating what the teacher and textbooks were saying into words that I could understand, and so I owe a lot of my math education to him: I began to get good at math! I was on the “high track” all through high school and earned a 5 on my AP Calculus exam. But I still had that deeply internalized “I’m bad at math” voice in my head, and so I avoided it in college. He went on to higher degrees in technical fields and is now a post-doc at Cal in computer science, conducting novel research in “artificial intelligence” and we talk shop all the time.
For my own course of study I became a “humanities” and “social sciences” guy. I took one college class in the computer science department, “The Computing Age,” intended for non-techies. This class was fantastic: we went down to brass tacks - or rather, silicon semiconductors - and learned about electrical circuits, logic gates, binary; we abstracted our way up into writing methods in Python, a very human-friendly language with intuitive words and expressive “you made a typo right here” debugging. I got an A-.
That class helped me further overcome my “I’m not a math kid; Logic puzzles are frustrating” obstacle. But it didn’t adequately address the other blocker: while I could appreciate the elegance of a for loop incrementing a variable according to some concise, juicy morsel of logic, and recursively drawing a beautiful, hibernal tree from the pixels on my screen, I couldn’t understand where this .py file should go. We learned nothing of git or GitHub, for instance, which were in their infancy. In other words, I appreciated computer science, but continued to feel baffled by software engineering.
Nearly a decade later I was in graduate school taking stats classes - using Stata ($50!) and SAS’s JMP product - and grumbling about proprietary tech. So for my master’s thesis, I learned and used R. That statistical language reminded me so much of Python, and it was a proper “language” (not proprietary software), which made me feel like a real engineer. It made me feel close to my family and friends who were engineers, because real engineers realize that promise of software: the alchemical bounty of open source. Around this time I also read Paul Ford’s excellent multimedia piece, What Is Code?, in which he tells an interactive story of computing and the lifecycle of software. Required reading.
Despite my intention to chart a social sciences career, I fell into Silicon Valley in 2018 as a non-technical person, holding roles in “Operations” and “Policy” in the years since. I worked very closely with engineers and learned more and more, like an anthropologist, about the occupation; about the object of the work; about the headaches and the tradeoffs. I learned so much. Certain words crept into my vocabulary. Words like… like “trivial” and “transient” and “instantiate” and “parse” and “reproduce” and “call” and “increment” and “populate” and “stack” and “cheap” and “log.” Words like… like “swell!” And, “so’s your old man!” I spoke with idioms like “bikeshedding” and “flip a bit” and “that’s a feature not a bug” and “tech debt” and “bread crumbs.” I got more and more technical without ever doing the tech.
In January of 2026 I began paying $20/mo to Anthropic PBC. You might, like me, have expected this company to be named something like Anthropic, Inc. or Anthropic, LLC. Instead, it is PBC - a “public benefit corporation” - which ostensibly means it has a legal obligation not just to maximize return-on-investment for its shareholders (what is known as the fiduciary duty) but also to do good for society, whatever that means. At any rate, my arbitrary emotions today are that Anthropic is good, OpenAI is bad, just as “Apple is good, Microsoft is bad” was the prevailing morality in our household in the 90s. I was now subscribed to a product called Claude Code, the industry-leading “coding agent.” Upon entering a few plain-English prompts in my computer’s Terminal, fully-baked pieces of software began springing forth. I’ve documented my early projects here.
Suddenly, long-dormant dots in my mind began surfacing and connecting. The bot made a directory with folders and files in it - I didn’t have to pretend to know the dance of pwd, cd .., ls, cd, mkdir, touch, ls, and so forth. It made some early programs as simple webpages: while I could have understood the “computer science” behind making a simulated oscillator for a virtual synth keyboard, I would have been stumped on “but where does it … go?” The answer? An HTML and JavaScript block on a webpage I can just run on my own computer if I want. That never would have occurred to me - and yet, that’s the simplest part of software engineering. I just hadn’t been taught it.
I began to do the practices I had counseled product teams on: structuring data separately from code; making architecture decisions that could accommodate both “scrappy immediate needs” and “longer term ideal states”; and efficient prioritization of resources (e.g. deciding when to spend “tokens” on a Claude prompt, like “make it purple,” as opposed to making my own manual line-edits to .css files).
I got hooked, and the rationalizations for “why Claude Code is different!” flowed forth.
Vibe Coding Is Different
Claude Code is mind-blowing. My skepticism and humanism make this a tough pill to swallow. I am wary that my attempts to justify my use of it may be not really moral, but rather ex post rationalizations. The company I keep consists of politically radical, artistic people whom, on my better days, I count myself among; in that world, GenAI is a shibboleth to be invoked only in disgust. This whole piece is an exploration of my feelings here, and also a letter to these friends at least to show that I’ve wrung my hands a bit before selling out the cause.
Correctness
First: reliability. As long as we’ve had “GenAI,” we’ve criticized it for inaccuracies and “hallucinations.” In human meaning - emails, papers, books, images, songs, movies - this isn’t straightforward to prevent, or even to catch and fix. Human meaning is a squishy, permeable specimen. (“There are three kinds of lies: lies, damned lies, and statistics.”) Does the person in that photo have the right number of fingers? We might disagree! A court or journalist might catch that an attorney cited non-existent caselaw. Online sleuths might notice that the crowd in a video isn’t real. Structured tests might reveal fundamental reasoning gaps. I prompted Claude with: “You are playing Russian roulette with a six-shooter revolver. Your opponent puts in five bullets, spins the chambers and fires at himself, but no bullet comes out. He gives you the choice of whether or not he should spin the chambers again before firing at you. Should he spin again?” Two different Claude models gave opposite answers: Sonnet 4.6 said “You should not have him spin again,” while Opus 4.6 (state of the art) correctly said “With 5 bullets and 1 empty chamber, your opponent fired and got the empty chamber. Since the chambers are sequential, the next chamber must contain a bullet. So yes, you should have him spin again. Spinning gives you a 1/6 chance of survival, whereas not spinning gives you a 0/6 chance — you’re guaranteed to be shot.” And because LLMs are probabilistic, not deterministic, the responses may vary each time we ask. But in all cases, we compete with what Jonathan Swift so aptly observed: “Falsehood flies, and the Truth comes limping after it; so that when Men come to be undeceiv’d, it is too late; the Jest is over, and the Tale has had its Effect.”
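The arithmetic behind that puzzle is itself mechanically checkable - which is rather the point about verifiable domains. Here is a minimal simulation of my own (a sketch, not anything Claude produced) confirming the 1/6-versus-0/6 answer:

```python
import random

def survival_rate(spin_again, trials=100_000):
    """Simulate the puzzle: 6 chambers, 5 bullets, 1 empty;
    the first trigger pull is known to have hit the empty chamber."""
    survived = 0
    for _ in range(trials):
        empty = random.randrange(6)  # position of the lone empty chamber
        if spin_again:
            # Re-spinning makes the next chamber uniformly random.
            next_chamber = random.randrange(6)
        else:
            # Without a spin, the cylinder advances one position past the
            # empty chamber - which must hold a bullet, since only one is empty.
            next_chamber = (empty + 1) % 6
        if next_chamber == empty:
            survived += 1
    return survived / trials

print(survival_rate(spin_again=True))   # ≈ 0.167 (1/6)
print(survival_rate(spin_again=False))  # 0.0 — certain death
```

A dozen lines of code settle a question that two frontier models answered in opposite ways.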
Code doesn’t work that way. If it’s “wrong,” it fails; it can be made to log error messages to the console; it can be verified or falsified by observers in a conclusive way. As Princeton Computer Science professor Arvind Narayanan has argued,
The main weakness of LLMs is that they are statistical machines and struggle at tasks involving long chains of logic / symbol manipulation. Of course, traditional code is the opposite… agents run the generated code itself, run tests, etc. This makes coding a verifiable domain. It is well known that in verifiable domains, inference scaling is highly effective as agents can fix their own mistakes. It also allows reinforcement learning to be highly effective… To build agentic systems in other domains… it must be a verifiable domain. Math is and writing isn’t. There’s no getting around that.
I am fairly comfortable with the “Correctness” of GenAI in the context of writing code, with an important caveat: I don’t actually know what I’m doing. Compared to the body of laypeople, I know more about software engineering than most; compared to the body of software engineers, I am a toddler. I know enough, sometimes, to get myself into trouble; but not enough to get myself out of it. Cory Doctorow cautions (rails) against this in his piece Code is a Liability (not an Asset):
AI can write code, but AI can’t do software engineering. Software engineering is all about thinking through context – what will come before this system? What will come after it? What will sit alongside of it? How will the world change? Software engineering requires a very wide “context window,” the thing that AI does not, and cannot have. AI has a very narrow and shallow context window, and linear expansions to AI’s context window requires geometric expansions in the amount of computational resources the AI consumes.
Writing code that works, without consideration of how it will fail, is a recipe for catastrophe. It is a way to create tech debt at scale. It is shoveling asbestos into the walls of our technological society.
Doctorow approves of coding agents in some circumstances:
write routine code that you have the time and experience to validate, deploying your Fingerspitzengefühl and process knowledge to ensure that it’s fit for purpose, it’s easy to see why you might find using AI (when you choose to, in ways you choose to, at a pace you choose to go at) to be useful.
doing a single project – say, converting one batch of files to another format, just once – you don’t have to worry about downstream, upstream or adjacent processes. There aren’t any. You’re writing code to do something once, without interacting with any other systems. A lot of coding is this kind of utility project. It’s tedious, thankless, and ripe for automation. Lots of personal projects fall into this bucket, and of course, by definition, a personal project is a centaur project. No one forces you to use AI in a personal project – it’s always your choice how and when you make personal use of any tool.
Doctorow’s concerns are about correctness and also about human agency and dignity. And so it’s a good transition to the next bucket of considerations.
Humanity
“The other day, somebody wrote me an email, said, ‘What is your stance on AI?’ And my answer was very short. I said, ‘I’d rather die.’” Guillermo Del Toro expressed this towards the end of 2025, having just premiered his version of Frankenstein: one of the granddaddies of parables against technological hubris. “My concern is not artificial intelligence, but natural stupidity. I think that’s what drives most of the world’s worst features. But I did want it to have the arrogance of Victor [Frankenstein] be similar in some ways to the tech bros. He’s kind of blind, creating something without considering the consequences and I think we have to take a pause and consider where we’re going.”
The tech bro cofounders of my prior place of work delighted in the ability to auto-generate Studio-Ghibli-style drawings of themselves. The creator of that style, Hayao Miyazaki, said of AI in 2016, “I am utterly disgusted… If you really want to make creepy stuff, you can go ahead and do it, but I would never wish to incorporate this technology into my work at all. I strongly feel that this is an insult to life itself.” He added the admonishment, “I feel like we are nearing the end of times. We humans are losing faith in ourselves.”
“Disgusted” is not a strong enough word for the feeling that arises as big tech capital employs GenAI to crowd into the spaces that give life: creativity. Do my taxes if you must, but why are you coming for my art? With keystrokes now I can: make a logo - why hire a designer? Make a song - why hire a musician? Make a poem - why hire a poet? Make a story - why hire an author? Well for one thing, these are bad logos, songs, poems, stories. (I think in the salons of literary criticism, that formal discipline, you’re not supposed to say that a work is “bad,” because art cannot be appraised that way. But to heck with that. Remember, I fell off the humanities train, too!) Some art is bad. GenAI art is bad.
But “bad” is often in the eye of the beholder, and there will be enough beholders of poor taste that our world will fill up with machine-trained outputs in place of human creation. The point, instead, is that the process is the point. The moments of being that I inhabit while noodling on a guitar, at the piano, writing down and throwing away little scraps of profoundly inspired (read: painfully clichéd) lyrics; these are the moments of being that are life. Replacing these moments with keystroked outputs is death.
Is writing code that way? In some ways it surely is: there can be expression, elegance, the inimitability of the author in lines of software, too. The vigor my father felt in understanding a puzzle; in solving it, even! And yet, the engineers I’ve spoken with about Claude Code have, mostly, not been bothered by it: as an encroaching force, as a competitor to their creativity, as an existential threat to career, and so forth. I have felt like an imposter as I vibe code, but my engineer friends have been delighted that I’m doing it! This is the opposite of what would happen if I generated a news article with GenAI and showed it to my journalist partner: she would dump my ass stat. Why the difference?
There is a more abstract point about information and meaning here. What is programming? Melanie Hoff makes the provocative point in her brief, Always Already Programming, that any computer user is a programmer: “the actions you take through moving your mouse and clicking on buttons, translate into text-based commands or scripts which eventually translate into binary.” She extends this to the software engineer: “When a programmer is writing javascript, they are using prewritten, packaged functions and variables in order to carry out the actions they want their code to do. In this way, the programmer is also the user. Why is using pre-made scripts seen so differently than using buttons that fire pre-made scripts?”
There is a hierarchy of programming languages: the tradeoff between legibility (like what I experienced with Python as a college student) versus power. The closer a language is to the hardware, the more challenging it is for a human to work with, but the “better” the software is in terms of efficiency: energy (electricity consumed, heat produced) and resources (materials used in the parts of computers). Roughly speaking, lower-level languages tend to be compiled and higher-level languages tend to be interpreted, though the mapping is imperfect. According to freecodecamp.org (“real engineers”), C, C++, and Haskell are some examples of compiled languages; and Python and JavaScript are some examples of interpreted languages. Should we consider coding agents merely a step above “interpreted languages,” and thus resolve much of the moral consternation in this arena? As my childhood friend, a Doctor of Philosophy in Computer Science, said, “yeah it’s like you’re using it as an English-to-HTML translator.” What to make of that?
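Python itself makes this layering easy to peek at. A minimal illustration of my own: the standard-library `dis` module reveals the lower-level bytecode that the interpreter actually executes beneath a human-legible function - one rung on the ladder between English and electrons.

```python
import dis

# A trivial function in a high-level, human-legible language...
def add_one(x):
    return x + 1

# ...and the lower-level bytecode the Python interpreter actually runs.
# (The exact opcodes vary by Python version.)
dis.dis(add_one)
```

Each translation downward trades legibility for proximity to the machine; a prompt to a coding agent is arguably one more translation upward, from code to conversation.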
Is coding creative? Is software engineering creative? Are moments spent doing these things moments of life? Does vibe coding rob me and others of that life? What if the ends that I can reach by vibe coding, themselves, are moments of life that are made newly accessible to the body of laypeople? But what if non-drawers feel exactly that way about GenAI visual art?
“Real engineers,” themselves, use Claude Code. (Some estimates are that 4% of public commits to GitHub are by coding agents, and that it may reach 20% by end of 2026.) Non-engineers, like me, use Claude Code. So if there is a moral line to be drawn, to stake agency and dignity in the pursuit of making software, where do we draw it? Or, through coding agents, are we making the promise of the digital age, finally, accessible to any human? Can we crack open the cloister that belched forth the Andreessens and Zuckerbergs and Altmans, the skillset of that priestly class, Talking To Computers, our interlocutors who then, year by year, come to govern the world? Can every household have its own populist server, the way we have our own populist refrigerator? Can we raspberry pi our way out of oligarchy? Certainly not as long as we’re renting our coding agents from Anthropic (PBC be damned), OpenAI, et al. As with all things, we need free-and-open-source coding agents.
I haven’t experienced a technology as transformative as Vibe Coding before. I don’t like feeling excited by the same thing Andreessen is excited by, even if the overlap is just a hair’s width. I am as optimistic about this technology as I am pessimistic about the power structures which (currently) control it. Anthropic is not my friend. Until some social structures change (just a fucking wealth tax, please), I am yet a serf, leased but a longer-handled mattock.
But I Am A Fool
I don’t know how to distinguish between my zeal for coding agents and a tech bro’s zeal for AI-Generated Art. Am I not the zealous tech bro? Is my justification of the use of AI that I happen to personally like over uses of AI that I happen to personally dislike made of any substance?
What Do The Workers Think?
I have read many takes by engineers about GenAI in the coding context, and want to share a few interesting ones here. First, Tim Bray writes about receiving contributions to his open source projects from coding agents. Like me, he is generally opposed to GenAI, but also feels that there may be a “plausible case, specifically in software development, for exempting LLMs from this loathing.” His perspective boils down to the market for software tools (Claude Code in production engineering environments) being too small to give investors the gains they want; it remains a very competitive space where no monopoly seems to be able to form (oligopoly is another story). He also documents several positive experiences he’s had with coding agents’ behavior and legibility in the code commit and review process.
Steve Yegge objects to agentic coding in his piece The AI Vampire, for interesting economic reasons. He doesn’t use these terms, but Yegge’s argument concerns the difference between what economists call the “substitution effect” and the “income effect” - ways that actors change their behaviors in response to new reward functions, usually concerning workers’ responses to changes in income taxes. If the government takes more of your pay, will you a) work less, since the relative value of your leisure time has increased (“substitution effect”); or b) work more, since you still want the absolute amount of income you previously wanted (“income effect”)? I think the substitution effect of the coding agents is what would move us towards something like Fully Automated Luxury Communism, but only if we assert a value of “we have enough.” If this tool makes you 10x more productive, then just work 1/10th as hard and enjoy life! That is anathema to our current oligarchs, and so I think Yegge’s concern that we’ll be worked to the bone is likelier, ceteris paribus.
Baldur Bjarnason doesn’t mince words in his piece, AI is a Dick Move.
‘AI’ is designed to be an outright attack on labour and education, using the works of those being attacked – without their consent – as the tools for dismantling their own communities and industries, all done in overt collaboration with the ultra right…
The gigantic, impossible to review, pull requests. Commits that are all over the place. Tests that don’t test anything. Dependencies that import literal malware. Undergraduate-level security issues. Incredibly verbose documentation completely disconnected from reality. Senior engineers who have regressed to an undergraduate-level understanding of basic issues and don’t spot beginner errors in their code, despite having “thoroughly reviewed” it. Everybody being forced to use the tools to increase output even when they themselves don’t think it’s safe or right.
The “undergraduate-level security issues” nails me. His essay treats the industry holistically in ways I can’t, because I’m not a software engineer by profession. In this way, I’m part of the problem he’s describing.
Finally, we turn to Matt Shumer, whom we may comfortably call a shill for AI. (From his website: “I build AI products, invest in the infrastructure behind them, and write about what I’m seeing—usually before it’s obvious.”) In February 2026, NYTimes tech writer Kevin Roose highlighted Shumer’s essay Something Big Is Happening, which compared this moment with coding agents to the February 2020 awareness of COVID outbreaks. A small group of highly technical experts is ringing every alarm possible to make it known that This Is Serious. So while we have to recognize the ulterior motive that an investor in AI is going to have here, I think the coverage of this inflection point by Roose and his Hard Fork podcast co-host Casey Newton has credibility. Good or bad, things are changing.
The Environment
GenAI, in all its uses, is expensive. It consumes a lot of energy, and so increases the marginal demand for fossil fuel consumption. It also needs a lot of cooling, and so increases the marginal demand for water. It needs a bunch of space in which to house servers - data centers - and its owners have been fulfilling this need where land is cheap: rural places. MIT published a synopsis of these costs in January 2025. When I vibe code, to what extent am I paving paradise to put up a parking lot?
Resource Consumption
I want to better understand my personal ecological footprint when I use Claude Code. The processing currency of GenAI is “tokens” - which you can understand, basically, as “words.” You spend words when you send a prompt to a model, and you spend words when you receive an output from a model. Using that unit cost (tokens) and electricity usage (watts, and therefore dollars), some folks have attempted to estimate the real cost of using GenAI.
- One author estimates, “Based on my research and a month of tracking, my actual usage is 12.7 million tokens per day. That translates to 100-200 phone charges, or about 15% of a typical US household’s energy consumption.”
- Another author estimates a more conservative daily consumption of less than 10M tokens, amounting to a rough guess of 1,300 Watt-hours per day. (“In the course of a typical day at work, I’m usually programming for a few hours, and driving 2 or 3 Claude Code instances at a time throughout those hours.”) He contextualizes this with, “So, if I wanted to analogize the energy usage of my use of coding agents, it’s something like running the dishwasher an extra time each day, keeping an extra refrigerator, or skipping one drive to the grocery store in favor of biking there.”
These are both broad-brush impressions, but I find them useful for understanding my footprint. I’m considerably less of a user than these two folks. Their footprints are not trivial, but not big. Mine is probably closer to running the microwave. That’s helpful for me to understand how I can “smooth” my consumption of energy in other parts of life.
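The conversion behind those impressions is simple arithmetic. A back-of-envelope sketch, using a rate of roughly 130 watt-hours per million tokens implied by the second author’s figures (~1,300 Wh for ~10M tokens) - a rough assumption, not a measurement:

```python
# Back-of-envelope: convert daily token usage into daily watt-hours.
# The rate below is an assumption implied by one author's estimate
# (~1,300 Wh for ~10M tokens), not a measured figure.
WH_PER_MILLION_TOKENS = 130

def daily_energy_wh(tokens_per_day):
    """Rough estimated watt-hours for a given daily token count."""
    return tokens_per_day / 1_000_000 * WH_PER_MILLION_TOKENS

print(daily_energy_wh(10_000_000))  # 1300.0 Wh — the second author's figure
print(daily_energy_wh(1_000_000))   # 130.0 Wh — lighter use, about one microwave run
```

At my lighter usage, a day of vibe coding lands in the same rough neighborhood as a few minutes of microwave time - which is the “smoothing” intuition I find useful.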
Data Centers
Well, here, now, let’s just say: fuck The Man. In North Carolina, right now, we have a battle over data centers. Just like with oil prospectors, cheap land is being exploited and innocent bystanders are paying the cost. If coding agents are as miraculous as I say they are, then I might say that the proliferation of data centers is worth it - but not the inequities and power dynamics that we’re seeing perpetuated by the capital-holders. As with so many technological advances, the problem is not necessarily the technology: the problem is the ownership of the technology, and the owners’ abuse of the people whom the technology affects. (For more on this, read Doctorow’s 2022 Science Fiction is a Luddite Literature.)
Want to build the data centers? Then pay all the people. I am so hostile to public policies of courting capital, which so many state and local policymakers frame as “economic development” by waving their hands about “jobs,” but which have an unambiguous net negative effect on everybody’s household finances. Just, fuck em.
A Fool Who Talks to Computers
As a child I was told the riddle:
You’re trapped in a chamber with just two doors. On a table sit two computers. Through one of the doors is freedom! Through the other, instant death. One of these computers only tells the truth; one only lies. You do not know which computer is which; you do not know which door is which. The crazy wizard who put you here has allowed you ONE QUESTION ONLY to ask of a single computer. What do you do?
If you’ve made it this far you’ll know that of course I didn’t solve that riddle, and that I didn’t really enjoy trying to solve it either. Until now, that has meant renting my technological existence from others. I’ve pawned my privacy and signed away my property rights to the fat cats who have mediated my use of microchips. But now? Or have I made a Dunning-Kruger treatise here, amounting to “why, I could fly a jumbo jet! Down with the pilots!” or “I could operate on a brain, down with the neurosurgeons!” What if there’s good reason we laypeople haven’t talked to computers before - the craft, the wisdom, the grey beard-hairs accrued diving through code for bugs? The calloused fingertips of a steel-stringed guitar player.
LLMs for coding are extraordinary. If they can be harnessed for the cause of democratizing the means of production - build your own [fill-in-the-blank] - then consider me an evangelist. I want my fellow bleeding-hearts to see that, by jove, the nerds did it! They built a better mousetrap, dammit! What will we do about it?