My own experience over the last few months is quite the opposite so it's heartening to see some reputable Lispers reporting the same in the comments here.
Everything in this area is moving so quickly that I haven't yet crystallized my thinking or settled on a working methodology but I am getting a lot of value out of running Claude Code with MCP servers for Common Lisp and Emacs (cl-mcp & emacs-mcp-server). Among other things this certainly helps with the unbalanced parentheses rabbit hole.
Along with that I am showing it plenty of my own Lisp code and encouraging it to adopt my preferred coding style and libraries. It takes a little coaching and reinforcement (recalcitrant intern syndrome) but it learns as it goes. It's really quite a pleasant experience to see it write Lisp as I might have written it.
I have been using AI to write Clojure code for the past half year. The frontier LLMs have no problem writing idiomatic Clojure. Both Codex and Claude Code fix their missing closing parentheses quickly, so I won't say "writing Lisp is AI resistant". In fact, Clojure is a great fit for AI coding agents: it is token efficient, and the existing Clojure code used for training is mostly high-quality, as Clojure tends to attract experienced coders.
I am glad you enjoyed it. I am happy to report that the next release will have many new features: Raft-consensus-based high availability (with an extensive Jepsen test suite); a built-in MCP server; built-in llama.cpp for in-DB embedding; a JSON API; and language bindings for Java, Python, and JavaScript.
I have found it to be the complete opposite, to be honest. Not Lisp, but I've been generating Scheme with Claude for about 5 months and it's a pleasure. What I did was make sure CLAUDE.md had clear examples, and I also added a skill that leverages ast-grep for AST-safe replacement. (The biggest pain is that sometimes Claude will mess up the parens, but lately it has even come up with its own Python scripts to count the parens and balance the expressions on its own.)
I created Schematra[1] and also a schematra-starter-kit[2] that can be spun up from Claude to create a project and get you ready in less than 5 minutes. I've created 10+ side projects this way and it's been a great joy. I even added a Scheme reviewer agent that is extremely strict and focused on Scheme best practices (it's all in the starter kit, btw).
I don't think the lack of training material is what makes LLMs poor at writing Lisp. I think it's the lack of guidelines, and if you add enough of them, the fact that Lisp has such an inherently simple pattern and grammar makes it a prime candidate (IMO) for code generation.

[1]: https://schematra.com/

[2]: https://forgejo.rolando.cl/cpm/schematra-starter-kit
Thanks for the Scheme setup examples. I have created very simple skills markdown files for Common Lisp and Hylang/hy (Clojure-like lisp on top of Python). I need to spend more effort on my skills files though.
Interesting, and not quite my experience. While I do get better agentic coding results for Python projects, I also get good results working with Common Lisp projects. I do have a habit of opening an Emacs buffer and writing a huge prompt with documentation details, sometimes sample code in other languages, or, if I am hitting APIs, a working curl example. For Common Lisp my initial prompts are often huge, but I find thinking about a problem and prompt creation to be fun.
The article mentions a REPL skill. I don't do that: letting the model and its tools run sbcl is sufficient.
Yes, I've also found LLMs can generate working Common Lisp code quite well, albeit I've only been solving simple problems.
I haven't tried integrating it into a REPL or even command-line tools, though. The LLM can't experience the benefit of a REPL, so it makes sense that it struggled with it and preferred feeding entire programs into sbcl each time.
This rings true for me. LLMs in my experience are great at Go, a little less good at Java, and much less good at GCL (internal config language).
This is definitely partly training data, but if you give an LLM a simple language to use on the fly it can usually do ok. I think the real problem is complexity.
Go and Java require very little mental modelling of the problem; everything is written down on the page really quite clearly (more so with Go, but still with Java).
In GCL, however, the semantics are _weird_ and the scoping is unlike most languages, because it's designed for DSLs. For humans, writing DSL content requires little thought, but authoring DSLs requires a fair amount of mental modelling about the structure of the data that is not present on the page. I'd wager that Lisp is similar: more of a mental model is required.
The problem is of course that LLMs don't have a mental model, or at least what they do have is far from what humans have. This is very apparent when doing non-trivial code, non-CRUD, non-React, anything that requires thinking hard about problems more than it requires monkeys at typewriters.
I bet it would do much better at hcl (or Starlark, maybe even yaml, something that it has seen plenty of examples of in the wild).
This is a weird moment in time where proprietary technology can hurt more than it can help, even if it's superior to what's available in public in principle.
How many docs do you put in the context? We maintain a lot of DSL code internally, and each file has a copy of the spec + guide as a comment at the top. It's about 50 LOC, and the relevant models are great at writing it.
Oh yeah, the models are great at writing the DSLs; there are enough examples to do that very effectively. It's the building of the DSL, which is implemented in the config language, that's tricky. I.e., writing a new A/B test in the language is trivial; writing an A/B-testing config DSL in the language is hard.
The main problem is the dynamic scoping (as opposed to lexical scoping like most languages), and the fact that lots of things are untyped and implicitly referenced.
Personally, I think we're using LLMs wrong for programming. Computer programs are solutions to a given constraint logic problem (the specs).
We should be using LLMs to translate from (fuzzy) human specifications to formal specifications (potentially resolving contradictions), and then solving the resulting logic problem with a proper reasoning algorithm. That would also guarantee correctness.
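The second half of that pipeline can be sketched in miniature: treat the formal spec as a set of executable predicates and hand it to a brute-force solver. (A toy sketch; the spec "three distinct increasing digits summing to 15" is invented purely for illustration.)

```python
from itertools import combinations

# A "formal spec" expressed as executable predicates.
constraints = [
    lambda xs: len(xs) == 3,
    lambda xs: sum(xs) == 15,
    lambda xs: list(xs) == sorted(set(xs)),  # distinct and strictly increasing
]

def solve(domain):
    """Enumerate candidates and keep those satisfying every constraint."""
    return [c for c in combinations(domain, 3)
            if all(check(c) for check in constraints)]

solutions = solve(range(10))
print(solutions)
```

A real pipeline would swap the enumeration for an SMT or constraint solver; the point is that once the spec is formal, candidate solutions are checked mechanically rather than merely generated.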
I've had it write Scheme with little issue -- it even completed the latter half of a small toy compiler. I think the REPL is the issue, not the coding; forcing it to treat the REPL like another conversation participant is likely the only way for that to work, and this article does not handle it that way. Instead, hand it a compiler and let it use the workflow it is optimized for.
Agreed. The article bemoans the fact that AIs don’t need to work in the inefficient way that most humans prefer, getting micro-level feedback from IDEs and REPLs to reduce our mistake count as we go.
If you take a hard look at that workflow, it implies a high degree of incompetence on the part of humans: the reason we generally don’t write thousands of lines without any automated feedback is because our mistake rate is too high.
I learned Common Lisp years ago while working in the AI lab at the University of Toronto, and parts of this article resonated strongly with me.
However, if you abandon the idea of REPL-driven development, then the frontier models from Anthropic and OpenAI are actually very capable of writing Lisp code. They sometimes struggle editing it (messing up parens), but usually the first pass is pretty good.
I've been on an LLM kick the past few months, and two of my favorite AI-coded (mostly) projects are, interestingly, REPL-focused. icl (https://github.com/atgreen/icl) is a TUI and browser-based front end for your CL REPL designed to make REPL programming for humans more fun, whether you use it stand-alone, or as an Emacs companion. Even more fun is whistler (https://github.com/atgreen/whistler), which allows you to write/compile/load eBPF code in lisp right from your REPL. In this case, the AI wrote the highly optimizing SSA-based compiler from scratch, and it is competitive against (and sometimes beating) clang -O2. I mean... I say the AI wrote it... but I had to tell it what I wanted in some detail. I start every project by generating a PRD, and then having multiple AIs review that until we all agree that it makes sense, is complete enough, and is the right approach to whatever I'm doing.
Claude has really helped me improve my Emacs config (elisp) substantially, and has sometimes even fixed issues I've found in packages. My Emacs setup is the best it has ever been. I can't say it always just works and produces the best solution; sometimes it messes up closing parens or even makes things up (e.g. it suggested load-theme-hook, which doesn't exist). But overall, changing things in Emacs and learning elisp is definitely much easier for me (I'm not good with elisp, but I'm a pretty good Racket programmer).
I used Emacs for about a decade and then switched to VS Code about eight years ago. I was curious about the state of Claude Code integration with Emacs, so I installed it to try out a couple of the Claude packages. My old .emacs.d that I toiled many hours to build is somewhere on some old hard drive, so I decided to just use Claude code to configure Emacs from scratch with a set of sane defaults.
I proceeded to spend about 45 minutes configuring Emacs. Not because Claude struggled with it, but because Claude was amazing at it and I just kept pushing it well beyond sane default territory. It was weirdly enthralling to have Claude nail customizations that I wouldn't have even bothered trying back in the day due to my poor elisp skills. It was a genuinely fun little exercise. But I went back to VS Code.
Came to post exactly this, except it's got me using Emacs again. I led myself into some mild psychosis where I attempted to mimic the Acme editor's windowing system, but I recovered.
Yeah, and all the little quirks I had with Emacs here and there, or things I wished I had in my workflow, I can now just fix or add without worrying about spending too much time (except sometimes, maybe). I'm finally using the full Emacs potential I felt I was missing, and now I finally get why Emacs is so awesome.
E.g. I work on a huge monorepo at this new company, and Emacs TRAMP was super slow to work with. With Claude's help, I figured out which packages were making it worse, added some optimizations (Magit, project find-file), hot-loaded caching onto some heavyweight operations (e.g. listing all files in a project) without making any changes to the packages themselves, and, for file listings, added keybindings to my minibuffer map to quickly filter to the subproject I'm on. I could probably have done all this earlier as well, but it would definitely have taken much longer, as I was never deep into the elisp ecosystem.
From my experience Claude Code is not that bad with Common Lisp and can do REPL-style development. I've been using this MCP server (an older version with some tweaks): https://github.com/cl-ai-project/cl-mcp
(even though I'd probably prefer some MCP-to-swank adapter if it existed)
And this MCP server works quite well for Emacs
https://github.com/rhblind/emacs-mcp-server
There are some issues, of course. Sometimes Claude Code gets into a "parenthesis counting loop", which is somewhat hilarious, but luckily this doesn't happen too often for me. In the worst case I fix the problematic fragment myself and then let it continue. But overall I'd say Claude Code is not bad at all with Lisps.
I'm finding the opposite: Claude Code is strikingly good at Common Lisp (unsurprising given how much CL material would have made it into the training set), and even much better than I expected with Arc.
However, a large part of OP is about REPLs and on that I've also had a hard time with CC. I was working on it this evening in fact, and while I got something running, it's clunky and slow.
I am a bit (OK, very) worried that AI will most likely kill language diversity in programming. I also don't see it settling on a more optimal solution; it will probably just use the most available languages out there and be very hard to push out of that rut. And it's not limited to languages: I expect knowledge ruts all over the place, and with both humans and AI choosing the path of least resistance, I don't see an effective way to fight this.
Amusingly, some of the earliest AI research used Lisp, which begat an AI winter. Now we've come full circle with LLMs that struggle to write valid Lisp. Almost poetic.
I have a feeling we'll care less about untyped languages going forward as LLMs prototype faster than we do, and fast prototyping was a big reason why we cared about untyped languages.
Pedantic but Lisp is not "untyped". (Neither are JS or Python.) All data has a type you can query with the type-of function. The typing is strong, you'll get a type-error if you try to add an integer to a string. Types can be declared, and some implementations (like SBCL) can and do use that information to generate better assembly and provide some compilation-time type checks. (Those checks don't go all the way like a statically typed language would, but Lisp being a programmable programming language, you can go all the way to Haskell-style types if you want: https://coalton-lang.github.io/)
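For readers more familiar with Python (which the parent lumps into the same category), the same dynamic-but-strong behavior looks like this; `type` plays roughly the role of Lisp's `type-of`:

```python
# Every value carries a runtime type you can query (cf. Lisp's type-of).
x = 42
print(type(x))  # <class 'int'>

# Strong typing: ill-typed operations raise instead of silently coercing.
try:
    "one" + 2
except TypeError as e:
    print("TypeError:", e)
```

The Lisp story has the same shape, plus the extras the parent notes: optional type declarations and, in SBCL, compile-time use of them.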
The other day I gave Copilot my Elisp code, and it asked if I wanted improvements. Upon my approval, it immediately produced a revision that added two new, useful features and worked out of the box. Very impressive.
I think some kind of graph-capable model directly on the AST or a lower level IR would be the way to go, with bidirectionality so that changes propagate back up to the syntax without squandering LLM resources.
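A rough sketch of what tree-level operation buys, using Python's stdlib `ast` module with `ast.unparse` (Python 3.9+) as the "back to syntax" direction (the function being renamed is invented): a transform expressed on the AST cannot emit unbalanced syntax.

```python
import ast

source = "def area(r): return 3.14 * r * r"
tree = ast.parse(source)

# Rename a function at the tree level rather than via text edits.
class Renamer(ast.NodeTransformer):
    def visit_FunctionDef(self, node):
        if node.name == "area":
            node.name = "circle_area"
        self.generic_visit(node)  # still visit the function body
        return node

new_tree = ast.fix_missing_locations(Renamer().visit(tree))
print(ast.unparse(new_tree))  # valid syntax by construction
```

A graph-capable model operating at this level would get well-formedness for free; only the propagation back to concrete (formatted) source text remains lossy.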
This must be specific to Common Lisp. I’ve had no significant issues with Fennel and Chez Scheme, although to be fair it was on existing projects and they are not languages I would start a project with today.
Isn't the whole problem here trying to wedge the LLM into using a REPL loop, when it could one-shot source files just fine? Python has a REPL too, but you don't see the LLM building python by REPL loop either...
Wildly speculating here, but if you buy that human brains have innate / evolved syntactic knowledge, and that this knowledge projects itself as the common syntactic forms across the bulk of human languages, then it’s no surprise that LLMs don’t have particularly deep grooves for s-expressions, regardless of the programming language distribution of the training set.
OK, I'll bite. I want to know more of the reasoning behind this, because I think it implies that S-expressions are alien to the innate/evolved syntactic knowledge in human languages. A lot of American linguistics, like Chomsky's gropings for how to construct universal grammar and deep syntax trees, or the lambda calculus of semantic functions, looks like S-expressions, and I think that's because there was some coordination between human linguists and computer science (Chomsky was, after all, at MIT). At the same time, I've had a gut instinct that these theories described some languages (like English) better than others (like ancient Greek), requiring more explanation of changes between deep structure and surface structure for languages that were less like English. If models trained on actual language handle s-expressions poorly, that could imply that s-expressions were not a good model for the deep structure of human language, or that the deep-structure vs surface-structure model did not really work. I'd be very happy to learn more about this.
There is interesting ongoing research https://dnhkng.github.io/posts/sapir-whorf/ suggesting that LLMs think in a language-agnostic way. (It will probably get posted to HN after it is finished.)
Expected, considering stuff like the recent post re: esolang benchmarks. Lisp is probably just out of distribution. This is just a popularity contest, not a reflection on anything else.
Same experience. I like Haskell a lot but I am not great at Haskell programming. LLM based coding agents are useful for helping with runtime errors, library versions, etc. (and as other people here have said, for tedious stuff like cleaning up Emacs customizations, etc.)
> I wonder what adaptations will be necessary to make AIs work better on Lisp.
Some are going to nitpick that Clojure isn't as lispy as, say, Common Lisp but I did experiment with Claude Code CLI and my paid Anthropic subscription (Sonnet 4.6 mostly) and Clojure.
It is okay-ish. I got it to write a topological sort and pure (no-side-effect) functions taking in and returning non-totally-trivial data structures (maps in maps with sets and counters, etc.). But apparently it's got problems with...
... drumroll ...
The number of parentheses. It's so bad that the author of figwheel (a successful ClojureScript project) is working on a Clojure MCP that fixes parens in Clojure code spouted by AI (well, the project does more than that, but the description literally says it's "designed to handle Clojure parentheses reliably").
You can't make that up: there's literally an issue with the number of closing parens.
Now... I don't think giving an AI access to a Lisp REPL and telling it: "Do this by bumping on the guardrails left and right until something is working" is the way to go (yet?) for Clojure code.
I'm passing it a codebase (not too big, so no context-size issue) and I know what I want: I tell it "Write a function which takes this data structure in and that other parameter, the function must do xxx, the function must return the same data structure out". Before that, I told it to also implement tests (relatively easy, for they're pure functions) for each function it writes, and to run the tests after each function it implements or modifies. And it's doing okay.
> Are the parentheses in ((((()))))) balanced?

There was a thread about this the other day [1]. It's the same issue as "count the r's in strawberry." Tokenization makes it hard to count characters. If you put that string into OpenAI's tokenizer, [2] this is how they are grouped:
Token 1: ((((
Token 2: ()))
Token 3: )))
Which of course isn't at all how our minds would group them together in order to keep track of them.

[1] https://news.ycombinator.com/item?id=47615876 [2] https://platform.openai.com/tokenizer
This is mostly because people wrongly assume that LLMs can count things. Just because it looks like it can, doesn't mean it can.
Try to get your favourite LLM to read the time from a clock face. It'll fail ridiculously most of the time, and come up with all kinds of wonky reasons for the failures.
It can code things it has seen the logic for before. That's not the same as counting. That's outputting what it has previously seen as proper code (and even then it often fails, probably 'cos there's a lot of crap code out there).
But for Lisp, a more complex solution is needed. It's easy for a human Lisp programmer to keep track of which closing parenthesis corresponds to which opening parenthesis, because the editor highlights paren pairs as they are typed. How can we give an LLM that kind of feedback as it generates code?
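One cheap version of that feedback loop (a sketch of the idea, not any particular tool's implementation): run a stack-based checker over each generated form and hand the model the position of the first mismatch, the way an editor's highlighting does for us.

```python
def check_parens(text):
    """Return None if balanced, else (index, message) for the first problem."""
    stack = []
    for i, ch in enumerate(text):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            if not stack:
                return (i, "unmatched close paren")
            stack.pop()
    if stack:
        return (stack[-1], "unclosed open paren")
    return None

print(check_parens("(defun f (x) (* x x))"))  # None: balanced
print(check_parens("(defun f (x) (* x x)"))   # points at the unclosed open
```

A real checker would also need to skip strings, comments, and #\( character literals; this only shows the shape of the feedback.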
Try asking an LLM a question like "H o w T o P r o g r a m I n R u s t ?" - each letter, separated by spaces, will be its own token, and the model will understand just fine. The issue is that computational cost scales quadratically with the number of tokens, so processing "h e l l o" is much more expensive than "hello". "hello" has meaning, "h" has no meaning by itself. The model has to waste a lot of computation forming words from the letters.
Our brains also process text entire words at a time, not letter-by-letter. The difference is that our brains are much more flexible than a tokenizer, and we can easily switch to letter-by-letter reading when needed, such as when we encounter an unfamiliar word.
I am lazy: when an LLM messes up parenthesis when working with any Lisp language I just quickly fix the mismatch myself rather than try to fix the tooling.
I had that issue while doing some CL dabbling with the AI.
Things, on the whole, were fine, save for the occasional rogue (or not) parenthesis.
The AI would just go off the rails trying to solve the problem. I told it that if it ever encountered the problem to let me know and not try to fix it, I’d do it.
Sometimes LLMs astonish me with the code they can write. Other times I have to laugh or cry.
As an example, back when Claude 3.5 was the latest, I asked it to indent all the code in my file by four more spaces. The file was about 700 lines long. I got a busy spinner for two minutes, then it said, "OK, first 50 lines done, now I'll do the rest", got another busy spinner, and then said, "this is taking too long. I'm going to write a program to do it", which of course it had no problem doing. The point is that it is superhuman at some things and completely brain-dead at others, and counting parens is one of the things I wouldn't expect it to be good at.
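The script it reached for is, of course, a few lines, which is exactly the point: delegating to code sidesteps the model's weak character-level manipulation. A minimal sketch of such a script:

```python
def indent(text, spaces=4):
    """Indent every non-blank line by `spaces` additional spaces."""
    pad = " " * spaces
    return "".join(pad + line if line.strip() else line
                   for line in text.splitlines(keepends=True))

print(indent("(defun f (x)\n  (* x x))\n"))
```

Blank lines are left alone so the rewrite doesn't introduce trailing whitespace.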
I think LLMs are great at compression and information retrieval, but poor at reasoning. They seem to work well with popular languages like Python because they have been trained with a massive amount of real code. As demonstrated by several publications, on niche languages their performance is quite variable.
> There are reasons other than a lack of training data that makes lisp particularly AI resistant.
It's tough to steal what doesn't exist.
> but AI can write hundreds of lines in one go so that it just makes sense for the AI to use a language that doesn't use the REPL. It is orders of magnitude easier and cheaper to write in high-internet-volume languages like Go and Python
Python doesn't have a REPL?

Not really in the Lisp sense. If you consider how people typically develop and modify Python code (edit file -> run from beginning -> observe effects -> start over) versus how people typically develop Lisp code ("start over" and "run from beginning" rarely happen), it becomes obvious. Most Python development resembles Go or C++; you just get to skip the explicit "compile" step and go straight to "run". The Python "REPL" is nice for little snippets and bits of interactive modification, but the experience isn't the same as Lisp's (and I think the experience is actually better/closer to Lisp in Java, with debug mode and JRebel).
I agree with you, but a Python REPL in Emacs (using the ancient Python Emacs support) is very nice: initially load code from a buffer, then just reload functions as you edit them. I find it to be a nice dev experience: quick and easy edit/run cycles.
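Outside Emacs, a rough approximation of that loop is importlib.reload: edit a module on disk, reload it into the live session, and the new definitions take effect without restarting. (A toy demonstration; the scratch_mod module name is invented.)

```python
import importlib
import pathlib
import sys

sys.path.insert(0, ".")  # make the scratch module importable from the cwd

# Write a throwaway module, import it, edit it on disk, reload it in place.
mod_path = pathlib.Path("scratch_mod.py")
mod_path.write_text("def answer():\n    return 1\n")

import scratch_mod
print(scratch_mod.answer())

mod_path.write_text("def answer():\n    return 1 + 41\n")
importlib.invalidate_caches()
importlib.reload(scratch_mod)  # same live session, new definition
print(scratch_mod.answer())

mod_path.unlink()  # clean up the scratch file
```

It's still coarser than Lisp's image-based workflow (existing closures and instances keep pointing at old definitions), but it covers the reload-as-you-edit cycle.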
"Expressive languages" like Lisp are for weak human minds.
Now is the time to switch to a popular language and let the machines wrangle it for you. With more training data available, you'll be far more productive in JavaScript than you ever were in Lisp.
LLMs are a "worse is better" kind of solution.
Agreed! This is why having LLMs write assembly or binary, as people suggest, is IMO moving in the wrong direction.
> then solving the resulting logic problem with a proper reasoning algorithm. That would also guarantee correctness.
Yes! I.e. write in a high-level programming language, and have a compiler, the reasoning algorithm, output binary code.
It seems like we're already doing this!
Yep. Language and libraries too.
Damn. And here I have a Gemini Pro subscription sitting unused for a year now.
Edit: working on a lot of legacy code that needs boring refactoring, which Claude is great at.
That's what you get with every language. So, not much to really be disappointed by in terms of Lisp performance.
https://xkcd.com/297/
You guys are depressing.