In a coding session today, I had an interesting interaction with a Large Language Model. The code started out something like this:
const startTime = useMemo( () => {
	// Do something with query.from
}, [ query.from ] );

const endTime = useMemo( () => {
	// Do something with query.to
}, [ query.to ] );
The time variables need to change their value based on the query, but there's also an extra source of state updates, so I wanted to refactor the code to use local state and an effect. I entered a pairing session with Claude 3.5 Sonnet, and this is what happened.
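For startTime, the shape I was after was roughly the following. This is a minimal sketch, assuming a getTimeFromValue helper that derives the time from the query value; the real code has more going on:

const [ startTime, setStartTime ] = useState( getTimeFromValue( query.from ) );

useEffect( () => {
	// Re-derive startTime whenever the relevant part of the query changes.
	setStartTime( getTimeFromValue( query.from ) );
}, [ query.from ] );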
I asked Claude to make some changes to the startTime code. Then I simply requested that it "apply the same changes to endTime", without providing further context.
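The endTime version came back looking something like this (a sketch, not the verbatim output):

const [ endTime, setEndTime ] = useState( getTimeFromValue( query.to ) );

useEffect( () => {
	// Same helper as in startTime: this reuse is the bug discussed below.
	setEndTime( getTimeFromValue( query.to ) );
}, [ query.to ] );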
The changes aren't correct: they reuse the same function, getTimeFromValue, to calculate endTime. I can fix that easily, or perhaps I should have written better prompts. There's also the question of whether the end result is better than the previous state of the code, but that's the programmer's fault, not the machine's. Whether this produced better or correct code is not the point, though. The question I'm interested in is different: do LLMs improve on conventional code-editing tools?
By any measure, this was a small refactoring. But, small as it was, it hints at how they do. They give you access to many more code actions than any editor can bundle, and chaining them is a seamless process that doesn't take you out of the flow. It's also trivial to apply the same chain of actions to a different piece of code. Right after finishing this refactoring, I had a flashback: my younger self using emacs macros to repeat simple edits across a few dozen lines. It was a far more limited experience.
It feels like, with time, this will go beyond incremental improvements, and refactorings will be expressed in a higher-level vocabulary than conventional editing tools offer.