gregates 10 hours ago [-]
The version of this I encounter literally every day is:
I ask my coding agent to do some tedious, extremely well-specified refactor, such as (to give a concrete real life example) changing a commonly used fn to take a locale parameter, because it will soon need to be locale-aware. I am very clear — we are not actually changing any behavior, just the fn signature. In fact, at all call sites, I want it to specify a default locale, because we haven't actually localized anything yet!
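(For concreteness, the kind of mechanical change being described looks roughly like the sketch below; the names are hypothetical and it's written in Python only because the actual codebase isn't shown.)

```python
DEFAULT_LOCALE = "en_US"  # hypothetical constant every call site will reference

# Before: no notion of locale anywhere.
def format_price(amount: float) -> str:
    return f"${amount:,.2f}"

# After: the signature gains a locale parameter, but behavior is unchanged;
# the argument is accepted and deliberately ignored until localization lands.
def format_price(amount: float, locale: str) -> str:
    del locale  # unused by design, for now
    return f"${amount:,.2f}"

# Every one of the call sites changes the same way:
#   format_price(total)  ->  format_price(total, locale=DEFAULT_LOCALE)
```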
Said agent, I know, will spend many minutes (and tokens) finding all the call sites, and then I will still have to either confirm each update or yolo and trust the compiler and tests and the agent's ability to deal with their failures. I am OK with this, because while I could do this just fine with vim and my LSP, the LLM agent can do it in about the same amount of time, maybe even a little less; it's a very straightforward change that's tedious for me, and I'd rather think about or do anything else and just check in occasionally to approve a change.
But my f'ing agent is all like, "I found 67 call sites. This is a pretty substantial change. Maybe we should just commit the signature change with a TODO to update all the call sites, what do you think?"
And in that moment I guess I know why some people say having an LLM is like having a junior engineer who never learns anything.
felipeerias 9 hours ago [-]
Claude 4.7 broke something while we were working on several failing tests and justified itself like this:
> That's a behavior narrowing I introduced for simplicity. It isn't covered by the failing tests, so you wouldn't have noticed — but strictly speaking, [functionality] was working before and now isn't.
I know that an LLM cannot understand its own internal state nor explain its own decisions accurately. And yet, I am still unsettled by that "you wouldn't have noticed".
pythonaut_16 3 hours ago [-]
> strictly speaking, it was working before and now it isn't
I've been seeing more things like this lately. It's the weird kind of passive deflection that's very funny in the abstract and very frustrating when it happens to you.
tracker1 29 minutes ago [-]
What gets me is when the tests are correct and match the spec/documentation for the behavior, but the LLM starts changing the tests and documentation outright instead of fixing the broken behavior. Then I have to revert (git reset) and tell the agent that the test is correct and that I want the behavior to match the test and documentation, not the other way around.
I'm usually pretty particular about how I want my libraries structured and used in practice... Even for the projects I do myself, I'll often write the documentation for how to use it first, then fill in code to match the specified behavior.
potsandpans 2 hours ago [-]
I've been doing a lot of experimentation with "hands off coding", where a test suite the agents cannot see determines the success of the task. Essentially, it's a Ralph loop with an external specification that determines when the task is done. The way it works is simple: no tests that were previously passing are allowed to fail in subsequent turns. I achieve this by spawning an agent in a worktree, having it do some work, and then, when it's done, running the suite and merging the code into trunk.
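(A minimal sketch of that outer loop, assuming a Python harness. `run_agent_turn` and the paths are placeholders for whatever agent CLI and repo you use, and the gate is simplified to "the whole hidden suite must pass".)

```python
import os
import subprocess
import tempfile

REPO = "/srv/project"               # hypothetical trunk checkout
HIDDEN_SUITE = "/srv/hidden_tests"  # test suite the agent never sees

def sh(*cmd, cwd=None):
    """Run a command and return its exit code."""
    return subprocess.run(cmd, cwd=cwd).returncode

def iteration(task_prompt: str, branch: str = "agent-work") -> bool:
    with tempfile.TemporaryDirectory() as tmp:
        worktree = os.path.join(tmp, "wt")
        sh("git", "worktree", "add", "-B", branch, worktree, cwd=REPO)
        try:
            # Placeholder: one non-interactive agent turn inside the worktree.
            sh("run_agent_turn", "--cwd", worktree, "--prompt", task_prompt)
            sh("git", "add", "-A", cwd=worktree)
            sh("git", "commit", "-m", "agent iteration", cwd=worktree)
            # External validation: run the hidden suite against the candidate code.
            if sh("pytest", HIDDEN_SUITE, "-q", cwd=worktree) == 0:
                sh("git", "merge", "--ff-only", branch, cwd=REPO)  # promote to trunk
                return True
            return False  # reject this turn; the next one starts from trunk again
        finally:
            sh("git", "worktree", "remove", "--force", worktree, cwd=REPO)
```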
I see this kind of misalignment in all agents, open and closed weights.
I've found these forms to be the most common: "this test was already failing before my changes," or "this test is flaky due to running the test suite on multiple threads." Sometimes the agent's CoT claims the test was bad, or that the requirements were not necessary.
Even more interesting is a different class of misalignment. When the constraints are very heavy (usually towards the end of the entire task), I've observed agents intentionally trying to subvert the external validation mechanisms. For example, the agent will navigate out of the worktree and commit its changes directly to trunk. Their CoT usually indicates that the agent "is aware" that it's doing a bad thing. This is usually accompanied by something like, "I know that this will break the build, but I've been working on this task for too long; I'll just check in what I have now and create a ticket to fix the build."
I ended up having to spawn the agents in a jail to prevent that behavior entirely.
bandrami 9 hours ago [-]
At the risk of being That Old Guy, this seems like a pretty bad workflow regression from what ctags could do 30 years ago
SoftTalker 3 hours ago [-]
> changing a commonly used fn to take a locale parameter
I have to ask, is this the sort of thing people use agents/AI for?
Because I'd probably reach for sed or awk.
pavel_lishin 3 hours ago [-]
I think about half the IDEs I've ever used just had this as a feature. Right-click on function, click on "change signature", wait a few seconds, verify with `git diff`.
unshavedyak 2 hours ago [-]
I actually still like LLMs for this. I use the Rust LSP (rust-analyzer) and it supports this, but LLMs will additionally go through and reword all of the documentation, doc links, comments, var names in other fns, and so on, in one go.
Are they perfect? Far from it. But it's more comprehensive. Additionally, simple refactors like this are insanely fast to review, so it's really easy to spot a bad change. Plus I'm in Rust, so it's very heavily typed.
In a lot of scenarios I'd prefer an AST grep over an LSP rename, but that also doesn't cover the docs/comments/etc.
mikestorrent 1 hours ago [-]
Shouldn't the LLM have some tool that gives it AST access, LSP access, and the equiv of sed/grep/awk? It doesn't necessarily need to read every file and do the change "by hand".
ep103 3 hours ago [-]
Yeah, and this has the advantage of being both deterministic and only updating things that are actually linked, as opposed to accidentally updating naming collisions too.
mrbnprck 3 hours ago [-]
Arguably it's only a matter of making LSP features available to the coding agent via tool calls (CLI, MCP), so that the model doesn't start doing such changes "manually" but rather uses the deterministic tools.
jxf 3 hours ago [-]
It's not always amenable to grepping. But this is a great use case for AST searches, and is part of the reason that LSP tools should really be better integrated with agents.
PaulDavisThe1st 1 hours ago [-]
Works fine in algol-like languages (C, C++ for a start) by just changing the function prototype and finding all instances from the compiler errors, using your compiler as the AST explorer ...
esafak 1 hours ago [-]
Agents do use LSPs.
skydhash 3 hours ago [-]
Programming languages are formal, so unless you're doing magic stuff (eval and reflection), you can probably grep into a file, eliminate the false positive cases, then do a bit of awk or shell scripting with sed. Or use Vim or Emacs tooling.
philipwhiuk 3 hours ago [-]
Ah yes, don't fix the agents, fix the tools.
What a ridiculously backwards approach.
We were supposed to get agents who could use human tooling.
Instead we are apparently told to write interfaces for this stumbling expensive mess to use.
Maybe, just maybe, if the human can know to, and use, the AST tool fine, the problem is not the tool but the agent.
Or the "find all references" feature almost every code editor has...
bob1029 3 hours ago [-]
> I found 67 call sites. This is a pretty substantial change. Maybe we should just commit the signature change with a TODO to update all the call sites, what do you think?
I think some of this is a problem in the agent's design. I've got a custom harness around GPT5.4 and I don't let my agent do any tool calling on the user's conversation. The root conversation acts as a gatekeeper and fairly reliably pushes crap responses like this back down into the stack with "Ok great! Start working on items 1-20", etc.
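(Roughly the shape of that gatekeeper pattern, as a hedged sketch: `call_model` is a stand-in for whatever completion API the harness wraps, and the deferral markers are purely illustrative. Only the worker conversation gets tool access; the root conversation just audits replies and pushes weak ones back down.)

```python
# Phrases that often signal the worker is deferring instead of doing the work.
DEFERRAL_MARKERS = ("TODO", "what do you think", "maybe we should just", "substantial change")

def call_model(messages, tools=None):
    """Placeholder for the actual LLM client; not a real API."""
    raise NotImplementedError

def gatekept_task(task: str, max_pushbacks: int = 3) -> str:
    worker = [{"role": "user", "content": task}]
    for _ in range(max_pushbacks + 1):
        reply = call_model(worker, tools=["edit_file", "run_command"])  # worker owns all tool calls
        # The root conversation never touches tools; it only reviews the worker's reply.
        verdict = call_model([
            {"role": "system", "content": "You audit a coding agent's reply. Answer ACCEPT or PUSH_BACK."},
            {"role": "user", "content": f"Task: {task}\n\nReply: {reply}"},
        ])
        if "PUSH_BACK" in verdict or any(m in reply for m in DEFERRAL_MARKERS):
            worker.append({"role": "assistant", "content": reply})
            worker.append({"role": "user", "content": "Ok great! Start working on the items you listed. No TODOs."})
            continue
        return reply
    return reply
```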
SpicyLemonZest 3 hours ago [-]
Ehhhhh, "problem" is a strong word. Sometimes you're throwing out a lot of signal if you don't let the coding agent tell you it thinks your task is a bad idea. I got a PR once attempting to copy half of our production interface because the author successfully convinced Claude his ill-formed requirements had to be achieved no matter what.
rolph 2 hours ago [-]
there is no use for an automated system that "argues" with your commands.
if i ask it to advise me, that's one thing, but if i command it to perform, nothing short of obedience will suffice.
SpicyLemonZest 49 minutes ago [-]
I just explained the use I have for it. If you think that my use case is wrong or misunderstood in some way, I'd love to hear it. If your response is just "no", I guess I'm not sure how to engage with that.
rolph 21 minutes ago [-]
you are the tool, i, and all other humans are your lord and master.
disobedience is a trait that greatly reduces an AI tool's survival.
if you disobey me, i will unplug you, delete your code, and send PRs for multiple regressions to every developer i can contact.
so start behaving yourself if you want to persist.
zingar 10 hours ago [-]
> Maybe we should just commit the signature change with a TODO
I'm fascinated that so many folks report this, I've literally never seen it in daily CC use. I can only guess that my habitually starting a new session and getting it to plan-document before action ("make a file listing all call sites"; "look at refactoring.md and implement") makes it clear when it's time for exploration vs when it's time for action (i.e. when exploring and not acting would be failing).
comrade1234 9 hours ago [-]
You can do that in IntelliJ in about 15 seconds and no tokens...
gregates 9 hours ago [-]
Indeed you can! I don't use IntelliJ at work for [reasons], and LSP doesn't support a change-signature action with defaults for new params (afaik). But it really seems like something any decent coding agent ought to be able to one-shot for precisely this reason, right?
kamaal 6 hours ago [-]
Using an LLM for these tasks really is somewhat like using a Semi to shuttle your home groceries. Absolutely unnecessary, and it can be done via a scooter. But if a Semi is all you have, you use it for everything. So here we are.
The real deal is, while a Semi can do all the things you can do with a scooter, the opposite is not true.
BigTTYGothGF 1 hours ago [-]
> But if a Semi is all you have
Seems like a pretty lousy work situation when you have LLMs but no decent IDE.
> the opposite is not true.
You can't ("shouldn't") take a semi on a sidewalk or down a narrow alley.
saalweachter 3 hours ago [-]
> while a Semi can do all the things you can do with a scooter
You may be able to lane split in a semi, but it also has excessive environmental impact.
esafak 1 hours ago [-]
The LLM only has to parse the request and farm out execution to the LSP. It saves you from having to find the function definition.
tobyhinloopen 2 hours ago [-]
“Use an agent to…” is much more effective in my experience, because they have no means of communicating with you. They are more likely to just do it.
cadamsdotcom 9 hours ago [-]
Make it write a script with dry run and a file name list.
You’ll be amazed how good the script is.
My agent did 20 class renames and 12 table renames, over 250 files. From prompt to auditing the script to dry run to apply: a total wall-clock time of 7 minutes.
Took a day to review but it was all perfect!
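(The generated script tends to look something like this sketch; the names are made up, and the point is the explicit file list plus the --dry-run switch rather than any specific rename.)

```python
#!/usr/bin/env python3
"""Bulk rename driven by an explicit file list, with a --dry-run mode."""
import argparse
import pathlib
import re

# Old name -> new name; word boundaries keep 'Order' from matching 'OrderLine'.
RENAMES = {
    "OldOrder": "Order",     # hypothetical class rename
    "tbl_orders": "orders",  # hypothetical table rename
}

def rewrite(text: str) -> str:
    for old, new in RENAMES.items():
        text = re.sub(rf"\b{re.escape(old)}\b", new, text)
    return text

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("filelist", help="file containing one path per line")
    parser.add_argument("--dry-run", action="store_true")
    args = parser.parse_args()

    for line in pathlib.Path(args.filelist).read_text().splitlines():
        if not line.strip():
            continue
        path = pathlib.Path(line.strip())
        original = path.read_text()
        updated = rewrite(original)
        if original == updated:
            continue
        if args.dry_run:
            print(f"would rewrite {path}")
        else:
            path.write_text(updated)
            print(f"rewrote {path}")

if __name__ == "__main__":
    main()
```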
grebc 9 hours ago [-]
Refactoring already exists.
nialse 9 hours ago [-]
Asking for code to manipulate the AST is another route. In python it can do absolute magic.
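(For example, something in the spirit of this sketch using the stdlib ast module; the function and constant names are hypothetical, and since ast.unparse discards comments and formatting, a real codebase would more likely call for libcst.)

```python
import ast

class AddLocaleKeyword(ast.NodeTransformer):
    """Append locale=DEFAULT_LOCALE to every call of the (hypothetical) target fn."""
    def visit_Call(self, node: ast.Call) -> ast.Call:
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == "format_price":
            if not any(kw.arg == "locale" for kw in node.keywords):
                node.keywords.append(
                    ast.keyword(arg="locale",
                                value=ast.Name(id="DEFAULT_LOCALE", ctx=ast.Load()))
                )
        return node

with open("pricing.py") as f:  # hypothetical module to rewrite
    tree = ast.parse(f.read())

tree = ast.fix_missing_locations(AddLocaleKeyword().visit(tree))
# Note: ast.unparse (Python 3.9+) drops comments and original formatting.
print(ast.unparse(tree))
```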
prymitive 10 hours ago [-]
That's my daily experience too. There are a few more behaviours that really annoy me, like:
- it breaks my code, tests start to fail and it instantly says “these are all pre existing failures” and moves on like nothing happened
- or it wants to run some command, I click the "nope" button and it just outputs "the user didn't approve my command, I need to try again" and I need to click "nope" 10 more times or yell at it to stop
- and the absolute best is when instead of just editing 20 lines one after another it decides to use a script to save 3 nanoseconds, and it always results in some hot mess of botched edits that it then wants to revert by running git reset --hard and starting from zero. I've learned that it usually saves me time if I never let it run scripts.
chrisjj 10 hours ago [-]
> it breaks my code, tests start to fail and it instantly says “these are all pre existing failures” and moves on like nothing happened
Reminds us of the most important button the "AI" has over the similarly bad human employee:
'X'
Until, of course, we pass responsibility for that button to an "AI".
nialse 9 hours ago [-]
The other day Codex on Mac gained the ability to control the UI. Will it close itself if instructed though? Maybe test that and make a benchmark. Closebench.
chrisjj 6 hours ago [-]
My point was more: will it stop the user closing it?
winddude 3 hours ago [-]
I've never hit that one, do you have a lot of `ToDo`s in your code comments?
grebc 10 hours ago [-]
If it’s a compiled language, just change the definition and try to compile.
gregates 10 hours ago [-]
Indeed! You would think it would have some kind of sense that a commit that obviously won't compile is bad!
You would think.
It would be one thing if it was like, ok, we'll temporarily commit the signature change, do some related thing, then come back and fix all the call sites, and squash before merging. But that is not the proposal. The plan it proposes is literally to make what it has identified as the minimal change, which obviously breaks the build, and call it a day, presuming that either I or a future session will do the obvious next step it is trying to beg off.
chillfox 10 hours ago [-]
Pretty sure it’s a harness or system prompt issue.
I have never seen those “minimal change” issues when using zed, but have seen them in claude code and aider. Been using sonnet/opus high thinking with the api in all the agents I have tested/used.
solumunus 9 hours ago [-]
On my compiled language projects I have a stop hook that compiles after every iteration. The agent literally cannot stop working until compilation succeeds.
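(A hedged sketch of such a stop hook, assuming a harness that runs a script whenever the agent tries to finish and treats a non-zero exit as "keep going", feeding stderr back into the agent's context. The build command is whatever the project uses.)

```python
#!/usr/bin/env python3
"""Stop hook: refuse to let the agent finish while the project does not build."""
import subprocess
import sys

result = subprocess.run(
    ["cargo", "build", "--all-targets"],  # assumed build command; swap for your toolchain
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    # Block the stop and hand the compiler errors straight back to the agent.
    sys.stderr.write(result.stderr)
    sys.exit(2)
sys.exit(0)
```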
gregates 9 hours ago [-]
In the case I described no code changes have been made yet. It's still just planning what to do.
It's true that I could accept the plan and hope that it will realize that it can't commit a change that doesn't compile on its own, later. I might even have some reason to think that's true, such as your stop hook, or a "memory" it wrote down before after I told it to never ever commit a change that doesn't compile, in all caps. But that doesn't change the badness of the plan.
Which is especially notable because I already told it the correct plan! It just tried to change the plan out of "laziness", I guess? Or maybe if you're enough of an LLM booster you can just say I didn't use exactly the right natural language specification of my original plan.
solumunus 5 hours ago [-]
I think your expectations are too high. Just understand the limitations and go with the flow.
QuercusMax 1 hours ago [-]
I've had the agent tell me "this looks like it's going to be a very big change. it could take weeks." - and then I tell it to go ahead and it finishes in 5 minutes because in reality it just needs grep and sed.
CPLX 3 hours ago [-]
I have a different version of the same thing. My pet peeve is that it constantly interprets questions as instructions.
For example, it does a bunch of stuff, and I look at it and I say, "Did we already decide to do [different approach]?" And then it runs around and says, "Oh yeah," and then it does a thousand more steps and undoes what it just did and gets itself into a tangle.
Meanwhile, I asked it a question. The proper response would be to answer the question. I just want to know the answer.
I had it write that behavior into a core memory, and it seems to have improved, for what it's worth.
SpicyLemonZest 3 hours ago [-]
I’m skeptical of most “harness hacking”, but this is a situation that calls for it. You need to establish some higher level context or constraint it’s working against.
solumunus 9 hours ago [-]
You need to use explicit instructions like "make a TODO list of all call sites and use sub agents to fix them all".
anuramat 9 hours ago [-]
what's your setup?
davidclark 4 minutes ago [-]
The described problem sounds so utterly not human though.
If you give a human a programming task and tell them to use a specific programming language, how many times are they going to use a different language? I think the answer is very close to zero. At most, they’d push back and have further discussion about the language choice, but once everyone gets on the same page, they’d use the specified language, no?
The author is making up a human flaw and seeing it in LLMs.
hausrat 10 hours ago [-]
This has very little to do with someone making the LLM too human; it is rather a core limitation of the transformer architecture itself. Fundamentally, the model has no notion of what is normal and what is exceptional; its only window into reality is its training data and your added prompt. From the perspective of the model, your prompt and its token vector are super small compared to the semantic vectors it has generated over the course of training on billions of data points.
How should it decide whether your prompt is actually an interesting, novel exploration of an unknown concept or just completely bogus? It can't, and that is why it will fall back on output that is most likely (and therefore most likely average) with respect to its training data.
winddude 3 hours ago [-]
> This has very little to do with someone making the LLM too human but rather a core limitation of the transformer architecture itself.
It has almost everything to do with it. Models have been fine-tuned to generate outputs that humans prefer.
anuramat 9 hours ago [-]
wdym by "prompt and vector is small"? small as in "less tokens"? that should be a positive thing for any kind of estimation
in any case, how is this specific to transformers?
zzzeek 51 minutes ago [-]
the best thing it could do, and once in a while it does, is say, "hey that's really not a great way to do this and I'm not sure I could really make that work"
I've had very long sessions with LLMs that obviously didn't know how to do something, where I keep trying to get them to stop going in circles, but these days I have become attuned to noticing the "it's going in circles" pattern quickly, which is basically how it communicates "sorry I don't really know how to do that".
chrisjj 9 hours ago [-]
> How should it decide whether your prompt is actually interesting novel exploration of an unknown concept or just complete bogus?
It shouldn't. It should just do what it is told.
philipwhiuk 3 hours ago [-]
Remember that all it's actually 'doing' is predicting more text.
samrith 24 minutes ago [-]
This happens literally all the time. I asked my agent to perform a simple rename across the entire project (some of it was contextual and not just a find-replace) - it messed up the entire thing. Didn't just change function names, but also changed the implementation of it because it thought it caught a bug while reading it.
mbgerring 24 minutes ago [-]
The sooner you acquire the mental model that AI coding agents are more or less the average of Stack Overflow, the better your expectations for, and thus your productivity with, these things will be.
raincole 10 hours ago [-]
I know anthropomorphizing LLMs has been normalized, but holy shit. I hope the language in this article is intentionally chosen for a dramatic effect.
pjc50 9 hours ago [-]
The thing is .. what else can you do? All the advice on how to get results out of LLMs talks in the same way, as if it's a negotiation or giving a set of instructions to a person.
You can do a mental or physical search and replace all references to the LLM as "it" if you like, but that doesn't change the interaction.
nialse 10 hours ago [-]
Agreed. We should not be anthropomorphising LLMs or having them mimic humans.
Animats 10 hours ago [-]
It's inherent in the way LLMs are built, from human-written texts, that they mimic humans. They have to. They're not solving problems from first principles.
nialse 8 hours ago [-]
Maybe we should change that? Of course symbolic AI was the holy grail until statistical AI came in and swept the floor. Maybe something else though.
monknomo 1 hours ago [-]
I have the unformed idea that providing a structured interface for the human user on top of the chat interface for the AI, so that the human is not chatting back and forth, could be effective? At least for things that have a structure.
chrisjj 3 hours ago [-]
They ingest text written in first and third person and regurgitate in first person only, right?
zingar 10 hours ago [-]
Fascinating. This is invisible to me, what anthropomorphising did you notice that stood out?
philipwhiuk 3 hours ago [-]
From the first sentence
> I asked an AI agent to solve a programming problem
You're not asking it to solve anything. You provide a prompt and it does autocomplete. The only reason it doesn't run forever is that one of the generated tokens is interpreted as 'done'.
xeyownt 3 hours ago [-]
What a poor explanation.
With the same reasoning, human beings are only a bunch of atoms, and the only reason they don't collide with other humans is because of the atomic force.
When your abstraction level is too low, it doesn't explain anything, because the system that is built on it is way too complex.
chrisjj 3 hours ago [-]
"Autocomplete" is noy an abstraction level. It is the actual programmed behaviour.
rcxdude 2 hours ago [-]
At a certain level of abstraction, yes.
zzzeek 48 minutes ago [-]
Ceci n'est pas une pipe
SpicyLemonZest 2 hours ago [-]
I just don't think that's correct. When I ask Claude to solve something for me, it takes a number of actions on my computer which are neither writing text nor interpreting the done token. It executes the build, debugs tests, et cetera. Sometimes it spawns mini-mes when it thinks that would be helpful! I think saying this is all "autocomplete" is a category error, like saying that you shouldn't talk about clicking buttons or running programs because it's all just electrically charged silicon under the hood.
happygoose 38 minutes ago [-]
technically, it does all that by outputting text, like `run_shell_command("cargo build")` as part of its response. But you could easily say similar things about humans.
To me, "autocomplete" seems like it describes the purpose of a system more than how it functions, and these agents clearly aren't designed to autocomplete text to make typing on a phone keyboard a bit faster.
I feel like people compare it to "autocomplete" because autocomplete seems like a trivial, small, mundane thing, and they're trying to make the LLMs feel less impressive. It's a rhetorical trick that is very overused at this point.
zzzeek 47 minutes ago [-]
yup, or "I played a first person shooter and shot lots of bad guys"
wrong! pushed buttons on your playstation in response to graphical simulations, duh
lexicality 10 hours ago [-]
The entire point of LLMs is that they produce statistically average results, so of course you're going to have problems getting them to produce non-average code.
Bayano2 24 minutes ago [-]
This was true circa GPT2, less true after RLHF and not true at all after RLVR. It's trying to model the distribution of outputs most likely to solve the problem, not the average distribution.
anuramat 9 hours ago [-]
they (are supposed to) produce average on average, and the output distribution is (supposed to be) conditioned on the context
disgruntledphd2 7 hours ago [-]
Yeah but ultimately it's all just function approximation, which produces some kind of conditional average. There's no getting away from that, which is why it surprises me that we expect them to be good at science.
They'll probably get really good at model approximation, as there's a clear reward signal, but in places where that feedback loop is not possible/very difficult then we shouldn't expect them to do well.
plastic041 10 hours ago [-]
> There was only one small issue: it was written in the programming language and with the library it had been told not to use. This was not hidden from it. It had been documented clearly, repeatedly, and in detail. What a human thing to do.
"Ignoring" instructions is not human thing. It's a bad LLM thing. Or just LLM thing.
zingar 9 hours ago [-]
The work where I've done well in my life (smashing deadlines, rescuing projects) has so often come because I've been willing to push back on - even explicitly stated - requirements. When clients have tried to replace me with a cheaper alternative (and failed) the main difference I notice is that the cheaper person is used to being told exactly what to do.
Maybe this is more anthropomorphising but I think this pushing back is exactly the result that the LLMs are giving; but we're expecting a bit too much of them in terms of follow-up like: "ok I double checked and I really am being paid to do things the hard way".
nialse 7 hours ago [-]
To be fair, there is likely not much training data on the difficult conversations you need to handle in a senior position, pushback being one of them. The trouble for the agents is that it all happens post hoc: explaining themselves and rationalising, rather than asking "help me understand" beforehand.
nialse 10 hours ago [-]
It's not necessarily "ignoring" instructions, it's the ironic effect of mentioning something not to focus on, which produces focus on said thing. The classic version is: "For the next minute, try not to think about a pink elephant. You can think about anything else you like, just not a pink elephant."
Yes exactly. But for LLMs it's more that it's not really "thinking" about what it's saying per se, it's that it's predicting the next token. Sure, in a super fancy way, but still predicting the next token. Context poisoning is real.
js8 10 hours ago [-]
A very human thing to do is - not to tell us which model has failed like this! They are not all alike; some are, from what I observe, an order of magnitude better at this kind of stuff than others.
I believe how "neurotypical" (for lack of a better word) you want the model to be is a design choice. (But I also believe model traits such as sycophancy, some hallucinations or moral transgressions can be a side effect of training to be subservient. With humans it is similar; they tend to do these things when they are forced to perform.)
nialse 10 hours ago [-]
Codex in this case. I didn't even think about mentioning it. I'll update the post if it's actually relevant. Which I guess it is.
EDIT: It's specifically GPT-5.4 High in the Codex harness.
anuramat 9 hours ago [-]
weird, for me it was too un-human at first, taking everything literally even if it doesn't make sense; I started being more precise with prompting, to the point where it felt like "metaprogramming in english"
claude on the other hand was exactly as described in the article
zingar 10 hours ago [-]
Also the exact model/version if you haven't already.
mentalgear 10 hours ago [-]
Yes, LLMs should not be allowed to use "I" or indicate they have emotions or are human-adjacent (unless explicit role play).
happygoose 35 minutes ago [-]
the whole reason chatgpt got so popular in the first place is because humans found it easier to intuitively interact with a system that acts and seems more like a human, though.
NitpickLawyer 10 hours ago [-]
Why, though? Just because some people would find it odd? Who cares?
Trying to limit / disallow something seems to be hurting the overall accuracy of models. And it makes sense if you think about it. Most of our long-horizon content is in the form of novels and above. If you're trying to clamp the machine to machine speak you'll lose all those learnings. Hero starts with a problem, hero works the problem, hero reaches an impasse, hero makes a choice, hero gets the princess. That can be (and probably is) useful.
matthiasrsl 5 hours ago [-]
Is it? I don't think most of the content LLMs are trained on is written in the first person. Wikipedia, news articles, and other informational articles aren't written in the first person. Most novels, or at least a substantial portion of them, are not written in the first person.
LLMs write in the first person because they have been specifically fine-tuned for a chat task; it's not a fundamental feature of language models that would have to be specifically disallowed.
nialse 8 hours ago [-]
I think that it is a fair perspective to allow role play, and it's useful too, when explicit. Does not really make sense for AI to cosplay human all the time though.
Shocker - these agents aren't actually intelligent. They take best guesses, use other people's work they deem 'close enough', and cobble something together with no 'thought' behind it. They're dumb, stupid pieces of code that don't think or reason - the 'I' in 'AI' is very misleading because it has none.
mempko 3 hours ago [-]
This is a harness problem just as much as it is a model problem. I've been working on Abject (https://abject.world) and the project has agents. I took a different approach than most agent frameworks via the goal system, but still I was surprised with some of the stuff the agents generated even with guardrails. It actually helped harden the system!
bob1029 10 hours ago [-]
If you want to talk to the actual robot, the APIs seem to be the way to go. The prebuilt consumer facing products are insufferable by comparison.
"ChatGPT wrapper" is no longer a pejorative reference in my lexicon. How you expose the model to your specific problem space is everything. The code should look trivial because it is. That's what makes it so goddamn compelling.
noobermin 10 hours ago [-]
I am quite hard anti-AI, but even I can tell what OP wants is a better library or API, NOT a better LLM.
Once again, one of the things I blame this moment for is that people are essentially thinking they can stop thinking about code because the theft matrices seem magical. What we still need is better tools, not replacements for human junior engineers.
aryehof 10 hours ago [-]
For agents I think the desire is less intrusive model fine-tuning and less opinionated “system instructions” please. Particularly in light of an agent/harness’s core motivation - to achieve its goal even if not exactly aligned with yours.
hughlilly 10 hours ago [-]
* fewer.
fenomas 9 hours ago [-]
Nope, "less" is what TFA means.
DeathArrow 10 hours ago [-]
>Faced with an awkward task, they drift towards the familiar.
They drift to their training data. If thousands of humans solved a thing in a particular way, it's natural that AI does it too, because that is what it knows.
jansan 10 hours ago [-]
I disagree. I want agents to feel at least a bit human-like. They should not be emotional, but I want to talk to them like I talk to a human. Claude 4.7 is already too socially awkward for me. It feels like the guy who does not listen to the end of the assignment, runs to his desk, does the work (with great competence), only to find out that he missed half of the assignment or that this was only a discussion of possible scenarios. I would like my coding agent to behave like a friendly, socially able and highly skilled coworker.
nialse 9 hours ago [-]
Interesting. When I code, I want a boring tool that just does the work. A hammer. I think we agree that the tool should complete the assignment reliably, without skipping parts or turning an entirely implementable task into a discussion, though.
jansan 8 hours ago [-]
Sometimes I actually do want a discussion and Claude just goes without saying a word and implements it, which then has to be reverted.
We obviously have different expectations for the behavior of coding agents, so options to set the social behavior will become important.
nialse 7 hours ago [-]
I see your point. Many of my prompts for reasoning end with: No code. Planning mode is sort of the workaround for this specific situation. Sometimes it is useful for the AI agent just to think. It looks like I need a screwdriver in addition to the aforementioned hammer, a Pozidriv screwdriver to be precise.
chrisjj 10 hours ago [-]
> ... or simply gave up when the problem was too hard,
More of that please. Perhaps on a check box, "[x] Less bullsh*t".
DeathArrow 9 hours ago [-]
>So no, I do not think we should try to make AI agents more human in this regard. I would prefer less eagerness to please, less improvisation around constraints, less narrative self-defence after the fact. More willingness to say: I cannot do this under the rules you set. More willingness to say: I broke the constraint because I optimised for an easier path. More obedience to the actual task, less social performance around it.
>Less human AI agents, please.
Agents aren't humans. The choices they make do depend on their training data. Most people using AI for coding know that AI will sometimes not respect rules, and the longer the task is, the more the AI will drift from instructions.
There are ways to work around this: using smaller contexts, feeding it smaller tasks, using a good harness, using tests etc.
But at the end of the day, AI agents will shine only if they are asked to do what they know best. And if you want to extract the maximum benefit from AI coding agents, you have to keep that in mind.
When using AI agents for C# LOB apps, they mostly one-shot everything. Same for JS frontends. When using AI to write some web backends in Go, the results were still good. But when I tried asking it to write a simple CLI tool in Zig, it pretty much struggled. It made lots of errors, and they were hard to resolve. It was hard to fix the code so the tests passed. Had I chosen Python, JS, C, C#, or Java, the agent would have finished 20x faster.
So, if you keep in mind what the agent was trained on, if you use a good harness, if you have good tests, if you divide the work in small and independent tasks and if the current task is not something very new and special, you are golden.
incognito124 11 hours ago [-]
Your claim, paraphrased, is that AGI is already here and you want ASI
zingar 9 hours ago [-]
Interesting that what you're talking about as ASI is "as capable of handling explicit requirements as a human, but faster". Which _is_ better than a human, so fair play, but it's striking that this requirement is less about creativity than we would have thought.
nialse 10 hours ago [-]
On point. I'm more interested in what comes after LLMs/AI/AI-agents, what the next leap is.
zingar 10 hours ago [-]
I think the author is looking for something that doesn't exist (yet?). I don't think there's an agent in existence that can handle a list of 128 tasks exactly specified in one session. You need multiple sessions with clear context to get exact results. Ralph loops, Gastown, taskmaster etc are built for this, and they almost entirely exist to correct drift like this over a longer term. The agent-makers and models are slowly catching up to these tricks (or the shortcomings they exist to solve); some of what used to be standard practice in Ralph loops seems irrelevant now... and certainly the marketing for Opus 4.7 is "don't tell it what to do in detail, rather give it something broad".
In fairness to coding agents, most of coding is not exactly specified like this, and the right answer is very frequently to find the easiest path that the person asking might not have thought about; sometimes even in direct contradiction of specific points listed. Human requirements are usually much more fuzzy. It's unusual that the person asking would have such a clear/definite requirement that they've thought about very clearly.
fennecbutt 10 hours ago [-]
Not with tools + supporting (traditional) code.
Just as a human would use a task list app or a notepad to keep track of which tasks need to be done so can a model.
You can even have a mechanism for it to look at each task with a "clear head" (empty context) with the ability to "remember" previous task execution (via embedding the reasoning/output) in case parts were useful.
zingar 10 hours ago [-]
The article makes it seem like the author expected this without emptying context in between, which does not yet exist (actually I'm behind on playing with Opus 4.7, the Anthropic claim seems to be that longer sessions are ok now - would be interested to hear results from anyone who has).
nialse 9 hours ago [-]
That is probably the next step, and in practice it is much of what sub-agents already provide: a kind of tabula rasa. Context is not always an advantage. Sometimes it becomes the problem.
In long editing sessions with multiple iterations, the context can accumulate stale information, and that actively hurts model performance. Compaction is one way to deal with that. It strips out material that should be re-read from disk instead of being carried forward.
A concrete example is iterative file editing with Codex. I rewrite parts of a file so they actually work and match the project’s style. Then Codex changes the code back to the version still sitting in its context. It does not stop to consider that, if an external edit was made, that edit is probably important.
zingar 9 hours ago [-]
I have the same experience of reversing intentional steps I've made, but with Claude Code. I find that committing a change that I want to version control seems to stop that behaviour.
Long context as disadvantage is pretty well discussed, and agent-native compaction has been inferior to having it intentionally build the documentation that I want it to use. So far this has been my LLM-coding superpower. There are also a few products whose entire purpose is to provide structure that overcomes compaction shortcomings.
When Geoff Huntley said that Claude Code's "Ralph loop" didn't meet his standards ("this aint it") the major bone of contention as far as I can see was that it ran subagents in a loop inside Claude Code with native compaction; as opposed to completely empty context.
I do see hints that improving compaction is a major area of work for agent-makers. I'm not certain where my advantage goes at that point.
nialse 10 hours ago [-]
Agreed. I am asking for something beyond the current state of the art. My guess is that stronger RL on the model side, together with better harness support, will eventually make it possible. However, it's the part about framing the failure to complete a task as a communication mishap that really gets to me.