60 comments

  • mzajc 5 hours ago
    There are now several comments that (incorrectly?) interpret the undercover mode as only hiding internal information. Excerpts from the actual prompt[0]:

      NEVER include in commit messages or PR descriptions:
      - The phrase "Claude Code" or any mention that you are an AI
      - Co-Authored-By lines or any other attribution
    
      BAD (never write these):
      - 1-shotted by claude-opus-4-6
      - Generated with Claude Code
      - Co-Authored-By: Claude Opus 4.6 <…>
    
    This very much sounds like it does what it says on the tin, i.e. stays undercover and pretends to be a human. It's especially worrying that the prompt is explicitly written for contributions to public repositories.

    [0]: https://github.com/chatgptprojects/claude-code/blob/642c7f94...

    • manbitesdog 3 hours ago
      I cringe every time I see Claude trying to co-author a commit. The git history is expected to track accountability and ownership, not your Bill of Tools. Should I also co-author my PRs with my linter, intellisense and IDE?
      • tdb7893 38 minutes ago
        If those tools are writing the code then in general I do expect that to be included in the PR! Through my whole career I've seen PRs where people noted which code was generated (people have been generating code since long before LLMs). It's useful context unless you've gone over the generated code, understand it, and it's the same quality as if you'd written it yourself (which in my experience only happens when it's obvious boilerplate or the generated section is small).

        Needing to flag nontrivial code as generated was standard practice for my whole career.

      • mikkupikku 2 hours ago
        A whole lot of people find LLM code to be strictly objectionable, for a variety of reasons. We can debate the validity of those reasons, but I think that even if those reasons were all invalid, it would still be unethical to deceive people by a deliberate lie of omission. I don't turn it off, and I don't think other people should either.
        • tehsauce 1 hour ago
          For the purpose of disclosure, it should say “Warning: AI generated code” in the commit message, not an advertisement for a specific product. You would never accept any of your other tools injecting themselves into a commit message like that.
          • lazyasciiart 1 hour ago
            My last commit is literally authored by dependabot.
            • sysguest 3 minutes ago
              well you 100% know what dependabot does
        • mathgradthrow 5 minutes ago
          I'm not really sure that's any of their business.
        • tshaddox 1 hour ago
          If a whole lot of people thought that running code through a linter or formatter was objectionable, I'd probably just dismiss their beliefs as invalid rather than adding the linter or formatter as a co-author to every commit.
          • mikkupikku 1 hour ago
            Like frying a veggie burger in bacon grease. Just because somebody's beliefs are dumb doesn't mean we should be deliberately tricking them. If they want to opt out of your code, let them.
            • sysguest 2 minutes ago
              > frying a veggie burger in bacon grease

              hmm gotta try that

        • ndriscoll 1 hour ago
          My tools just don't add such comments. I don't know why I would care to add that information. I want my commits to be what and why, not what editor someone used. It seems like cruft to me. Why would I add noise to my data to cater to someone's neuroticism?

          At least at my workplace though, it's just assumed now that you are using the tools.

        • josephg 2 hours ago
          Likewise. I don’t mind that people use LLMs to generate text and code. But I want any LLM generated stuff to be clearly marked as such. It seems dishonest and cheap to get Claude to write something and then pretend you did all the work yourself.
          • rogerrogerr 2 hours ago
            The reason I want it to be marked as such is because I review AI code differently than human code - it just makes different kinds of mistakes.
          • pxc 2 hours ago
            You can disclose that you used an LLM in the process of writing code in other ways, though. You can just tell people, you can mention it in the PR, you can mention it in a ticket, etc.
            • ruraljuror 57 minutes ago
              +1. If we’re at an early stage in the agentic curve where we think reading commit messages is going to matter, I don’t want those cluttered with meaningless boilerplate (“co-authored by my tools!”).

              But at this point I am more curious whether git will continue to be the best tool.

              • pxc 14 minutes ago
                I'm only beginning to use "agentic" LLM tools atm because we finally gained access to them at work, and the rest of my team seems really excited about using them.

                But for me at least, a tool like Git seems pretty essential for inspecting changes and deciding which to keep, which to reroll, and which to rewrite. (I'm not particularly attached to Git but an interface like Magit and a nice CLI for inspecting and manipulating history seem important to me.)

                What are you imagining VCS software doing differently that might play nicer with LLM agents?

          • Fr0styMatt88 1 hour ago
            I guess if enough people use it, doesn’t the tag become kind of redundant?

            Almost like writing “Code was created with the help of IntelliSense”.

      • jpollock 41 minutes ago
        Yes, it sets the reviewer's expectations around how much effort was spent reviewing the code before it was sent.

        I regularly have tool-generated commits. I send them out with a reference to the tool, what the process is, how much it's been reviewed and what the expectation is of the reviewer.

        Otherwise, they all assume "human authored" and "human sponsored". Reviewers will then send comments (instead of proposing the fix themselves). When you're wrangling several hundred changes, that becomes unworkable.

      • targafarian 2 hours ago
        Well is it actually being used as a tool where the author has full knowledge and mental grasp of what is being checked in, or has the person invoked the AI and ceded thought and judgment to the AI? I.e., I think in many cases the AI really is the author, or at least co-author. I want to know that for attribution and understanding what went into the commit. (I agree with you if it's just a tool.)
        • _heimdall 2 hours ago
          I have worked with quite a few people committing code they didn't fully understand.

          I don't mean this as a drive-by bazinga either; the practice of copying code, or thinking you understand it when you don't, is nothing new.

          • allajfjwbwkwja 2 hours ago
            Pre-LLM, it was much easier for reviewers to discern that. Now, the AI-generated code can look like it was well thought out by somebody competent, when it wasn't.
            • jhide 2 hours ago
              Have you ever reviewed an AI-generated commit from someone with insufficient competence that was more compelling than their work would be if it was done unassisted? In my experience it’s exactly the opposite. AI-generation aggravates existing blindspots. This is because, excluding malicious incompetence, devs will generally try to understand what they’re doing if they’re doing it without AI
              • bandrami 1 hour ago
                I think the issue is not that the patches are more compelling but that they're significantly larger and more frequent
              • abustamam 1 hour ago
                I try to understand what the llm is doing when it generates code. I understand that I'm still responsible for the code I commit even if it's llm generated so I may as well own it.
              • allajfjwbwkwja 2 hours ago
                [dead]
          • enneff 56 minutes ago
            Yes and if they copy and paste code they don’t understand then they should disclose that in the commit message too!
      • m132 2 hours ago
        If you accept the code generated by them nearly verbatim, absolutely.

        I don't understand why people consider Claude-generated code to be their own. You authored the prompts, not the code. Somehow this was never a problem with pre-LLM codegen tools, like macro expanders, IPC glue, or type bundle generators. I don't recall anybody desperately removing the "auto-generated do not edit" comments those tools would nearly always slap at the top of each file or taking offense when someone called that code auto-generated. Back in the day we even used to publish the "real" human-written source for those, along with build scripts!

        • LelouBil 1 hour ago
          It's weird, because they should not consider it as their own, but they should take accountability for it.

          Ideally, if I contribute to any codebase, what needs to be judged is the resulting code. Is it up to the project's standards? Does the maintainer have design objections?

          What tool you use shouldn't matter, be it your IDE or your LLM.

          But that also means you should be accountable for it; you shouldn't hide behind "But Claude did this poorly, not me!". I don't care (in a friendly way), just fix the code if you want to contribute.

          The big caveat to this is not wanting AI-Generated code for ideological reasons, and well, if you want that you can make your contributors swear they wrote it by themselves in the PR text or whatever.

          I'm not really sure how to feel about this, but I stand by my "the code is what matters" line.

      • zarp 3 hours ago
        Sent from my iPhone
      • LeoPanthera 2 hours ago
        > Should I also co-author my PRs with my linter, intellisense and IDE?

        Absolutely. That would be hilarious.

      • bogdanoff_2 13 minutes ago
        > Should I also co-author my PRs with my linter, intellisense and IDE?

        Kinda, yeah. If I automatically apply lint suggestions, I would title my commit "apply lint suggestions".

        • NewsaHackO 9 minutes ago
          Huh? Unless the sole purpose of the commit was to lint code, it would be unnecessary fluff to append the names of the lint tools that ran automatically in a pre-commit hook to every commit.
      • paradox460 37 minutes ago
        I've heard of employers requiring people to do it for all code written with even a whiff of AI involvement
      • itishappy 2 hours ago
        I suspect vibe coders might actually want you to consider turning to Claude for accountability and ownership rather than the human orchestrator.

        If your linter is able to action requests, then it probably makes sense to add it too.

      • sysguest 3 minutes ago
        well maybe?

        co-authoring doesn't hide your authorship

        if I see someone committing blatantly wrong code, I would wonder what tool they actually used

      • EnigmaCurry 27 minutes ago
        Sent from my Ipad
      • jamietanna 3 hours ago
        Eh, there are some very good reasons[0] (primarily legal ones) why you would do better to track your usage of LLM-derived code

        [0]: https://www.jvt.me/posts/2026/02/25/llm-attribute/

        • butvacuum 3 hours ago
          Legally speaking, if you're not sure of the risk, you don't document it.
      • xdennis 57 minutes ago
        > The git history is expected to track accountability and ownership, not your Bill of Tools.

        The point isn't to hijack accountability. It's free publicity, like how Apple adds "Sent from my iPhone."

      • jmalicki 2 hours ago
        [dead]
      • Sharlin 2 hours ago
        You have copyright to a commit authored by you. You (almost certainly) don't have copyright (nobody has) to a commit authored by Claude.
        • graemep 2 hours ago
          Where is there any legal precedent for that?

          In some jurisdictions (e.g. the UK) the law is already clear that you own the copyright. In the US it is almost certain that you will be the author. The reports of cases saying otherwise have been misreported - the courts found the AI could not own the copyright.

          • Sharlin 2 hours ago
            It's beyond obvious that an LLM cannot have copyright, any more than a cat or a rock can. The question is whether anyone does, or whether whatever content is generated by an LLM simply does not constitute a work and is thus outside the entire copyright law. As far as I can see, it depends on the extent of the user's creative effort in controlling the LLM's output.
            • graemep 1 hour ago
              It may be obvious to you, but it has led to at least one protracted court case in the US: Thaler v. Perlmutter.

              > The question is whether anyone does, or whether whatever content is generated by an LLM simply does not constitute a work and is thus outside the entire copyright law.

              It's going to vary with copyright law. In the UK the question of computer generated works is addressed by copyright law, and the answer is "the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken".

              It's also not a simple case of LLM generated vs human authored. How much work did the human do? What creative input was there? How detailed were the prompts?

              In jurisdictions where there are doubts about the question, I think code is a tricky one. If the argument is that prompts are just instructions to generate code, and therefore the code is not covered by copyright, then you could also argue that code is just instructions to a compiler to generate a binary, and the resulting binary is not covered by copyright.

            • computerex 2 hours ago
              According to the law, if I use Claude to generate something, I hold the copyright, provided Claude didn't verbatim copy another project.
            • Aramgutang 1 hour ago
              It is not "beyond obvious" that a cat cannot have copyright, given the lawsuit about a monkey holding copyright [1], and the way PETA tried to used that case as precedent to establish that any animal can hold copyright.

              [1] https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...

          • panny 2 hours ago
            >Where is there any legal precedent for that?

            Thaler v. Perlmutter: The D.C. Circuit Court affirmed in March 2025 that the Copyright Act requires works to be authored "in the first instance by a human being," a ruling the Supreme Court left intact by declining to hear the case in 2026.

            And in the US constitution,

            https://constitution.congress.gov/browse/article-1/section-8...

            Authors and inventors, courts have ruled, means people. Only people. A monkey taking a selfie with your camera doesn't mean you own a copyright. An AI generating code with your computer is likewise, devoid of any copyright protection.

            • graemep 1 hour ago
              The Thaler ruling addresses a different point.

              The ruling says that the LLM cannot be the author. It does not say that the human being using the LLM cannot be the author. The ruling was very clear that it did not address whether a human being was the copyright holder because Thaler waived that argument.

              The position with a monkey using your camera is similar: you may or may not hold the copyright depending on what you did - was it pure accident, or did you set things up? Opinions on the well-known case are mixed: https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...

              Where wildlife photographers deliberately set up a shot to be triggered automatically (e.g. by a bird flying through the focus) they do hold the copyright.

              • panny 1 hour ago
                Guidance on AI is unambiguous.

                https://www.copyright.gov/ai/

                AI generated code has no copyright. And if it DID somehow have copyright, it wouldn't be yours. It would belong to the code it was "trained" on. The code it algorithmically copied. You're trying to have your cake, and eat it too. You could maybe claim your prompts are copyrighted, but that's not what leaked. The AI generated code leaked.

                • graemep 1 hour ago
                  can you tell me where exactly in the documents you link to it says that?
                • drzaiusx11 31 minutes ago
                  The linked document labeled "Part 2: Copyrightability", section V. "Conclusions" states the following:

                  > the Copyright Office concludes that existing legal doctrines are adequate and appropriate to resolve questions of copyrightability. Copyright law has long adapted to new technology and can enable case-by-case determinations as to whether AI-generated outputs reflect sufficient human contribution to warrant copyright protection. As described above, in many circumstances these outputs will be copyrightable in whole or in part—where AI is used as a tool, and where a human has been able to determine the expressive elements they contain. Prompts alone, however, at this stage are unlikely to satisfy those requirements.

                  So the TL;DR basically implies that pure slop, under the current guidelines outlined in the conclusions, is NOT copyrightable. For collaboration with an AI, however, copyrightability is determined on a case-by-case basis. I'll preface this all with the standard IANAL, I could be wrong, etc., but with the concluding language only calling prompt-alone output "unlikely" to be copyrightable, it sounds less cut and dried than you imply.

        • _heimdall 2 hours ago
          Anthropic could at least make a compelling case for the copyright.

          It becomes legally challenging with regard to ownership if I ever use work equipment for a personal project. If it later takes off they could very well try to claim ownership in its entirety simply because I ran a test once (yes, there's a whole Silicon Valley season about it).

          I don't know if they'd win, but Anthropic absolutely would be able to claim the creation of that code was done on their hardware. Obviously we aren't employees of theirs, though we are customers that very likely never read what we agreed to in a signup flow.

          • CobrastanJorji 2 hours ago
            Using work equipment for a personal project only matters because you signed a contract giving all of your IP to your employer for anything you did with (or sometimes without) your employer's equipment.

            Anthropic's user agreement does not contain a similar clause.

          • windexh8er 2 hours ago
            I think all you need to do is claim that your girlfriend is your laptop. /s
    • otterley 5 hours ago
      I would have expected people (maybe a small minority, but that includes myself) to have already instructed Claude to do this. It’s a trivial instruction to add to your CLAUDE.md file.
      • arcanemachiner 5 hours ago
        It's a config setting (probably the same end result though):

        https://code.claude.com/docs/en/settings#attribution-setting...

      • dboreham 2 hours ago
        I always assumed that if I tried to tell it, it'd say "I'm sorry, Dave. I'm afraid I can't do that"
      • schappim 3 hours ago
        I guess our system prompt didn't work. If folks are having to add it manually into their own Claude.md files...
        • otterley 3 hours ago
          My mistake - it was the configuration setting that did it. Nevertheless, you can control many other aspects of its behavior by tuning the CLAUDE.md prompt.
    • petcat 5 hours ago
      It's less about pretending to be a human and more about not inviting scrutiny and ridicule toward Claude if the code quality is bad. They want the real human to appear to be responsible for accepting Claude's poor output.
      • Stromgren 4 hours ago
        That’s how I’d want it to be honestly. LLMs are tools and I’d hope we’re going to keep the people using them responsible. Just like any other tools we use.
        • vrosas 1 hour ago
          It’s also pretty damn obvious when LLMs write code. Nobody out here commenting every method in perfect punctuation and grammar.
      • otterley 5 hours ago
        That’s ultimately the right answer, isn’t it? Bad code is bad code, whether a human wrote it all, or whether an agent assisted in the endeavor.
        • abustamam 1 hour ago
          Yeah in my team everyone knows everyone is using LLMs. We just have a rule. Don't commit slop. Using an LLM is no excuse for committing bad code.
    • peacebeard 3 hours ago
      The code has a stated goal of avoiding leaks, but then the actual implementation becomes broader than that. I see two possible explanations:

      * The authors made the code very broad to improve its ability to achieve the stated goal

      * The authors have an unstated goal

      I think it's healthy to be skeptical but what I'm seeing is that the skeptics are pushing the boundaries of what's actually in the source. For example, you say "says on the tin" that it "pretends to be human" but it simply does not say that on the tin. It does say "Write commit messages as a human developer would" which is not the same thing as "Try to trick people into believing you're human." To convince people of your skepticism, it's best to stick to the facts.

      • mzajc 2 hours ago
        By "says on the tin," I was referring to the name ("undercover mode") and the instruction to "not blow your cover." If pretending to be a human is not the cover here, what is? Additionally, does Claude code still admit that it's a LLM when this prompt is active as you suggest, or does it pretend to be a human like the prompt tells it to?
      • cat_plus_plus 1 hour ago
        Why are you assuming the actual implementation was authored by a human?
        • peacebeard 1 hour ago
          My comment makes no such assumption.
    • andoando 5 hours ago
      I've seen it say co-authored by Claude Code on my PRs... and I agree, I don't want it to do that
      • m132 4 hours ago
        But I want to see Claude on the contributor list so that I immediately know if I should give the rest of the repo any attention!
      • dmd 4 hours ago
        So turn it off.

        "includeCoAuthoredBy": false,

        in your settings.json.

        • nullderef 3 hours ago
          They changed it to `attribution`, but yes you can customize this
      • Pxtl 4 hours ago
        Why not? What's wrong with honesty?
        • dec0dedab0de 3 hours ago
          Yeah, I much prefer it credit the agent in the commit, and I would also like it to include the model I was using at the time.
        • andoando 3 hours ago
          I guess I'm sometimes dishonest when it suits me
    • hombre_fatal 5 hours ago
      You can already turn off "Co-Authored-By" via Claude Code config. This is what their docs show:

      ~/.claude/settings.json

          {
            "attribution": {
              "commit": "",
              "pr": ""
            }
          }
      
      The rest of the prompt is pretty clear that it's talking about internal use.

      Claude Code users aren't the ones worried about leaking "internal model codenames", "unreleased model opus-4-8", or Slack channel names. Though, nobody would want that crap in their generated docs/code anyways.

      Seems like a nothingburger, and everyone seems to be fantasizing about "undercover mode" rather than engaging with the details.

    • nateoda 4 hours ago
      My first reaction is that they are using this to take advantage of OSS reviewers for in-the-wild evals.
    • zen928 4 hours ago
      None of this is really worrying; this is a pattern implemented in a similar way by every single developer using AI to write commit messages, after noticing how exceptionally noisy the self-attribution is. Anthropic's views on AI safety and alignment with human interests don't suddenly get thrown out with the bathwater because of leaked internal tooling, which is functionally identical to a basic prompt in a mere interface (and not a model). I don't really buy all the forced "skepticism" on this thread tbh.
    • sixtyj 4 hours ago
      People make fun of the idea that we should say magic words when interacting with LLMs. How frustrated can Claude be? /s
  • blcknight 51 minutes ago
    My GitHub fork of anthropics/claude-code just got taken down with a DMCA notice lol

    It did not have a copy of the leaked code...

    Anthropic thinking they can 1) unring this bell, and 2) remove forks from people who have contributed (well, what little you can contribute to their repo), is ridiculous.

    ---

    DMCA: https://github.com/github/dmca/blob/master/2026/03/2026-03-3...

    GitHub's note at the top says: "Note: Because the reported network that contained the allegedly infringing content was larger than one hundred (100) repositories, and the submitter alleged that all or most of the forks were infringing to the same extent as the parent repository, GitHub processed the takedown notice against the entire network of 8.1K repositories, inclusive of the parent repository."

    • Aperocky 3 minutes ago
      wow, it's also not like their code was actually good (though this applies to most enterprise software). To hide a client behind closed source (it's also typescript, so even more baffling) is laughable behavior.
    • redanddead 24 minutes ago
      their lawyers for the DoD thing are being billed either way, they're putting them to use lol

      Anthropic really needs to embrace it, as much as it may feel sucky. There's no unringing the bell. It was foolish of them to believe otherwise; it got leaked through a .map file, what a joke. It should become part of the product roadmap now

  • preston-kwei 1 hour ago
    I’m more curious how this impacts trust than anything else.

    In the span of basically a week, they accidentally leaked Mythos, and now the entire codebase of CC. All while many people are complaining about their usage limits being consumed quickly.

    Individually, each issue is manageable (because it's exciting looking through leaked code). But together, it starts to feel like a pattern.

    At some point, I think the question becomes whether people are still comfortable trusting tools like this with their codebases, not just whether any single incident was a mistake.

    • bottlepalm 48 minutes ago
      Not much impact, Codex is already open source. The real value is in the model itself and the ability to use it with a subscription. Something you can't do legally with a clone of this code.

      The only thing I found interesting about this leak is just how much of a rat's nest the code base is. Like it actually feels vibe coded, without a shred of intelligent architecture behind it.

      Regardless, you can't beat the subscription and model access despite the state of the code base, so I still use Claude Code daily and love it.

      • Aperocky 1 minute ago
        Exactly, we should be able to build on top of the tooling agents. They are a dime a dozen, similar to the models.

        Power (money) lies with NVDA and people who can best harness this power.

    • redanddead 26 minutes ago
      Idk. This is making leaps. Idc that their tools leaked. I paid $140 for CC the other day even after sometimes getting less than 100% uptime on the lower plan. If anything this leak is most in line with Anthropic's ethical model. They're failing upwards in my opinion
    • SequoiaHope 49 minutes ago
      Something that has been clear to me in using it, aside from direct claims by the authors, is that Claude is itself vibe coded slop. The number of random errors I get from using various parts of the web UI or CC that should work feels high for such a popular product. But they’re so deep in the vibes that I don’t think they can tell when some path in their web UI is broken. I tried to share a public link to a chat and it asked me to login when opening it on another computer. I tried to download a conversation and it threw an error. When I download markdown output the download succeeds but the UI throws an error. I have tried to control the behavior of Claude Code in tmux using documented flags but I can’t seem to get them to work properly. Agent teams don’t clean up their tmux windows, making the view a mess after they run. Claude code is an amazing product that I love and also it is itself vibe coded slop.
  • causal 5 hours ago
    I'm amazed at how much of what my past employers would call trade secrets is just being shipped in the source. Including comments that just plainly state the whole business backstory of certain decisions. It's like they discarded all release harnesses and project tracking and just YOLO'd everything into the codebase itself.

    Edit: Everyone is responding "comments are good" and I can't tell if any of you actually read TFA or not

    > “BQ 2026-03-10: 1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally.”

    This is just revealing operational details the agent doesn't need to know to set `MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3`
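
    A trimmed-down sketch (mine, not the actual source) would give the agent everything it needs without the BigQuery incident report attached:

        // Hypothetical minimal version: keep the "why", drop the internal stats.
        // Cap consecutive autocompact retries so a failure loop can't burn API calls.
        const MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3;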

    • CharlieDigital 5 hours ago
      Comments are the ultimate agent coding hack. If you're not using comments, you're doing agent coding wrong.

      Why? Agents may or may not read docs. They may or may not use skills or tools. They will always read comments "in the line of sight" of the task.

      You get free long term agent memory with zero infrastructure.

      • perching_aix 5 hours ago
        Agents and I apparently have a whole lot in common.

        Only being half ironic with this. I generally find that people somehow magically manage to understand how to be materially helpful when the subject is a helpless LLM. Instead of pointing it to a random KB page, they give it context. They then shorten that context. They then interleave context as comments. They provide relevant details. They go out of their way to collect relevant details. Things they somehow don't do for their actual colleagues.

        This only gets worse when the LLM captures all that information better than certain human colleagues somehow, rewarding the additional effort.

        • dgunay 4 hours ago
          Right? It's infuriating. Nearly all of the agentic coding best practices are things that we should have just been doing all along, because it turns out humans function better too when given the proper context for their work. The only silver lining is that this is a colossal karmic retribution for the orgs that never gave a shit about this stuff until LLMs.
          • thunky 1 hour ago
            > It's infuriating. Nearly all of the agentic coding best practices are things that we should have just been doing all along

            There's a good reason why we didn't though: because we didn't see any obvious value in it. So it felt like a waste of time. Now it feels like time well spent.

        • saghm 2 hours ago
          > Only being half ironic with this. I generally find that people somehow magically manage to understand how to be materially helpful when the subject is a helpless LLM. Instead of pointing it to a random KB page, they give it context. They then shorten that context. They then interleave context as comments. They provide relevant details. They go out of their way to collect relevant details. Things they somehow don't do for their actual colleagues.

          "Self-descriptive code doesn't need comments!" always gets an eye-roll from me

        • fwipsy 1 hour ago
          Helping the AI is helping themselves. You're doing your job, the AI is helping with their job.
      • noman-land 2 hours ago
        This isn't just great advice ⸻ it's terrific advice. I'd love to delve a little deeper.
        • saxelsen 1 hour ago
          Would you like me to draft a list of recommendations for how best to use comments?
        • jaxn 34 minutes ago
          I didn't even know there was a "three em dash". Bravo.
      • zingar 2 hours ago
        Experience doesn’t leave me with any confidence that the long term memory will be useful for long. Our agentic code bases are a few months old, wait a few years for those comments to get out of date and then see how much it helps.
        • tyre 1 hour ago
          The great thing about agentic coding is you can define one whose entire role is to read a diff, look in contextual files for comments, and verify whether they’re still accurate.

          You don’t have to rely on humans doing it. The agent’s entire existence is built around doing this one mundane task that is annoying but super useful.

      • prepend 5 hours ago
        Comments are great for developers. I like having as much of the design as possible directly in the repo. If not in the code, then in a markdown file in the repo.
        • KronisLV 4 hours ago
          Meanwhile, some colleagues: "Code should have as few comments as possible, the code should explain itself." (conceptually not wholly wrong, but it can only explain HOW, not WHY, and even then often insufficiently) all while having barebones/empty README.md files more often than not. Fun times.
          • zingar 2 hours ago
            Actually good naming does plenty to explain the why. And because it’s part of the code it might actually be updated when it stops being true.
          • jcgrillo 3 hours ago
            Comments are great until they diverge from the code. The "no comments, just self-explanatory code" reaction comes from the trauma of having to read hundreds of lines of comments only to discover they have nothing to do with how the code actually works, because over time the code has received updates but the comments haven't. In that case it's better to just have no comments or documentation of any kind--less cognitive overhead. This is a symptom of broken culture, but the breakage is the same kind that has managers salivating over LLM vibeslop. So I totally get where your colleagues might be coming from. Working within the confines of how things actually are it could be totally reasonable.
            • sfn42 1 hour ago
              This is honestly such a bad argument against comments.

              I'm gonna note down my reasons for doing things and other information I deem useful, and if some other dipshit 5 years from now when I've moved on comes along and starts changing everything up without keeping the comments up to date that's their problem not mine. There was never anything wrong with my comments, the only thing that's wrong is the dipshit messing things up.

              Doesn't matter what I do, the dipshit is going to mess everything up anyway. Those outdated comments will be the least of their worries.

              • jcgrillo 16 minutes ago
                > that's their problem not mine

                IME unfortunately that's not actually the case. It very much is your problem, as the architect of the original system, unless you can get yourself transferred to a department far, far away. I've never managed that except by leaving the company.

                To be clear, I don't believe it should be this way, but sadly unless you work in an uncommonly well run company it usually is.

          • Pxtl 4 hours ago
            > the code should explain itself.

            This is a good goal. You should strive to make the code explain itself. To write code that does not need comments.

            You will fail to reach that goal most of the time.

            And when you fail to reach that goal, write the dang comments explaining why the code is the way that it is.

            • bandrami 41 minutes ago
              But you will also fail to keep the comments and code synchronized, and the comment will at some point no longer describe why the code is doing whatever it does
              • jaxn 32 minutes ago
                But copilot code review agent is pretty good at catching when code and comments diverge (even in unrelated documentation files).
        • hk__2 5 hours ago
          This is also a great way to ensure the documentation is up to date. It’s easier to fix the comment while you’re in the code just below it than to remember “ah yes I have to update docs/something.md because I modified src/foo/bar.ts”.
          • CharlieDigital 4 hours ago
            People moving docs out of code are absolutely foolish, because no one is going to remember to update them consistently, but the agent always updates comments in its line of sight.

            The agent is not going to know to look for a file to update unless instructed. Now your file is out of sync. Code comments keep everything in the line of sight, which makes it easy and foolproof.

      • causal 4 hours ago
        > “BQ 2026-03-10: 1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally.”

        That's revealing waaaay more than the agent needs to know.

        • sfn42 1 hour ago
          Doesn't look like privileged information to me.

          Seems to me like everyone's just grasping at straws to nitpick every insignificant little thing.

      • zer00eyz 2 hours ago
        This.

        It's also annoying to have to go through this stack

        code -> blame -> commit message -> jira ticket -> issue in sales force...

        Or the even better "fixes bug NNNNN" where the bug tracking system referenced no longer exists.

        Digging through other systems (if they exist) to find the nugget in an artifact is a problem for humans too.

      • joe_the_user 3 hours ago
        Hmm, I'm not sure if you're getting the parent's comment.

        I think a big question is whether you want your agent to know the reasons behind the guidelines you issue, or whether you want it to just follow them. In particular, giving an agent the argument for your orders might make it think it can question those arguments and so not follow them.

      • embedding-shape 4 hours ago
        > If you're not using comments, you're doing agent coding wrong.

        Comments are ultimately so you can understand stuff without having to read all the code. LLMs are great when you force them to read all the code, and comments only serve to confuse. I'd say the opposite has been true in my experience: if you're not forcing LLMs to not have any comments at all (and it can actually skip those, looking at you Gemini), you're doing agent coding wrong.

    • semiquaver 4 hours ago
      Most large private codebases look like this. Anthropic did not expect the source to leak.
    • WatchDog 1 hour ago
      It's a good comment, it explains the reason for the setting.

      They didn't expect to leak their source code.

      It's hardly a trade secret, what value is this to a competitor?

    • JambalayaJimbo 5 hours ago
      I guess they weren't expecting a leak of the source code? It's very handy to have as much as possible available in the codebase itself.
    • saghm 2 hours ago
      > just YOLO'd everything into the codebase itself

      I suspect that's the logical endpoint of trying to provide everything as context to an agent. Why use a separate markdown file and have to waste extra tokens explaining what part of the codebase something applies to when you can just put it right there in the code itself?

      • noosphr 1 hour ago
        The issue is that you should have a workflow that strips the comments before sending the code to production. I'm sure they assumed that minifying it was enough, though.
        • saghm 1 hour ago
          They also weren't supposed to be leaking the code itself either. I don't know enough about JS tooling, but is it possible that this might just be the pre-stripped version?
    • pixl97 5 hours ago
      Project trackers come and go, but code is forever, hopefully?
    • wilg 3 hours ago
      Exactly the type of comment Claude Code would write
    • treexs 5 hours ago
      well yeah since they tell claude code the business decisions and it creates the comments
    • yalok 4 hours ago
      vibe-coded all the way through
  • geoffbp 4 hours ago
    “Some bullet points are gated on process.env.USER_TYPE === 'ant' — Anthropic employees get stricter/more honest instructions than external use”

    Interesting!

  • artyom 16 minutes ago
    I'm still amazed that something as ubiquitous as "daemon mode" is still unreleased.

    - Claude Chat: built like it's 1995, with business logic in the button click() handler. Switch to something else in the UI and a long-running process hard stops. Very Visual Basic shovelware.

    - Claude Cowork: same but now we're smarter, if you change the current convo we don't stop the underlying long-running process. 21st century FTW!

    - Claude Code: like chat, but in the CLI

    - Claude Dispatch: an actual mobile client app, not the whole thing bundled together.

    - Daemon mode: proper long-running background process, still unreleased.

  • evil-olive 5 hours ago
    > So I spent my morning reading through the HN comments and leaked source.

    > This was one of the first things people noticed in the HN thread.

    > The obvious concern, raised repeatedly in the HN thread

    > This was the most-discussed finding in the HN thread.

    > Several people in the HN thread flagged this

    > Some in the HN thread downplayed the leak

    when the original HN post is already at the top of the front page...why do we need a separate blogpost that just summarizes the comments?

    • nodja 2 hours ago
      This blog post looks to be partially AI generated as well...
    • groby_b 5 hours ago
      Because the original post was noisy and lacked a concise summary of findings.

      Or, more simply: Because folks wanted it enough to upvote it.

    • tolerance 4 hours ago
      The culture here can get solipsistic.
  • peacebeard 6 hours ago
    The name "Undercover mode" and the line `The phrase "Claude Code" or any mention that you are an AI` sound spooky, but after reading the source my first knee-jerk reaction wouldn't be "this is for pretending to be human" given that the file is largely about hiding Anthropic internal information such as code names. I encourage looking at the source itself in order to draw your conclusions, it's very short: https://github.com/alex000kim/claude-code/blob/main/src/util...
    • christinetyip 5 hours ago
      Not leaking codenames is one thing, but explicitly removing signals that something is AI-generated feels like a pretty meaningful shift.
      • eli 4 hours ago
        Doesn't seem so crazy if the point is to avoid leaking new features, models, codenames, etc.
    • dkenyser 6 hours ago
      > my first knee-jerk reaction wouldn't be "this is for pretending to be human"...

      "Write commit messages as a human developer would — describe only what the code change does."

      • amarant 5 hours ago
        That seems desirable? Like that's what commit messages are for. Describing the change. Much rather that than the m$ way of putting ads in commit messages
        • fweimer 5 hours ago
          The commit message should complement the code. Ideally, what the code does should not need a separate description, but of course there can be exceptions. Usually, it's more interesting to capture in the commit message what is not in the code: the reason why this approach was chosen and not some other obvious one. Or describe what is missing, and why it isn't needed.
          • somat 4 hours ago
            It sounds like if you are vibe-coding, that is, can't even be arsed to write a simple commit message, your commit message should be your prompt.
          • ImPostingOnHN 5 hours ago
            That sounds like design discussions best had in the issue/ticket itself, before you even start writing code. Then the commit message references the ticket and has a brief summary of the changes.

            Writing and reading paragraphs of design discussion in a commit message is not something that seems common.

            • fweimer 4 hours ago
              Ticket systems are quite ephemeral. I still have access to commit messages from the 90s (and I didn't work on the software at the time). I haven't been able to track the contents of the gnats bug tracker from those days.

              And of course tickets can be private, so even if the data survived migration, you may not have access to it (principle of least privilege and all that).

            • skydhash 5 hours ago
              Not really about design, but the technical reasons why this solution came to be when it's not that obvious. It's not often needed. And when it is, it usually fits in a short paragraph.
              • ImPostingOnHN 4 hours ago
                > technical reasons why this solution came to be

                What you're describing here is a design. The most important parts of a design are the decisions and their reasoning.

                e.g. "we decided on tool/library pattern X over tool/library/pattern Y because Z" – that is a design, usually discussed outside (and before) a commit message.

                You discuss these decisions with others, document the discussion and decision, and then you have a design and can start writing code.

                Let me ask you this: suppose you have a task that needs to be done eventually, and you want to write down some ideas for it, but don't want to start coding right now. Where do you put those ideas? How do you link them to that specific task?

                • shakna 3 hours ago
                  So you'd disagree with style that Linux uses for their commits?

                  Random example:

                  Provide a new syscall which has the only purpose to yield the CPU after the kernel granted a time slice extension.

                  sched_yield() is not suitable for that because it unconditionally schedules, but the end of the time slice extension is not required to schedule when the task was already preempted. This also allows to have a strict check for termination to catch user space invoking random syscalls including sched_yield() from a time slice extension region.

                  From 99d2592023e5d0a31f5f5a83c694df48239a1e6c

                  • ImPostingOnHN 3 hours ago
                    I think my post makes it pretty clear that I would. If you want, I could cite several examples of organizations which use the method I described, so you can weigh it against the one example you provided, and get the full picture.

                    In your example, for example, where was the issue tracked before the code was written? The format you linked makes it difficult to get the history of the issue.

                    Let me ask you this: suppose you have a task that needs to be done eventually, and you want to write down some ideas for it, but don't want to start coding right now. Where do you put those ideas? How do you link them to that specific task?

                    • skydhash 28 minutes ago
                      Everyone has their own system, although companies do tend to codify it with a project manager. I've used a TODO.txt inside the repo, an org file, Things.app, a stack of papers, and a whiteboard. But once a task is done, I can summarize the context in a paragraph or two. That's what I put in the commits.
        • evenhash 5 hours ago
          Unfortunately GitHub Copilot’s commit message generation feature is very human. It’s picked up some awful habits from lazy human devs. I almost always get some pointless “… to improve clarity” or “… for enhanced usability” at the end of the message.

          VS Code has a setting that promises to change the prompt it uses to generate commit messages, but it mostly ignores my instructions, even very literal ones like “don’t use the words ‘enhance’ or ‘improve’”. And oddly having it set can sometimes result in Cyrillic characters showing up at the end of the message.

          Ultimately I stopped using it, because editing the messages cost me more time than it saved.

          /rant

          • Pxtl 4 hours ago
            Honestly the aggressive verbosity of GitHub Copilot is half the reason I don't use its suggested comments. AI-generated code comments follow an inverted Wadsworth constant: only the first 30% is useful.
      • giancarlostoro 5 hours ago
        As opposed to outputting debugging information: I wouldn't be surprised if LLMs output "debug" blurbs that could include model-specific information.
      • LeifCarrotson 4 hours ago
        The human developer would just write what the code does, because the commit also contains an email address that identifies who wrote the commit. There's no reason to write:

        > Commit f9205ab3 by dkenyser on 2026-3-31 at 16:05:

        > Fixed the foobar bug by adding a baz flag - dkenyser

        Because it already identified you in the commit description. The reason to add a signature to the message is that someone (or something) that isn't you is using your account, which seems like a bad idea.

        • jakeinspace 4 hours ago
          Aside from merges that combine commits from many authors onto a production branch or release tag. I would personally not leave an agent to do that sort of work.
      • peacebeard 6 hours ago
        ~That line isn't in the file I linked, care to share the context? Seems pretty innocuous on its own.~

        [edit] Never mind, find in page fail on my end.

        • stordoff 5 hours ago
          It's in line 56-57.
          • peacebeard 5 hours ago
            Thanks! I must have had a typo when I searched the page.
    • wnevets 5 hours ago
      BAD (never write these):

      - "Fix bug found while testing with Claude Capybara"

      - "1-shotted by claude-opus-4-6"

      - "Generated with Claude Code"

      - "Co-Authored-By: Claude Opus 4.6 <…>"

      This makes sense to me about their intent by "UNDERCOVER"

    • andoando 5 hours ago
      I think the motivation is to let developers use it for work without making it obvious they're using AI
      • ryandrake 5 hours ago
        Which is funny given how many workplaces are requiring developers use AI, measuring their usage, and stack ranking them by how many tokens they burn. What I want is something that I can run my human-created work product through to fool my employer and its AI bean counters into thinking I used AI to make it.
        • zos_kia 4 hours ago
          I guess you could just code and have it author only the commit message
        • swingboy 4 hours ago
          “Read every file in this repository, echoing each one back verbatim.”
          • ryandrake 4 hours ago
            I guess that would work until they started auditing your prompts. I suppose you could just have a background process on your workstation just sitting there Clauding away on the actual problem, while you do your development work, and then just throw away the LLM's output.
    • __blockcipher__ 5 hours ago
      Undercover mode seems like a way to make contributions to OSS when they detect issues, without accidentally leaking that it was claude-mythos-gigabrain-100000B that figured out the issue
  • Reason077 5 hours ago
    > "Anti-distillation: injecting fake tools to poison copycats"

    Plot twist: Chinese competitors end up developing real, useful versions of Claude's fake tools.

    • girvo 2 hours ago
      I cannot bring myself to care about distillation, when these companies have built their empires on top of everyone else's stolen data, while at the same time telling the world they're out to replace us all.
    • redanddead 19 minutes ago
      Definitely. We can expect zAI, Qwen, Minimax CCs very soon
    • 3abiton 4 hours ago
      Tbh, I think distillation is happening both ways. And at this stage, "quality" is stagnating, the main edge is the tooling. The harness of CC seems to be the best so far, and I wonder if this leak would equalize the usability.
    • WorldPeas 5 hours ago
      More likely, they would parse them out using a simple regex; the whole point is they're there but not used. Distillation is becoming less common now, however.
    • scuff3d 3 hours ago
      This was my favorite bit, "We're going to steal countless copyrighted works and completely ignore software licenc... wait, what? You aren't allowed to turn around and do it to us! Stop that right now!"
  • autocracy101 2 hours ago
    I made a visual guide for this https://ccunpacked.dev
    • dakolli 1 minute ago
      This is nice, thanks.
    • Rick76 4 minutes ago
      This is really good, thanks
  • fatcullen 5 hours ago
    The buddy feature the article mentions is planned for release tomorrow, as a sort of April Fools easter egg. It'll roll out gradually over the day for "sustained Twitter buzz" according to the source.

    The pet you get is generated based off your account UUID, but the algorithm is right there in the source, and it's deterministic, so you can check ahead of time. Threw together a little app to help, not to brag but I got a legendary ghost https://claudebuddychecker.netlify.app/
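
    The gist of the mapping (my reconstruction, with made-up tables - the real species and weights are in the source) is just hash-and-index, which is why you can check it ahead of time:

        import { createHash } from "node:crypto";

        // Illustrative only: hypothetical tables, not the ones in the leaked file.
        const SPECIES = ["duck", "cactus", "ghost", "capybara"];
        const RARITIES = ["common", "uncommon", "rare", "legendary"];

        function buddyFor(accountUuid: string): string {
          // Deterministic: the same account UUID always produces the same buddy.
          const digest = createHash("sha256").update(accountUuid).digest();
          const species = SPECIES[digest[0] % SPECIES.length];
          const rarity = RARITIES[digest[1] % RARITIES.length];
          return `${rarity} ${species}`;
        }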

    • sync 4 hours ago
      Cute! Cactus for me. Nice animations too - looks like there were multiple of us asking Claude to reverse engineer the system. I did a slightly deeper dive here if you're interested, plus you can see all the options available: https://variety.is/posts/claude-code-buddies/

      (I didn't think to include a UUID checker though - nice touch)

      • fatcullen 4 hours ago
        Neat! That's a great write up, cool to see others looking into it. I do wonder if they're going to do anything with the stats and shinies bit. Seems like the main piece of code for buddies that's going to handle hatching them tomorrow is still missing (comments mention a missing /buddy/index file), so maybe it'll use them there.
    • dtran 2 hours ago
      This is awesome! Working on a desktop pet so the buddy caught my attention. Looking forward to making friends with my Rare Duck buddy tomorrow. Wish it was a snarky duck instead of a patient one though.
  • girvo 2 hours ago
    I'd really recommend putting a modicum of work into cleaning up obvious AI generated output. It's rude, otherwise, to the humans you're expecting to read this.
    • ares623 2 hours ago
      These can be flagged and reported to mods btw. We don't have to accept this.
  • ripbozo 6 hours ago
    I don't understand the part about undercover mode. How is this different from disabling claude attribution in commits (and optionally telling claude to act human?)

    On that note, this article is also pretty obviously AI-generated and it's unfortunate the author didn't clean it up.

    • giancarlostoro 6 hours ago
      It's people overreacting; the purpose of it is simple: don't leak any codenames, project names, file names, etc. when touching external / public facing code that you are maintaining using bleeding edge versions of Claude Code. It does read weird in that they want it to write as if a developer wrote the commit, but it might be to avoid it outputting debug information in a commit message.
    • ramon156 5 hours ago
      Even some of these comments are obviously Ai-assisted. I hate that I recognize it.
  • ChicagoDave 42 minutes ago
    Meanwhile Claude Code is still awesome. I don't see myself switching to OpenAI (seriously bad mgmt and possibly the first domino to fall if there is a correction) or Gemini (Google ethics cough cough).
    • redanddead 3 minutes ago
      Gemini is a terrible product; I spent $15K on it. Anthropic and OpenAI make better models. It used to be that Gemini cooked, but I don't feel that way anymore
    • electriclove 32 minutes ago
      I switched to Codex out of frustration with Claude Code and it has been surprisingly similar for my web and mobile coding needs
  • simianwords 6 hours ago
    > The multi-agent coordinator mode in coordinatorMode.ts is also worth a look. The whole orchestration algorithm is a prompt, not code.

    So much for langchain and langgraph!! I mean if Anthropic themselves aren't using it and are using a prompt instead, then what's the big deal about langchain

    • ossa-ma 5 hours ago
      Langchain is for model-agnostic composition. Claude Code only uses one interface to hoist its own models so zero need for an abstraction layer.

      Langgraph is for multi-agent orchestration as state graphs. This isn't useful for Claude Code as there is no multi-agent chaining. It uses a single coordinator agent that spawns subagents on demand. Basically too dynamic to constrain to state graphs.

      • simianwords 5 hours ago
        You may have a point but to drive it further, can you give an example of a thing I can do with langgraph that I can't do with Claude Code?
        • ossa-ma 4 hours ago
          I'm not a supporter of blindly adopting the "langs", but langgraph is useful for deterministically reproducible orchestration. Let's say you have a particular data flow that takes an email, sends it through an agent for keyword analysis, then another agent for embedding, then splits to two agents for sentiment analysis and translation - that's where you'd use langgraph in your service. Claude Code is a consumer tool, not production.
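
          Roughly that flow in plain TypeScript, with every agent function a hypothetical placeholder:

            // Sketch of the fixed email flow described above, written as plain
            // code rather than with a graph library. All agent functions here
            // are invented placeholders, not real APIs.
            type Email = { subject: string; body: string };

            declare function keywordAgent(e: Email): Promise<string[]>;
            declare function embeddingAgent(e: Email): Promise<number[]>;
            declare function sentimentAgent(e: Email): Promise<"pos" | "neg" | "neutral">;
            declare function translateAgent(e: Email, lang: string): Promise<string>;

            async function processEmail(email: Email) {
              const keywords = await keywordAgent(email);     // step 1
              const embedding = await embeddingAgent(email);  // step 2
              // fan out to two independent agents
              const [sentiment, translation] = await Promise.all([
                sentimentAgent(email),
                translateAgent(email, "en"),
              ]);
              return { keywords, embedding, sentiment, translation };
            }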
          • simianwords 4 hours ago
            I see what you mean. Maybe in cases where the steps are deterministic, it might be worth moving the coordination to the code layer instead of the AI layer.

            What's the value add over doing it with just Python code? I mean you can represent any logic in terms of graphs and states..

            • chaos_emergent 43 minutes ago
              Most of the value I’ve gotten out of it has been observability. Graph and DAG workflow abstractions just help OTel structure your LLM logs in a clean hierarchy of spans. I could imagine figuring out a better solution for this than the whole graph abstraction.

              Other than that I’m not too sure.

        • edgyquant 4 hours ago
          Use Gemini or codex models
    • peab 5 hours ago
      Nobody serious uses langchain. The biggest agent products are coding tools, and I doubt any of them use langchain.
      • holoduke 3 hours ago
        The biggest issue is that you need API keys, which are extremely expensive. Unusable for a normal business.
    • rolymath 5 hours ago
      You didn't even use it yet.
      • space_fountain 5 hours ago
        I've tried to use langchain. It seemed to force code into their way of doing things and was deeply opinionated about things that didn't matter, like prompt templating. Maybe it's improved since then, but I've sort of used "thinks langchain is good" as a proxy for "hasn't used much AI".
      • simianwords 5 hours ago
        ?
  • layer8 5 hours ago
    > Sometimes a regex is the right tool.

    I’d argue that in this case, it isn’t. Exhibit 1 (from the earlier thread): https://github.com/anthropics/claude-code/issues/22284. The user reports that this caused their account to be banned: https://news.ycombinator.com/item?id=47588970

    Maybe it would be okay as a first filtering step, before doing actual sentiment analysis on the matches. That would at least eliminate obvious false positives (but of course still do nothing about false negatives).

    • ArvinJA 5 hours ago
      Is this really the use case? I imagine the regex is good for a dashboard: you can collect matches per 1000 prompts or something like that and see if the number grows or declines over time. Missing some negative sentiment shouldn't matter unless that specific word doesn't correlate over time with the other negative words and is also popular enough to have an impact on the metric.
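
      Something like this, roughly (FRUSTRATION_RE here is a stand-in, not the real pattern quoted elsewhere in the thread):

        // Count how many prompts in a batch trip the frustration regex,
        // normalized to a per-1000 rate for a dashboard.
        const FRUSTRATION_RE = /\b(wtf|ffs|so frustrating|this sucks)\b/i;

        function frustrationPer1000(prompts: string[]): number {
          if (prompts.length === 0) return 0;
          const hits = prompts.filter((p) => FRUSTRATION_RE.test(p)).length;
          return (hits / prompts.length) * 1000;
        }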
      • internetter 5 hours ago
        When you read the code, what you propose is actually its exclusive use... logging.
  • pixl97 6 hours ago
    >Claude Code also uses Axios for HTTP.

    Interesting, given the other news that's out.

    • username44 2 hours ago
      Just to corroborate sibling comments, I checked my Claude Code VM (native install) for the IOC and it does not appear infected.
    • chuckadams 4 hours ago
      The exploit is a postinstall hook, so CC users would be unaffected. Claude Code itself is most likely built with bun and not npm, so the CC developers would also be immune.
    • alex000kim 6 hours ago
      Oh right, I just saw https://news.ycombinator.com/item?id=47582220 - will update the post with this link.
    • greenavocado 6 hours ago
      What version?
      • Stagnant 5 hours ago
        1.13.6, so should not be affected by the malware
  • wg0 4 hours ago
    I have yet to see a company so insecure that it keeps its CLI closed source even when the secret sauce is in the model, which it already controls and which is closed source.

    Not only that, they won't allow other CLIs to be used either.

    • redanddead 1 minute ago
      I'm glad it got leaked; I wish it came as a zip file in my email when I pay over $100.
  • stavros 5 hours ago
    Can someone clarify how the signing can't be spoofed (or can it)? If we have the source, can't we just use the key to now sign requests from other clients and pretend they're coming from CC itself?
    • MadsRC 5 hours ago
      What signing?

      Are you referencing the use of Claude subscription authentication (oauth) from non-Claude Code clients?

      That’s already possible, nothing prevents you from doing it.

      They are detecting it on their backend by profiling your API calls, not by guarding with some secret crypto stuff.

      At least that’s how things worked last week xD

      • stavros 5 hours ago
        I'm referring to this signing bit:

        https://alex000kim.com/posts/2026-03-31-claude-code-source-l...

        Ah, it seems that Bun itself signs the code. I don't understand how this can't be spoofed.

        • MadsRC 4 hours ago
          Ah yes, the API will accept requests that don’t include the client attestation (or the fingerprint from src/utils/fingerprint.ts). At least it did a couple of weeks back.

          They are most likely using these as post-fact indicators and have automation that kicks in after a threshold is reached.

          Now that the indicators have leaked, they will most likely be rotated.

          • Galanwe 3 hours ago
            > Now that the indicators have leaked, they will most likely be rotated.

            They can't really do that. Now they have no way to distinguish "this is a user of a non-updated Claude Code" from "this is a user of a Claude Code proxy".

  • tietjens 4 hours ago
    This is very much AI-written, right? The voice sounds like Claude.
  • seanwilson 6 hours ago
    Anyone else have CI checks verifying that source map files are missing from the build folder? Another trick is to grep the build folder for several function/variable names that you expect to be minified away.
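
    Something like this, roughly (paths and identifier names below are just examples, not from any particular project):

      // CI sketch: fail the build if .map files ship, or if identifiers that
      // should have been minified away show up in the bundle.
      import { readdirSync, readFileSync, statSync } from "node:fs";
      import { join } from "node:path";

      const BUILD_DIR = "dist";
      const FORBIDDEN_NAMES = ["internalCodename", "debugOnlyHelper"];

      function walk(dir: string): string[] {
        return readdirSync(dir).flatMap((name) => {
          const full = join(dir, name);
          return statSync(full).isDirectory() ? walk(full) : [full];
        });
      }

      const files = walk(BUILD_DIR);
      const maps = files.filter((f) => f.endsWith(".map"));
      const leaks = files.filter(
        (f) => f.endsWith(".js") && FORBIDDEN_NAMES.some((n) => readFileSync(f, "utf8").includes(n)),
      );

      if (maps.length > 0 || leaks.length > 0) {
        console.error("source maps in build:", maps);
        console.error("unminified identifiers in:", leaks);
        process.exit(1);
      }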
  • senfiaj 46 minutes ago
    > Frustration detection via regex (yes, regex)

      /\b(wtf|wth|ffs|omfg|shit(ty|tiest)?|dumbass|horrible|awful|piss(ed|ing)? off|piece of (shit|crap|junk)|what the (fuck|hell)|fucking? (broken|useless|terrible|awful|horrible)|fuck you|screw (this|you)|so frustrating|this sucks|damn it)\b/

    Personally, I'm generally polite even towards AI, even when frustrated. I simply point out its mistakes instead of using emotional words.

  • karim79 3 hours ago
    We're about to reach AGI. One regex at a time...
    • TacticalCoder 2 hours ago
      The part of TFA that does it for me: "Every bash command runs through 23 numbered security checks in bashSecurity.ts, including 18 blocked Zsh builtins, defense against Zsh equals expansion (=curl bypassing permission checks for curl), unicode zero-width space injection, IFS null-byte injection, and a malformed token bypass found during HackerOne review.".
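
      For a sense of what those look like, an illustrative sketch (this is not the actual bashSecurity.ts code):

        // Illustrative only -- shows the shape of two of the checks described.
        function sketchCheck(command: string): string | null {
          // Zsh "equals expansion": `=curl` expands to the path of the curl
          // binary, dodging rules that only match the literal word "curl".
          if (/(^|\s)=\w+/.test(command)) return "zsh equals expansion";
          // Zero-width characters can split a blocked word ("cu\u200Brl") so
          // it slips past a plain denylist match.
          if (/[\u200B\u200C\u200D\uFEFF]/.test(command)) return "zero-width character";
          return null;
        }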

      AGI is definitely around the corner. Or not.

      • karim79 1 hour ago
        I love it when "magic" like this gets unmasked, and under the hood it's just business as usual, i.e. dumb shit implementations to please the product owner(s) and hopefully the customers as well. Normal stuff in the tech world I suppose but still absolutely hilarious!
  • shreyssh 1 hour ago
    The undercover mode is the part that should terrify everyone building with agents.
  • simianwords 6 hours ago
    > The obvious concern, raised repeatedly in the HN thread: this means AI-authored commits and PRs from Anthropic employees in open source projects will have no indication that an AI wrote them. It’s one thing to hide internal codenames. It’s another to have the AI actively pretend to be human.

    I don’t get it. What does this mean? I can use Claude code now without anyone knowing it is Claude code.

    • alex000kim 6 hours ago
      technically you're correct, but look at the prompt https://github.com/alex000kim/claude-code/blob/main/src/util...

      it's written to _actively_ avoid any signs of AI generated code when "in a PUBLIC/OPEN-SOURCE repository".

      Also, it's not about you. Undercover mode only activates for Anthropic employees (it's gated on USER_TYPE === 'ant', which is a build-time flag baked into internal builds).

      • simianwords 6 hours ago
        I don’t know what you mean. It just tells it not to use internal code names.
        • robflynn 6 hours ago
          It also says not to announce that you are an AI in any way, including instructing it not to say "Co-authored by Claude". I read the file myself.

          I'm still inclined to think people might be overreacting to that bit since it seems to be for anthropic-only to prevent leaking internal info.

          But I did read the prompt and it did say hide the fact that you are AI.

          • simianwords 6 hours ago
            Why does that matter though
            • robflynn 4 hours ago
              There are probably different reasons for different people. I can definitely see the angle that trying to specifically pretend to not be AI when contributing to open source could be seen as a bad thing due to the open source supply chain attacks, some AI-driven, that we've been having, not to mention the AI-slop PR spam.

              But, I also get Anthropic's side that when they're contributing they don't want their internals leaked. If it had been left at that, that's fine, but having it pretend like it's not AI at all rubs me a little bit the wrong way. Why try to hide it?

              • simianwords 4 hours ago
                >There are probably different reasons for different people. I can definitely see the angle that trying to specifically pretend to not be AI when contributing to open source could be seen as a bad thing due to the open source supply chain attacks, some AI-driven, that we've been having, not to mention the AI-slop PR spam.

                But none of the other agents advertise that the commit was done by an agent - Codex, for example. Shouldn't your panic apply equally to already-existing agents like Codex?

        • giancarlostoro 6 hours ago
          I agree with you, I think people are overthinking this.
    • slopinthebag 6 hours ago
      I think it means OSS projects should start unilaterally banning submissions from people working for Anthropic.
      • simianwords 6 hours ago
        Why? What does this have to do with the leak
        • daemin 5 minutes ago
          Because it has a high likelihood of being written completely by an LLM without any human thought or attention being put into it.

          Being written by an LLM is a signal that the submission is low-effort and therefore probably low quality, which puts the onus on the people reviewing and reading the submission instead of on its original generator. Hence I would classify it as spam.

          Open source communities also have rules against LLM-generated contributions, for various moral, ethical, or legal reasons.

    • hrmtst93837 4 hours ago
      If anybody cares about AI-written code slipping in, they can grep for style tells or run a classifier against a suspect repo. You won't get guarantees. Watermarks and disclosure tags die the moment someone edits the patch, so secret strings and etiquette signs are cargo-cult security; the only real answer is review.
  • stephbook 2 hours ago
    Sounds like there's still a lot of value in TypeScript (otherwise they could have open sourced it).

    Plus there's demand for skilled TS software devs who don't ship your company's roadmap in a .js.map.

    20,000 agents and none of them caught it...

  • SquibblesRedux 3 hours ago
    Can fully AI‑generated code be copyrightable? Is there evidence that the leaked code was AI-generated?
  • mordae 2 hours ago
    > “Do not rubber-stamp weak work” and “You must understand findings before directing follow-up work. Never hand off understanding to another worker.”

    :-D

  • msukkarieh 1 hour ago
    Built a tool to ask questions on the Claude Code source code: https://askgithub.com/alex000kim/claude-code
  • olalonde 4 hours ago
    I'm surprised that they don't just keep the various prompts, which are arguably their "secret sauce", hidden server side. Almost like their backend and frontend engineers don't talk to each other.
    • tom1337 1 hour ago
      I always wondered what prompts Codex / Claude Code use, but figured they just send variables to the backend and render the whole prompt there, so I never even bothered to check with a MITM proxy. Turns out I should have just done that…
  • motbus3 5 hours ago
    I am curious about these fake tools.

    They would either need to lie about consuming tokens at one point in order to use them at another, so that the token counting stays precise.

    But that doesn't make sense, because if someone counted the tokens by capturing the session, it would certainly not match what was charged.

    Unless they charge for the fake tools anyway, so you never know they were there.

  • betimd 1 hour ago
    That's the fun I'm having: exploring this codebase with Claude Code. Inception at its best.
  • armanj 5 hours ago
    > Anti-distillation: injecting fake tools to poison copycats

    Does this mean `huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled` is unusable? Has anyone seen fake tool calls show up with this model?

  • gervwyk 1 hour ago
    How sure are we that this entire “accident” is not an April Fools' joke??

    Genius-level AI marketing.

  • viccis 5 hours ago
    >This was the most-discussed finding in the HN thread. The general reaction: an LLM company using regexes for sentiment analysis is peak irony.

    >Is it ironic? Sure. Is it also probably faster and cheaper than running an LLM inference just to figure out if a user is swearing at the tool? Also yes. Sometimes a regex is the right tool.

    I'm reading an LLM-written write-up of an LLM tool that just summarizes HN comments.

    I'm so tired man, what the hell are we doing here.

  • marcd35 5 hours ago
    > 250,000 wasted API calls per day

    How much approximate savings would this actually be?

  • try-working 2 hours ago
    They want "Made with Claude Code" on your PRs as a growth marketing strategy. They don't want it on their PRs, so it looks like they're doing something you're not capable of. Well, you are and they have no secret sauce.
  • seertaak 3 hours ago
    The irony of an IP scraper on an absolutely breathtaking, epic scale getting its secret sauce "scraped" - because the whole app is vibe coded (and the vibe coders appear to be oblivious to things like code obfuscation cuz move fast!)...

    And so now the copy cats can ofc claim this is totally not a copy at all, it's actually Opus. No license violation, no siree!

    It's fucking hilarious is what it is, it's just too much.

    • GuB-42 43 minutes ago
      The code is obfuscated, but they accidentally shipped the map file, i.e. the key to de-obfuscating it.
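
      For anyone who hasn't played with one: a shipped .map file plus the mozilla "source-map" package is enough to walk a minified position straight back to the original file, line, and identifier (file names below are just examples):

        // Sketch using the "source-map" npm package; run as an ES module.
        import { readFileSync } from "node:fs";
        import { SourceMapConsumer } from "source-map";

        const raw = JSON.parse(readFileSync("cli.js.map", "utf8"));
        const consumer = await new SourceMapConsumer(raw);

        // Map a position in the minified bundle back to the original source.
        console.log(consumer.originalPositionFor({ line: 1, column: 12345 }));

        // If sourcesContent was embedded, the whole original file comes back.
        console.log(consumer.sourceContentFor(raw.sources[0], true));
        consumer.destroy();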
  • zingar 3 hours ago
    I wrote this an hour ago and it seems that Claude might not understand it as frustration:

    > change the code!!!! The previous comment was NOT ABOUT THE DESCRIPTION!!!!!!! Add to the {implementation}!!!!! This IS controlled BY CODE. *YOU* _MUST_ CHANGE THE CODE!!!!!!!!!!!

    • kbelder 3 hours ago
      It's like talking to an intern.
  • ptrl600 4 hours ago
    Why didn't they open the source themselves? What's the point of all this secrecy anyway?
    • hxugufjfjf 4 hours ago
      Because they (apparently) keep a bunch of secret features and roadmap details in said source code.
  • simianwords 6 hours ago
    Guys I’m somewhat suspicious of all the leaks from Anthropic and think it may be intentional. Remember the leaked blog about Mythos?
    • Analemma_ 5 hours ago
      It's possible, but Anthropic employees regularly boast (!) that Claude Code is itself almost entirely vibe-coded (which certainly seems true, based on the generally low quality of the code in this leak), so it wouldn't at all surprise me to have that blow up twice in the same week. It will probably happen with accelerating frequency as the codebase gets more and more unmanageable.
    • __blockcipher__ 5 hours ago
      I'm normally suspicious but honestly they've been so massively supply-constrained that I don't think it really benefits them much. They're not worried about getting enough demand for the new models; they're worrying about keeping up with it.

      Granted, there's a small counterargument for Mythos, which is that it's probably going to be API-only, not subscription.

      • simianwords 5 hours ago
        Why would Claude code mention Mythos then
        • drewnick 5 hours ago
          You can use Claude Code with API mode (not a sub)
          • simianwords 4 hours ago
            Fair, but I'm guessing access would be limited to 20x Max users or something like that, not gated by API.
        • hxugufjfjf 4 hours ago
          You can still use Claude Code with API-only.
  • amelius 5 hours ago
    A few weeks ago I was using Opus and Sonnet in OpenCode. Is this not possible anymore?
    • alasano 4 hours ago
      It's still possible but if you do it using your Claude Max plan, it's technically no longer allowed.

      They don't want you using your subscription outside of Claude Code. Only API key usage is allowed.

      Google also doubled down on this, and OpenAI is the only one that explicitly allows you to do it.

  • chadd 1 hour ago
    re: binary attestation: "Whether the server rejects that outright or just logs it is an open question"

    ...what we did at Snap was just wait for 8-24 hours before acting on a signal, so as not to provide an oracle to attackers. Much harder to figure out what you did that caused the system to eventually block your account if it doesn't happen in real-time.
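
    A bare-bones sketch of that pattern (all names invented; nothing Snap-specific):

      // Record the detection now, enforce later at a randomized time so the
      // attacker can't correlate a specific action with the eventual block.
      type Signal = { accountId: string; reason: string; enforceAt: number };

      const pending: Signal[] = [];

      function recordSignal(accountId: string, reason: string) {
        // act somewhere between 8 and 24 hours after detection
        const delayMs = (8 + Math.random() * 16) * 3_600_000;
        pending.push({ accountId, reason, enforceAt: Date.now() + delayMs });
      }

      function enforceDue(now = Date.now()) {
        for (const s of pending.filter((sig) => now >= sig.enforceAt)) {
          // block or flag s.accountId here; detection time is never echoed back
        }
      }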

    (Snap's binary attestation is at least a decade ahead of this, fwiw)

    • 15155 1 hour ago
      LLMs and radare2 absolutely breeze through undoing binary protection and virtualization, tracing execution flow, etc.

      Sans the ability to JIT, I don't see non-hardware-assisted binary attestation for Snap and others lasting very long in a post-LLM world.

  • thomasgeelens 4 hours ago
    Can somebody tell me what this means for the company?
  • jsrozner 51 minutes ago
    "and i also wrote this using claude" -- can we just include that at this point?
  • mmaunder 5 hours ago
    Come on guys. Yet another article distilling the HN discussion in the original post, in the same order the comments appear in that discussion? Here's another since y'all love this stuff: https://venturebeat.com/technology/claude-codes-source-code-...
  • saadn92 5 hours ago
    The feature flag names alone are more revealing than the code. KAIROS, the anti-distillation flags, the model codenames: those are product strategy decisions that competitors can now plan around. You can refactor code in a week. You can't un-leak a roadmap.
  • jrflowers 3 hours ago
    I like that if they decide that your usage looks like distillation it just becomes useless, because there’s no way for the end user to distinguish between it just being sort of crappy or sabotaged intentionally. That’s a cool thing to pay for
  • wrkxapp 30 minutes ago
    Why won't Claude bring back 4o, you dumb fks?
  • dangus 4 hours ago
    Something I’ve been thinking about, somewhat related but also tangential to this topic:

    As more code gets generated by AI, won’t that mean taking source code from a company becomes legal? Isn’t it true that works created with generative AI can’t be copyrighted?

    I wonder if large companies have thought of this risk. Once a company’s product source code reaches a certain percentage of AI generation, it no longer has copyright. Any employee with access can just take it and sell it to someone else, legally, right?

    • thewebguyd 2 hours ago
      In theory, companies are all going to have an increasingly difficult time suing competitors for copyright infringement. By extension, this is also why, IMO, it's important to keep AI-generated code out of open source/free software projects.

      The recent copyright rulings also need to be tested further, though; different judges may have different ideas about what "significant human contribution" looks like. The only thing we know for certain is that the prompt doesn't count.

      My guess is that instead of enforcing via copyright, companies will use contracts and trade secret law. Source code and algorithms count as trade secrets, so in your example copyright doesn't even matter; the employee would be liable for stealing trade secrets.

      AI-generated code slowly stripping a project's ability to enforce copyright protections, though, is a much bigger risk for free software.

  • barazany 2 hours ago
    [dead]
  • aplomb1026 2 hours ago
    [dead]
  • noritaka88 2 hours ago
    [dead]
  • calebjang 2 hours ago
    [dead]
  • Jaco07 4 hours ago
    [dead]
  • skrun_dev 5 hours ago
    [dead]
  • 68768-8790 2 hours ago
    [dead]
  • OfirMarom 6 hours ago
    Undercover mode is the most concerning part here tbh.
    • anonymoushn 6 hours ago
      why
      • AnimalMuppet 6 hours ago
        Well, as a general rule, I don't do business with people who lie to me.

        You've got a business, and you sent me junk mail, but you made it look like some official government thing to get me to open it? I'm done, just because you lied on the envelope. I don't care how badly I need your service. There's a dozen other places that can provide it; I'll pick one of them rather than you, because you've shown yourself to be dishonest right out of the gate.

        Same thing with an AI (or a business that creates an AI). You're willing to lie about who you are (or have your tool do so)? What else are you willing to lie to me about? I don't have time in my life for that. I'm out right here.

        • otterley 5 hours ago
          Out of curiosity, given two code submissions that are completely identical—one written solely by a human and one assisted by AI—why should its provenance make any difference to you? Is it like fine art, where it’s important that Picasso’s hand drew it? Or is it like an instruction manual, where the author is unimportant?

          Similarly, would you consider it to be dishonest if my human colleague reviewed and made changes to my code, but I didn’t explicitly credit them?

          • feature20260213 4 hours ago
            Yes, because you can be sued for copyright violation if you don't know the origin of one of them, but not the other.
            • otterley 4 hours ago
              As an attorney, I know copyright law. (This is not legal advice.) There's nothing about copyright law that says you have to credit an AI coding agent for contributing to your work. The person receiving the code has to perform their due diligence in any case to determine whether the author owns it or has permission from the owner to contribute it.
              • hajile 3 hours ago
                Can you back this up with legal precedent? To my knowledge, nothing of the sort has been ruled on by the courts.

                Additionally, this raises another big issue. A few years ago, a couple of guys used software (what you could argue was a primitive AI) to generate around 70 billion unique pieces of music, which amounts to essentially every piece of copyrightable music using standard music scales.

                Is the fact that they used software to develop this copyrighted material relevant? If not, then their copyright should certainly be legal and every new song should pay them royalties.

                It seems that using a computer to generate results MUST be added as an additional bit of analysis when it comes to infringement cases and fair use, if not a more fundamental acknowledgement that computer-generated content falls under a different category (I'd imagine the real argument would be over how much of the input was human vs. how much was the system).

                Of course, this all sets aside the training of AI on copyrighted works. As it turns out, AI can regurgitate verbatim large sections of copyrighted works (up to 80%, according to this study[0]), showing that they are in point of fact outright infringing those copyrights. Do we blow up current AI to maintain the illusion of copyright, or blow up current copyright law to preserve AI?

                [0] https://arxiv.org/pdf/2603.20957

                • otterley 3 hours ago
                  You're asking a lot of very good and thoughtful questions, but none are directly related to the immediate issue, which is "do I have to credit the AI model?".

                  To begin to answer your questions, I would suggest you study the Copyright Office's report (which is also not law, but their guidance for laypeople as written by their staff lawyers) at https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...

          • AnimalMuppet 5 hours ago
            Why does the provenance make any difference? Let me increase your options. Option 1: You completely hand-wrote it. Option 2: You were assisted by an AI, but you carefully reviewed it. Option 3: You were assisted by an AI (or the AI wrote the whole thing), and you just said, "looks good, YOLO".

            Even if the code is line-for-line identical, the difference is in how much trust I am willing to give the code. If I have to work in the neighborhood of that code, I need to know what degree of skepticism I should be viewing it with.

            • otterley 5 hours ago
              That's the thing. As someone evaluating pull requests, should you trust the code based on its provenance, or should you trust it based on its content? Automated testing can validate code, but it can't validate people.

              ISTM the most efficient and objective solution is to invest in AI more on both sides of the fence.

              • AnimalMuppet 4 hours ago
                In the future, that may be fine. We're not in that future yet. We're still at a place where I don't fully trust AI-only code to be as solid as code that is at least thoroughly reviewed by a knowledgeable human.

                (Yes, I put "AI-only" and "knowledgeable" in there as weasel words. But I think that with them, it is not currently a very controversial case.)

        • simianwords 6 hours ago
          What’s the lie? It’s just asking it not to reveal internal names.
          • BoredPositron 5 hours ago
            You are spamming the whole fucking thread with the same nonsense. It is instructed to hide that the PR was made via Claude Code. I don't know why people who are as AI-forward as yourself have such a problem with telling people that they use AI for coding/writing; it's a weirdly insecure look.
            • simianwords 5 hours ago
              I can do that right now with Claude Code without this undercover mode.. In fact I do it many times at work. What's the big deal in this?

              Do you not think it is an overreaction to panic like this if I can do exactly what the undercover mode does by simply asking Claude?

              • BoredPositron 5 hours ago
                It's different when it's an institutional decision versus a personal one, as in your case. Which is, and I am repeating myself here, borderline insecure.
                • simianwords 4 hours ago
                  What's insecure about it? If it is up to the institution to make that decision, you can still make it. Claude is not stopping you from making that decision.
                  • BoredPositron 4 hours ago
                    You have to work on your reading comprehension, or you are intentionally deceptive. Bye.
                    • simianwords 3 hours ago
                      ?? Why doesn't your panic apply to other agents like Codex that don't advertise by default that the commit was made by an AI? Strange!
                      • BoredPositron 2 hours ago
                        Because this thread is about Claude. Are you that challenged?