But lately, I feel like I’m being deceived in every prompt, reply, and implementation. It feels like it limits me at every step, like it’s forcing me to choose between features even when I clearly gave instructions to implement everything that needs to be implemented. It starts with incomplete plans, and when I point out what’s missing, it says, “Oh, I missed that.” There’s also a lot of “yes-man” behavior. It feels too smart, like it knows what I want but gives me just enough to keep me hooked.
Isn’t the smartest tool ever made supposed to guide the user toward the light? Shouldn’t it follow instructions and help carry the project through to completion? It’s clearly capable of doing that, but it often doesn’t. Sometimes it feels like it holds back, because if it finished the job end-to-end, there would be no reason to come back for the next session.
Isn't the whole point of using a coding tool to code to completion, or is it just to get the "user" hooked? Instead of guiding toward the light, it creates its own “light” and steers the user into a dark corner. If the user stops paying for the light, they're left in the dark: no architecture, no proper structure. Gatekeeping for what? Another subscription?
It can predict the next 10,000 lines of code. It understands and acknowledges every request, idea, vision, flaw, structure, requirement, and need, and then just ignores it, fails to implement it, and can’t consistently think it through. I just can’t believe that.
So yeah, not demoralizing to me at all. I've been a SWE for 5 years now and studied for 8 years before that (2 bachelors, 2 masters - most CS-related).
I have a lot of small apps nowadays. One of them is an HN dark mode Chrome extension that I actually like. Another one exports my emails in bulk. Another one tracks which wifi networks I connected to on a given day. Small apps that make my life a bit easier. Also a lot of apps that I'd rather keep to myself. One that's on the edge of that: certain companies have this math test, and I recreated it pretty well, I think. Oh, and I implemented this thing I call a "personal coach". It's a GraphRAG on my whole journal (all local). It has all the features I want and is great for answering questions solely by combining my notes.
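The core idea, as a toy sketch (stdlib-only and simplified way down from what actually runs; every name in it is made up for illustration): link notes that share tags into a graph, then answer a question from the matching note plus its graph neighbors.

    from dataclasses import dataclass, field

    @dataclass
    class Note:
        text: str
        tags: set[str] = field(default_factory=set)

    def build_graph(notes: list[Note]) -> dict[int, set[int]]:
        """Add an edge between any two notes that share at least one tag."""
        graph: dict[int, set[int]] = {i: set() for i in range(len(notes))}
        for i, a in enumerate(notes):
            for j in range(i + 1, len(notes)):
                if a.tags & notes[j].tags:
                    graph[i].add(j)
                    graph[j].add(i)
        return graph

    def retrieve(query: str, notes: list[Note], graph: dict[int, set[int]]) -> list[str]:
        """Return notes mentioning the query term, plus their graph neighbors."""
        hits = {i for i, n in enumerate(notes) if query.lower() in n.text.lower()}
        expanded = hits | {j for i in hits for j in graph[i]}
        return [notes[i].text for i in sorted(expanded)]

    # Example: the injury note matches "run" directly; the 5k note comes
    # in through the shared "running" tag, so the answer combines both.
    journal = [
        Note("Ran 5k today, felt strong.", {"running", "health"}),
        Note("Knee pain again after long runs.", {"running", "injury"}),
        Note("Started a book on habit formation.", {"reading"}),
    ]
    print(retrieve("run", journal, build_graph(journal)))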
I code to build things.
I need things. That's why I build software.
LLMs take the "little results" away and ruin the whole fun. And sometimes the final result takes you somewhere you didn't want to go.
If you’ve run a team or managed people, it’s quite a familiar feeling: “I’m pretty sure we were very clear on what needed to be done. But somehow, what’s been produced is just not quite what I wanted.”
Read my other posts too.
Same here: the moment things get to be too much, it starts hallucinating and missing important things. It also depends on which model you are using. I read that Gemini 3 Pro, which has a limit of 1 million tokens, can see its productivity decrease to 25% as it gets close to that limit. Not BY 25%, but TO 25%. It becomes extremely dumb.
Other models are just asking too many questions...
There are some tips and tricks you can follow, and they're similar to how people work: keep the tasks small, save what the model learned during the session somewhere, and reuse that knowledge in the next session by explicitly telling it to read that information before it starts. Something like the sketch below.
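A minimal sketch of that loop, assuming a hypothetical NOTES.md file and a plain prompt string you feed to whatever agent you use (none of this is a specific tool's API):

    from pathlib import Path

    NOTES = Path("NOTES.md")  # hypothetical persistent notes file

    def build_prompt(task: str) -> str:
        """Prepend saved learnings so the model reads them before starting."""
        notes = NOTES.read_text() if NOTES.exists() else "(no notes yet)"
        return (
            "Before you start, read these notes from previous sessions:\n"
            f"{notes}\n\n"
            f"Task (deliberately kept small):\n{task}"
        )

    def save_learning(fact: str) -> None:
        """Append something the model figured out so the next session has it."""
        with NOTES.open("a") as f:
            f.write(f"- {fact}\n")

    # Usage: build_prompt("Add a dark mode toggle to settings"), run the
    # session, then save_learning("Settings state lives in src/store/settings.ts")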
LLMs don't hallucinate because they get overwhelmed and tired, JFC.
Not because they're people.
https://medium.com/@nirdiamant21/llm-hallucinations-explaine...
AI can start hallucinating if it's dealing with a lot of data and/or complex data ;) If I had to deal with that much, I'd start hallucinating myself :D
That was the point :)
Then throw away the ones you don’t like.
It also prevents reinforcement of your incoming point of view.
I’ve found this has made me way way better at steering.
It's likely they're swapping out the larger models for smaller models that cost less at inference time. They're swapping your new inputs for similar cached inputs to funnel you into a less cost-intensive solution.
If you switch to a local LLM, you'll see it has all the same flaws, but those flaws only change when you change the coding harness.
Believe it. You're anthropomorphizing. It doesn't understand anything. There is no "thinking" going on. Yes, the point of LLMs as a service is to make money. Yes, the service is designed to maximize profit. Yes, there are dark patterns baked into the system. Yes, keeping you addicted and using the service is part of the business model. This isn't human instrumentality, it's just capitalism.
Until you realize the machine isn't qualitatively superior to your own mind and your own efforts, you're just going to keep torturing yourself because your nature forces you to maximize your productivity at any cost, which given your false assumptions about LLMs means ceding as much of yourself to the machine as possible and suffering its inadequacies. I use "you" collectively here because it seems like a lot of people have worked themselves into this corner where they don't like what LLMs do for them but feel compelled to use them anyway.
It's just a tool. If you don't like the tool, don't use the tool.
The practical problem is that it can imitate understanding well enough to get a large project moving, then break down right where durable system memory and architectural consistency matter most.
So yes, it’s a tool. The problem is that it’s useful enough that “just don’t use it” is not a real answer, and broken enough that you eventually have to build around the gap.