AI can now help you search and research quickly, distilling the core of a paper into a concise summary. It lets you pick up a term fast and have something to talk about.
But real learning requires deep reading, thinking, and practice; a polished summary is far from enough. Since you started using AI, how long has it been since you truly studied a paper, or deeply read through and implemented a technology? Have your thinking and your taste improved or declined? Once that ability is weakened, are you ready to let AI replace you entirely? Taste is never built by reading abstracts; it is forged through countless bad decisions and excellent practice.
To be honest, most people never seriously finished reading many papers before AI either. AI hasn't taken anything away — it has just made shallow learning more efficient and more deceptive. The real risk isn't that AI makes people lazy, but that AI makes "lazy" look like "productive." Spend ten minutes reading a summary, post it on social media, feel like you're keeping up with the frontier — but nothing actually sticks.
I am absolutely not against AI. What I advocate is using AI for deep work, not treating it as your TikTok of pretend learning. From "summarize it for me" to "debate it with me," from "do it for me" to "help me reason through it" — that is what matters.
I hadn't really worked with audio circuits before, and I'd been intimidated by the domain. My journey was radically expedited by iterating through the entire process with a ChatGPT instance. I would share zoomed-in photos, grill it about how audio transformers work, and get it to patiently explain JFET soft-switching using an inverter until the pattern was forced into my goopy brain.
Through the process of exploring every node of this circuit, I learned about configurable ground lifts, using a diode bridge to extract the desired voltage rail polarity, how to safely handle both TS and TRS cables with a transformer, that transformer outputs are 180 degrees out of phase, and how to add a switch that attenuates the signal by 10 dB to toggle between line and instrument levels.
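To make that last one concrete: a 10 dB pad is just a voltage divider sized so the output is roughly 0.316x the input. Here's a minimal sketch of the arithmetic, with an illustrative series resistor value rather than anything from the actual pedal:

```python
def lpad_shunt(attenuation_db: float, r_series: float) -> float:
    """Shunt resistor for an unloaded L-pad, where
    v_out / v_in = r_shunt / (r_series + r_shunt)."""
    ratio = 10 ** (-attenuation_db / 20)   # 10 dB -> ~0.316x voltage
    return r_series * ratio / (1 - ratio)

r_series = 22_000                          # illustrative 22k series leg
print(f"shunt ~= {lpad_shunt(10, r_series):.0f} ohms")  # ~10.2k
# The line/instrument switch simply engages or bypasses this divider.
```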
Eventually I transitioned from sharing PCB photos to implementing my own take on the cascade design in KiCad, at which point I was copying and pasting chunks of netlist and reasoning about capacitor values with it.
In short, I gave myself a self-directed, college-level intensive in about a week. Since that's not generally a thing IRL, it's reasonable to conclude that without the ability to shamelessly interrogate an LLM at all hours of the day or night, on my schedule, I would never have moved this from "some day" to something I now understand deeply in the past tense.
If you're lazy, perhaps you're just... lazy?
Anyhow, I highly recommend the Surfy Industries Stereomaker. It's amazing at what it does. https://www.surfyindustries.com/stereomaker
Notice you didn't ask the AI to 'just design a stereo pedal for me.' You interrogated it, reasoned about netlists, and forced the concepts into your brain through intense friction. That is pure deep work.
At the end I was curious enough that I desoldered those five caps and realized that they were all 2.2nF except for the last stage, which was 1nF.
I brought that news back to the LLM and we realigned our understanding of how the effect was achieved, ultimately coming to realize that our approach would have created notches at different frequencies instead of just shifting the phase by about 900 degrees.
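For anyone following along: the reason a chain of equal-value stages can shift phase without carving notches is that a first-order all-pass stage passes every frequency at unit gain; only the phase accumulates. A rough sketch of that behavior (the all-pass topology and the R value are my assumptions, not the Stereomaker's actual schematic):

```python
import math

def allpass_phase_deg(f_hz: float, r_ohm: float, c_farad: float) -> float:
    """Phase of one first-order all-pass stage H(s) = (1 - sRC) / (1 + sRC):
    unit gain everywhere, phase = -2 * atan(2*pi*f*R*C)."""
    return -2 * math.degrees(math.atan(2 * math.pi * f_hz * r_ohm * c_farad))

R, C, STAGES = 68_000, 2.2e-9, 5        # placeholder R; 2.2nF per the caps
for f in (100, 1_000, 10_000):
    total = STAGES * allpass_phase_deg(f, R, C)
    print(f"{f:>6} Hz: {total:8.1f} deg cumulative")
# Gain stays at 1.0 at every frequency (no notches); the total phase
# sweeps toward 5 * -180 = -900 degrees as frequency rises.
```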
It was an incredible learning experience. I try hard not to personify LLMs but this really did feel like working side by side with a friend on a problem until it was solved.
IRL, I suspect that most people who would be able to tackle that challenge with me lack both the time and patience to actually do it.
This is completely different from my colleague, who isn't a software engineer and is now all of a sudden creating PRs that I need to review and correct.
I'm a sceptic. I use it to explore the unknowns and go from there.
All of this stuff is remarkably easy to self-verify if you aren't, well, lazy.
How much of what you did have you retained? Could you do all of, some of, a small fraction of, or none of the work again today if you had to?
Let's say that LLMs didn't exist, and I learned these same skills in an oddly specific hands-on workshop, or from an oddly specific textbook, or fuck it, let's say that I hired some greybeard pedal designer to just sit beside me and answer all of my stupid questions without judgement for a few weeks at their hourly rate.
Would you feel compelled to challenge whether I had retained what I learned or inexplicably woke up this morning, tabula rasa, and realized that I'd forgotten everything I spent a week teaching myself? I honestly don't think that you would.
For the record, I could reimplement any part of the circuit on demand if I needed to. I might be tempted to look at my notes for the JFET switching because it was genuinely hard to keep in my head, but that's more of a confidence thing than a "shit, I forgot how op-amps work" thing.
I've since implemented a variation of it in a matrix mixer concept that I'm working on, for when it detects that a TS cable has been inserted into a TRS jack.
Yes, the exact same way I would be dubious when someone says they learned a lot from following a YouTube tutorial or participating in a two-week workshop or something.
https://www.youtube.com/watch?v=mK60ROb2RKI
It's a 90 minute video that will take you a week to watch if you're doing it properly.
Seriously though... you don't learn from watching a video tutorial (which you can slow down and re-watch as many times as you need) and you apparently don't believe you can learn from an LLM which will patiently answer literally infinite questions, no matter how basic or repetitive... would you mind clarifying how you do learn?
Everyone has different learning styles, so I tend to take a different-strokes-for-different-folks attitude. For example, I don't absorb highly technical stuff from books, and the idea of [paying to be in a] classroom where you're forced to endure the 95% you're not interested in to get the 5% you care about (at the speed of the dumbest student in the room) gives me hives.
Yet, it kind of sounds like you might just be arguing for argument's sake. Also, you can learn A LOT in two weeks if you're motivated.
Practice
> Everyone has different learning styles so I tend to take a different strokes for different folks attitude
Okay but at the end of the day the only way to actually learn (and demonstrate that you've learned anything) is by actually doing it
And I don't really consider "I got the AI to do it" as actually doing it, which is why I'm questioning what you've actually retained.
To be clear if you feel like you've actually learned this stuff then good for you. I'm genuinely happy if that's the outcome you feel you have obtained
I'm just personally very skeptical of anyone learning fuck all from using AI to build stuff because like I said... I learn from practice. Using AI is not practice any more than copying from open source repos is.
And frankly I'm bitter because I absolutely cannot learn fuck all from using AI. It is the sort of shortcut that prevents my brain from committing anything to memory.
This is going to sound like I'm fucking with you, but I'm deadly serious: if someone taught you how to do something and you later learned that that person was actually an LLM masquerading as a human, would you forget what you had learned?
It's actually not impossible that you've hypnotized yourself, or could be experiencing a trauma response.
I'm curious whether the "knowledge" you gained was real or hallucinatory. I've been using LLMs this way myself, but I worry I'm contaminating my memory with false information.
Go ahead and figure out ways to interrogate your work by technical means; that's a critical part of the process, LLM or not.
What I'm doing is learning the circuit constructs that I need and then putting them to work in real circuits. There are usually a few breadboard steps in the middle, which you could call reinforcement learning.
To me, the telling thing about your question is the implication that I would spend a week learning how to do something and then not test it out. I know that this reply reads as salty, but I'm really struggling to contain my own "wtf" on this end.
Seriously, people that are so determined to prove that LLMs don't work despite how easy it is to test for yourself and see that they clearly do work are the ones that are hallucinating.
Now I can actually get beyond conceptual misunderstanding or even ignorance and get to practice, which is how skills actually develop, in a much more streamlined way.
The key is to use the tool with discipline, by going into it with a few inviolable rules. I have a couple on my list now: embrace Popperian falsifiability; embrace Bertrand Russell's statement: “Everything is vague to a degree you do not realize till you have tried to make it precise.”
LLMs have become excellent teachers for me as a result.
For me, LLMs have often pointed me to answers or given food for thought that even subject matter experts could not. I do not take those answers at face value, but the net result is still better than the search remaining open-ended.
Applying strict epistemic discipline (Popper, Russell) to resolve ambiguity and accelerate actual practice is the very definition of deep work. You aren't using AI as a shortcut to skip thinking; you're using it as a Socratic sparring partner to deepen it. This is exactly the paradigm shift I'm advocating for.
You can’t really do that with google anymore, and I can’t remember the last time I bothered to actually learn something that wasn’t trivial from google. ChatGPT, however, has been a game changer. I can ask a really dumb question and get some basic info about the thing I’m asking about, and while it’s often not quite what I’m looking for, it gives me clues to follow, and I can quickly zero in on what I’m looking for, often in new contexts.
As an autodidact whose main motivation to go to college was to get access to the stacks and direct internet access, I can't even begin to tell you how game-changing LLMs seem to be for learning.
To your point though, my concern is that we don't know how to teach how to learn, and LLMs will likely seduce many into bad habits and poor research hygiene. I treat my research the same way I attack the stacks, but take someone who's never been to a research library and ask them to create a report on some topic, and the basic resistance is: why? Why do what an LLM is almost literally built to do? Yet that is also highly related to individual learning: taking a bunch of disparate sources and synthesizing output related to the input.
I suspect we'll learn how to use LLMs in the same way we learned how to use calculators. But I have no doubt that on average (or maybe median or mode?) calculators have made us less capable of basic arithmetic, and I suspect LLMs will likewise make a great percentage of the population worse at synthesizing information. I'd hope it's only the same people who would otherwise have gotten their information solely from TV, but I do have a slight fear it will creep past that subsection of the population.
Can't say whether it's fortunate or unfortunate, but we have no choice but to keep up this way.
But since I started using coding agents, I have built two full-featured internal web apps authenticated by Amazon Cognito. While the UI looks like something from 2002, I am good at putting myself in the shoes of the end user, and I iterated often (and quickly) on the UX.
I didn't look at a line of code and have no plans to learn web development. I might have taken the time to learn a little before AI, just to help me with internal websites. Yes, I know it's secure: I validated that the endpoints can't be accessed unauthenticated, and I validated the IAM role.
Second anecdote: I know AWS (trust me on this) like the back of my hand. I also know CloudFormation. For years I'd been putting off learning Terraform and the CDK. After AI, why bother? I can one-shot either for IaC, and I'm very specific about what I want.
My company is happy and my customer is happy (consulting); what else matters? Substitute “customer” for “the business” or “stakeholders.”
Oh god, this made me laugh so hard.
Best 'we gonna get hacked' comment of the day.
A) The IAM role of the Lambda runtime it's running in is least-privileged, with read and write access only to the required S3 bucket and the other required AWS services, and even those are tightly scoped.
B) For authentication I used Amazon Cognito and ran a curl shell script against each endpoint, comparing authenticated versus unauthenticated requests (a sketch of an equivalent check appears below).
C) The database user has least privilege access
So how, pray tell, could insecure code overcome that?
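For anyone who wants to replicate point B: the original check was a curl shell script; the same idea in Python, with made-up endpoint URLs, looks roughly like this:

```python
import requests  # third-party: pip install requests

# Hypothetical endpoints; substitute your real API Gateway URLs.
ENDPOINTS = [
    "https://example.execute-api.us-east-1.amazonaws.com/prod/items",
    "https://example.execute-api.us-east-1.amazonaws.com/prod/admin",
]

for url in ENDPOINTS:
    # An unauthenticated request (no Cognito token) must be rejected.
    status = requests.get(url, timeout=10).status_code
    verdict = "ok" if status in (401, 403) else "EXPOSED"
    print(f"{verdict:8} {status} {url}")
```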
If you answered NO to any question, refer to my previous post.
If you answered YES, you could have just hooked your DB up to Power BI or Tableau or whatever. Not exactly something to start boasting about doing web dev.
BTW, with AWS you can also enforce that DynamoDB, Postgres, and Redshift (?) only allow rows to be accessed based on the user (IAM or Cognito), so no matter what Claude did, as long as you validate your security boundary at the AWS and database level, there wouldn't be an issue.
Why would I trust developers (or Claude) to write secure multi tenant code when I can enforce it on the database/AWS layer?
https://aws.amazon.com/blogs/database/multi-tenant-data-isol...
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_p...
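For the DynamoDB case, the enforcement looks roughly like the policy below, using the documented dynamodb:LeadingKeys condition key to pin every item access to the caller's Cognito identity. The table name and action list are illustrative, not from the poster's setup:

```python
import json

# Sketch of row-level isolation for DynamoDB: the partition key of any
# item touched must equal the caller's Cognito identity ID, enforced by
# AWS itself regardless of what the application code does.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
        "Resource": "arn:aws:dynamodb:*:*:table/TenantData",  # illustrative
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
            }
        },
    }],
}
print(json.dumps(policy, indent=2))
```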
1. I did say it’s an internal admin site and I mentioned AWS and S3. I didn’t say it was a reporting site only dealing with the database.
2. It's B2B; every company pays 5-6 figures annually, and they each have their own AWS account. No company can access any other company's data, because they each have their own AWS account, user pool, and database.
3. How am I “boasting” about doing “web dev” (poorly paid commodity work) when I specifically said I hadn’t done web development “since 2002” and talked about the UI was something from 2002?
4. I said it was an “Admin site.” I didn't say it was a reporting dashboard.
If not, AI is the tool to get "here" faster, and then to go from here to there.
All you need is to take a little time to learn how to use this new tool: AI.
Take time to learn.
Have you taken the time to document what mathematicians were able to do with AI? What researchers were able to do with AI? They took the time to learn the AI tool. Then they used it with great results.
What are you waiting for? Learning is something you should also do. Go do it.
No mystery here.
Previously, we saw a shift with search engines where we no longer needed to learn data because we could use a search engine as a mental signpost to the data, freeing up capacity for other thought.
LLMs are shifting knowledge creation to this mental pointer model. We don't need to know real "stuff" because we know how to look it up later (never?).
Each of these summaries is a secondary source, delivered through an agent biased by whatever is in its current context window. Like a game of telephone, the summaries are inherently lossy; each one may be 95% correct, and crucially we don't know which 5% is wrong.
When our basis for decision making is a collection of 100s or 1000s of LLM generated "Schrodinger's facts", we risk cumulative cascading errors. We will be wrong in unpredictable, chaotic ways.
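The compounding is easy to underestimate. Taking the parent's 95% figure at face value and treating the facts as independent (both simplifying assumptions):

```python
# If a decision rests on N independent "facts," each 95% reliable,
# the chance that all of them are correct decays geometrically.
for n_facts in (1, 10, 100, 1000):
    p_all_correct = 0.95 ** n_facts
    print(f"{n_facts:>5} facts: {p_all_correct:10.4%} chance all are right")
# 10 facts -> ~59.9%; 100 -> ~0.6%. And we can't tell which ones failed.
```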
We are voluntarily capping ourselves at this childish level of thought, because it feels like we are exercising our critical judgement the same as ever. However, the integrity of the inputs has been compromised, and bad inputs always lead to bad outputs.
But there's a secret: just buy my $399 masterclass and I'll teach you 17 simple productivity hacks to 100x your income.
The issue is the difference between using AI for shallow outsourcing ('summarize this') and deep cognitive work ('stress-test this architecture'). AI should be a cognitive amplifier for much harder problems, not a shortcut to bypass critical thinking entirely.
For me, it is having a document and interrogating it. Maybe having many sets of documents about a whole category of information. Getting the bullet points, getting the high level, and then interrogating, digging down, and getting information bubbled up as I need it.
That is the learning style that matches how I learn.
I have never been able to skim, so reading a large document WILL teach me that topic, but getting through that doc is tough.
I can dump a very large set of docs in a reader that lets me interrogate the whole data set and I can fly through looking for what is interesting to me, and what I may need, and along the way I will likely dive into other parts too. Asking questions keeps my hyperfocus active.
I think it is just a different style. I have synesthesia and a hard time not working on three to five things at once. I am used to knowing I learn differently than others.
But to actually answer the question: I've been putting research paper PDFs into NotebookLM and turning them into ~40-minute podcasts, which I listen to on my walks. Yes, it's shallow learning, and it might have some hallucinations in there, but I wouldn't have read some of those papers otherwise.
And it doesn't matter. To each their own. Take one example: cooking. Some may choose to be a gourmet chef, whether professionally or just on their own time. Some will just regularly cook their food. Some will cook only when they have to. And some will avoid cooking no matter what, leaving it to family or going out to buy food, etc.
Now apply to every task and endeavor that one may be involved in. It doesn't matter if any particular thing sticks or not. Some may care and dive deeply, and some may prefer a hands-off approach. Nothing changes either way; life goes on.
The primary reason to get into anything deeply before was because it contributed directly to survival, eg studying and building a career to provide a product/service others needed. Things had to stick because living depended on it. Now with AI, well it just doesn't matter anymore with the essentials and everything beyond increasingly being automated away.
I do a very similar thing in writing: "I need feedback, don't rewrite this!"
In both cases I need the struggle of editing / failing to arrive at a deeper understanding.
The future dev will need to know when to hand code vs when to not waste your time. And the advantage will still go to the person willing to experience struggle to understand what they need to.
I don’t think AI is all bad for summaries though. I used to add stuff to a reading list with good intentions, but things went there to die. Hundreds of articles added, but with so much new content each day, I would never actually read any of it. Now, I use AI summaries to get more context on what the article is. If it sounds interesting and I want more info, I can read the whole thing in the moment. If I’m satisfied with the summary alone, I can move on with my life. No more pushing it off to a reading list that only generates guilt. I actually end up reading more articles due to this, not less.
However, what does it mean to say that's deceptive? It means you care more about social signalling than you do about arriving at the right destination on time. Showing that you're not the sort of person who gets lost isn't really the primary reason people use Google Maps. When it's not a test of your navigation skills, it's not cheating.
Similarly, doing Google searches before posting might be "deceptive" in that it makes you seem more knowledgeable than you are, but on the whole I would prefer more knowledgeable posts, so the social signalling seems like a secondary consideration.
Similarly for using AI. Sometimes it's just a way to get more information.
It seems to me that Agile methodology did a similar thing. The idea of Agile is not to skip the understanding of requirements, the design, the upfront reasoning, and the due diligence seen in waterfall methods. Yet it sometimes turned into laziness that looked like faster incremental progress.
I think the quality of software has gotten worse over time, with "unknown error occurred, try again later" becoming more common, and I wonder if the root causes include jumping into building things without properly thinking through the customer problem, the requirements, and/or the design.
I may easily be wrong, would like to hear corrective thoughts.
With things like TikTok, I've learned that we need to break bigger works up into smaller, digestible pieces.
Another issue is that there is already too much content for people to read or consume (a problem independent of AI).
Yeah, it's about "effective use of AI as a tool"
Using your research-paper example: I would read the paper, but then ask an AI tool specific questions about the work, frequently in new chats. Then at the end I might ask it to implement my description of the paper. I guess it's your "debate with me" conclusion; the only difference is I would try to have multiple short conversations.
A good example is "birthday wishes":
https://m.youtube.com/watch?v=2IYqhdJuRfU&t=5m47s
(AutoCorrect, AutoComplete - generate? AutoCongratulate? How much is "okay"?)
But if it's not, it's insulting to the poster, and if it is, then who cares if people are engaging with the post.
But notice the irony: I used AI exactly as I advocate. It handled the horizontal spread (syntax), while I rigorously enforced the vertical depth (the architectural logic). The 'taste' is entirely mine.
Thank you to Arainach and cableshaft for engaging with the actual substance. Dismissing a core argument because you pattern-matched an 'em-dash' is exactly the shallow thinking this post warns about.
TBD if they stay up, I suppose.
The stories I hear from various white collar professions not related to tech are... interesting, to say the least. There is a whole lot of unsanctioned shadow IT going on regardless of policy.
The ability to be more selective about where I attend deeply, while leveraging fast shallow learning to complete other tasks... That seems like a potential benefit and a nice choice to have in the toolbox.
If the baseline knowledge drops too low, we cannot tell when the AI is being lazy or wrong.
If you don't intrinsically know what 'right' looks like, AI simply helps you build the wrong thing faster. This internal compass is exactly what I meant by 'taste' in the original post.
I've been using Gemini chat for this, and specifically only giving it my code via copy paste. This sounds Luddite but actually it's been pretty interesting. I can show it my couple "core" library files and then ask it to do the next thing. I can inspect the output and retool it to my satisfaction, then slot it in to my program, or use it as an example to then hand code it.
This very intentional "me being the bridge" between AI and the code has helped so much in getting speed out of AI but then not letting it go insane and write a ton of slop.
And not to toot my own horn too much, but I think AI accelerates people more the wider their expertise is even if it's not incredibly deep. Eg I know enough CSS to spot slop and correct mistakes and verify the output. But I HATE writing CSS. So the AI and I pair really well there and my UIs look way better than they ever have.