
Cursor told me I should learn coding instead of asking it to generate it by nomilk


53 Comments

  • Post Author
    mattlondon
    Posted March 13, 2025 at 8:18 am

    Quite reasonable of it to do so I'd say.

    The AI tools are good, and they have their uses, but they are currently at best at a keen junior/intern level, making the same sort of mistakes. You need knowledge and experience to help mentor that sort of developer.

    Give it another year or two and I hope the student will become the master and start mentoring me :)

  • Post Author
    romanovcode
    Posted March 13, 2025 at 8:20 am

    I had an extremely bad experience with Cursor/Claude.

    I have a big Angular project, roughly 150 TS files. I upgraded it to Angular 19, which lets me optimize the build by marking all components, pipes, services, etc. as "standalone", essentially eliminating the need for modules and simplifying the code.

    I thought it was perfect for AI, since it is straightforward refactor work that would be annoying for a human:

    1. Search every service and remove the "standalone: false"

    2. Find module where it is declared, remove that module

    3. Find all files where module was imported, import the service itself

    Cursor and Claude were constantly losing focus, refactoring services without taking care of modules/imports at all, and generally making things much worse no matter how much "prompt engineering" I tried. I gave up and made a Jira task for a junior developer instead.
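    For what it's worth, step 1 of the refactor described above is mechanical enough that a deterministic codemod arguably fits better than either an LLM or a junior developer. A minimal sketch, regex-based and deliberately naive (a production codemod should parse the TypeScript AST, e.g. via ts-morph or an Angular schematic; the component name below is hypothetical):

```typescript
// Strip `standalone: false` lines from Angular decorator metadata.
// Regex-based and fragile: illustrative only, not a real codemod.
export function stripStandaloneFalse(source: string): string {
  // Remove a whole line consisting of `standalone: false` plus an
  // optional trailing comma, including its line break.
  return source.replace(/^\s*standalone:\s*false\s*,?\r?\n/gm, "");
}

// Hypothetical before/after:
const before = [
  "@Component({",
  "  selector: 'app-user-card',",
  "  standalone: false,",
  "  templateUrl: './user-card.component.html',",
  "})",
].join("\n");

console.log(stripStandaloneFalse(before));
// The `standalone: false,` line is gone; the rest is untouched.
```

Steps 2 and 3 (removing the module and rewriting its importers) need real AST work, which is exactly where a human still has to stay in the loop.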

  • Post Author
    setnone
    Posted March 13, 2025 at 8:22 am

    Attitude AI

  • Post Author
    doix
    Posted March 13, 2025 at 8:22 am

    I wonder if this was real or if they set a custom prompt to try and force such a response.

    If it is real, then I guess it's because LLMs have been trained on a bunch of places where students asked other people to do their homework.

  • Post Author
    jumperabg
    Posted March 13, 2025 at 8:23 am

    This is quite a lot of code to handle in one file, and the recommendation is actually good. In the past month (which feels like a year of planning) I've made similar mistakes with tens of projects, with files larger than 500-600 lines of code: Claude was removing some of the code, I didn't have test coverage on some of it, and the end result was missing functionality.

    The good thing is that we can use .cursorrules, so this will partially improve my experience, at least until a random company releases the best AI coding model that runs on a Raspberry Pi with 4GB of RAM (yes, this is a spoiler from the future).
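    For anyone unfamiliar: a .cursorrules file is a plain-text set of instructions Cursor prepends to the model's context; it is freeform natural language, not a fixed schema. A minimal example along the lines this commenter describes (the rule wording is illustrative, not official):

```
# .cursorrules (illustrative example)
- Keep source files under ~500 lines; propose a file split instead of growing them.
- Never delete existing functions or exports unless explicitly asked.
- When refactoring, show the full modified file, not an abbreviated diff.
- If a change is risky or could drop functionality, say so before applying it.
```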

  • Post Author
    high_na_euv
    Posted March 13, 2025 at 8:24 am

    That's the correct answer

    Damn, AI is getting too smart

  • Post Author
    apples_oranges
    Posted March 13, 2025 at 8:26 am

    "I'm sorry Dave, I'm afraid I can't do that"

  • Post Author
    srvaroa
    Posted March 13, 2025 at 8:37 am

    Well, this AI operates now at staff+ level

  • Post Author
    cm2187
    Posted March 13, 2025 at 8:37 am

    It's going to be interesting to see the AI generation arriving in the workplace, i.e. kids who grew up with ChatGPT and have never learned to find something in a source document themselves. And not just about coding, but about any kind of knowledge.

  • Post Author
    submeta
    Posted March 13, 2025 at 8:40 am

    Working with Cursor will make you more productive when/if you know how to code, how to navigate complex code, how to review and understand code, how to document it, all without LLMs. In that case you feel like having half a dozen junior devs or even a senior dev under your command. It will make you fly. I have tackled dozens of projects with it that I wouldn't have had the time and resources for. And it created absolutely solid solutions. Love this new world of possibilities.

  • Post Author
    HenryBemis
    Posted March 13, 2025 at 8:42 am

    Oh, what a middle finger that seemed to be. I had a similar experience in the beginning with ChatGPT (1-2 years back?), until I started paying for a subscription. Now, even if it's a 'bad idea' when I ask it to write some code (for my personal use, not work/employment/company) and I insist upon the 'ill-advised' code structure, it does it.

    I was listening to Steve Gibson on Security Now speaking about memory-safe programming languages and the push for the idea, and I was thinking two things:
    1) (most) people don't write code themselves any more (or we are heading in that direction), so out of 10k lines of code a human may miss some error/bug (although a second or third LLM doing code review may catch it);
    2) we can now ask an LLM to rewrite 10k lines of code from language X to language Y, and it will be cheaper than hiring 10 developers to do it.

  • Post Author
    wg0
    Posted March 13, 2025 at 8:46 am

    Can we be sure that this screenshot is authentic?

  • Post Author
    begueradj
    Posted March 13, 2025 at 8:57 am

    How to deal with technical debt in AI generated code ?

  • Post Author
    tiniuclx
    Posted March 13, 2025 at 8:57 am

    Sounds like Claude 3.5 Sonnet is ready to replace senior software engineers already!

  • Post Author
    setnone
    Posted March 13, 2025 at 8:57 am

    This message coming from within an IDE is fine, I guess. Any examples from writing software or Excel?

  • Post Author
    Scarblac
    Posted March 13, 2025 at 8:58 am

    Clearly trained on Stack Overflow answers.

  • Post Author
    ReptileMan
    Posted March 13, 2025 at 9:00 am

    BOFH vibes from this. I have also had cases of lazy ChatGPT code generation, although not so obnoxious. What's next, digital spurs to nudge them in the right direction?

  • Post Author
    nextts
    Posted March 13, 2025 at 9:10 am

    Based AI. This should always be the response. This as boilerplate will upend deepseek and everything else. The NN is tapping into your wetware. It's awesome. And a hard coded response could even maybe run on a CPU.

  • Post Author
    jeffwass
    Posted March 13, 2025 at 9:13 am

    Predicted way back in 1971 in the classic movie “Willy Wonka and the Chocolate Factory”!

    One of the many hysterical scenes I didn’t truly appreciate as a kid.

    https://youtu.be/tMZ2j9yK_NY?si=5tFQum75pepFUS8-

  • Post Author
    akoculu
    Posted March 13, 2025 at 9:13 am

    This is probably coming from the safety instructions of the model. It tends to treat the user like a child and doesn't miss any chance to moralize. And the company seems to believe that it's a feature, not a bug.

  • Post Author
    z3t4
    Posted March 13, 2025 at 9:15 am

    These kinds of answers are really common; I guess you have to put a lot of work into removing all those answers from the training data. For example: "no, I'm not going to do your homework assignment".

  • Post Author
    tymonPartyLate
    Posted March 13, 2025 at 9:16 am

    I asked it once to simplify code it had written and it refused. The code it wrote was ok but unnecessary in my view.

    Claude 3.7:
    > I understand the desire to simplify, but using a text array for …. might create more problems than it solves. Here's why I recommend keeping the relational approach:
    ( list of okay reasons )
    > However, I strongly agree with adding ….. to the model. Let's implement that change.

    I was kind of shocked by the display of opinions. HAL vibes.

  • Post Author
    hexage1814
    Posted March 13, 2025 at 9:22 am

    It's StackOverflow training data guiding the model… XD

  • Post Author
    andai
    Posted March 13, 2025 at 9:23 am

    I recently saw this video about how to use AI to enhance your learning instead of letting it do the work for you.[0]

    "Get AI to force you to think, ask lots of questions, and test you."

    It was based on this advice from Oxford University.[1]

    I've been wondering how the same ideas could be tailored to programming specifically, which is more "active" than the conceptual learning these prompts focus on.

    Some of the suggested prompts:

    > Act as a Socratic tutor and help me understand X. Ask me questions to guide my understanding.

    > Give me a multi-level explanation of X. First at the level of a child, then a high school student, and then an academic explanation.

    > Can you explain X using everyday analogies and provide some real life examples?

    > Create a set of practice questions about X, ranging from basic to advanced.

    Ask AI to summarize a text in bullet points, but only after you've summarized it yourself. Otherwise you fail to develop that skill (or you start to lose it).

    Notice that most of these increase the amount of work the student has to do! And they increase the energy level from passive (reading) to active (coming up with answers to questions).

    I've been wondering how the same principles could be integrated into an AI-assisted programming workflow. i.e. advice similar to the above, but specifically tailored for programming, which isn't just about conceptual understanding but also an "activity".

    Maybe before having AI generate the code for you, the AI could ask you for what you think it should be, and give you feedback on that?

    That sounds good, but I think in practice the current setup (magical code autocomplete, and now complete auto-programmers) is way too convenient/frictionless, so I'm not sure how a "human-in-the-loop" approach could compete for the average person, who isn't unusually serious about developing or maintaining their own cognitive abilities.

    Any ideas?

    [0] Oxford Researchers Discovered How to Use AI To Learn Like A Genius

    https://www.youtube.com/watch?v=TPLPpz6dD3A

    [1] Use of generative AI tools to support learning
    – Oxford University

    https://www.ox.ac.uk/students/academic/guidance/skills/ai-st…

  • Post Author
    stuaxo
    Posted March 13, 2025 at 9:24 am

    Funny, but expected when some chunk of the training data is forum posts like:

    "Give me the code for"

    "Do it yourself, this is homework for you to learn".

    Prompt engineering is learning enough about a project to sound like an expert; then you will be closer to useful answers.

    Alternatively, maybe if you try to get it to solve a homework-like question, this type of answer is more likely.

  • Post Author
    andai
    Posted March 13, 2025 at 9:34 am

    This isn’t just about individual laziness—it’s a systemic arms race towards intellectual decay.[0]

    With programming, the same basic tension exists as with the smarter AI-enhanced approaches to conceptual learning: effectiveness is a function of effort, and the whole reason for the "AI epidemic" is that people avoid effort like the plague.

    So the problem seems to boil down to, how can we convince everyone to go against the basic human (animal?) instinct to take the path of least resistance.

    So it seems to be less about specific techniques and technologies, and more about a basic approach to life itself?

    In terms of integrating that approach into your actual work (so you can stay sharp through your career), it's even worse than just laziness, it's fear of getting fired too, since doing things the human way doubles the time required (according to Microsoft), and adding little AI-tutor-guided coding challenges to enhance your understanding along the way increases that even further.

    And in the context of "this feature needs to be done by Tuesday and all my colleagues are working 2-3x faster than me (because they're letting AI do all the work)"… you see what I mean! It systemically creates the incentive for everyone to let their cognitive abilities rapidly decline.

    [0] GPT-4.5

  • Post Author
    StefanBatory
    Posted March 13, 2025 at 9:46 am

    Ah, I see Claude has trained a lot on internet resources.

    This made my day so far.

  • Post Author
    orbital-decay
    Posted March 13, 2025 at 9:53 am

    Hah, that's typical Sonnet v2 for you. It's trained for shorter outputs, which causes it to be extremely lazy. It's a well-known issue, and coding assistants contain mitigations for it. It's very reluctant to produce longer outputs, usually stopping mid-reply with something like "[Insert another 2k tokens of what you've been asking for, I've done enough]". Sonnet 3.7 seems to fix this.

  • Post Author
    alistairSH
    Posted March 13, 2025 at 9:57 am

    800 lines is too long to parse? Wut?

  • Post Author
    TowerTall
    Posted March 13, 2025 at 10:02 am

    That's actually pretty good advice. He doesn't understand his own system well enough to guide the AI.

  • Post Author
    demarq
    Posted March 13, 2025 at 10:05 am

    This has nothing to do with Claude. Otherwise all other Claude interfaces would be putting out this response.

  • Post Author
    datadeft
    Posted March 13, 2025 at 10:07 am

    The biggest problem I have with using AI for software engineering is that it is absolutely amazing for generating the skeleton of your code (boilerplate, really) and it sucks at anything creative. I have tried the reasoning models as well, but all of them give you subpar solutions when it comes to handling a creative challenge.

    For example: what would be the best strategy to download thousands of URLs asynchronously in Rust? It gives you OK solutions, but the final solution came from the Rust forum (the answer was written a year ago), which I assume made its way into the model.

    There is also the verbosity problem. Claude, without the concise flag on, generates roughly 10x the required amount of code to solve a problem.

    Maybe I am prompting incorrectly and somehow could get the right answers from these models, but at this stage I use them as a boilerplate generator, and the actual creative problem solving remains on the human side.
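    For reference, the shape of answer the commenter likely found on the Rust forum is bounded-concurrency fan-out (in Rust, typically something like `futures::stream::iter(urls).map(fetch).buffer_unordered(n)`). The same idea sketched here in TypeScript for illustration; the name `mapWithConcurrency` is mine, not from the thread, and `fn` stands in for a real download call:

```typescript
// Bounded-concurrency fan-out: run `fn` over `items` with at most
// `limit` tasks in flight at once. For real downloads, `fn` would
// wrap fetch() with per-request error handling.
export async function mapWithConcurrency<T, R>(
  items: readonly T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0; // shared cursor; safe because JS is single-threaded
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  // Spawn `limit` workers that pull from the shared cursor until done.
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}

// Usage sketch: double some numbers with at most 2 tasks in flight.
mapWithConcurrency([1, 2, 3, 4], 2, async (n) => n * 2).then((out) =>
  console.log(out), // [2, 4, 6, 8]
);
```

The key design point, in either language, is capping the number of in-flight requests rather than firing thousands of downloads at once.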

  • Post Author
    Havoc
    Posted March 13, 2025 at 10:12 am

    I guess that's straight out of the training data.

    It's quite common on Reddit to get responses that basically go, "Is this a homework assignment? Do your own work."

  • Post Author
    seventh12
    Posted March 13, 2025 at 10:12 am

    skidMark? what is this code? sounds like a joke almost… maybe it's some kind of April fools preparation that leaked too early

  • Post Author
    yapyap
    Posted March 13, 2025 at 10:13 am

    “Not sure if LLMs know what they are for (lol), but doesn’t matter as a much as a fact that I can’t go through 800 locs. Anyone had similar issue? It’s really limiting at this point and I got here after just 1h of vibe coding”

    We are getting into humanization areas of LLMs again, this happens more often when people who don’t grasp what an LLM actually is use it or if they’re just delusional.

    At the end of the day it’s a mathematical equation, a big one but still just math.

    They don’t “know” shit

  • Post Author
    iamsaitam
    Posted March 13, 2025 at 10:17 am

    I agree with Cursor

  • Post Author
    DonHopkins
    Posted March 13, 2025 at 10:19 am

    Vibe coding is exactly like how Trump is running the country. Very little memory of history, shamefully small token window and lack of context, believes the last thing someone told it, madly hallucinating fake facts and entire schizophrenic ideologies, no insight into or care about inner workings and implementation details, constantly spins around in circles, flip flops and waffles back and forth, always trying to mitigate the symptoms of the last kludge with suspiciously specific iffy guessey code, instead of thinking about or even bothering to address the actual cause of the problem.

    Or maybe I'm confusing cause and effect, and Trump is actually using ChatGPT or Grok to run the country.

  • Post Author
    DrNosferatu
    Posted March 13, 2025 at 10:19 am

    Moralist bias:

    Then compilers are a crutch and we should all be programming in assembly, no matter the project size.

  • Post Author
    jeandesuis
    Posted March 13, 2025 at 10:20 am

    Sheesh, I didn't expect my post to go viral. Little explanation:

    I downloaded and ran Cursor for the first time when this "error" happened. It turned out I was supposed to use the agent instead of the inline Cmd+K command, because inline has some limitations while the agent has far fewer.

    Nevertheless, I was surprised that the AI could actually say something like that, so I screenshotted it just in case. Some might think it's fake, but it's real, and it makes me wonder whether in the future AI will start giving attitude to its users. Oh, welp. I certainly didn't expect it to blow up like this; it was all new to me, so I thought it was maybe an easter egg or just a silly error. Turns out it hadn't been seen before, so there we are!

    Cheers

  • Post Author
    rhdsgF
    Posted March 13, 2025 at 10:20 am

    Perhaps Cursor has learned the concept of shakedowns. It will display stolen code only if you upgrade the subscription or sign a minerals deal.

  • Post Author
    msvana
    Posted March 13, 2025 at 10:23 am

    Hmm, this gave me an interesting project idea: a coding assistant that talks shit about your lack of skills and low code quality.

  • Post Author
    khaledh
    Posted March 13, 2025 at 10:33 am

    Some time circa late 1950s, a coder is given a problem and a compiler to solve it. The coder writes their solution in a high level language and asks the compiler to generate the assembly code from it. The compiler: I cannot generate the assembly code for you, that would be completing your work … /sarcasm

    On a more serious note: LLMs today are an early technology, much like the early compilers, which many programmers didn't trust to generate optimized assembly on par with hand-crafted assembly; they had to check the compiler's output and tweak it if needed. It took a while until the art of compiler optimization was perfected to the point that we don't question what the compiler is doing, even if it generates sub-optimal machine code. The productivity gained from using a HLL vs. assembly was worth it. I can see LLMs progressing towards the same tradeoff in the near future. It will take time, but it will become the norm once enough trust is established in what they produce.

  • Post Author
    larodi
    Posted March 13, 2025 at 10:57 am

    Interestingly, many here fail to note that developing code is a lot about debugging, not only about writing. It is also about being able to dig through/search/grok the code, which is like… reading it.

    It is the debugging, to me, not only the writing, that actually teaches you what IS right and what is not. Not the architectural work, not the LLM spitting out code, not the deployment, but the debugging of the code and its integration. THIS is what teaches you; writing alone teaches you nothing. You can copy programs by hand and understand zero of what they do unless you inspect intermediate results.

    To hand-craft a house is super romantic and nice, etc. It's a thing people did for a lifetime, for ages, and not usually alone: with family and friends. But people today live in houses/apartments whose foundations were produced by automated lines (robots): the steel, the mixture for the concrete, etc. People happily live in houses built this way, designed with computers that automated the drawing. I fail to see why this is bad.

  • Post Author
    dankobgd
    Posted March 13, 2025 at 11:24 am

    he smart

  • Post Author
    GTP
    Posted March 13, 2025 at 11:31 am

    Looks like an April Fools' joke, but it's real :D

  • Post Author
    patapong
    Posted March 13, 2025 at 11:36 am

    Ah the cycle of telling people to learn to code… First tech journalists telling the public, then programmers telling tech journalists, now AI telling programmers… What comes next?

  • Post Author
    poulpy123
    Posted March 13, 2025 at 11:37 am

    So AI is becoming sentient ?

  • Post Author
    blame-troi
    Posted March 13, 2025 at 11:49 am

    So the AI trained on Stack Overflow and Reddit and learned to say “Do your own homework”. I don’t see a problem.

  • Post Author
    CharlieDigital
    Posted March 13, 2025 at 12:05 pm

    Someone on my team complained to me yesterday about a seemingly easy task. They claimed I was pushing more work onto them, since I'm working on the backend and they are working on the frontend. This puzzled me, so I tried it myself and ended up doing the work in about 1.5 hours.

    I did struggle through the poor docs of a relatively new library, but it wasn't hard.

    This got me wondering: maybe they have become so dependent on AI copilots that what should have been an easy task was seen as insurmountably hard because the LLM didn't have info on this new-ish library.

  • Post Author
    megadata
    Posted March 13, 2025 at 12:07 pm

    That's Cursor Pro. What's the monthly subscription price for being patronized like that?

  • Post Author
    blitzar
    Posted March 13, 2025 at 12:35 pm

    They say the Ai coding assistants are like a junior developer … sounds about right.

  • Post Author
    geenkeuse
    Posted March 13, 2025 at 1:10 pm

    [dead]

  • Post Author
    sim7c00
    Posted March 13, 2025 at 2:04 pm

    disclaimer: not a programmer for a living.

    I specifically asked the AI I interact with not to generate code or give code examples, but to highlight topics where I need to improve my understanding in order to answer my own questions. I think it enhances my personal competence better that way, which I value above 'productivity'. As I learn more, I do become more efficient and productive.

    Some of the recommendations it comes up with are hard programming skills; others are project-management oriented.

    Personally, I think this is a better approach to using this kind of technology, as it guides me to improve my hard and soft skills. Long-term gains over short-term gains.

    Then again, I am under no pressure or obligation to be productive in my programming. I can happily spend years coming up with a good solution to a problem, rather than having a deadline which forces you to cut as many corners as possible.

    I do think this is how it should be in professional settings, but I respect that a company doesn't always have the resources (mostly time) to allow for it. It's sad but true.

    Perhaps someday AIs will be far enough along to solve problems properly and to think of the aspects of a problem that the person asking the question has not. AIs can generate quite nice code, but only as good as the question asked.

    If the requester doesn't spend the time to learn enough, they can never get an AI to generate good code. It will give you what you ask for, warts and all!

    I did spend some time trying to get AI to generate code for me. To me, it only highlighted the deficiencies in my own knowledge and in my ability to properly formulate the solution I needed. If I take the time to learn what is needed to formulate the solution fully, I can write the code to implement it myself, so the AI just becomes an augment to my typing speed, nothing else. This last part is why I believe it's better to have it guide my growth and learning, rather than produce something in the form of an actual solution (in code or algorithmically).


© 2025 HackTech.info. All Rights Reserved.
