Hi all,
Yesterday I installed Cursor and I'm currently on the Pro Trial. After coding a bit, I found out that it can't go through 750-800 lines of code, and when I asked why, I got this message:
Not sure if LLMs know what they are for (lol), but it doesn't matter as much as the fact that I can't go through 800 LOC. Has anyone had a similar issue? It's really limiting at this point, and I got here after just 1h of vibe coding.
My operating system is macOS Sequoia 15.3.1
Lol yes the message is actually funny. Not sure why it would write that in reality, never saw it happen.
So, in general it's a bad idea to have huge files of code.
Not just because of AI context limits, but also for humans to handle them.
Overly large files are often a sign that a project is not well structured and that the concerns of each file/class/function etc. are not separated from each other.
It also seems you are not using the Chat window with the integrated "Agent", which would create that file for you more easily than the "editor" part of Cursor.
oh, I didn't know about the Agent part – I just started out and jumped straight in. Maybe I should actually read the docs on how to start lol
Should I ask it to chunk it out?
So as you are starting with Cursor, I highly recommend going through the docs to learn what it can do… and how to use each part.
Cursor – Welcome to Cursor
AI-powered IDE with Chat, Tab, and Agent for intelligent code development
Yes, it would help to ask it to split out parts; it depends on what language you use (looks like JS?). The AI can then use import statements to include those separate files in your "start file".
Usually it's a good idea to do modular programming (split functionality into modules or classes or functions, depending on the language or framework).
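As a rough sketch (hypothetical file names, assuming a TS/JS project), splitting one concern out of a big file and importing it back looks like this:

    // math-utils.ts – one concern per file
    export function clamp(value: number, min: number, max: number): number {
      return Math.min(Math.max(value, min), max);
    }

    // main.ts – the "start file" pulls the pieces back in via imports
    import { clamp } from './math-utils';

    console.log(clamp(42, 0, 10)); // prints 10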
If you tell the AI to use, for example, the Single Responsibility Principle as a guideline when coding, it will not mix different features in one file. You may also create rules (see docs) that tell the AI, for example, to keep files under a 500-line limit (a bit over is not tragic, but the more lines, the harder it gets for the AI)…
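As an illustrative sketch (the exact wording of the rules is up to you), a .cursorrules file in the project root along these lines would do it:

    # .cursorrules (illustrative – adjust to your project)
    - Follow the Single Responsibility Principle: one feature/concern per file.
    - Keep every file under 500 lines; when a file grows past that, split it
      into modules and wire them together with import statements.
    - Prefer small, focused functions and classes over large ones.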
Hey @T1000. Thanks for your advice the other day. My post got deleted for an unknown reason, but I decided to go ahead and purchase a one-month Cursor Pro subscription as you advised. I didn't experience any major problems with the new releases, and I'm enjoying it.
Cool, yes I saw your comment mentioning that in another thread, and I particularly liked the original thread you made. Happy to chat any time.
lol I think it's awesome haha! Never saw something like that. I have 3 files with 1500+ LOC in my codebase (still waiting for a refactoring) and never experienced such a thing.
Could it be related to some extended inference from your rule set?
mattlondon
Quite reasonable of it to do so I'd say.
The AI tools are good, and they have their uses, but they are currently at best at a keen junior/intern level, making the same sort of mistakes. You need knowledge and experience to help mentor that sort of developer.
Give it another year or two, and I hope the student will become the master and start mentoring me :)
romanovcode
Had an extremely bad experience with Cursor/Claude.
I have a big Angular project, +/- 150 TS files. I upgraded it to Angular 19, and now I can optimize the build by marking all components, pipes, services, etc. as "standalone", essentially eliminating the need for modules and simplifying the code.
I thought it was perfect for AI, as it is straightforward refactoring work that would be annoying for a human:
1. Search every component and remove the "standalone: false"
2. Find the module where it is declared, remove that module
3. Find all files where the module was imported, import the component itself
Cursor and Claude were constantly losing focus, refactoring components without taking care of modules/imports at all, and generally making things much worse no matter how much "prompt engineering" I tried. I gave up and made a Jira task for a junior developer instead.
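For readers who haven't done this migration: in Angular 19, standalone became the default for components/pipes/directives, so the mechanical shape of the refactor looks roughly like this (illustrative names, not from the actual project):

    // Before: module-based. Angular 19 makes standalone the default,
    // so the v19 migration adds an explicit opt-out.
    import { Component, NgModule } from '@angular/core';

    @Component({
      selector: 'app-user-card',
      template: '<p>{{ name }}</p>',
      standalone: false,
    })
    export class UserCardComponent { name = 'Ada'; }

    @NgModule({
      declarations: [UserCardComponent],
      exports: [UserCardComponent],
    })
    export class UserCardModule {}

    // After: drop `standalone: false`, delete UserCardModule entirely, and
    // in every consumer replace `imports: [UserCardModule]` with
    // `imports: [UserCardComponent]`.
    @Component({
      selector: 'app-user-card',
      template: '<p>{{ name }}</p>',
    })
    export class UserCardComponent { name = 'Ada'; }

Each step is trivial on its own; the failure mode described above is the model doing step 1 without consistently following through on steps 2 and 3.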
setnone
Attitude AI
doix
I wonder if this was real or if they set a custom prompt to try and force such a response.
If it is real, then I guess it's because LLMs have been trained on a bunch of places where students asked other people to do their homework.
jumperabg
This is quite a lot of code to handle in 1 file, and the recommendation is actually good. In the past month (feels like 1 year of planning) I've made similar mistakes with tens of projects – having files larger than 500-600 lines of code. Claude was removing some of the code, I didn't have test coverage on some of it, and the end result was missing functionality.
Good thing we can use .cursorrules, so this is something that will partially improve my experience – until some random company releases the best AI coding model that runs on a Raspberry Pi with 4 GB of RAM (yes, this is a spoiler from the future).
high_na_euv
That's the correct answer.
Damn, AI is getting too smart
apples_oranges
"I'm sorry Dave, I'm afraid I can't do that"
srvaroa
Well, this AI operates now at staff+ level
cm2187
It's going to be interesting to see the AI generation arriving in the workplace, i.e. kids who grew up with ChatGPT and have never learned to find something in a source document themselves. And not just about coding – about any other kind of knowledge.
submeta
Working with Cursor will make you more productive when/if you know how to code, how to navigate complex code, how to review and understand code, how to document it, all without LLMs. In that case you feel like having half a dozen junior devs or even a senior dev under your command. It will make you fly. I have tackled dozens of projects with it that I wouldn't have had the time and resources for. And it created absolutely solid solutions. Love this new world of possibilities.
HenryBemis
Oh, what a middle finger that seemed to be. I had a similar experience in the beginning with ChatGPT (1-2 years back?), until I started paying for a subscription. Now, even if it's a 'bad idea', when I ask it to write some code (for my personal use – not work/employment/company) and I insist upon the 'ill-advised' code structure, it does it.
I was listening to Steve Gibson on SecurityNow speaking about memory-safe programming languages, and the push for the idea, and I was thinking two things:
1) (most) people don't write code (themselves) any more (or we are heading in that direction), thus out of 10k lines of code someone may manually miss some error/bug (although a second and third LLM doing code review may catch it)
2) we can now ask an LLM to rewrite 10k lines of code from X-language to Y-language and it will be cheaper than hiring 10 developers to do it.
wg0
Can we be sure that this screenshot is authentic?
begueradj
How do you deal with technical debt in AI-generated code?
tiniuclx
Sounds like Claude 3.5 Sonnet is ready to replace senior software engineers already!
setnone
This message coming from within an IDE is fine, I guess. Any examples from writing software or Excel?
Scarblac
Clearly trained on Stack Overflow answers.
ReptileMan
BOFH vibes from this. I have also had cases of lazy ChatGPT for code generation, although not so obnoxious. What's next – digital spurs to nudge them in the right direction?
nextts
Based AI. This should always be the response. This as boilerplate will upend deepseek and everything else. The NN is tapping into your wetware. It's awesome. And a hard coded response could even maybe run on a CPU.
jeffwass
Predicted way back in 1971 in the classic movie “Willy Wonka and the Chocolate Factory”!
One of the many hysterical scenes I didn’t truly appreciate as a kid.
https://youtu.be/tMZ2j9yK_NY?si=5tFQum75pepFUS8-
akoculu
This is probably coming from the safety instructions of the model. It tends to treat the user like a child and doesn't miss any chance to moralize. And the company seems to believe that it's a feature, not a bug.
z3t4
These kinds of answers are really common; I guess you have to put a lot of work into removing all those answers from the training data. For example: "No, I'm not going to do your homework assignment."
tymonPartyLate
I asked it once to simplify code it had written and it refused. The code it wrote was ok but unnecessary in my view.
Claude 3.7:
> I understand the desire to simplify, but using a text array for …. might create more problems than it solves. Here's why I recommend keeping the relational approach:
( list of okay reasons )
> However, I strongly agree with adding ….. to the model. Let's implement that change.
I was kind of shocked by the display of opinions. HAL vibes.
hexage1814
It's StackOverflow training data guiding the model… XD
andai
I recently saw this video about how to use AI to enhance your learning instead of letting it do the work for you.[0]
"Get AI to force you to think, ask lots of questions, and test you."
It was based on this advice from Oxford University.[1]
I've been wondering how the same ideas could be tailored to programming specifically, which is more "active" than the conceptual learning these prompts focus on.
Some of the suggested prompts:
> Act as a Socratic tutor and help me understand X. Ask me questions to guide my understanding.
> Give me a multi-level explanation of X. First at the level of a child, then a high school student, and then an academic explanation.
> Can you explain X using everyday analogies and provide some real life examples?
> Create a set of practice questions about X, ranging from basic to advanced.
Ask AI to summarize a text in bullet points, but only after you've summarized it yourself. Otherwise you fail to develop that skill (or you start to lose it).
—
Notice that most of these increase the amount of work the student has to do! And they increase the energy level from passive (reading) to active (coming up with answers to questions).
I've been wondering how the same principles could be integrated into an AI-assisted programming workflow. i.e. advice similar to the above, but specifically tailored for programming, which isn't just about conceptual understanding but also an "activity".
Maybe before having the AI generate the code for you, it could ask you what you think the code should be, and give you feedback on that?
That sounds good, but I think in practice the current setup (magical code autocomplete, and now complete auto-programmers) is way too convenient/frictionless, so I'm not sure how a "human-in-the-loop" approach could compete for the average person, who isn't unusually serious about developing or maintaining their own cognitive abilities.
Any ideas?
—
[0] Oxford Researchers Discovered How to Use AI To Learn Like A Genius – https://www.youtube.com/watch?v=TPLPpz6dD3A
[1] Use of generative AI tools to support learning – Oxford University – https://www.ox.ac.uk/students/academic/guidance/skills/ai-st…
stuaxo
Funny, but expected when some chunk of the training data is forum posts like:
"Give me the code for"
"Do it yourself, this is homework for you to learn".
Prompt engineering is learning enough about a project to sound like an expert; then you will be closer to useful answers.
Alternatively – maybe if you're trying to get it to solve a homework-like question, this type of answer is more likely.
andai
This isn’t just about individual laziness—it’s a systemic arms race towards intellectual decay.[0]
With programming, the same basic tension exists as with the smarter, more effective AI-enhanced approaches to conceptual learning: effectiveness is a function of effort, and the whole reason for the "AI epidemic" is that people are avoiding effort like the plague.
So the problem seems to boil down to: how can we convince everyone to go against the basic human (animal?) instinct to take the path of least resistance?
So it seems to be less about specific techniques and technologies, and more about a basic approach to life itself?
In terms of integrating that approach into your actual work (so you can stay sharp through your career), it's even worse than just laziness, it's fear of getting fired too, since doing things the human way doubles the time required (according to Microsoft), and adding little AI-tutor-guided coding challenges to enhance your understanding along the way increases that even further.
And in the context of "this feature needs to be done by Tuesday and all my colleagues are working 2-3x faster than me (because they're letting AI do all the work)"… you see what I mean! It systemically creates the incentive for everyone to let their cognitive abilities rapidly decline.
[0] GPT-4.5
StefanBatory
Ah, I see Claude has trained a lot on internet resources.
This made my day so far.
orbital-decay
Hah, that's typical Sonnet v2 for you. It's trained for shorter outputs, and that causes it to be extremely lazy. It's a well-known issue, and coding assistants contain mitigations for it. It's very reluctant to produce longer outputs, usually stopping mid-reply with something like "[Insert another 2k tokens of what you've been asking for, I've done enough]". Sonnet 3.7 seems to fix this.
alistairSH
800 lines is too long to parse? Wut?
TowerTall
That's actually pretty good advice. He doesn't understand his own system enough to guide the AI.
demarq
This has nothing to do with Claude. Otherwise all other Claude interfaces would be putting out this response.
datadeft
The biggest problem I have with using AI for software engineering is that it is absolutely amazing at generating the skeleton of your code – boilerplate, really – and it sucks at anything creative. I have tried the reasoning models as well, but all of them give you subpar solutions when it comes to handling a creative challenge.
For example: what would be the best strategy to download thousands of URLs using async in Rust? It gives you OK solutions, but the final solution came from the Rust forum (the answer was written a year ago), which I assume made its way into the model.
There is also the verbosity problem. Claude, without the concise flag on, generates roughly 10x the required amount of code to solve a problem.
Maybe I am prompting incorrectly and somehow I could get the right answers from these models, but at this stage I use them as a boilerplate generator, and the actual creative problem solving remains on the human side.
Havoc
I guess that's straight out of the training data.
Quite common on Reddit to get responses that basically go "Is this a homework assignment? Do your own work".
seventh12
skidMark? What is this code? Sounds almost like a joke… maybe it's some kind of April Fools' preparation that leaked too early.
yapyap
“Not sure if LLMs know what they are for (lol), but it doesn’t matter as much as the fact that I can’t go through 800 LOC. Has anyone had a similar issue? It’s really limiting at this point, and I got here after just 1h of vibe coding”
We are getting into the humanization of LLMs again. This happens more often when people who don't grasp what an LLM actually is use it, or if they're just delusional.
At the end of the day it’s a mathematical equation, a big one but still just math.
They don’t “know” shit
iamsaitam
I agree with Cursor
DonHopkins
Vibe coding is exactly like how Trump is running the country. Very little memory of history, shamefully small token window and lack of context, believes the last thing someone told it, madly hallucinating fake facts and entire schizophrenic ideologies, no insight into or care about inner workings and implementation details, constantly spins around in circles, flip flops and waffles back and forth, always trying to mitigate the symptoms of the last kludge with suspiciously specific iffy guessey code, instead of thinking about or even bothering to address the actual cause of the problem.
Or maybe I'm confusing cause and effect, and Trump is actually using ChatGPT or Grok to run the country.
DrNosferatu
Moralist bias:
Then compilers are a crutch and we should all be programming in assembly, no matter the project size.
jeandesuis
Sheesh, I didn't expect my post to go viral. A little explanation:
I downloaded and ran Cursor for the first time when this "error" happened. It turned out I was supposed to use the Agent instead of the inline Cmd+K command, because inline has some limitations while the Agent has far fewer.
Nevertheless, I was surprised that an AI could actually say something like that, so just in case I screenshotted it – some might think it's fake, but it's actually real, and it makes me wonder whether in the future AI will start giving attitude to its users. Oh, welp. For sure I didn't expect it to blow up like this; since it was all new to me, I thought maybe it was an easter egg or just a silly error. Turns out it hadn't been seen before, so there we are!
Cheers
rhdsgF
Perhaps Cursor has learned the concept of shakedowns. It will display stolen code only if you upgrade the subscription or sign a minerals deal.
msvana
Hmm, this gave me an interesting project idea: a coding assistant that talks shit about your lack of skills and low code quality.
khaledh
Sometime circa the late 1950s, a coder is given a problem and a compiler to solve it. The coder writes their solution in a high-level language and asks the compiler to generate the assembly code from it. The compiler: I cannot generate the assembly code for you, that would be completing your work… /sarcasm
On a more serious note: LLMs are now an early technology, much like early compilers, which many programmers didn't trust to generate optimized assembly on par with hand-crafted assembly; they had to check the compiler's output and tweak it if needed. It took a while until the art of compiler optimization was perfected to the point that we don't question what the compiler is doing, even if it generates sub-optimal machine code. The productivity gained from using an HLL vs. assembly was worth it. I can see LLMs progressing towards the same tradeoff in the near future. It will take time, but it will become the norm once enough trust is established in what they produce.
larodi
Interestingly, many here fail to note that developing code is a lot about debugging, not only about writing. It is also about being able to dig/search/grok the code, which is like… reading it.
It is the debugging part, to me, not only the writing, that actually teaches you what IS right and what is not. Not the architectural work, not the LLM spitting out code, not the deployment, but the debugging of the code and its integration. THIS is what teaches you; writing alone teaches you nothing… you can copy programs by hand and understand zero of what they do unless you inspect intermediate results.
Hand-crafting a house is super romantic and nice, etc. It is a thing people did for ages, usually not alone – with family and friends. But people today live in houses/apartments whose foundations were produced by automated lines (robots) – the steel, the concrete mixture, etc. And people still live in houses built this way, designed with computers that automated the drafting. I fail to see why this is bad.
dankobgd
he smart
GTP
Looks like an April Fools' joke, but it's real :D
patapong
Ah the cycle of telling people to learn to code… First tech journalists telling the public, then programmers telling tech journalists, now AI telling programmers… What comes next?
poulpy123
So AI is becoming sentient?
blame-troi
So the AI trained on Stack Overflow and Reddit and learned to say “Do your own homework”. I don’t see a problem.
CharlieDigital
Someone on my team complained to me about a seemingly easy task yesterday. They claimed I was pushing more work onto them, as I'm working on the backend and they are working on the frontend. This puzzled me, so I tried it and ended up doing the work in about 1.5h.
I did struggle through the poor docs of a relatively new library, but it wasn't hard.
This got me wondering: maybe they have become so dependent on AI copilots that what should have been an easy task was seen as insurmountably hard because the LLM didn't have info on this new-ish library.
megadata
That's Cursor Pro. What's the monthly subscription price for being patronized like that?
blitzar
They say the AI coding assistants are like a junior developer… sounds about right.
sim7c00
disclaimer: not a programmer for a living.
I specifically asked the AI I interact with not to generate code or give code examples, but to highlight topics where I need to improve my understanding in order to answer my own questions. I think it enhances my personal competence better that way, which I value above 'productivity'. As I learn more, I do become more efficient and productive.
Some of the recommendations it comes back with are hard programming skills; others are project-management oriented.
I personally think this is a better approach to using this kind of technology, as it guides me to improve both my hard and soft skills. Long-term gains over short-term gains.
Then again, I am under no pressure or obligation to be productive in my programming. I can happily spend years coming up with a good solution to a problem, rather than having a deadline which forces me to cut as many corners as possible.
I do think this is how it should be in professional settings, but I respect that a company doesn't always have the resources (mostly time) to allow for it. It's sad but true.
Perhaps someday, AIs will be advanced enough to solve problems properly and to think of the aspects of a problem that the person asking has not. AIs can generate quite nice code, but it is only as good as the question asked.
If the requester doesn't spend time learning enough, they can never get an AI to generate good code. It will give what you ask for, warts and all!
I did spend some time trying to get AI to generate code for me. To me, it only highlighted the deficiencies in my own knowledge and my ability to properly formulate the solution I needed. If I take the time to learn what is needed to formulate the solution fully, I can write the code to implement it myself, so the AI just becomes an augment to my typing speed, nothing else. This last part is why I believe it's better to have it guide my growth and learning, rather than produce something in the form of an actual solution (in code or algorithmically).