
AI Is Stifling Tech Adoption by kiyanwang

37 Comments

  • Post Author
    ZaoLahma
    Posted February 14, 2025 at 1:30 pm

    Seems plausible, especially in combination with the AI-coma that occurs when you tab-complete your way through problems at full speed.

  • Post Author
    _as_text
    Posted February 14, 2025 at 1:32 pm

    know what this will be about without reading

    Python 3.12-style type annotations are a good example imo, no one uses the type statement because of dataset inertia

  • Post Author
    jimnotgym
    Posted February 14, 2025 at 1:35 pm

    Is this such a bad result? Do we need office CRUD apps to use bleeding edge technologies?

  • Post Author
    jgalt212
    Posted February 14, 2025 at 1:39 pm

    Along similar lines, I found Google autocomplete constricted my search space. I would only search for the terms that it autocompleted.

  • Post Author
    physicsguy
    Posted February 14, 2025 at 1:40 pm

    If AI stifles the relentless churn in frontend frameworks then perhaps it's a good thing.

  • Post Author
    CharlieDigital
    Posted February 14, 2025 at 1:41 pm

    As the saying goes:

        while (React.isPopular) {
          React.isPopular = true
        }
    

    It's actually quite sad, because there are objectively better options for both performance and memory, including Preact, Svelte, Vue, and of course vanilla JS.

  • Post Author
    jgalt212
    Posted February 14, 2025 at 1:41 pm

    Herein lies the key to IP protection. Never use cloud-hosted coding tools, as the world will soon be able to copy your homework at zero cost.

  • Post Author
    tiahura
    Posted February 14, 2025 at 1:41 pm

    Perhaps reasoning will help?

  • Post Author
    VMG
    Posted February 14, 2025 at 1:47 pm

    Guess I figured out my niche as a SWE: have a later knowledge cutoff date than LLMs

  • Post Author
    spiderfarmer
    Posted February 14, 2025 at 1:49 pm

    >With Claude 3.5 Sonnet, which is generally my AI offering of choice given its superior coding ability, my “What personal preferences should Claude consider in responses?” profile setting includes the line “When writing code, use vanilla HTML/CSS/JS unless otherwise noted by me”. Despite this, Claude will frequently opt to generate new code with React, and in some occurrences even rewrite my existing code into React against my intent and without my consultation.

    I noticed this too. Anyone found out how to make Claude work better?

  • Post Author
    orbital-decay
    Posted February 14, 2025 at 1:49 pm

    So… it slows down adoption by providing easier alternatives for beginners? I guess you could look at it that way too.

    Eventually it will go one of two ways, though:

    – models will have enough generalization ability to be trained on new stuff that has passed the basic usefulness test in the hands of enthusiasts and shows promise

    – models will become smart enough to be useful even for obscure things

  • Post Author
    PaulRobinson
    Posted February 14, 2025 at 1:49 pm

    I think if you specify a technology in your prompt, any LLM should use that technology in its response. If you don't specify a technology, and that is an important consideration in the answer, it should clarify and ask about technology choices, and if you don't know, it can make a recommendation.

    LLMs should not have hard-wired preferences through providers' prompt structure.

    And while LLMs are stochastic parrots, and are likely to infer React if a lot of the training corpus mentions React, work should be done to actively prevent biases like this. If we can't get this right with JS frameworks, how are we going to solve it for more nuanced structural biases around ethnicity, gender, religion or political perspective?

    What I'm most concerned about here is that Anthropic is taking investment from tech firms who vendor dev tooling – it would not take much for them to "prefer" one of those proprietary toolchains. We might not have much of a problem with React today, but what if your choice of LLM started to determine if you could or couldn't get recommendations on AWS vs Azure vs GCP vs bare metal/roll your own? Or if it suggested only commercial tools instead of F/LOSS?

    And to take that to its logical conclusion, if that's happening, how do I know that the history assignment a kid is asking for help with isn't sneaking in an extreme viewpoint – and I don't care if it's extreme left or right, just warped by a political philosophy to be disconnected from truth – that the kid just accepts as truth?

  • Post Author
    Eridrus
    Posted February 14, 2025 at 1:49 pm

    This will be solved eventually on the AI model side. It isn't some law of nature that it takes a million tokens for an AI to learn something; just the fact that we can prompt these models should convince you of that.

  • Post Author
    avbanks
    Posted February 14, 2025 at 1:57 pm

    LLM based AI tools are the new No/Low Code.

  • Post Author
    tajd
    Posted February 14, 2025 at 1:58 pm

    Yeah maybe. But I think the thing I like is that it takes me a much shorter amount of time to create solutions for my users and myself. Then I can worry about “tech adoption” once I’ve achieved a relevant solution for my users.

    If performance is an issue then sure, let’s look at options. But I don’t think it’s appropriate to expect that level of insight into an optimised solution from LLMs – but maybe that’s just because I’ve used them a lot.

    They’re just a function of their training data at the end of the day. If you want to use new technology you might have to generate your own training data as it were.

  • Post Author
    jwblackwell
    Posted February 14, 2025 at 1:58 pm

    Larger context windows are helping solve this, though.

    I use Alpine.js, which is not as well known as React etc., but I just added a bunch of examples and instructions to the new Cursor project rules (see the sketch below), and it's now close to perfect.

    Gemini models have up to 2M context windows, meaning you can probably fit your whole codebase and a ton of examples in a single request.

    Furthermore, the agentic way Cursor now behaves, automatically building up context before taking action, seems to be another way around this problem.
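
    A minimal sketch of the kind of project rule described above (the file name, format, and Alpine.js snippet are illustrative assumptions, not the commenter's actual rules; exact conventions depend on your Cursor version):

        # .cursor/rules/alpine.mdc (hypothetical)
        Use Alpine.js for all interactivity. Do not introduce React, Vue, or any other framework.
        Prefer small, declarative components written directly in the markup, for example:

        <div x-data="{ open: false }">
          <button @click="open = !open">Toggle details</button>
          <div x-show="open">Details go here.</div>
        </div>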

  • Post Author
    conradfr
    Posted February 14, 2025 at 1:59 pm

    I was thinking the other day about how coding assistants would hinder the adoption of new languages.

  • Post Author
    killjoywashere
    Posted February 14, 2025 at 2:00 pm

    Pathologists as a specialty have been grousing about this for several years, at least since 2021, when the College of American Pathologists established its AI Committee. As a trivial example: any trained model deployed will necessarily lag behind any new classification of tumors. This makes it harder to push the science and clinical diagnosis of cancer forward.

    The entire music community has been complaining about how old music gets more recommendations on streaming platforms, necessarily making it harder for new music to break out.

    It's absolutely fascinating watching software developers come to grips with what they have wrought.

  • Post Author
    delichon
    Posted February 14, 2025 at 2:02 pm

    Working in Zed, I'm full of joy when I see how well Claude can help me code. But when I ask Claude how to use Zed, it's worse than useless, because its training data is old compared to Zed, and it freely hallucinates answers. So for that I switch over to Perplexity calling OpenAI and get far better answers. I don't know if it's more recent training or RAG, but OpenAI knows about recent Zed GitHub issues that Claude doesn't.

    As long as the AI is pulling in the most recent changes, it wouldn't seem to be stifling.

  • Post Author
    chrisco255
    Posted February 14, 2025 at 2:03 pm

    This makes me fear less for web development jobs being lost to AI, to be honest. Look, we can create new frameworks faster than they can train new models. If we all agree to churn as much as possible the AIs will never be able to keep up.

  • Post Author
    anarticle
    Posted February 14, 2025 at 2:13 pm

    Sadly, as a person who used to write AVX in C for real-time imaging systems: don't care, shipped.

    I love dingling around with Cursor/Claude/qwen to get a 300 line prototype going in about 3-5 minutes with a framework I don't know. It's an amazing time to be small, I would hate to be working at a megacorp where you have to wait two months to get approval to use only GitHub copilot (terrible), in a time of so many interesting tools and more powerful models every month.

    For new people, you still have to put the work in and learn if you want to transcend. That's always been true in this industry, and I say that as a 20-year vet: C, Perl, Java, Rails, Python, R, all the bash bits; every part matters, just keep at it.

    I feel like a lot of this is the JS frontend community running headlong into its first sea change in the industry.

  • Post Author
    mtkd
    Posted February 14, 2025 at 2:20 pm

    Sonnet + Tailwind is something of a force multiplier though — backend engineers now have a fast, reliable way of making frontend changes that are understandable, without relying on someone else — you can even give 4o a whiteboard drawing of a layout and get the Tailwind back in seconds.

    On the wider points, I do think it is reducing the time coders spend thinking about the strategic situation, as they're too busy advancing the smaller tactical areas that AI is great at assisting with — and I agree there is a recency issue looming: once these models have heavy weightings baked in, how does new knowledge get to the front quickly, and where is that new knowledge now that people don't use Stack Overflow?

    Maybe Grok becomes important purely because it has access to developers and researchers talking in real time, even if they are not posting code there.

    I worry that the speed at which this is happening will result in younger developers not spending weeks or months thinking about something — so they get some kind of code ADHD and never develop the skills to take on the big-picture stuff later, which may be quite a way off from anything AI can take on.

  • Post Author
    moyix
    Posted February 14, 2025 at 2:22 pm

    One thing that is interesting is that this was anticipated by the OpenAI Codex paper (which led to GitHub Copilot) all the way back in 2021:

    > Users might be more inclined to accept the Codex answer under the assumption that the package it suggests is the one with which Codex will be more helpful. As a result, certain players might become more entrenched in the package market and Codex might not be aware of new packages developed after the training data was originally gathered. Further, for already existing packages, the model may make suggestions for deprecated methods. This could increase open-source developers’ incentive to maintain backward compatibility, which could pose challenges given that open-source projects are often under-resourced (Eghbal, 2020; Trinkenreich et al., 2021).

    https://arxiv.org/pdf/2107.03374 (Appendix H.4)

  • Post Author
    hiAndrewQuinn
    Posted February 14, 2025 at 2:23 pm

    >Consider a developer working with a cutting-edge JavaScript framework released just months ago. When they turn to AI coding assistants for help, they find these tools unable to provide meaningful guidance because their training data predates the framework’s release. [… This] incentivises them to use something [older].

    That sounds great to me, actually. A world where e.g. Django and React are considered as obvious choices for backend and frontend as git is for version control sounds like a world where high quality web apps become much cheaper to build.

  • Post Author
    jleask
    Posted February 14, 2025 at 2:24 pm

    The underlying tech choice only matters at the moment because, as software developers, we are used to that choice being important. We see it as important because we are currently the ones who have to use it.

    As more and more software is generated and the prompt, rather than the code, becomes how we define software (i.e. we shift up an abstraction level), how it is implemented will become less and less interesting to people. Product owners already do not care about the technology; they just want a working solution that meets their requirements. Similarly, I don't care what the assembly produced by a compiler looks like most of the time.

  • Post Author
    booleandilemma
    Posted February 14, 2025 at 2:24 pm

    Seems like a short-term problem. We're going to get to the point (maybe we're already there?) where we'll be able to point an AI at a codebase and say "refactor that codebase to use the latest language features" and it'll be done instantly. Sure, there might be a lag of a few months or a year, but who cares?

  • Post Author
    at_
    Posted February 14, 2025 at 2:25 pm

    Anecdotally, working on an old Vue 2 app I found Claude would almost always return "refactors" as React + Tailwind the first time, and need nudging back into using Vue 2.

  • Post Author
    pmuk
    Posted February 14, 2025 at 2:25 pm

    I have noticed this. I think it also applies to the popularity of projects in general and the number of training examples the model has seen.

    I was testing Github copilot's new "Agent" feature last weekend and rapidly built a working app with Vue.js + Vite + InstantSearch + Typesense + Tailwind CSS + DaisyUI

    Today I tried to build another app with Rust and Dioxus and it could barely get the dev environment to load, kept getting stuck on circular errors.

  • Post Author
    evanjrowley
    Posted February 14, 2025 at 2:26 pm

    Neovim core contributor TJ DeVries expressed similar concerns in a video earlier this year: https://youtu.be/pmtuMJDjh5A?si=PfpIDcnjuLI1BB0L

  • Post Author
    benrutter
    Posted February 14, 2025 at 2:28 pm

    I think anecdotally this is true; I've definitely seen worse but older technologies chosen on the basis of LLMs knowing more about them.

    That said, I also think it's a bad choice, and here's some good news on that front: you can make good choices, which will put you and your project/company ahead of the many projects/companies making bad choices!

    I don't think the issue is that specific to LLMs: people have been choosing React and similar technologies "because it's easy to find developers" for ages.

    It's definitely a shame to see people make poor design decisions for new reasons, but I think poor design decisions for dumb reasons are gonna outlive LLMs by some way.

  • Post Author
    trescenzi
    Posted February 14, 2025 at 2:29 pm

    Generative AI is fundamentally a tool that enables acceleration. Everything mentioned in this article is already true without Gen AI. Docs for new versions aren’t as easy to find until they aren’t so new. This is even true for things in the zeitgeist: anyone around for the Python 2 to 3 or React class-to-hooks transitions knows how annoying that can be.

    Yes, new programmers will land on Python and React for most things. But they already do. And Gen AI will do what it does best and accelerate that. It remains to be seen what will come of that acceleration.

  • Post Author
    dataviz1000
    Posted February 14, 2025 at 2:29 pm

    I'm on the fence with this. I've been using Copilot with VS Code constantly and it has greatly increased my productivity. Most importantly, it helps me maintain momentum without getting stuck. Ten years ago I would face a problem with no solution, write a detailed question on Stack Exchange, and most likely solve it in a day or two with a lot of tinkering. Today I ask Claude, and even when it doesn't give me a good answer, I can get the information I need to solve the problem.

    I've been thinking a lot about T.S. Eliot lately. He wrote an essay, "Tradition and the Individual Talent," which I think is pertinent to this issue. [0] (I should reread it.)

    [0] https://www.poetryfoundation.org/articles/69400/tradition-an…

  • Post Author
    lherron
    Posted February 14, 2025 at 2:30 pm

    I don't know how you solve the "training data and tooling prompts bias LLM responses towards old frameworks" part of this, but once a new (post-cutoff) framework has been surfaced, LLMs seem quite capable of adapting using in-context learning.

    New framework developers need to make sure their documentation is adequate for a model to use it when the docs are injected into the context.
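
    As a rough illustration of that docs-injection idea, here is a minimal sketch using the Anthropic TypeScript SDK; the model name, docs file path, and prompt are placeholder assumptions rather than anything from the comment:

        // Sketch: hand the model a post-cutoff framework's docs as in-context material.
        import fs from "node:fs";
        import Anthropic from "@anthropic-ai/sdk";

        const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

        // Hypothetical docs file for a framework released after the model's training cutoff.
        const frameworkDocs = fs.readFileSync("docs/new-framework.md", "utf8");

        const message = await client.messages.create({
          model: "claude-3-5-sonnet-latest",
          max_tokens: 1024,
          // The docs ride along in the system prompt so the answer is grounded in them.
          system: "Use only the APIs described in the following documentation:\n\n" + frameworkDocs,
          messages: [
            { role: "user", content: "Write a minimal counter component with this framework." },
          ],
        });

        console.log(message.content);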

  • Post Author
    memhole
    Posted February 14, 2025 at 2:31 pm

    I've wondered this myself. There was a post about Gumroad a few months ago where the CEO explained the decision to migrate to TypeScript and React. The decision was in part because of how well AI generated those, iirc.

  • Post Author
    lackoftactics
    Posted February 14, 2025 at 2:32 pm

    > OpenAI’s latest models have cutoffs of late 2023.

    The first paragraph is factually incorrect; the cutoff is June 2024 for 4o.

    Awww, no more new JavaScript frameworks and waiting only for established technologies to cut through the noise. I don't see that as a bad thing. Technologies need to mature, and maintaining API backward compatibility is another advantage.

  • Post Author
    matsemann
    Posted February 14, 2025 at 2:33 pm

    I actually asked this a while back, but got little response: https://news.ycombinator.com/item?id=40263033

    > Ask HN: Will LLMs hurt adoption of new frameworks and technology?

    > If I ask some LLM/GPT a react question I get good responses. If I ask it about a framework released after the training data was obtained, it will either not know or hallucinate. Or if it's a lesser known framework the quality will be worse than for a known framework. Same with other things like hardware manuals not being trained on yet etc.

    > As more and more devs rely on AI tools in their work flows, will emerging tech have a bigger hurdle than before to be adopted? Will we regress to the mean?

  • Post Author
    photochemsyn
    Posted February 14, 2025 at 2:33 pm

    The central issue seems to be the high cost of training the models:

    > "Once it has finally released, it usually remains stagnant in terms of having its knowledge updated. This creates an AI knowledge gap. A period between the present and AI’s training cutoff… The cutoff means that models are strictly limited in knowledge up to a certain point. For instance, Anthropic’s latest models have a cutoff of April 2024, and OpenAI’s latest models have cutoffs of late 2023."

    Hasn't DeepSeek's novel training methodology changed all that? If the energy and financial cost of training a model really has dropped drastically, then frequent retraining that incorporates new data should become the norm.
