AGI Is Still 30 Years Away – Ege Erdil and Tamay Besiroglu by Philpax

32 Comments

  • Post Author
    pelagicAustral
    Posted April 17, 2025 at 5:28 pm

    Two more weeks

  • Post Author
    codingwagie
    Posted April 17, 2025 at 5:29 pm

    I just used o3 to design a distributed scheduler that scales to 1M+ schedules a day. It was perfect, and did better than two weeks of thought around the best way to build this.
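
    For illustration only, a minimal sketch of one common shape for such a system, assuming schedules are hash-partitioned across workers that each poll their own shard. The names and the partition count are assumptions for the example, not the commenter's actual o3-generated design:

        import hashlib
        import heapq

        NUM_PARTITIONS = 16  # assumption: shard count sized for ~1M schedules/day

        def partition_for(schedule_id: str) -> int:
            # Stable hash so the same schedule always lands on the same partition.
            return hashlib.sha256(schedule_id.encode()).digest()[0] % NUM_PARTITIONS

        class PartitionWorker:
            """Owns one partition; pops schedules whose fire time has passed."""
            def __init__(self, partition: int):
                self.partition = partition
                self.heap: list[tuple[float, str]] = []  # (fire_at, schedule_id)

            def add(self, fire_at: float, schedule_id: str) -> None:
                heapq.heappush(self.heap, (fire_at, schedule_id))

            def poll_due(self, now: float) -> list[str]:
                due = []
                while self.heap and self.heap[0][0] <= now:
                    due.append(heapq.heappop(self.heap)[1])
                return due

    At roughly 12 schedules per second on average, the interesting problems are in failover and exactly-once firing, not raw throughput.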

  • Post Author
    andrewstuart
    Posted April 17, 2025 at 5:32 pm

    LLMs are basically a library that can talk.

    That’s not artificial intelligence.

  • Post Author
    EliRivers
    Posted April 17, 2025 at 5:33 pm

    Would we even recognise it if it arrived? We'd recognise human-level intelligence, probably, but that's specialised. What would general intelligence even look like?

  • Post Author
    cruzcampo
    Posted April 17, 2025 at 5:33 pm

    AGI is never gonna happen – it's the tech equivalent of the second coming of Christ, a capitalist version of the religious savior trope.

  • Post Author
    moralestapia
    Posted April 17, 2025 at 5:34 pm

    "Literally who" and "literally who" put out statements while others out there ship out products.

    Many such cases.

  • Post Author
    dicroce
    Posted April 17, 2025 at 5:35 pm

    Doesn't even matter. The capabilities of the AI that's out NOW will take a decade or more to digest.

  • Post Author
    _Algernon_
    Posted April 17, 2025 at 5:37 pm

    The new fusion power

  • Post Author
    fusionadvocate
    Posted April 17, 2025 at 5:40 pm

    Can someone throw some light on this Dwarkesh character? He landed a Zucc podcast pretty early on… how connected is he? Is he an industry plant?

  • Post Author
    dcchambers
    Posted April 17, 2025 at 5:43 pm

    And in 30 years it will be another 30 years away.

    LLMs are so incredibly useful and powerful but they will NEVER be AGI. I actually wonder if the success of (and subsequent obsession with) LLMs is putting true AGI further out of reach. All that these AI companies see are the $$$. When the biggest "AI Research Labs" like OpenAI shifted to productizing their LLM offerings, I think the writing was on the wall that they don't actually care about finding AGI.

  • Post Author
    throw7
    Posted April 17, 2025 at 5:43 pm

    AGI is here today… go have a kid.

  • Post Author
    ksec
    Posted April 17, 2025 at 5:44 pm

    Is AGI even important? I believe the next 10 to 15 years will be Assisted Intelligence. There are things current LLMs are so poor at that I don't believe a 100x increase in perf/watt is going to make much difference. But they are going to be good enough that there won't be an AI winter, since current AI has already reached escape velocity and actually increases productivity in many areas.

    The most intriguing part is whether humanoid factory-worker programming will be made 1,000 to 10,000x more cost-effective with LLMs, effectively ending all human production. I know this is a sensitive topic, but I don't think we are far off. And I often wonder if this is what the current administration has in sight. (Likely not.)

  • Post Author
    csours
    Posted April 17, 2025 at 5:47 pm

    1. LLM interactions can feel real. Projections and psychological mirroring are very real.

    2. I believe that AI researchers will require some level of embodiment to demonstrate (a toy version of this loop is sketched after the list):

    a. ability to understand the physical world.

    b. make changes to the physical world.

    c. predict the outcome of changes in the physical world.

    d. learn from the success or failure of those predictions and update their internal model of the external world.

    I cannot quickly find proposed tests in this discussion.
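
    As a toy illustration of items a through d, a minimal predict-act-observe-update loop. Everything here is a hypothetical placeholder, not a proposed test: the "world" is one number, and the agent's "internal model" is a single gain it refines from prediction error.

        class World:
            """Toy physical world: a position that actions push around (item b)."""
            def __init__(self):
                self.position = 0.0

            def apply(self, action: float) -> float:
                self.position += action * 0.8  # hidden "physics" the agent must learn
                return self.position

        class Agent:
            """Holds an internal model (a single gain) refined by prediction error."""
            def __init__(self):
                self.gain = 1.0  # internal model of how actions move the world (item a)

            def predict(self, position: float, action: float) -> float:
                return position + self.gain * action  # item c

            def update(self, predicted: float, observed: float, action: float) -> None:
                error = observed - predicted          # item d: learn from the miss
                self.gain += 0.1 * error * action

        world, agent = World(), Agent()
        for _ in range(50):
            pos, act = world.position, 1.0
            guess = agent.predict(pos, act)
            actual = world.apply(act)                 # item b: change the world
            agent.update(guess, actual, act)
        print(round(agent.gain, 2))  # ~0.8: the model converged on the true physics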

  • Post Author
    lo_zamoyski
    Posted April 17, 2025 at 5:48 pm

    Thirty years. Just enough time to call it quits and head to Costa Rica.

  • Post Author
    xnx
    Posted April 17, 2025 at 5:53 pm

    I'll take the "under" on 30 years. Demis Hassabis (who has more credibility than whoever these 3 people are combined) says 5-10 years: https://time.com/7277608/demis-hassabis-interview-time100-20…

  • Post Author
    arkj
    Posted April 17, 2025 at 5:56 pm

    “‘AGI is x years away’ is a proposition that is both true and false at the same time. Like all such propositions, it is therefore meaningless.”

  • Post Author
    antisthenes
    Posted April 17, 2025 at 6:00 pm

    You cannot have AGI without a physical manifestation that can generate its own training data based on inputs from the external world (e.g., via sensors) and constantly refine its model.

    Pure language or pure image-models are just one aspect of intelligence – just very refined pattern recognition.

    You will also probably need some aspect of self-awareness in order for the system to set auxiliary goals and directives related to self-maintenance.

    But you don't need AGI in order to have something useful (which I think a lot of readers are confused about). No one is making the argument that you need AGI to bring tons of value.

  • Post Author
    kgwxd
    Posted April 17, 2025 at 7:39 pm

    Again?

  • Post Author
    lucisferre
    Posted April 17, 2025 at 7:42 pm

    Huh, so it should be ready around the same time as practical fusion reactors then. I'll warm up the car.

  • Post Author
    shortrounddev2
    Posted April 17, 2025 at 8:35 pm

    Hopefully more!

  • Post Author
    yibg
    Posted April 17, 2025 at 8:51 pm

    Might as well be 10 – 1000 years. Reality is no one knows how long it'll take to get to AGI, because:

    1) No one knows what exactly makes humans "intelligent" and therefore
    2) No one knows what it would take to achieve AGI

    Go back through history and AI / AGI has been a couple of decades away for several decades now.

  • Post Author
    ValveFan6969
    Posted April 17, 2025 at 9:12 pm

    I do not like those who try to play God. The future of humanity will not be determined by some tech giant in their ivory tower, no matter how high it may be. This is a battle that goes deeper than ones and zeros. It's a battle for the soul of our society. It's a battle we must win, or face the consequences of a future we cannot even imagine… and that, I fear, is truly terrifying.

  • Post Author
    sebastiennight
    Posted April 17, 2025 at 9:12 pm

    The thing is, AGI is not needed to enable incredible business/societal value, and there is good reason to believe that actual AGI would damage our society and our economy, and, if many experts in the field are to be believed, threaten humanity's survival as well.

    So I feel happy that models keep improving, and not worried at all that they're reaching an asymptote.

  • Post Author
    stared
    Posted April 17, 2025 at 9:32 pm

    My pet peeve: talking about AGI without defining it. There’s no consistent, universally accepted definition. Without that, the discussion may be intellectually entertaining—but ultimately moot.

    And we run into the motte-and-bailey fallacy: at one moment, AGI refers to something known to be mathematically impossible (e.g., due to the No Free Lunch theorem); the next, it’s something we already have with GPT-4 (which, while clearly not superintelligent, is general enough to approach novel problems beyond simple image classification).
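
    For reference, a standard statement of the No Free Lunch theorem for optimization (Wolpert and Macready, 1997), presumably the impossibility result being invoked here: for any two algorithms $a_1$ and $a_2$,

        \sum_{f} P(d_y^m \mid f, m, a_1) = \sum_{f} P(d_y^m \mid f, m, a_2)

    where the sum ranges over all objective functions $f$ and $d_y^m$ is the observed sequence of cost values after $m$ evaluations. Averaged over every possible problem, no optimizer outperforms any other, which is a far stronger claim than anything about performance on the structured problems we actually care about.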

    There are two reasonable approaches in such cases. One is to clearly define what we mean by the term. The second (IMHO, much more fruitful) is to taboo your words (https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your…)—that is, avoid vague terms like AGI (or even AI!) and instead use something more concrete. For example: “When will it outperform 90% of software engineers at writing code?” or “When will all AI development be in the hands of AI?”

  • Post Author
    dmwilcox
    Posted April 17, 2025 at 10:08 pm

    I've been saying this for a decade already but I guess it is worth saying here. I'm not afraid AI or a hammer is going to become intelligent (or jump up and hit me in the head either).

    It is science fiction to think that a system like a computer can behave at all like a brain. Computers are incredibly rigid systems with only the limited variance we permit. "Software" is flexible in comparison to creating dedicated circuits for our computations, but it is nothing compared to our minds.

    Ask yourself, why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism — put the same random seed value in your code and get the same "random numbers" every time in the same order. Computers need to be like this to be good tools.
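
    A minimal demonstration of the point, assuming nothing beyond Python's standard library: reseeding reproduces the exact same "random" sequence.

        import random

        random.seed(42)
        first = [random.random() for _ in range(3)]

        random.seed(42)                # same seed...
        second = [random.random() for _ in range(3)]

        assert first == second         # ...same "random numbers", same order
        print(first)

    (This is why cryptographically secure randomness has to be gathered from outside the deterministic core, e.g., from hardware entropy sources.)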

    Assuming that AGI is possible in the kinds of computers we know how to build means that we think a mind can be reduced to a probabilistic or deterministic system. And from my brief experience on this planet I don't believe that premise. Your experience may differ and it might be fun to talk about.

    In Aristotle's ethics he talks a lot about ergon (purpose) — hammers are different than people, computers are different than people, they have an obvious purpose (because they are tools made with an end in mind). Minds strive — we have desires, wants and needs — even if it is simply to survive or better yet thrive (eudaimonia).

    An attempt to create a mind is another thing entirely and not something we know how to start. Rolling dice hasn't gotten anywhere. So I'd wager AGI somewhere in the realm of 30 years to never.

  • Post Author
    lexarflash8g
    Posted April 17, 2025 at 10:25 pm

    Apparently Dwarkesh's podcast is a big hit in SV — it was covered by the Economist just recently. I thought the "All In" podcast was the voice of tech, but their content has been going political with MAGA lately, and their episodes are basically shouting matches with their guests.

    And for folks who want to read rather than listen to a podcast, why not create an article (they are using Gemini) rather than just posting the whole transcript? Who is going to read a 60-minute transcript?

  • Post Author
    swframe2
    Posted April 17, 2025 at 10:33 pm

    Anthropic's research on how LLMs reason shows that they are quite flawed.

    I wonder if we can use an LLM to deeply analyze and fix the flaws.

  • Post Author
    colesantiago
    Posted April 17, 2025 at 10:34 pm

    This "AGI" definition is extremely loose depending on who you talk to. Ask "what does AGI mean to you" and sometimes the answer is:

    1. Millions of layoffs across industries due to AI with some form of questionable UBI (not sure if this works)

    2. 100BN in profits. (Microsoft / OpenAI definition)

    3. Abundance in slopware. (VC's definition)

    4. Raise more money to reach AGI / ASI.

    5. Any job that a human can do which is economically significant.

    6. Safe AI (Researchers definition).

    7. All the above that AI could possibly do better.

    I am sure there must be an industry-aligned and concrete definition that everyone can agree on, rather than these goalpost-moving definitions.

  • Post Author
    alecco
    Posted April 17, 2025 at 10:41 pm

    Is it just me, or is the signal-to-noise ratio of all these cheerleader tech podcasts needle-in-a-haystack bad? In general, I really miss the podcast scene from 10 years ago: less polished, but more human, and with reasonable content. Not this speculative blabber that seems designed to generate clickbait clips. I don't know what happened a few years ago, but even solid podcasts are practically garbage now.

    I used to listen to podcasts daily for at least an hour. Now I'm stuck uploading blogs and PDFs to Eleven Reader. I tried the Google thing to make a podcast, but it's very repetitive and dumb.

  • Post Author
    ChicagoDave
    Posted April 17, 2025 at 10:43 pm

    You can’t put a date on AGI until the required technology is invented and that hasn’t happened yet.

  • Post Author
    owenthejumper
    Posted April 18, 2025 at 12:00 am

    I "love" how the interviewer keeps conflating intelligence with "Hey OpenAI will make $100b"
