
Three Observations by davidbarker

41 Comments

  • imjonse
    Posted February 9, 2025 at 9:31 pm

    "as we get closer to achieving AGI, we believe that trending more towards individual empowerment is important; the other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy."

    The AI being controlled by megacorps scenario conveniently left out.

  • zazazache
    Posted February 9, 2025 at 9:32 pm

    lol, lmao even

    1) give me more money and i will make you rich
    2) don’t look at deepseek
    3) I repeat: there is no reason to not keep giving me EXPONENTIALLY more money to boil all of the oceans

  • llamaimperative
    Posted February 9, 2025 at 9:33 pm

    > Over time, in fits and starts, the steady march of human innovation [alongside monumental efforts of risk mitigation of each new innovation] has brought previously unimaginable levels of prosperity and improvements to almost every aspect of people’s lives.

    Anyway, AI/AGI will not yield economic liberation for the masses. We already produce more than enough for economic liberation all over the world, many times over. It hasn't happened. Why? Snuck in here:

    > the price of… a few inherently limited resources like land may rise even more dramatically.

    This is really the crux of it. The price of land will skyrocket, driving the "cost of living" (cost of land) wedge further between the haves and have-nots.

  • talles
    Posted February 9, 2025 at 9:33 pm

    TLDR:

    1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.

    2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.

    3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature.
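
    Read literally, the three claims compose into a simple toy model. A minimal sketch in Python; every functional form and constant below is an illustrative assumption, not something from the post:

        import math

        def intelligence(resources: float, k: float = 1.0) -> float:
            # Observation 1: intelligence ~ log of training/inference resources.
            return k * math.log10(resources)

        def cost_for_level(initial_cost: float, years: float) -> float:
            # Observation 2: the cost of a fixed capability level falls ~10x/year.
            return initial_cost / (10 ** years)

        def value(intel: float) -> float:
            # Observation 3: value grows super-exponentially in intelligence;
            # modeled here (arbitrarily) as 2 ** (intel ** 1.5).
            return 2 ** (intel ** 1.5)

        for year in range(4):
            resources = 10.0 ** (6 + year)   # 10x more resources each year...
            i = intelligence(resources)      # ...buys only +1 "intelligence"
            print(f"year {year}: intelligence={i:.0f}, "
                  f"cost/level={cost_for_level(1.0, year):.0e}, "
                  f"value={value(i):.3g}")

    Even in this toy, observation 1 makes intelligence expensive to scale, and the whole investment case rests on observation 3 outrunning it.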

  • gloflo
    Posted February 9, 2025 at 9:37 pm

    "as we get closer to achieving AGI"

AGI as defined by OpenAI as "AI systems that can generate at least $100 billion in profits", right? Because what they are doing has very little to do with actual AGI.

  • fsndz
    Posted February 9, 2025 at 9:40 pm

    The chief hype officer is back at it again. Altman is defending exponential progress when everything points to incremental progress with signs of diminishing returns. Altman thinks benefits will naturally trickle down when everything points to corporations replacing employees with AI to boost their profit margins.
    I now understand why some people say Sama might be wrong: https://www.lycee.ai/blog/why-sam-altman-is-wrong

  • bambax
    Posted February 9, 2025 at 9:42 pm

    The very term "AGI" is a confession that the I in AI doesn't stand for intelligence.

    What is even the opposite of "general intelligence"? Specialized intelligence?

    But AI already has a large spectrum. It's not an expert system in this or that. It's quite general. But it's not actually intelligent.

    We should replace "AGI" with "AAI": Actual Artificial Intelligence.

  • edding4500
    Posted February 9, 2025 at 9:43 pm

    > In some sense, AGI is just another tool in this ever-taller scaffolding of human progress we are building together. In another sense, it is the beginning of something for which it’s hard not to say “this time it’s different”; the economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential

    yeah, right. What world is this guy living in? An idealistic one? Will AI equally spread the profits of that economic growth he is talking about? I only see companies getting by on less manpower and doing fine, while poor people stay poor. Bravo. Well thought through, guy who now sells "AI".

  • FloatArtifact
    Posted February 9, 2025 at 9:46 pm

    "Anyone in 2035 should be able to marshall the intellectual capacity equivalent to everyone in 2025"

    There's a lot to unpack there. Is OpenAI maintaining an internal 10-year technological lead over what's public?

  • qwertox
    Posted February 9, 2025 at 9:46 pm

    > 2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.

    This is so exciting. I guess NVIDIA's Project DIGITS [0] will be the starting point for a bit more serious home lab usage of local LLMs, while still being a bit like what Quadro used to be in the pro/prosumer market in the 00s/10s.

    Now it's all RTX, and while differences still exist between pro gamer cards and workstation cards, most of what workstation GPUs were used for back then is easily doable by pro gamer cards nowadays.

    Let's just hope that the quoted values are also valid for these prosumer devices like Project DIGITS.

    Also, let's hope that companies start targeting that user base specifically, like a Groq SBC.

    [0] https://nvidianews.nvidia.com/news/nvidia-puts-grace-blackwe…
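
    As a back-of-the-envelope check on the quoted rates (using only the figures from the quote, and assuming the early-2023 to mid-2024 window is roughly 18 months):

        def annualize(factor: float, months: float) -> float:
            # Convert an improvement factor observed over `months` into a per-12-month rate.
            return factor ** (12 / months)

        print(f"GPT-4 -> GPT-4o token price: {annualize(150, 18):.0f}x per year")  # ~28x
        print(f"Claimed AI cost decline:     {annualize(10, 12):.0f}x per year")   # 10x
        print(f"Moore's law:                 {annualize(2, 18):.2f}x per year")    # ~1.59x

    By these numbers, the GPT-4 to GPT-4o price drop was even steeper than the headline 10x-per-12-months rate.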

  • caust1c
    Posted February 9, 2025 at 9:51 pm

    The crux of the challenges AGI presents gets barely a mention, as little more than a footnote in this blog:

    > In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.

    The primary challenge in my opinion is that access to AGI will dramatically accelerate wealth inequality. Driving costs lower will not magically enable the less educated to be better able to educate themselves using AGI, particularly if they're already at risk or on the edge of economic uncertainty.

    I want to know how people like sama are thinking about the economics of access to AGI in broader terms, not just as a footnote in a utopian fluff piece.

    edit: I am an optimist when it comes to the applications around AI, but I have no doubt that we're in for a rough time as the world copes with the economic implications of its applications. Globally, the highest paying jobs are in knowledge work and we're on the verge (relatively speaking) of making that work go the way that blue collar work did in the post-war United States. There's a lot of hard problems ahead and it bothers me when people sweep them under the rug in the name of progress.

  • timewizard
    Posted February 9, 2025 at 9:53 pm

    > 1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.

    We are burning cash so fast and getting very little in return for it. This is a death spiral and we refuse to admit it.

    > The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.

    We are entirely unconcerned with accuracy and refuse to see how the limitations of our product will not allow us to follow this simple economic aphorism into success.

    > The socioeconomic value of linearly increasing intelligence is super-exponential in nature.

    You see, even though we're burning money at an exponentially increasing rate, somehow this /linear/ increase in output is secretly "super-exponential" in nature. I have nothing to back this up but you just have to believe me.

    At least Steve Jobs built something worth having before bullshitting about it. This is just embarrassing.

  • guybedo
    Posted February 9, 2025 at 9:54 pm

    At first AI models/systems/agents help employees be more productive.

    But I can't imagine a future where this doesn't lead to mass layoffs, or hiring freezes, because these systems can replace tens or hundreds of employees, the end result being more and more unemployed people.

    Sure, there's been the industrial revolution, and the argument usually is: some people will lose their jobs but many other jobs will be created. I'm not sure this argument is going to hold this time given the magnitude of the change.

    Is there any serious study of the impact of AI on society and employment, and most importantly is there any solution to this problem?

  • sealeck
    Posted February 9, 2025 at 9:55 pm

    Reading between the lines, I get the feeling that OpenAI may be starting to feel desperate if they feel the need to drive the hype like this.

  • abc-1
    Posted February 9, 2025 at 9:58 pm

    I refuse to listen to anyone who has done nothing in life but maximize their wealth and power, play corporate political games, and grift investors for all they're worth. Shame the actual innovators don't blog much, usually because they're too busy doing the actual work.

  • azinman2
    Posted February 9, 2025 at 9:58 pm

    > the economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential.

    I think it is extremely myopic to assume things like “have much more time to enjoy with our families” unless that time is because you’re unemployed. Every major technology over the past couple hundred years has been paired with such promises, and it’s never materialized. Only through unions did we get 8h days and weekends. Electricity, factory farming, etc. have not made me work any less, even if I do different things than I would have 200 years ago.

    I think it’s also odd to assume the only thing preventing the curing of all disease is the lack of intelligence and scale. There are so many more factors that go into this, and into an already competitive landscape (biology) which is constantly evolving and changing. With every new technique innovated (e.g. CRISPR) and every new discovery proven (e.g. immunotherapy), the directions of what’s possible change. If AGI comes through LLMs as we know them (color me skeptical), they do not have the ability to absorb such new possibilities and change on a dime.

    I could go on and on, but this is just a random comment on the internet. I understand the original post is meant to achieve certain goals at a specific word length, but not diving into all of these possibilities (including failure modes in his extraordinarily optimistic assumptions) is quite irresponsible if he is truly meant to be a leader for a bold new future.

  • dimgl
    Posted February 9, 2025 at 10:02 pm

    I’m not sure anyone is convinced this will empower individuals. On the contrary: if we get this tech “right” enough, the inequality gap will become an inequality chasm… There is no financial incentive to pay humans when a machine is a fraction of the cost.

  • smokel
    Posted February 9, 2025 at 10:04 pm

    > The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. … Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.

    Moore waited at least five years [1] before deriving his law. On top of that, I don't think that it makes much sense to compare commercial pricing schemes to technical advancements.

    [1] http://cva.stanford.edu/classes/cs99s/papers/moore-crammingm…

  • Eliezer
    Posted February 9, 2025 at 10:05 pm

    I wonder who wrote this? Doesn't sound like Altman's voice.

    I wonder who theorized this? Altman isn't known for having models about AGI.

    To the actual theorist: Claiming in one paragraph that AI goes as log resources, and in the next paragraph that the resource costs drop by 10x per year, is a contradiction; the latter paragraph shows a dependence on algorithms that is nothing like "it's just the compute silly".
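
    One rough way to quantify that dependence, assuming (for illustration only) that Moore's-law hardware gains are the sole non-algorithmic contribution:

        # If capability ~ log(effective compute), keeping capability fixed while
        # cost falls 10x/year means effective compute per dollar must rise
        # ~10x/year. Hardware alone (2x per 18 months) gives ~1.59x/year, so the
        # rest would have to come from algorithms, distillation, quantization, etc.
        hardware_gain = 2 ** (12 / 18)        # ~1.59x per year from silicon
        total_gain = 10.0                     # the claimed yearly cost decline
        algorithmic_gain = total_gain / hardware_gain
        print(f"implied non-hardware gain: {algorithmic_gain:.1f}x per year")  # ~6.3x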

  • cratermoon
    Posted February 9, 2025 at 10:05 pm

  • msp26
    Posted February 9, 2025 at 10:06 pm

    > one of our reasons for launching products early and often is to give society and the technology time to co-evolve

    Is this really true? O3 (not mini) is still being held for ""safety testing"", and Sora was announced so far before release.

  • dbuser99
    Posted February 9, 2025 at 10:12 pm

    AI has to be the solution to everything to justify the kind of investments going into it right now.

    Sam’s a savvy businessman, so he obviously understands that and goes a few steps further. He promises exponential returns and addresses any regulatory and societal concerns. This piece is strategically crafted, not for us, but for investors.

  • aithrowawaycomm
    Posted February 9, 2025 at 10:15 pm

    > *By using the term AGI here, we aim to communicate clearly, and we do not intend to alter or interpret the definitions and processes that define our relationship with Microsoft. We fully expect to be partnered with Microsoft for the long term. This footnote seems silly, but on the other hand we know some journalists will try to get clicks by writing something silly so here we are pre-empting the silliness…

    OpenAI and Microsoft signed a profoundly silly contract with vague terms about AGI! It's petty to snipe about journalists asking silly questions: they are responding to the fact that Sam Altman and Satya Nadella are not serious people, despite all their money and power.

  • credit_guy
    Posted February 9, 2025 at 10:17 pm

    Trying to regulate AI so that it is used predominantly for good and not for evil is as futile as trying to regulate steam in the early 19th century to be used for good and not for evil. It turned out steam was mainly used for good, but nothing could stop the navies of the great powers from building steam-powered battleships.

    Nothing will stop the Chinese CCP from directing AI towards more state surveillance. Or any number of actors from using AI to create extremely lethal swarms of drones.

  • chvid
    Posted February 9, 2025 at 10:20 pm

    “The intelligence of an AI model roughly equals the log of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute.”

    Haven’t the latest big improvements in LLMs been due to changes in approach/algorithm? Reasoning models, and augmenting LLMs with external tools and internet access (agents and deep research).

    As far as I can tell, classical pure LLMs are only improving modestly at this point.

  • ineptech
    Posted February 9, 2025 at 10:27 pm

    > AI will seep into all areas of the economy and society; we will expect everything to be smart.

    I fear this is correct, but with "smart" in the sense of smart TVs. In economic terms, TVs are amazing compared to just a few years ago – more pixels, much cheaper, more functionality – but in practice they spy on you and take ten seconds to turn on and show unskippable ads. This is purely a social (legal, economic, etc) problem as opposed to a technical one, and its solution (if we ever find one) would be likewise. So it's frightening to see someone with as much power over the outcome say something like this:

    > In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.

    When capital has control of an industry, but voluntarily gives little pieces of it out to labor so that they can "share" the profit, I think we all know how that turns out. It does seem possible that AGI really will get built and really will seep into everything and really will create a ton of economic value. But getting from there to the part where it improves everyone's lives is a social problem, akin to the problem of smart TVs, and I can't imagine a worse plan to solve that problem than leaving it up to the capitalist that owns the AGIs.

  • TheAceOfHearts
    Posted February 9, 2025 at 10:38 pm

    > The socioeconomic value of linearly increasing intelligence is super-exponential in nature.

    I'm still stuck thinking about this point. I don't know that it's obviously true. Maybe a more bounded claim would make more sense, something like: increasing intelligence in the short-term has big compounding effects, but there's also a cap, as society and infrastructure have to adapt. And I don't know how this plays out within an adversarial system where people might be competing for scarce resources like human attention.

    Taken to the extreme, one could imagine a fantasy/scifi scenario where each person is empowered like a god in their own universe, allowing them to experiment, learn, and create endlessly.

  • abetusk
    Posted February 9, 2025 at 10:49 pm

    > 1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.

    > 2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.

    > 3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature.

    My own editorial:

    Point 1 is pretty interesting, as if P != NP then this is essentially "we can't do better than random search": in order to find progressively better solutions as we increase the (linear) input, we need exponentially more resources to find the answer. While I believe P != NP, it's interesting to see this play out in the context of learning and AI.

    Point 2 is semi well-known. I'm having trouble finding it, but there was an article a while back about how algorithmic efficiency gains to the DFT (or DCT?) were outpacing what could be attributed to Moore's law alone, meaning the DFT was improving a few orders of magnitude faster than Moore's law would imply. I assume this is essentially Wright's law but for attention, in some sense, where more attention to problems leads to better optimizations that dovetail with Moore's law.

    Point 3 seems like it's almost a corollary, at least in the short term. If intelligence is capturing the exponential search and it can be re-used to find further efficiency, as in point 2, you get super-exponential growth. I think Kurzweil mentioned something about this as well.
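
    A toy version of that feedback loop (the constants are arbitrary; only the shape of the growth matters):

        # Intelligence that improves the efficiency of producing more intelligence
        # grows faster than any fixed exponential.
        intel, efficiency = 1.0, 1.0
        for year in range(6):
            efficiency *= 1 + 0.5 * intel   # smarter systems find better optimizations
            intel += efficiency             # which buys more intelligence per unit cost
            print(f"year {year}: intelligence={intel:.0f}")

    The year-over-year growth ratio itself keeps rising, which is what "super-exponential" means here.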

    I haven't read the whole article but this jumped out at me:

    > Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity.

    A bald-faced lie. Their mission is to capture value from developing AGI. Any benefit to humanity is incidental.

  • Foofoobar12345
    Posted February 9, 2025 at 10:59 pm

    So much negativity all the time. The fact remains that AI models are out in the public domain. If you’re unable to see how that can improve the lives of an average person, that’s a failure of imagination.

    Don’t let megacorps’ dominance prevent individual action. The best way to be hopeful is to effect change in your immediate surroundings. Teach people how to leverage AI; then they won’t be held hostage to the tyranny of the bureaucrats – doctors, lawyers, accountants, politicians, software engineers, project managers, bankers, investment advisors etc.

    Yes, AI makes mistakes .. so what? Humans do too.

    Credit where credit is due – Sam may be no saint, but OpenAI deserves credit for launching this revolution. Directly or indirectly that led to the release of open models. Would the results have been the same without Sam? Nobody knows, not a point worth anybody’s time debating.

    Given most of us here are software engineers, it’s natural to feel threatened; there will be those of us whose skills will be made obsolete. And some of us will make the jump to the land of ideas, and these tools will empower us to build solutions that previously required large companies to build. Perhaps that might mean we focus less on monetary rewards and more on change, as it becomes ever so easy to effect that change.

    To those whose skills will be made obsolete – you have a choice on whether you want to let that happen. Some amount of fear is healthy – keeps our mind alert and allows us to grow.

    There will be growing pains as our species evolves. We’ll have to navigate that with empathy.

    Change starts from you. You are more powerful than you can imagine, at any given point.

  • thrance
    Posted February 10, 2025 at 12:10 am

    > In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention.

    Marxist analysis from Sam Altman?

    > We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.

    I can't imagine a world in which billionaires (trillionaires?) would happily share their wealth with people whose work they don't need anymore. Honestly, I have more faith in a rogue ASI taking over the world to install a fair post-scarcity society than in current politicians and tech-oligarchs giving up a single cent to the masses.

  • TheRealNGenius
    Posted February 10, 2025 at 12:13 am

    [dead]

  • dustingetz
    Posted February 10, 2025 at 12:19 am

    each subsequent post is worse than the one before. who is the audience for this drivel?

  • yapyap
    Posted February 10, 2025 at 12:52 am

    [flagged]

  • blah2244
    Posted February 10, 2025 at 1:13 am

    > AGI is a weakly defined term, but generally speaking we mean it to be a system that can tackle increasingly complex problems, at human level, in many fields.

    Is it just me, or is that an incredibly weak/vague definition for AGI? Feels like you could make the claim that AI is at this level already if you stretch the terms he used enough.

  • 0xDEAFBEAD
    Posted February 10, 2025 at 2:24 am

    >Ensuring that the benefits of AGI are broadly distributed is critical. The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas.

    I don't see why we should trust OpenAI's promises now, when they've broken promises in the past.

    See the "OpenAI has a history of broken promises" section of this webpage: https://www.safetyabandoned.org/

    In my view, state AGs should not allow them to complete their transition to a for-profit.

  • pton_xd
    Posted February 10, 2025 at 2:46 am

    We can't even simulate a C. elegans worm with 1k cells and 302 neurons. But, sure, a few more GPUs and we'll have AGI!

  • vivzkestrel
    Posted February 10, 2025 at 3:18 am

    the day your neural network has 1 quintillion connections, wake me up, because that is when we are getting anywhere close to AGI. billions and trillions are rookie numbers

  • Animats
    Posted February 10, 2025 at 4:14 am

    > 2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.

    > 3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.

    First, if the cost is coming down so fast, why the need for "exponentially increasing investment"? One could make the same exponential growth claim for, say, the electric power industry, which had a growth period around a century ago and eventually stabilized near 5% of GDP. The "tech sector" in total is around 9% of US GDP, and relatively stable.

    Second, only about half the people with college degrees in the US have jobs that need college degrees. The demand for educated people is finite, as is painfully obvious to those paying off college loans.

    This screed comes across as a desperate attempt to justify OpenAI's bloated valuation.

  • xbar
    Posted February 10, 2025 at 4:26 am

    Only two of those are observations.

  • pplonski86
    Posted February 10, 2025 at 4:31 am

    > Still, imagine it as a real-but-relatively-junior virtual coworker. Now imagine 1,000 of them. Or 1 million of them. Now imagine such agents in every field of knowledge work.

    I really don't want a future in which I have to supervise 1 million real-but-relatively-junior virtual coworkers. I would prefer 1 senior over 1 million juniors. I'm in the programming industry and I don't think that coding can be scaled that way.

  • tunesmith
    Posted February 10, 2025 at 5:39 am

    We've just gotten through a big second generation of available models, but at least for the projects I'm on, I still feel just as far away as ever from being able to trust responses. I spent a few hours this weekend working on a side project that involved setting up a graphql server with a few basic query resolvers and field resolvers, and my experience with 4o, R1, and o3-mini-high was akin to arguing with too-confident junior engineers that were confused, without realizing it, about what they thought they knew. And this was basic stuff about simple graphql resolvers. I did have my first experience of o3-mini-high (finally, after much arguing) being able to answer something that R1 couldn't, though.

    It's weird, because it's still wildly useful and my ideas for side projects are definitely more expansive than they used to be. And yet, I'm really far from having any fear of replacement. Almost none of the answers I'm getting are truly nailing the experience of teaching me something new, while also having perfect accuracy. (I'm on ChatGPT+, not pro.)
