13 Comments

  • Post Author
    cebert
    Posted February 9, 2025 at 7:03 pm

    The seamless translation demo is fantastic. The translated voice is a passable match for my own native voice. It will be incredible when we can achieve this in real time.

  • Post Author
    rob-olmos
    Posted February 9, 2025 at 7:25 pm

    Is this purposely spelled "Aidemos" somewhere, as the HN title says, instead of "AI Demos"?

  • Post Author
    npalli
    Posted February 9, 2025 at 7:32 pm

    “Our site is not available in your region at this time.”

  • Post Author
    brap
    Posted February 9, 2025 at 7:38 pm

    What is Meta’s angle with AI? They seem to be doing a lot of research, but what is the end goal? Google and MSFT I understand, Meta not so much.

  • Post Author
    meltyness
    Posted February 9, 2025 at 7:52 pm

    It's a toolbox of demos with the following:

    Segment Anything 2:
    Create video cutouts and other fun visual effects with a few clicks (a code sketch follows below).

    Seamless Translation:
    Hear what you sound like in another language.

    Animated Drawings:
    Bring hand-drawn sketches to life with animations.

    Audiobox:
    Create an audio story with AI-generated voices and sounds.
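
    A rough sketch of the Segment Anything 2 click-to-mask flow, assuming the sam2 package from facebookresearch/sam2 (the checkpoint name and method signatures follow its README and may differ between releases):

        import numpy as np
        from PIL import Image
        from sam2.sam2_image_predictor import SAM2ImagePredictor

        predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

        image = np.array(Image.open("frame.jpg").convert("RGB"))
        predictor.set_image(image)

        # One positive click on the object, mirroring the demo's
        # "few clicks" interaction; the model returns candidate masks.
        masks, scores, _ = predictor.predict(
            point_coords=np.array([[450, 300]]),  # (x, y) pixel of the click
            point_labels=np.array([1]),           # 1 = foreground, 0 = background
        )
        best_mask = masks[np.argmax(scores)]      # highest-scoring cutout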

  • Post Author
    kylecazar
    Posted February 9, 2025 at 8:19 pm

    Seamless translation is… pretty incredible.

    I speak English and Spanish, so I recorded some English sentences and listened to the Spanish output it generated. It came damn close to my own Spanish (although mine has more Castilianisms, which of course I wouldn't expect it to know).
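
    For reference, speech-to-speech translation like this can be sketched with SeamlessM4T v2 via Hugging Face transformers (the demo likely wraps a related Seamless model; the checkpoint and language code here follow the transformers docs and are assumptions about what the demo runs):

        import torchaudio
        from transformers import AutoProcessor, SeamlessM4Tv2Model

        processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
        model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large")

        # Load a recorded English sentence and resample to the 16 kHz the model expects.
        waveform, sample_rate = torchaudio.load("english_sentence.wav")
        waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

        inputs = processor(audios=waveform, sampling_rate=16_000, return_tensors="pt")
        audio_out = model.generate(**inputs, tgt_lang="spa")[0].cpu()  # "spa" = Spanish

        torchaudio.save("spanish_sentence.wav", audio_out.reshape(1, -1), 16_000)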

  • Post Author
    ewuhic
    Posted February 9, 2025 at 8:23 pm

    Where are all the links to models?

  • Post Author
    lelag
    Posted February 9, 2025 at 8:28 pm

    It's not exhaustive. For example, it's missing the Meta Motivo demo at https://metamotivo.metademolab.com/ (a humanoid control model).

  • Post Author
    lvl155
    Posted February 9, 2025 at 8:31 pm

    These are all half-baked at best. They are spending so much money on undergraduate-level work. But to be fair, who in their right mind would work for Meta in 2025 if they have the talent?

  • Post Author
    nabaraz
    Posted February 9, 2025 at 8:32 pm

    I expected a lot more.

  • Post Author
    xyst
    Posted February 9, 2025 at 9:14 pm

    > Our site is not available in your region at this time.

    What the shit is this?

  • Post Author
    tsumnia
    Posted February 9, 2025 at 9:46 pm

    Neat, but I wish Meta would just say what this really is: "please give us some in-the-wild data to further train our models on".

    I used the same technique years ago for estimating ages. A person uploads an image, helps align 10% of our facial landmark points, and we run the estimator. If we were wrong, we ask for a correction and refine (roughly as sketched below).

    It's still cool and all, but meh based on my prior experience.
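
    That collect-correct-refine loop might look something like this toy sketch (every name here is hypothetical; it only illustrates the shape of the human-in-the-loop technique, not the actual system):

        from statistics import mean

        class ToyAgeEstimator:
            """Stand-in model: 'predicts' the mean age seen so far."""
            def __init__(self):
                self.examples = []  # (landmarks, age) pairs harvested from users

            def predict(self, landmarks):
                return mean(age for _, age in self.examples) if self.examples else 30.0

            def refine(self, landmarks, corrected_age):
                # Each user correction becomes an in-the-wild training example.
                self.examples.append((landmarks, corrected_age))

        model = ToyAgeEstimator()
        for landmarks, true_age in [([0.1, 0.4], 25), ([0.3, 0.2], 41)]:
            guess = model.predict(landmarks)
            if abs(guess - true_age) > 2:          # user says the estimate was wrong
                model.refine(landmarks, true_age)  # ask for correction and refine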

  • Post Author
    rocauc
    Posted February 9, 2025 at 9:54 pm

    Meta deeply understands the impact of GPT-3 vs. ChatGPT: the model is a starting point, and the UX built around the model is what showcases its intelligence. This is especially pronounced in visual models. Telling me SAM 2 can "see anything" is neat. Clicking the soccer ball and watching the model track it seamlessly across the video, even when occluded, is incredible.
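
    That click-then-track flow looks roughly like this with the sam2 video predictor (method names follow the facebookresearch/sam2 README and may differ by release; init_state expects a video or a directory of frames depending on version):

        import numpy as np
        import torch
        from sam2.build_sam import build_sam2_video_predictor

        predictor = build_sam2_video_predictor(
            "configs/sam2.1/sam2.1_hiera_l.yaml", "checkpoints/sam2.1_hiera_large.pt")

        with torch.inference_mode():
            state = predictor.init_state(video_path="soccer_clip_frames/")

            # One click on the ball in the first frame registers object id 1.
            predictor.add_new_points_or_box(
                state, frame_idx=0, obj_id=1,
                points=np.array([[620, 410]], dtype=np.float32),
                labels=np.array([1], dtype=np.int32),  # 1 = positive click
            )

            # The memory-conditioned tracker propagates the mask through the
            # clip, re-acquiring the ball after occlusions.
            for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
                masks = (mask_logits > 0.0).cpu().numpy()  # per-object boolean masks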
