Show HN: GPT image editing, but for 3D models by zachdive

16 Comments

  • dekuma
    Posted June 4, 2025 at 4:36 pm

    Will you expose an API or export formats beyond STL so teams can slot this into existing pipelines? Excited to see creative mode evolve.
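
    An export API isn't confirmed, but as a stopgap the STL download can be converted locally into whatever a pipeline expects. A minimal sketch, assuming the trimesh Python library and placeholder file names:

        # Convert the STL download into other mesh formats for downstream tools.
        # trimesh picks the output format from the file extension.
        import trimesh

        mesh = trimesh.load("adam_export.stl")   # triangle mesh from the STL download
        mesh.export("adam_export.obj")           # Wavefront OBJ for DCC tools
        mesh.export("adam_export.glb")           # binary glTF for web and AR viewers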

  • gavmor
    Posted June 4, 2025 at 5:08 pm

    Have you put any thought into accommodating self-supporting design to optimize for printing?

    Or parametric structural optimization / topology optimization with parametric constraints?

    Or, if these are out of scope for your tool, then, like the other commenter, I'm wondering how you'll slot into existing pipelines that do support these techniques.

    I'm not in the industry, I'm just curious.
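
    On the self-supporting question above: one lightweight check a tool could run is flagging downward-facing triangles steeper than the common 45-degree self-supporting limit. A rough sketch, assuming the trimesh library, Z as the build direction, and a placeholder file name; it ignores bridging and the first layer on the bed, which real slicers handle properly:

        # Flag faces whose outward normal points within 45 degrees of straight down;
        # under the usual rule of thumb those overhangs need support material.
        import numpy as np
        import trimesh

        mesh = trimesh.load("part.stl")              # placeholder file name
        down = np.array([0.0, 0.0, -1.0])

        cos_to_down = mesh.face_normals @ down       # cosine of angle to straight down
        needs_support = cos_to_down > np.cos(np.radians(45))

        area = mesh.area_faces[needs_support].sum()
        print(f"{needs_support.sum()} faces (~{area:.1f} mm^2) likely need support")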

  • moralestapia
    Posted June 4, 2025 at 5:16 pm

    This is really good, congrats on shipping!

    Can you upload an existing model and start working from there?

  • liesandxander
    Posted June 4, 2025 at 5:35 pm

    [flagged]

  • JKCalhoun
    Posted June 4, 2025 at 5:56 pm

    Having trouble trying to get a regular truncated tetrahedron. Maybe too... obtuse? (Ha ha.)
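
    If prompting keeps failing, the solid itself is easy to build directly from its textbook coordinates. A small sketch, assuming numpy, scipy, and trimesh are installed:

        # A regular truncated tetrahedron: its 12 vertices are all permutations of
        # (±3, ±1, ±1) that have an even number of minus signs; the convex hull of
        # those points is the solid (the hexagonal faces come out triangulated).
        import numpy as np
        import trimesh

        signs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]])
        perms = np.array([[3, 1, 1], [1, 3, 1], [1, 1, 3]])
        verts = np.array([s * p for s in signs for p in perms])

        solid = trimesh.convex.convex_hull(verts)
        solid.export("truncated_tetrahedron.stl")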

  • hoakiet98
    Posted June 4, 2025 at 6:03 pm

    This is super interesting! Cursor's CEO mentioned in interviews that they initially started out building AI for 3D models, but pivoted because they couldn't get enough data for the models to be effective.

    I wonder if you think this is still true given how much better the foundation models are now.

  • adenta
    Posted June 4, 2025 at 6:08 pm

    I've had some success with OpenSCAD and this MCP server; feel free to ping me if that's helpful context:

    https://github.com/jhacksman/OpenSCAD-MCP-Server

  • klaussilveira
    Posted June 4, 2025 at 6:11 pm

    One thing no 3D AI tool has ever done is focus on enhancing or restyling the textures of existing, UV-unwrapped 3D models. I had to build my own pipeline out of ComfyUI and Blender scripts, exporting ID maps and black/white masks from the model's UV layout, in order to get Stable Diffusion to paint within the UV boundaries and treat them as the painting surface. Using cavity maps also helped the model respect boundaries. But now I can quickly apply, say, comic-book-style art to the existing textures of existing models.

    Have you considered providing built-in tools for mesh decimation and UV unwrapping? I know it can be done quickly with MeshLab, but I imagine not a lot of Adam users would even understand the need for decimation. Any chance of also automating rigging?
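
    For reference, the masking step described above can be reproduced with very little code once an ID map has been baked: each flat colour in the map becomes one black/white mask for the inpainting model to respect. A minimal sketch, assuming Pillow and numpy, a flat-coloured ID map, and placeholder file names:

        # Split a baked ID map into per-region black/white masks.
        import numpy as np
        from PIL import Image

        id_map = np.asarray(Image.open("id_map.png").convert("RGB"))

        for colour in np.unique(id_map.reshape(-1, 3), axis=0):
            mask = np.all(id_map == colour, axis=-1)          # True inside this region
            r, g, b = (int(c) for c in colour)
            img = Image.fromarray((mask * 255).astype(np.uint8))
            img.save(f"mask_{r:03d}_{g:03d}_{b:03d}.png")

    On the decimation question, wrapping something like open3d's simplify_quadric_decimation (or MeshLab's quadric edge collapse filter) behind a one-click step seems plausible, though that is speculation about Adam's roadmap, not a feature it has.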

  • AIorNot
    Posted June 4, 2025 at 6:15 pm

    Computer: Tea, Earl Grey, hot.

  • bko
    Posted June 4, 2025 at 6:22 pm

    This is great. I like the pattern of integrating LLMs into specific applications. Is there something similar for Figma?

    There are a lot of tools to convert Figma designs into code, but is there a reverse? Say you already have code and neglected Figma. Any way to generate the Figma file and iterate on it through an LLM?

  • lucasoshiro
    Posted June 4, 2025 at 6:47 pm

    Feature request: since it's using OpenSCAD under the hood, it would be great to be able to download the .scad file.

    The "creative" mode seems to be OK, but my main interest (the parametric mode) failed my first test: generating a bottle.

    But anyway, good job!
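
    Re the .scad request and the bottle test: until the source is downloadable, the same kind of parametric model can be hand-rolled and rendered with the OpenSCAD command line. A sketch, assuming a local OpenSCAD install and with every dimension made up; this is not what Adam generates internally:

        # Emit a simple hollow parametric bottle as OpenSCAD source, then render it.
        import subprocess
        from pathlib import Path

        def bottle_scad(h=80, r=20, neck_h=15, neck_r=8, wall=2):
            """Return OpenSCAD source for a plain hollow bottle with a neck."""
            return f"""
            $fn = 96;
            difference() {{
                union() {{
                    cylinder(h={h}, r={r});                                    // body
                    translate([0, 0, {h}]) cylinder(h={neck_h}, r={neck_r});   // neck
                }}
                union() {{
                    translate([0, 0, {wall}])
                        cylinder(h={h - 2 * wall}, r={r - wall});              // cavity
                    translate([0, 0, {h - 2 * wall}])
                        cylinder(h={neck_h + 2 * wall + 1}, r={neck_r - wall}); // opening
                }}
            }}
            """

        Path("bottle.scad").write_text(bottle_scad())
        # -o picks the output format from the extension (stl, 3mf, amf, ...)
        subprocess.run(["openscad", "-o", "bottle.stl", "bottle.scad"], check=True)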

  • arberavdullahu
    Posted June 4, 2025 at 7:02 pm

    Very cool! This reminds me of a use case I explored a few years ago—customizing furniture with different fabrics, wood finishes, and design options. In physical showrooms, furniture stores can usually only display a single version of each piece, but customers often want to visualize how the same item would look in various configurations. That’s where a digital tool could really shine.

    One concept I explored was creating an interactive app where users can experiment with different material options—essentially a real-time configurator. There’s a great example here [1], where if you model an object as a .obj file (possibly similar to Adam’s parametric models), you can tweak its materials and colors dynamically. IKEA seems to have something similar in production for some of their products [2].

    I experimented with Adam as well, and it did a surprisingly good job. The only catch: if you try to iterate too much, it tends to alter the form of the object. My ideal version of this would involve a professional photographer capturing high-resolution images of, say, a couch. Then I’d upload them into Adam, generate realistic renders with different fabrics or finishes, and download the final variants as high-quality images to use in catalogs or ecommerce.

    [1] https://angon.me/experiments/6/

    [2] https://www.ikea.com/gb/en/p/ektorp-2-seat-sofa-hakebo-grey-…

    [3] https://app.adamcad.com/share/2f1e68ad-2cdd-4613-8fdc-fc33f2…
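
    On the material-tweaking idea in [1]: for plain Wavefront files, the colours live in the .obj's companion .mtl file, so a configurator variant can often be produced just by rewriting a material's diffuse colour. A small sketch with made-up material and file names:

        # Rewrite the diffuse colour (Kd) of one named material in a Wavefront .mtl.
        from pathlib import Path

        def recolour_mtl(path, material, rgb):
            out, current = [], None
            for line in Path(path).read_text(encoding="utf-8").splitlines(keepends=True):
                if line.startswith("newmtl "):
                    current = line.split(maxsplit=1)[1].strip()
                if current == material and line.startswith("Kd "):
                    line = "Kd {:.4f} {:.4f} {:.4f}\n".format(*rgb)
                out.append(line)
            Path(path).write_text("".join(out), encoding="utf-8")

        recolour_mtl("sofa.mtl", "fabric_main", (0.55, 0.36, 0.24))  # swap to a tan fabric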

  • TheonlyJem
    Posted June 4, 2025 at 7:22 pm

    I think I saw a video on YouTube a while back of them building this. Looks promising.

  • b0a04gl
    Posted June 4, 2025 at 7:34 pm

    Super curious: how are you handling constraint resolution under the hood when the user modifies via prompt vs. direct manipulation? Are you maintaining a shared parametric model or diffing against a scene graph? Also, how is geometry validation handled post-generation to avoid non-manifold trash?
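
    No idea what Adam does internally, but a post-generation validation pass along the lines this comment asks about could look something like the following. A sketch, assuming the trimesh library and a placeholder file name; it is not the product's actual pipeline:

        # Check exported geometry for the usual non-manifold symptoms and try cheap repairs.
        import trimesh

        mesh = trimesh.load("generated.stl")            # placeholder file name

        print("watertight:", mesh.is_watertight)
        print("winding consistent:", mesh.is_winding_consistent)
        print("encloses a positive volume:", mesh.is_volume)

        trimesh.repair.fix_normals(mesh)                # unify face winding and normals
        mesh.fill_holes()                               # close small boundary loops

        print("watertight after repair:", mesh.is_watertight)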

  • flippyhead
    Posted June 4, 2025 at 8:13 pm

    I'm excited to try this with my 11-year-old. We love 3D printing stuff, but have mostly been limiting ourselves to existing works on Printables. I'm curious how well the output here prints on our Prusa.

  • ata_aman
    Posted June 4, 2025 at 8:17 pm

    Somewhat related, but I'd love GPT-enabled multi-physics simulations on objects. Designing in CAD, especially for intricate objects, seems (currently) to be better when done by "hand", but I'd absolutely love to use speech-to-text to run different simulations on those objects.
