HTTrack Website Copier by yamrzou

12 Comments

  • Post Author
    hosteur
    Posted March 18, 2025 at 5:46 pm
  • Post Author
    shuri
    Posted March 18, 2025 at 5:55 pm

    Time to add AI mode to this :).

  • Post Author
    ulrischa
    Posted March 18, 2025 at 5:56 pm

    [flagged]

  • Post Author
    ksec
    Posted March 18, 2025 at 6:41 pm

    Not sure of the context for why this is on HN, but it surely put a smile on my face. I used to use it during the 56K era, when I would just download everything and read it. Basically using it as RSS before RSS was a thing.

  • Post Author
    icameron
    Posted March 18, 2025 at 6:48 pm

    I’ve used it a few times to “secure” an old but relevant dynamic website. Like a site for a mature project that shouldn’t disappear from the internet, but it’s not worth upgrading five-year-old code that won’t pass our “cyber security audit” due to unsupported versions of PHP or Rails, so we just convert it to a static site and delete the database. Everything pretty much works fine on the front end, and the CMS functionality is no longer needed. It’s great for that niche use case.
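    For that kind of one-shot static snapshot, a typical command line (example.com, the output directory, and the filter are placeholders) looks like:

        httrack "https://www.example.com/" -O ./mirror "+*.example.com/*" -v

    -O sets the output directory, the "+" pattern keeps the crawl on the site's own domain, and -v prints progress.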

  • Post Author
    NewEntryHN
    Posted March 18, 2025 at 7:52 pm

    Is this

    wget --mirror

    ?
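    For comparison, a faithful offline copy with wget is usually spelled out more fully than a bare --mirror; example.com is a placeholder:

        wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://example.com/

    --mirror turns on recursion and timestamping, while the other flags rewrite links for local browsing and pull in each page's CSS and images.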

  • Post Author
    solardev
    Posted March 18, 2025 at 8:18 pm

    This doesn't really work with most sites anymore, does it? It can't run JavaScript (unlike headless browsers with Playwright/Puppeteer, for example), has limited support for more modern protocols, etc.?

    Any suggestions for an easy way to mirror modern web content, like an HTTrack for the enshittified web?

  • Post Author
    superjan
    Posted March 18, 2025 at 8:22 pm

    A few years ago my workplace got rid of our on-premise install of FogBugz. I tried to clone the site with HTTrack, but it did not work due to client-side JavaScript and authentication issues.

    I was familiar with C#/WebView2 and used that: generate the URLs, load the pages one by one, wait for each to build its HTML, and then save the final page. Intercept and save the CSS/image requests.

    If you have ever integrated a browser view in a desktop or mobile app, you already know how to do this.
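    A rough sketch of the same approach in Python with Playwright instead of C#/WebView2 (the URL list and output paths are placeholders, and authentication handling is omitted):

        from pathlib import Path
        from urllib.parse import urlparse
        from playwright.sync_api import sync_playwright

        URLS = ["https://fogbugz.example.com/default.asp?1"]  # placeholder: generate the real list of case URLs
        OUT = Path("mirror")
        OUT.mkdir(parents=True, exist_ok=True)

        def save_asset(response):
            # Intercept CSS/image responses and save them next to the pages.
            if response.request.resource_type in ("stylesheet", "image"):
                rel = urlparse(response.url).path.lstrip("/") or "index"
                dest = OUT / rel
                dest.parent.mkdir(parents=True, exist_ok=True)
                try:
                    dest.write_bytes(response.body())
                except Exception:
                    pass  # e.g. redirect responses have no body

        with sync_playwright() as p:
            page = p.chromium.launch().new_page()
            page.on("response", save_asset)
            for i, url in enumerate(URLS):
                page.goto(url, wait_until="networkidle")  # let client-side JS finish rendering
                (OUT / f"page{i}.html").write_text(page.content(), encoding="utf-8")  # save the rendered DOM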

  • Post Author
    jmsflknr
    Posted March 18, 2025 at 8:56 pm

    Never found a great alternative to this for Mac.

  • Post Author
    Hard_Space
    Posted March 18, 2025 at 9:00 pm

    I used this all the time twenty years ago. I tried it out again recently, I think at the suggestion of ChatGPT (!), for some archiving, and it actually did some damage.

    I do wish there were a modern version of this that could embed the videos in some of my old blog posts, so I could save them in their entirety locally as something other than an HTML mystery blob. None of the archive sites preserve video, and neither do extensions like SingleFile. If you're lucky, they'll embed a link to the original file, but that won't help later when the original posts go offline.
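    A rough sketch of what that could look like in Python with requests and BeautifulSoup (the post URL is a placeholder, and videos injected by JavaScript would still be missed):

        from pathlib import Path
        from urllib.parse import urljoin, urlparse

        import requests
        from bs4 import BeautifulSoup

        post_url = "https://example.com/old-post.html"  # placeholder
        out = Path("saved_post")
        out.mkdir(exist_ok=True)

        soup = BeautifulSoup(requests.get(post_url, timeout=30).text, "html.parser")

        # Both <video src=...> and <video><source src=...></video> occur in the wild.
        for tag in soup.find_all(["video", "source"]):
            src = tag.get("src")
            if not src:
                continue
            video_url = urljoin(post_url, src)
            name = Path(urlparse(video_url).path).name or "video.mp4"
            (out / name).write_bytes(requests.get(video_url, timeout=120).content)
            tag["src"] = name  # point the page at the local copy

        (out / "index.html").write_text(str(soup), encoding="utf-8")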

  • Post Author
    tamim17
    Posted March 18, 2025 at 9:21 pm

    In the old days, I used to download entire websites using HTTrack and read them later.

  • Post Author
    bruh2
    Posted March 18, 2025 at 9:36 pm

    I can't recall the details, but this tool had quite a bit of friction the last time I tried downloading a site with it: too many new definitions to learn, too many knobs it asks you to tweak. I opted to use `wget` with the `--recursive` flag, which just did what I expected it to do out of the box: crawl all the links it can find and download them. No tweaking needed, and nothing new to learn.
