Version 3.49-2 (05/20/2017)
Engine fixes (keep-alive, redirects, new hashtables, unit tests)
Installing HTTrack:
Go to the download section now!
For help and questions:
Visit the forum,
Read the documentation,
Read the FAQs,
Browse the sources

Welcome
HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility.
It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer.
12 Comments
hosteur
Related: https://news.ycombinator.com/item?id=27789910
shuri
Time to add AI mode to this :).
ksec
Not sure about the context of why this is on HN, but it surely put a smile on my face. I used to use it during the 56K era, when I would just download everything and read it later. Basically using it as RSS before RSS was a thing.
icameron
I’ve used it a few times to “secure” an old but relevant dynamic website. Like a site for a mature project that shouldn’t disappear from the internet, but where it’s not worth upgrading five-year-old code that won’t pass our “cyber security audit” due to unsupported versions of PHP or Rails, so we just convert it to a static site and delete the database. Everything pretty much works fine on the front end, and the CMS functionality is no longer needed. It’s great for that niche use case.
NewEntryHN
Is this `wget --mirror`?
solardev
This doesn't really work with most sites anymore, does it? It can't run JavaScript (unlike headless browsers with Playwright/Puppeteer, for example), has limited support for more modern protocols, etc.?
Any suggestions for an easy way to mirror modern web content, like an HTTrack for the enshittified web?
superjan
A few years ago my workplace got rid of our on-premise install of FogBugz. I tried to clone the site with HTTrack, but it did not work due to client-side JavaScript and authentication issues.
I was familiar with C#/WebView2 and used that: generate the URLs, load the pages one by one, wait for the page to build its HTML, and then save the final page. Intercept and save the CSS/image requests.
If you have ever integrated a browser view in a desktop or mobile app, you already know how to do this.
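A minimal sketch of that workflow using Playwright in Python rather than the C#/WebView2 the comment describes; the URLs and output directory are placeholders, not from the original setup:

```python
# Rough sketch: drive a real browser so client-side JavaScript runs, wait for the
# page to settle, save the rendered HTML, and capture CSS/image responses.
from pathlib import Path
from urllib.parse import urlparse
from playwright.sync_api import sync_playwright

URLS = ["https://example.local/page1", "https://example.local/page2"]  # placeholder list
OUT = Path("mirror")

def path_for(url: str, default: str) -> Path:
    rel = urlparse(url).path.strip("/") or default
    return OUT / rel

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    captured = []

    def maybe_capture(response):
        # Keep CSS/image responses; their bodies are read after the page settles.
        if response.request.resource_type in ("stylesheet", "image"):
            captured.append(response)

    page.on("response", maybe_capture)

    for url in URLS:
        captured.clear()
        # Let client-side JavaScript finish building the DOM before saving.
        page.goto(url, wait_until="networkidle")

        dest = path_for(url, "index.html")
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(page.content(), encoding="utf-8")

        for r in captured:
            try:
                asset = path_for(r.url, "asset")
                asset.parent.mkdir(parents=True, exist_ok=True)
                asset.write_bytes(r.body())
            except Exception:
                pass  # some responses (redirects, cached entries) have no body

    browser.close()
```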
jmsflknr
Never found a great alternative to this for Mac.
Hard_Space
I used this all the time twenty years ago. Tried it out again for some reason recently, I think at the suggestion of ChatGPT (!), for some archiving, and it actually did some damage.
I do wish there were a modern version of this that could embed the videos in some of my old blog posts so I could save them in their entirety locally as something other than an HTML mystery blob. None of the archive sites preserve video, and neither do extensions like SingleFile. If you're lucky, they'll embed a link to the original file, but that won't help later when the original posts go offline.
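For what it's worth, a rough sketch of the inlining step such a tool would need, assuming the videos are plain <video>/<source> tags pointing at direct file URLs (streaming manifests like HLS/DASH would need more work); the URL in the usage comment is a placeholder:

```python
# Sketch: inline <video>/<source> files as base64 data: URIs so the saved page
# is self-contained. Large video files will produce very large HTML files.
import base64
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def inline_videos(page_url: str, out_path: str) -> None:
    html = requests.get(page_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all(["video", "source"]):
        src = tag.get("src")
        if not src or src.startswith("data:"):
            continue
        resp = requests.get(urljoin(page_url, src), timeout=120)
        mime = resp.headers.get("Content-Type", "video/mp4")
        # Replace the remote reference with the raw bytes embedded in the page.
        tag["src"] = f"data:{mime};base64,{base64.b64encode(resp.content).decode()}"
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(str(soup))

# inline_videos("https://example.local/old-post.html", "old-post-selfcontained.html")
```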
tamim17
Back in the day, I used to download entire websites using HTTrack and read them later.
bruh2
I can't recall the details, but this tool had quite a bit of friction the last time I tried downloading a site with it: too many new definitions to learn, too many knobs it asks you to tweak. I opted to use `wget` with the `--recursive` flag, which just did what I expected out of the box: crawl all the links it can find and download them. No tweaking needed, and nothing new to learn.