Download/Get/Scrape/Crawl all static files from a website using Wget


We will use the justdeleteme.xyz website as a good example of this usage, because it is hosted entirely as static files, yet it requires multiple recursive crawls to download the pages in every language.


This is the base technique used to host delete.nogafam.es, combined with some sed/grep commands to strip unwanted parts from all translated pages (such as the Tweet button).
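The sed/grep cleanup can be sketched like this. Note this is a minimal sketch: the `twitter-share-button` markup below is an assumption used to make the example self-contained; inspect the real mirrored pages first (e.g. with grep) and adjust the pattern to whatever widget markup they actually embed. GNU sed is assumed (`sed -i` without an argument; on BSD/macOS use `sed -i ''`).

```shell
set -eu

# Stand-in for the wget mirror directory, with one page containing
# a hypothetical Tweet button (assumption, not the site's real markup).
mkdir -p mirror
cat > mirror/index.html <<'EOF'
<html><body>
<h1>Just Delete Me</h1>
<a href="https://twitter.com/share" class="twitter-share-button">Tweet</a>
</body></html>
EOF

# Find every page containing the unwanted widget...
grep -rl 'twitter-share-button' mirror |
# ...and delete the matching lines in place (GNU sed).
while read -r f; do
  sed -i '/twitter-share-button/d' "$f"
done

# Confirm the widget is gone from the sample page.
! grep -q 'twitter-share-button' mirror/index.html && echo "widget removed"
```

The same loop works for any other per-page cleanup: swap the grep pattern and the sed expression for whatever element you want to strip from all translated copies.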

wget --recursive --no-clobber --page-requisites --html-extension --convert-links --domains justdeleteme.xyz --no-parent https://justdeleteme.xyz
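For reference, here is the same command in short-flag form with each option annotated. The politeness flags at the end are hedged additions, not part of the original command; they are commonly used when mirroring someone else's site, so tune or drop them as needed.

```shell
# Short-flag equivalent of the command above:
#   -r   --recursive        follow links and download the whole tree
#   -nc  --no-clobber       skip files that already exist locally
#   -p   --page-requisites  also fetch the CSS/JS/images each page needs
#   -E   --html-extension   save pages with a .html suffix
#   -k   --convert-links    rewrite links so the mirror browses locally
#   -D   --domains          stay on the listed domain(s)
#   -np  --no-parent        never ascend above the starting directory
wget -r -nc -p -E -k -D justdeleteme.xyz -np https://justdeleteme.xyz

# Optional politeness flags (assumptions, adjust to taste):
# wget ... --wait=1 --random-wait --limit-rate=200k
```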