OK, sorry if I am telling you something you already know, but the classic tool for scraping a website is wget. Starting from one URL it can find all linked URLs and download them, or not, based on configuration. I have always found it pretty accessible; it was one of the earliest CLI tools I used. You can learn a little bit or a lot, and I think for this task you could learn a little. That would get you a perfect local mirror of the site.
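For instance, something like this (the URL is a placeholder; these are the usual mirroring flags):

    wget --mirror \
         --convert-links \
         --adjust-extension \
         --page-requisites \
         --no-parent \
         https://example.com/docs/

--mirror turns on recursion and timestamping, --convert-links rewrites links so the copy browses locally, --adjust-extension adds .html extensions where needed, --page-requisites grabs the CSS and images each page needs, and --no-parent stops it from wandering above the start URL.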
GNU has the comprehensive docs, of course: https://www.gnu.org/software/wget/. But starting with some random tutorial might be a better choice.
To get the result into Markdown you'll have to add another step, perhaps turndown or pandoc.
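With pandoc that step is a one-liner per file (filenames here are made up):

    pandoc -f html -t markdown page.html -o page.md

And you can loop it over the whole mirror, e.g. for f in *.html; do pandoc -f html -t markdown "$f" -o "${f%.html}.md"; done.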
Other projects to check out are httrack and, tangentially, archivebox.
It's all good. I've heard of wget and used it in the past. I use PowerShell at work, so I'm more accustomed to it, and it's an object-oriented programming language that I understand well.
The module is based on Angular, so I could do element queries for the DOM nodes I wanted and wrote some logic to walk to the next paragraph under a given header until I reached the next header. Each header plus its paragraphs was collected and then written to disk as Markdown files.
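Roughly this shape, from memory (a sketch rather than the exact script: the URL and file naming are made up, it assumes h2/p tags, and ParsedHtml requires Windows PowerShell 5.1 since it's gone in PowerShell 7+):

    $resp  = Invoke-WebRequest -Uri 'https://example.com/docs/page'
    # Take headers and paragraphs in document order.
    $nodes = @($resp.ParsedHtml.body.getElementsByTagName('*')) |
        Where-Object { $_.tagName -in 'H2', 'P' }

    $title = $null
    $body  = @()
    foreach ($node in $nodes) {
        if ($node.tagName -eq 'H2') {
            if ($title) {
                # Hit the next header: flush the finished section to disk.
                $file = ($title -replace '[\\/:*?"<>|]', '_') + '.md'
                Set-Content -Path $file -Value ("## $title`n`n" + ($body -join "`n`n"))
            }
            $title = $node.innerText
            $body  = @()
        } elseif ($title) {
            $body += $node.innerText   # paragraph under the current header
        }
    }
    if ($title) {
        # Flush the last section.
        $file = ($title -replace '[\\/:*?"<>|]', '_') + '.md'
        Set-Content -Path $file -Value ("## $title`n`n" + ($body -join "`n`n"))
    }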
But thanks for the tool suggestion, it's good to know of alternatives.