> Have you investigated some of the options already now?
A bunch of other things came up, forcing me to put the project on the back burner.
(e.g. most recently, about a week ago, my six-month-old boot drive went bad, and it took me several days to rush-order a new NVMe drive, learn ZFSBootMenu, restore my backups, and redesign my backup strategy. The goal: when the original comes back from RMA, if the ZFS mirroring and snapshotting and the trick for mirroring the EFI system partition aren't enough to ensure high availability, a full, bootable backup of the NVMe pool's contents can be restored in two hours or less, with the sequential read performance of my first backup tier as the bottleneck.)
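For the curious, here's a minimal sketch of the snapshot-and-replicate pattern behind that restore target, written as Python driving the zfs CLI. The pool and dataset names are hypothetical, and a real setup would switch to incremental sends after the first full replication, but this is the shape of it:

```python
# A minimal sketch (not my actual scripts) of the ZFS snapshot-and-replicate
# pattern that makes a "restore the whole pool in hours" backup tier work.
# Pool/dataset names are hypothetical, and this needs root to run.
import subprocess
from datetime import datetime, timezone

SOURCE_POOL = "nvme"            # hypothetical source pool
BACKUP_DATASET = "backup/nvme"  # hypothetical dataset on the backup pool

def replicate() -> None:
    # Atomic, recursive snapshot of every dataset in the pool.
    snap = f"{SOURCE_POOL}@backup-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"
    subprocess.run(["zfs", "snapshot", "-r", snap], check=True)

    # `zfs send -R` emits a replication stream covering the whole dataset
    # tree; piping it into `zfs receive` on the backup pool yields a copy
    # that can be sent back later. A real setup would use incremental
    # sends (-I) after the first full replication.
    sender = subprocess.Popen(["zfs", "send", "-R", snap],
                              stdout=subprocess.PIPE)
    subprocess.run(["zfs", "receive", "-F", BACKUP_DATASET],
                   stdin=sender.stdout, check=True)
    if sender.wait() != 0:
        raise RuntimeError("zfs send failed")

if __name__ == "__main__":
    replicate()
```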
> missing flexibility for output paths has been an annoyance.
Hmm. We'll see if I wind up using it. Avoiding dead links has been non-negotiable, to the point where replicating my WordPress blog on a local httpd, spidering it, and logging the URLs I need to preserve has been one of the big hold-ups.
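The spidering itself doesn't need to be fancy; a stdlib-only sketch along these lines (the hostname and port are assumptions, and a real run would also want to walk paginated archives) is all I mean:

```python
# A rough sketch of the spidering step: crawl the local replica, follow
# only same-site links, and print every URL the new site must keep
# answering. Stdlib only; hostname and port are assumptions.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urldefrag, urljoin

ROOT = "http://localhost:8080/"  # hypothetical local httpd with the replica

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

seen, queue = set(), [ROOT]
while queue:
    url = queue.pop()
    if url in seen:
        continue
    seen.add(url)
    try:
        with urllib.request.urlopen(url) as resp:
            if "html" not in resp.headers.get("Content-Type", ""):
                continue
            body = resp.read().decode("utf-8", errors="replace")
    except OSError:
        continue  # broken internal link; a later pass worries about those
    extractor = LinkExtractor()
    extractor.feed(body)
    for href in extractor.links:
        absolute, _fragment = urldefrag(urljoin(url, href))
        if absolute.startswith(ROOT):  # stay on the replica
            queue.append(absolute)

for url in sorted(seen):  # one preserved URL per line
    print(url)
```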
> is that I found Zola to be quite hard to hack on
Hmm. Potentially a reason I'll wind up making my own, given that I've written SSGs in Python before (e.g. https://vffa.ficfan.org/ runs on a homebrew Python SSG), and I've already got a single-page pulldown-cmark frontend whose features I've gone way overboard on, plus a basic, task-specific Rust SSG for my mother's art website that I can merge with it and generalize.
EDIT: Here's a screenshot of what I mean by saying I've gone way overboard.
> and Tera (its templating lang) to be a little buggy / much less elegant than minijinja API-wise.
Hmm. Noted. I think I'm using Tera for my mother's SSG.
> Re. link checking, have you seen lychee? When I found out about it, the priority of building my own link checker in my SSG (that was only an idea at that point, I think) basically dropped to zero :D
You accidentally re-used the link to the Zola issue tracker there. I haven't checked out lychee yet, and I'm getting a docs.rs error when clicking the examples link, so all I can say is that it'll depend on how amenable it is to checking a site rooted in a file:// URL, so I don't need the overhead and complexity of spinning up an HTTP server just to check for broken links.
I don't want the overhead of looping through an HTTP client and server implementation where it isn't needed; I design my tooling against a test target roughly comparable to a Raspberry Pi 4, performance-wise.
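If lychee turns out not to handle that, the fallback isn't much code anyway. Here's a rough sketch of the kind of purely filesystem-level check I have in mind; the output directory name is made up, and a real version would need to validate fragments too:

```python
# A rough sketch of filesystem-level link checking: walk the rendered
# output on disk, resolve each internal href/src against the filesystem,
# and report anything that doesn't exist. No sockets anywhere. The
# `public/` directory is hypothetical.
import sys
from html.parser import HTMLParser
from pathlib import Path
from urllib.parse import unquote, urlparse

SITE_ROOT = Path("public")  # hypothetical build-output directory

class RefCollector(HTMLParser):
    """Collect href/src values from the tags that can dangle."""
    def __init__(self):
        super().__init__()
        self.refs = []

    def handle_starttag(self, tag, attrs):
        if tag in ("a", "link", "img", "script"):
            for name, value in attrs:
                if name in ("href", "src") and value:
                    self.refs.append(value)

broken = []
for page in SITE_ROOT.rglob("*.html"):
    collector = RefCollector()
    collector.feed(page.read_text(encoding="utf-8", errors="replace"))
    for ref in collector.refs:
        parts = urlparse(ref)
        if parts.scheme or parts.netloc:
            continue  # external link; out of scope for this pass
        path = unquote(parts.path)
        if not path:
            continue  # same-page fragment like "#top"
        target = (SITE_ROOT / path.lstrip("/")) if path.startswith("/") \
            else (page.parent / path)
        if target.is_dir():  # directory-style URLs serve their index.html
            target = target / "index.html"
        if not target.exists():
            broken.append((page, ref))

for page, ref in broken:
    print(f"{page}: {ref}")
sys.exit(1 if broken else 0)
```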