Hey everyone,
I’m reaching out because I could use some help or guidance here. I ended up being the go-to person for collecting and storing data in a project I’m involved in with my team. We’ve got a lot of data, tens of thousands of files, and we’re using Nextcloud to make it accessible to our users. The challenge is that Nextcloud has no search function when accessed through a public link. While we really appreciate the features it offers from an administrative standpoint, that limitation isn’t working well for our users.
I was wondering if anyone has suggestions or resources that could point us in the right direction for this project? It would be super awesome if it’s something open-source and privacy-preserving. 😄
Thanks a bunch in advance!
That is a new one for us! Thank you for the info, I will do some research!
Just want to let you know there is a Nextcloud and a selfhosted community on Lemmy.
Have you already read this? https://github.com/nextcloud/server/issues/39162
Seems like there is active work on improving search in Nextcloud.
I've never heard of it before this.
I use Mega. It has a search function. No idea how fast it would be in your use case though.
Not open source though.
Yeah, our problem is that we are hosting everything on a website for public use. I didn’t know they were working on public search; I’ll give that a read, thank you!
Is there a specific WebDAV server you’d suggest? Another user suggested SFTPGo; do you have any preferences?
I use this one
https://hub.docker.com/r/ugeek/webdav
It’s not been updated in a while, though. I’ve found performance to be roughly the same.
SFTPGo includes WebDAV too, so that could also work.
An advantage of WebDAV is that you can view it in the browser if needed.
If they’re well-named files, just spin up a WebDAV server via `rclone` and search by file name in the browser. You could also use `davfs2` to mount the server locally in a directory and then filter through the content with `fd | fzf`.
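A rough sketch of that setup; the directory, port, and hostname below are placeholders for your own:

```shell
# Serve a directory (or any configured rclone remote) over WebDAV.
# --read-only is a sensible default for a public share.
rclone serve webdav /srv/project-files --addr :8080 --read-only

# On a client machine, mount it with davfs2 (davfs2 package required):
sudo mount -t davfs http://server.example:8080/ /mnt/project-files

# Then fuzzy-filter by file name:
fd . /mnt/project-files | fzf
```

The browser view works out of the box because `rclone serve webdav` also answers plain GET requests with a directory listing.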
If they're text files, spin up a Docker container with Forgejo (a fork of Gitea) and enable the bleve search indexer.
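A minimal sketch of that, assuming the official Forgejo image; the image tag and volume name are placeholders, and the `[indexer]` keys are the ones Forgejo inherits from Gitea:

```shell
# Run Forgejo with a persistent data volume (check the current release tag)
docker run -d --name forgejo \
  -p 3000:3000 \
  -v forgejo-data:/data \
  codeberg.org/forgejo/forgejo:9

# Then enable the bleve repository indexer in /data/gitea/conf/app.ini
# inside the volume and restart the container:
#   [indexer]
#   REPO_INDEXER_ENABLED = true
#   REPO_INDEXER_TYPE    = bleve
```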
If you wanted to get really fancy, you could run Wiki.js alongside it, use Git as the backend, and get a wiki that’s easy to fork and distribute among the team.
Would the rclone method work with a public website? I only have a vague familiarity with rclone from the .edu Google Drive days.
Of course, it's just an HTTP server. All you have to do is port-forward.
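For example, with rclone’s plain-HTTP mode (the directory and port are placeholders):

```shell
# Serve the files over HTTP with a browsable directory listing:
rclone serve http /srv/project-files --addr :8080 --read-only
# Then forward port 8080 on your router/firewall to make it public,
# ideally behind a reverse proxy that terminates TLS.
```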
Have you looked into combining Cryptomator with commercial cloud storage? Once mounted, you can search the directory like any other local drive. I only use it for my work directory so that I can maintain an encrypted cloud backup, so I am not sure how well it does when working in a multi-user environment with different levels of access.
I understand these files are not images or binaries, but you should specify whether they're PDF, XLS, DOC, etc.
I'm sure that Apache Solr or Elasticsearch can index these files, and there are Nextcloud apps that integrate such indexes. For example, this one: https://github.com/nextcloud/fulltextsearch/wiki
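A rough sketch of wiring that app up with its Elasticsearch provider; the `www-data` user, the Elasticsearch URL, and running `occ` from the Nextcloud root are assumptions that depend on your install:

```shell
# Install the full-text search apps (run as the web-server user):
sudo -u www-data php occ app:install fulltextsearch
sudo -u www-data php occ app:install fulltextsearch_elasticsearch
sudo -u www-data php occ app:install files_fulltextsearch

# Point the platform at your Elasticsearch instance (URL is a placeholder):
sudo -u www-data php occ config:app:set fulltextsearch_elasticsearch \
  elastic_host --value "http://localhost:9200"

# Build the initial index (can take a while for tens of thousands of files):
sudo -u www-data php occ fulltextsearch:index
```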