Scholars usually portray institutions as stable, inviting a status quo bias in their theories. Change, when it is theorized, is frequently attributed to exogenous factors. This paper, by contrast, proposes that institutional change can occur endogenously through population loss, as institutional losers become demotivated and leave, whereas institutional winners remain. This paper provides a detailed demonstration of how this form of endogenous change occurred on the English Wikipedia. A qualitative content analysis shows that Wikipedia transformed from a dubious source of information in its early years to an increasingly reliable one over time. Process tracing shows that early outcomes of disputes over rule interpretations in different corners of the encyclopedia demobilized certain types of editors (while mobilizing others) and strengthened certain understandings of Wikipedia’s ambiguous rules (while weakening others). Over time, Wikipedians who supported fringe content departed or were ousted. Thus, population loss led to highly consequential institutional change.
@manucode@feddit.de I agree; I also don't see how a federated Wikipedia would preserve what made Wikipedia so great. Per the paper above, fringe editors claiming "the flatness of the world is a debated topic" gradually got frustrated with having to present evidence and having their work reverted all the time, and so voluntarily left over time. That's how an article goes from "both sides" to "one side is a fringe idea".
From reading the Ibis page, this seems a lot closer to Fandom than to Wikipedia: different encyclopedias where the same page name can be completely different.
Skepchick also had a great video about the topic: https://www.patreon.com/posts/92654496
Get a lightweight gaming laptop instead. Combine with a lap desk.
@matcha_addict@lemy.lol In this situation, I'd advise acquiring a copy from an alternative source, then just comparing the texts of the two.
In practice though, if you're already going the OCR route, just cut the pages out of a physical book with a utility knife and feed them into a sheet-feed scanner. All they get to know is that some asshole cyberpunk script kiddie jacked your book while you were waiting at a bus stop.
The bad news is that uploading e-books will involve some programming on your part (for your sanity's sake, at least).
The good news is that it should be far easier than other mediums.
If you're approaching this from a complete safety perspective (cause you live in a fiefdom that owes tribute to the publishers guild), then you'll want to OCR the pages of the book and use the text to build a brand-new book, free of metadata. I'm pretty sure a Python crash course could get you up and running in a month or six.
If you want what's closest to the original product, then you'll need a Python script that strips the book down to just a text document, then re-converts that back into your own book. You'll have to review the text document for anything sneaky included in the book, like invisible watermark text.
Both options are so simple from a programming perspective that I've never seen scripts for stripping e-book protections. A real "the solution is left as an exercise for the reader" situation. And from what I know, the publishers have shifted to selling hard copies as their bread and butter, plus striking deals with libraries for other revenue. The big money is still in mandatory university textbooks.
Source: Never actually done what you're asking for
From what I understand, they can practically generate a custom audio file for every download, so sharing timestamps wouldn't work that well. Re-distributing podcasts without the ads would definitely land you in legal trouble, since every audio file is their "work of art".
That's not a problem for uBlock, because you're only editing their work of art for personal use, and anything you share is unaltered.
And YouTube's SponsorBlock is just sharing timestamps you might be interested in.
An AI system that can recognize ad patterns and auto-skip forward?