Library Genesis Desktop app, now with IPFS support (mhut.org)
370 points by janandonly on Nov 8, 2021 | 73 comments



The 'Books' tool [1] has been doing the same for a long time as well; it can be used anywhere Bash runs and, given its shell-based nature, is eminently hackable. It comes with a CLI (named 'books'), TUI ('nbook' and 'nfiction') and GUI ('xbook' and 'xfiction') and supports IPFS, BitTorrent and direct download. It can search on all fields.

Source: I wrote this about 6 years ago and maintain it every now and then

[1] https://github.com/Yetangitu/books

[2] https://forum.mhut.org/viewtopic.php?f=34&t=7816


This looks nearly perfect apart from the MySQL dependency. SQLite should be more than enough for this use case.


I made it for a use case where an existing central database server can be used by several instances of Books, kept up to date using a cron script. This reduces external network interactions and makes it possible to search the database even when the external network connection happens to be down. It uses MySQL to keep things as simple as possible: the libgen dumps are nothing but MySQL dumps, so using another database engine would of course be possible but would necessitate converting them first. Since there was a MySQL instance running anyway, that conversion seemed superfluous, but as said, it is quite hackable, so if you feel an urge...
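
For the curious: since the dumps are plain MySQL dumps, standing up such a central database is little more than the following sketch (file and database names are illustrative):

    # one-time: create the database and load a libgen dump
    mysql -e 'CREATE DATABASE libgen'
    gunzip -c libgen.sql.gz | mysql libgen
    # then refresh periodically from cron with whatever script
    # re-downloads the latest dump and repeats the load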


That is a really cool project, but I can't give this to non-technical people.


The tools are meant for those who live on the command line, which more or less excludes 'non-technical people'. For those non-technical people there is always the multitude of web things, the Windows tool which started this thread and the plain ol' Library, which the Library Genesis project - that is, the original version as headed by Bookwarrior - aims/aimed to be.

Let there always be tools tailored to specific groups' needs; the one-size-fits-all approach nearly always ends up dumbing down the interface by removing 'difficult to use' functions and 'complicated' options to present a Fisher-Price interface with big happy buttons and lots of open space.

Also notice that the Books tools weigh in at 47K compressed; there is something to be said for light and nimble tools.


Alexandria [1] is pretty much a UI-based Mac app version of Books. I've been maintaining it for a while now, and whenever I share it with someone non-technical, Libgen's breadth blows their mind.

[1] https://github.com/Samin100/Alexandria/releases


Why do you advertise it as a Mac-only app when it's Electron-based and should therefore be cross-platform?


I just don’t have a Windows PC to test on. There are also a few Mac-specific quirks like opening the epubs in Apple Books.


Linux support should be extremely simple to test and deploy. It'd be like an hour's task to write up a GitHub Action for it, so you don't even need to have a local Linux deploy or whatever.

I was going to do it myself, but the build scripts you made are incompatible with node 17.x.x (they require exporting `NODE_OPTIONS=--openssl-legacy-provider`), and they also seem to assume that `react-scripts`, etc. are in the local environment, rather than including them portably as a yarn package dependency and then doing `yarn exec <xyz>`. There's too much weirdness here and I'm not familiar enough with the systems involved to even begin making it compatible (I started, and then realised that I'm supposed to be relaxing from my full-time job, not dealing with more weird new tech systems that don't behave sensibly).
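
For reference, the fix I mean looks roughly like this (a sketch; it assumes `react-scripts` gets declared as a package dependency so yarn can resolve it locally):

    # node 17+ moved to OpenSSL 3, which breaks webpack-4-era tooling
    export NODE_OPTIONS=--openssl-legacy-provider
    yarn install                    # installs react-scripts locally instead of assuming a global copy
    yarn exec react-scripts build   # runs the locally installed copy portably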


"This fence is so long and sprawling that even thinking about painting it negatively affects my health. But I bet you could knock it out in under an hour," Tom said to the other children.

Later that afternoon, Aunt Polly arrived home to an unpainted fence.


I don't think it's a stretch to say that the primary maintainer, the person who wrote the (extremely slapdash) build scripts and knows exactly why it's copying folders during the build process, is going to get the changes done faster than someone who is not familiar with the tools and codebase. Writing code isn't painting fences; it's like asking me to replace a set of levers on an industrial piping system, except that at least then the specifications of the prior levers would be available and there would be documentation for the whole thing.


This looks great! Thanks for making it


Non-technical people aren't the ones who use LibGen though, right? There are things like PDFDrive that have found some popularity in my country.


Sure they do. College students become highly technical when it’s paying for textbooks vs beer.


I mean, using LibGen requires all of knowing how to visit a website and type words in a text input field. Not exactly technical?


The way this works is that if you're in a highly competitive market where all the products are equally capable and one of them is at all easier to use, that one wins. Then people get the impression that everyone is a dunce because the slightest inconvenience causes them to give up and use something else.

But then you come to something where there is only one way to do that thing, or the competition has some countervailing disadvantage, and suddenly Bob from accounts knows how to use AS/400.


My mother is one of the least technical people I know and also a regular libgen user.


Not a "technical" person. Use LibGen all the time.


Have you heard about Libgen Desktop, a Windows application for browsing a local copy of the LibGen catalog?


So let's work on making the people around us more technical.


Whilst I appreciate the sentiment, most plans premised on "all the children are above average" being true will fail.

The Internet, WWW, and Google were once power-user tools. Global education has increased only modestly. The biggest change has been in lowering barriers to ease of use, including but not limited to the cost of tools.

This isn't an unalloyed blessing; the minimum viable user is both a blessing and a curse:

https://old.reddit.com/r/dredmorbius/comments/69wk8y/the_tyr...


Hacker News is mostly a nice forum, but the fact that the above got downvotes is a perfect representation of exactly what's wrong with it.


Only if they consent, though; many people do not want to be technical.


How big is the local DB?


Around 9G for the nonfiction section; this can be halved by using the compact version (libgen_compact), which leaves out the publication descriptions and as such is far less usable.


Can we create a decentralized library (catalog) protocol, so it can't be taken down? I'm thinking about libgen/sci-hub/what.cd (RIP), but entirely decentralized and protocol-based. We already have basically all the necessary tools: BitTorrent/IPFS to distribute files, magnet links to link them, DHT to discover peers. We just need to combine them in a smart way and maybe add some bits of missing functionality (rough sketch after the footnotes below), like:

- categorization and search;

- voting for every entry;

- maybe a reputation system, where peers who regularly publish good, high-quality content get their 'karma'[1] upvoted. So when downloading you can choose people with higher reputation, or, when building a catalog of materials in a certain area of interest, you can filter for peers above a certain karma threshold.

- an incentive for peers to keep seeding whatever they downloaded, at least until achieving a certain ratio (say, 2:1). Maybe by rewarding them with 'torrent tokens' that they can spend on downloading, commenting etc.[2]

- comments on every entry (published torrent), with means to combat spam, insults, irrelevant stuff etc. E.g. with a voting system where comments with, say, -3[3] votes become collapsed, and/or with the aforementioned 'tokens' that you spend when commenting.

- personal blacklists to block people whose torrents/comments you don't want to see.

Maybe I'm reinventing the wheel here and something similar already exists but for some reason isn't popular (since I never heard of it)? In that case we need to figure out why it didn't take off and fix it to make it work.

[1] Your ID will be a cryptographic key a la cryptocurrency wallet.

[2] With rare but sought-after torrents rewarding you with more tokens.

[3] Personally adjustable.
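
To make the identity/publishing half concrete, here is a rough sketch using today's tools (the entry format, key handling and announcement step are all hypothetical; designing those properly is the actual protocol work):

    # identity = a keypair, as in [1]
    openssl genpkey -algorithm ed25519 -out identity.key
    # content-address the book itself
    CID=$(ipfs add -Q book.pdf)
    # a minimal catalog entry: metadata plus content address
    printf '{"title":"Some Book","category":"nonfiction","cid":"%s"}\n' "$CID" > entry.json
    # sign it, so votes, karma and comments can hang off a stable identity
    openssl pkeyutl -sign -rawin -inkey identity.key -in entry.json -out entry.sig
    # announcing entry.json + entry.sig on a DHT/pubsub channel is the missing piece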


Yes, this is exactly what we have been working on for 16 years now. Longest-running torrent, trust and Tor-fork project: https://github.com/Tribler/tribler/wiki

It's as hard as making a decentralised Google and a decentralised YouTube at the same time. Over 75 master's students and PhDs have put their coding efforts into it at Delft University.


Looks great, congrats! Can sci-hub and libgen be found on Tribler?


Thnx! Sadly it's not that simple.

What IPFS, the Dat protocol, Tribler, and all the others are missing is "adversarial decentralised information retrieval".

For any keyword you type in, the match you want should show first, and trolling, Kremlin bots and copyright police forces should not be able to bring it down. This is an unsolved problem: how to create a privacy-respecting relevance ranking or a distributed clicklog.


I think a lot of the social stuff is unnecessary and getting ahead of things.

The core product IMO is just a resource database, with a nice resource description interface, peer syncing, and the ability to search and aggregate results from multiple databases together. A resource could be a web page blob, or a file, or plain text (e.g. a comment). All the social stuff, file viewers, blacklists, etc can be built on top of that, ideally by third parties in an open ecosystem.
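
The search-and-aggregate part is almost trivial once peers sync plain database files around; a sketch, assuming each peer's catalog arrives as a SQLite file with a shared, hypothetical resources(title, cid) schema:

    # query the local catalog and two synced peer catalogs in one shot
    sqlite3 local.db "
      ATTACH 'peer1.db' AS p1;
      ATTACH 'peer2.db' AS p2;
      SELECT title, cid FROM resources    WHERE title LIKE '%darwin%'
      UNION SELECT title, cid FROM p1.resources WHERE title LIKE '%darwin%'
      UNION SELECT title, cid FROM p2.resources WHERE title LIKE '%darwin%';"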


I honestly know nothing about the space.

Is this what you're talking about, at least as an MVP?

https://libgen.fun/dweb.html


The file sharing aspect of this seems to be the least difficult portion of the problem. I'd love to see something that passes muster, to a librarian, on the catalog front. Sort of a Wikipedia or OpenStreetMap of library catalogs.

The resources themselves might be found anywhere (physically at a library, in digital form for purchase, downloadable via torrent, etc.). A true library of babel.

I don't know much about library science, but I suspect catalog federation protocols already exist; there is probably work to be done in making such a thing resilient, though.


> Sort of a Wikipedia or OpenStreetMap of library catalogs.

Wikidata has this in its scope (and it makes sense to keep the data there, since a book catalog needs to cross-reference entries for authors, topics etc.), and the Internet Archive's Open Library leverages their data.


This is exactly what LBRY is: https://lbry.com


I am just a layman with all this, but could this be an actually useful application of blockchain? Create a DAO where reputation is earned through votes, and tokens are distributed to people who upload new good content that people want (a bounty system??), and you can spend them on downloads. I have no idea how any of this works so maybe it's a terrible idea.


The title is wrong: the updated configuration (https://wiki.mhut.org/_media/software:libgen_desktop_mirrors...) uses ipfs.com and cloudflare-ipfs.com, so the content is still being fetched over HTTP(S), not IPFS.


This is an interesting point. In my experience, it's very common for HTTP(S) to be used for the "last mile" for content that originates from IPFS. Generally this is done via public or dedicated IPFS gateways.

Even when you have a local IPFS node (which is not common), the "last meter" delivery is still done over HTTP too, via a local gateway.
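
Concretely, the three delivery modes for any given CID (placeholder below) look like this:

    CID=Qm...                        # placeholder content ID
    # 1. public gateway: plain HTTPS, no IPFS software on your end
    curl -L "https://cloudflare-ipfs.com/ipfs/$CID" -o book.pdf
    # 2. local node, "last meter" over HTTP via its local gateway
    curl "http://127.0.0.1:8080/ipfs/$CID" -o book.pdf
    # 3. fully native: the ipfs CLI talks to your node directly
    ipfs cat "$CID" > book.pdf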


> it's very common for HTTP(S) to be used for the "last mile" for content that originates from IPFS

Indeed, it seems very common, but that doesn't make my original point less true.

> the "last meter" delivery is still done using HTTP too, via a local gateway

Yeah, that seems common too, but it's less "wrong" to call it "IPFS support" in that case, in my opinion.

Running your own IPFS gateway at least means the content is actually fetched from the network (internet) via IPFS, while using ipfs.com/cloudflare-ipfs.com is not any different from just using a CDN (except that you usually have to pay for CDNs, while IPFS gateways seem to be free (for now)).


> Running your own IPFS gateway at least means the content is actually fetched from the network (internet) via IPFS, while using ipfs.com/cloudflare-ipfs.com is not any different from just using a CDN (except that you usually have to pay for CDNs, while IPFS gateways seem to be free (for now)).

I think there's still a lot of value in this way of using IPFS: using ipfs.com/cloudflare-ipfs.com makes it much easier to swap to another IPFS gateway (including your own local one) if those gateways ever give you issues.
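
Since the gateway is just a base URL, a downloader can even treat them as a fallback list (sketch, placeholder CID):

    CID=Qm...
    for GW in http://127.0.0.1:8080 https://cloudflare-ipfs.com https://ipfs.io; do
        curl -fsSL "$GW/ipfs/$CID" -o book.pdf && break   # first gateway that works wins
    done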


Yup, agreed with all your points


Depending on centralized CDNs with logging is not using native IPFS; correct me if I'm wrong.


I don't understand what you mean; what's the question?


I wanted a dead-simple Library Genesis Mac app, so a while ago I released one called Alexandria (https://github.com/Samin100/Alexandria), and I recently updated it to use IPFS via the Cloudflare gateway. Compared to the previous Libgen servers, book downloads are nearly instant. When IPFS works, it works really well.


This is cool but "loading sources..." never seems to complete on my machine. Great icon.


It works; sometimes the LibGen servers take a while to respond.


Which book did you search for? Sometimes the Library Genesis servers take a while to respond.


For my test searches I did "Donnie Darko" and clicked on the first result, and then "Diana Spencer" and tried a few results. Doesn't seem like sources ever load for me. I have a much better experience just going to whatever libgen URL Safari auto-completes for me.


Ah, I figured out what went wrong. Libgen has two different databases, a fiction and non-fiction one, and right now Alexandria only queries Libgen's non-fiction DB. I'll update it to search both databases later this week. Thanks for bringing this to my attention!


It is a really nice app! I have been meaning to get into macOS native app development but haven't had the time. Did you implement it in Objective-C or Swift?


Does IPFS really work? As I understand it, it's sort of like a pay version of BitTorrent. In theory, you're supposedly renting space on hard disks of random users. In practice, you're probably renting space from AWS at a markup.

I was interested in IPFS as a possible storage system for virtual world assets. Not for piracy, but as a way to sell virtual world assets with no ongoing obligation to host the content. You store the ownership on some blockchain and the content in some storage system with pre-paid "perpetual care". I'd seen one offer at $5/gigabyte/forever.

The idea is supposed to be that Filecoin is a derivative of future declines in storage pricing, and profits on that derivative pay for the storage. It's unclear if this works. There's one seller offering "perpetual storage" for a one-time fee, but their terms of service say "Data will be stored at no cost to the user on IPFS for as long as Protocol Labs, Inc. continues to offer free storage for NFT’s." No good.

(Just once, I'd like to see an application for NFTs that actually did something besides power a make-money-fast scheme.)


I think it's worth separating what IPFS provides from what Filecoin offers (note that I work at the Filecoin Foundation/Filecoin Foundation for the Decentralized Web, but I'm hopefully being sufficiently technical here that the description is as objective as I can be).

IPFS is a model for providing a content-addressable storage system -- so if you have a particular hash (the CID) of a piece of content, you can obtain it without having to know where or who (or how many people) are storing it. Obviously at least one site on the IPFS network you're using has to have stored that data, but it only needs to be one site; more sites make it easier and quicker to access. Almost all IPFS nodes are run and offered for free, either by volunteers, by major services like Cloudflare or Protocol Labs' dweb.link (which act as gateways so that you can access that file network over http/https), or by web services that you pay to host your content on IPFS and manage it through a traditional API, like Textile, Fleek, or Fission.codes.

The key point here for someone with your use case, is that you have lots of flexibility as to who is hosting your files. You can start off just running your own node, or pay someone else, or pay lots of providers that are geographically diverse, or just do it among a bunch of volunteers. You're not tied to a single provider, because wherever your data is stored, you or your users will be able to find it.

Filecoin is a project to fix the incentive issues that have affected historical decentralized projects like BitTorrent, issues which can lead decentralization attempts like this to collapse into just a single centralized service like AWS.

Storage providers on the Filecoin network negotiate directly with customers to store files -- they receive payment directly from those customers, but they are also incentivized to offer storage, and to keep storing those files over the long term, because Filecoin has a proof-of-storage setup where storage providers get utility coins in return for proving that they're either making space available or storing customers' files. It's all very zero-knowledge-proof and fancy, but the important thing is that with this in place, and a flat, competitive market for storage, storage providers on this network have good commercial reasons to offer low prices, and don't care that you're not tied directly to them (in the way that Amazon and other traditional storage providers are tempted to lock you in).

Filecoin isn't so much a derivative of future declines as a way to establish pricing in an environment where there actually is a free(r) market for online storage. And IPFS is a protocol that establishes one part of that freer market: it decouples who is storing your files from how you might access them in the future. So far, this seems to be working, with prices much cheaper than the alternatives and with some degree of geographical and organizational diversity: https://file.app/

Storage providers are now also competing on other aspects, such as ecological impact (see https://github.com/protocol/FilecoinGreen-tools ), speed of access, etc., which is what you might expect in a flatter market. We also see larger storage providers offering separate markets for large (>1 pebibyte) customers.

Happy to talk about this more, I'm danny@fil.org. Big fan of your work, etc, etc.


> Storage providers on the Filecoin network negotiate directly with customers to store files.

So why bother with all the crypto stuff?


IPFS is distinct from filecoin.

IPFS works, although performance might be an issue for rarely-requested files, since most of their blocks will require multiple hops to get to. If the file is stored on some reliable IPFS node(s), anywhere, you'll be able to access it eventually.

Filecoin, however noble it may be, doesn't seem to be taking off (yet). IPFS doesn't currently work very well as a storage service because there's no guaranteed storage by others. Even if Filecoin achieved mass adoption, you'd have to pay someone to host your files to get reliable third-party storage, and the offered fee might not be adequate motivation.

IPFS works perfectly well, though, for hosting file(s) yourself without the risk of getting your network link saturated if the popularity of the file(s) increases a lot. As an auto-scaling CDN it works great, though with poor performance for rarely-accessed files. The solution to the file storage problem, it seems to me, is to integrate existing CDNs with IPFS: let the CDN serve rarely-accessed files quickly and at low cost, then cease serving a file from the CDN once it's popular enough to be duplicated by a bunch of IPFS nodes for free. Maybe Cloudflare can plug that rarely-accessed-file gap and offer file access, integrated with IPFS, at zero hosting cost.
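
For the self-hosting case, the whole workflow is just this (assuming a running node; `add` pins by default, which keeps the blocks from being garbage-collected):

    # publish: your node announces the CID and serves the blocks
    ipfs add book.pdf
    # mirror someone else's file to help host it (placeholder CID)
    ipfs pin add QmSomeCID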


No commits in over a year, and the developer has announced the next release will be the "final version".


While this is a warning sign in general, there does exist software that is simply 'done'.


Not really for programs that connect to the internet


Why is this not a locally running webapp? Running an unsandboxed app like this would not be smart.


Why is this person being downvoted? I thought the same - aren't local webapps prone to port scanning and effectively cross-origin attacks (for lack of a better description)?


It will not support Linux or Mac.


If it's not open source, how can one trust it on such a sensitive matter as accessing libgen?

If it is open source, then the support may eventually materialize.


The source is available: https://github.com/libgenapps/LibgenDesktop

However, the repo has no license, and therefore is not free and open source. Technically that code is still proprietary. How much someone writing a libgen app would care if you forked their proprietary app is left as an exercise for the reader.


I think it uses WPF for the interface, so Linux support would pretty much require a fork unless WPF starts supporting Linux.


I've had luck running it in Wine on Linux.


I'm probably ignorant of how this app uses IPFS (and of IPFS in general), but from what I remember IPFS has a fairly strict code of conduct on copyrighted material. I'm more curious than anything, but what's stopping the legal team over at Protocol Labs from shutting this down?


It's not really feasible to remove content from IPFS. You would need all of the nodes hosting the content to remove it. The most likely scenario would be for common IPFS gateways like ipfs.io, cloudflare, and pinata.cloud to not host the content. That would make it much harder for that material's CID to resolve in a timely manner, if any other nodes are hosting it.


Ah, makes sense. Thanks for explaining


As I understand it, if you view something on IPFS, you end up serving it. That seems like a legal liability in the same way other p2p networks are a liability: digital rights owners see your IP address serving their content and can take you to court.


Does somebody know of a plugin for Zotero to automatically fetch a file from LibGen?

I tried writing one, but it seems like libgen does something weird to hide the download link, such that I gave up.


Perhaps modifying https://github.com/ethanwillis/zotero-scihub would be a nice starting/continuation point. Also, if you can write the stuff around fetching an ISBN from Zotero, and getting the right LibGen entry, perhaps someone else can fix the issue of converting the right LibGen entry into the actual link.
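
The LibGen half can be prototyped outside Zotero first; a sketch, assuming libgen.rs's HTML search output and the library.lol mirror URL scheme still look the way they historically have:

    ISBN=9780262033848    # example ISBN
    # scrape the first md5 out of the search results for that ISBN
    MD5=$(curl -s "https://libgen.rs/search.php?req=$ISBN&column=identifier" \
          | grep -oE 'md5=[0-9a-fA-F]{32}' | head -n1 | cut -d= -f2)
    # the mirror page for that md5 carries the actual download link
    echo "https://library.lol/main/$MD5"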


Just commenting to say how great this site is and I'm always surprised at how little I hear about it in the wild.


Not sure what you mean? Typo?


Never knew this existed. The web site has always been perfectly fine.


LOL, I prefer to keep my acts of piracy in the browser.



