Replies: 3 comments 23 replies
-
Flood implements state-of-the-art windowing, memoization, and incremental data fetching, so it is probably more scalable than the bundled web UIs of torrent clients. Please let me know if you encounter severe performance issues with Flood. It is unnecessary to implement such "aggregation": Flood supports multiple users with multiple clients, so you can simply add an additional user that connects to a different client.
Additionally, torrent clients generally have their own limitations. I spent weeks of research finding the best fit for my use case.
In my experience, qBittorrent cracks under pressure. With a lot of torrents and long uptime, it sometimes just gets "stuck" with zero upload/download. As the number of torrents grows, its resource consumption (I/O, CPU, memory) skyrockets. Not to mention that from time to time it randomly gives you "checking all torrents" heart attacks. And it is almost impossible to shut it down gracefully in a reasonable amount of time when there are a lot of torrents. However, it does work quite well with a smaller number of tasks, so I primarily use it for RSS/Sonarr and public torrents that I am not going to seed for an extended period of time.
As for Transmission, its resource consumption is low and its runtime stability is quite good. However, its performance, compared to other clients, is less than ideal: it often fails to proactively connect to other peers and compete to leech/seed data. It might be a good choice if you don't care about ETA or ratio, though.
rTorrent is the only torrent client that truly works (for me). It is known to be a bit difficult to configure, but I think those efforts pay off in the long term. rTorrent can easily handle hundreds of torrents with tens of terabytes of associated data, and it is known to scale to thousands (some people seed 15000 torrents simultaneously with a single rTorrent instance). I am able to run rTorrent 24/7 continuously for months without being forced to …
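For anyone put off by rTorrent's configuration, a minimal `~/.rtorrent.rc` that exposes a local SCGI socket (which is how Flood talks to rTorrent) might look roughly like the sketch below. The paths are assumptions for illustration, not a recommended setup:

```
# Minimal hypothetical ~/.rtorrent.rc sketch (paths are placeholders)
directory.default.set = ~/downloads
session.path.set = ~/.rtorrent-session
# Local SCGI socket for a frontend such as Flood to connect to
network.scgi.open_local = /tmp/rtorrent.sock
```

From there, point Flood's rTorrent connection settings at the socket.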
-
While we're discussing qBittorrent's performance with respect to Flood, I'd like to flag an issue and check whether you think the cause is something besides client performance. I run qBittorrent via guillaumedsde/alpine-qbittorrent-openvpn. Some details:
I've noticed that Flood is not really letting me interact with my torrent list (at this point): I can see the speed graph, but not much else. Do you have any idea why this might be? Do you think it's related to the two issues mentioned, or is it something else?
-
10 days later, I am lucky enough to experience first-hand qBittorrent's catastrophic failure in the face of system fluctuations. I will be accelerating the transition to rTorrent, I think.
-
Hi there.
I'm not certain about the feasibility of this, as maybe flood's infrastructure wouldn't be sufficient, but after a certain number of torrents, most clients' web UIs (I don't know about their native UIs) tend to get really sluggish.
qBittorrent does this, on both the standard WebUI and on alternate UIs.
For me, qBittorrent hits that wall at, say, 5000 Ubuntu ISOs. It's frustratingly slow.
From what I've seen, the only solution that seems to address this is horizontal scale-out by running concurrent instances.
https://www.reddit.com/r/DataHoarder/comments/3ve1oz/torrent_client_that_can_handle_lots_of_torrents/
A few people in the above thread discuss their approaches, and it seems as though many were running parallel instances of whatever torrent client they use via docker-compose, apparently managing those instances with a couple of shell scripts.
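The parallel-instance setup people describe might look roughly like the following docker-compose sketch, with each instance getting its own config volume and WebUI port. The image name and ports here are assumptions for illustration, not taken from that thread:

```yaml
# Hypothetical docker-compose sketch: two independent qBittorrent
# instances behind different host ports (8080 is the container's
# default WebUI port in the linuxserver/qbittorrent image).
services:
  qbittorrent-1:
    image: linuxserver/qbittorrent
    ports:
      - "8081:8080"
    volumes:
      - ./qbt1/config:/config
      - ./downloads:/downloads
  qbittorrent-2:
    image: linuxserver/qbittorrent
    ports:
      - "8082:8080"
    volumes:
      - ./qbt2/config:/config
      - ./downloads:/downloads
```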
That feels... annoying to me, and I was wondering if it would be possible for flood to aggregate multiple instances of a client?
If that feature doesn't sound appropriate for flood's roadmap, maybe sometime I'd implement my own proxy aggregator / hypervisor that translates single-instance qBittorrent queries into multiple-instance queries and aggregates the results, so that a client like flood, or an nginx-backed alternative API, could just talk to that proxy (no idea if that'd solve anything tbh, but it's a nice thought).
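For the read path, at least, the proxy idea doesn't seem far-fetched: qBittorrent's Web API returns torrent lists as a JSON array from `GET /api/v2/torrents/info`, each entry keyed by its `hash`, so fan-out-and-merge is mostly deduplication. A minimal sketch, assuming two local instances on made-up ports:

```python
# Hypothetical fan-out aggregator over qBittorrent's Web API.
# BACKENDS, ports, and the overall design are assumptions; only the
# /api/v2/torrents/info endpoint and its JSON shape come from qBittorrent.
import json
from urllib.request import urlopen

BACKENDS = ["http://localhost:8081", "http://localhost:8082"]  # assumed ports

def fetch_torrents(base_url):
    """Query one qBittorrent instance for its full torrent list."""
    with urlopen(f"{base_url}/api/v2/torrents/info") as resp:
        return json.load(resp)

def merge_torrent_lists(lists):
    """Merge per-instance torrent lists, de-duplicating by info-hash."""
    merged = {}
    for torrents in lists:
        for t in torrents:
            merged.setdefault(t["hash"], t)
    return list(merged.values())

def aggregated_info():
    """What the proxy would serve in place of a single instance's list."""
    return merge_torrent_lists(fetch_torrents(b) for b in BACKENDS)
```

Write operations (add/pause/delete) are the harder part, since the proxy would have to route each command to whichever instance owns the hash.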
What do you think? Is the bottleneck for huge seed amounts the frontend, or something in the backend clients?