I was trying to get 100 recommendations per user on the MovieLens 1M dataset, but the algorithm (I tried SVD, though I suspect it is the same for others) does not recommend that many items for all users; for some users it returns only 10 or a bit more.
Is there a way to set how many recommendations we want, regardless of whether the predictions are high or not? For example, for each uid, return 100 iids and their scores, even if the scores are low. We need this capability because many re-ranker algorithms (diversity, calibration, etc.) need a large candidate list (100 items or more) generated by a standard algorithm to build the final list, and right now this does not seem possible in Surprise.
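For reference, the usual pattern (adapted from the Surprise FAQ) is to predict on `trainset.build_anti_testset()` and then group per user, which keeps the n highest-scored items for every user even when the estimates are low. A minimal sketch, assuming `predictions` is a list of `(uid, iid, true_r, est, details)` tuples as returned by `algo.test()`:

```python
from collections import defaultdict

def get_top_n(predictions, n=100):
    """Group predictions by user and keep the n highest-estimated items,
    regardless of how low the estimated ratings are."""
    top_n = defaultdict(list)
    for uid, iid, true_r, est, _ in predictions:
        top_n[uid].append((iid, est))
    for uid, ratings in top_n.items():
        # Sort this user's candidates by estimated rating, best first
        ratings.sort(key=lambda x: x[1], reverse=True)
        top_n[uid] = ratings[:n]
    return top_n
```

As long as the anti-testset contains all of a user's unrated items (and there are at least n of them), every user gets a full list of n candidates; no score threshold is applied.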
I'm getting 100 recommendations for all users on MovieLens 1M with SVD.
When I tried this, I found that trainset.build_anti_testset() takes a very long time (I'm running on CPU only, no GPU).
Also, with this approach we have to build the full set of unseen items via trainset.build_anti_testset(), predict on every (user, item) pair, and then sort to get the best results, which takes far too long. In my opinion it is not usable in an online setting.
As a result, I don't think this package is suitable for generating top-N recommendations per user.
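If the bottleneck is materializing every (user, item) pair as a Prediction object, one workaround is to score everything at once from the learned factors instead of going through `build_anti_testset()`. A hedged sketch in plain NumPy, assuming a fitted matrix-factorization model of the biased-SVD form (Surprise's `SVD` exposes `pu`, `qi`, `bu`, `bi` and `trainset.global_mean`, but verify the attribute names against your version):

```python
import numpy as np

def top_n_from_factors(pu, qi, bu, bi, global_mean, rated, n=100):
    """Score all (user, item) pairs at once as
    r_hat[u, i] = global_mean + bu[u] + bi[i] + pu[u] . qi[i],
    mask out items each user already rated, and return the n best per user."""
    scores = global_mean + bu[:, None] + bi[None, :] + pu @ qi.T
    for u, items in rated.items():
        scores[u, list(items)] = -np.inf  # exclude already-rated items
    # argpartition finds the n best indices per row without a full sort
    top = np.argpartition(-scores, n - 1, axis=1)[:, :n]
    # then sort just those n items by score, descending
    order = np.argsort(-np.take_along_axis(scores, top, axis=1), axis=1)
    return np.take_along_axis(top, order, axis=1)
```

Note that with Surprise the row/column positions here would be trainset inner ids, so the returned item indices would still need `trainset.to_raw_iid()` before being shown to users. The `rated` dict (user index to set of rated item indices) is a hypothetical input you would build from the trainset's ratings.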