From cf915638011476b364f2e71c792d6af175c57f75 Mon Sep 17 00:00:00 2001
From: Weiran Huang <60686501+weiran-huang@users.noreply.github.com>
Date: Mon, 6 Jun 2022 15:40:18 +0800
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 5eba1eb..2633d1d 100644
--- a/README.md
+++ b/README.md
@@ -108,7 +108,7 @@ The regret value will be achieved as follows:
 
 # [Model Description](#contents)
 
-The [original paper](https://arxiv.org/abs/2006.00701) assumes that the norm of user features is bounded by 1 and the norm of rating scores is bounded by 2. For the MovieLens dataset, we normalize rating scores to [-1,1]. Thus, we set `sigma` in Algorithm 5 to be $$4/epsilon \* sqrt(2 \* ln(1.25/delta))$$.
+The [original paper](https://arxiv.org/abs/2006.00701) assumes that the norm of user features is bounded by 1 and the norm of rating scores is bounded by 2. For the MovieLens dataset, we normalize rating scores to [-1,1]. Thus, we set `sigma` in Algorithm 5 to be $4/\epsilon \times \sqrt{2 \ln(1.25/\delta)}$.
 
 ## [Performance](#contents)
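
As a sanity check on the formula this patch introduces, the noise scale can be computed directly. The sketch below is not part of the patch; the function name and the privacy parameters `epsilon` and `delta` are illustrative, and the constant 4 follows the sensitivity bound stated in the README text (feature norms bounded by 1, rating scores normalized to [-1, 1]):

```python
import math

def gaussian_sigma(epsilon: float, delta: float, sensitivity: float = 4.0) -> float:
    """Noise scale sigma = sensitivity/epsilon * sqrt(2 * ln(1.25/delta)),
    i.e. the standard Gaussian-mechanism calibration with sensitivity 4
    as used for `sigma` in Algorithm 5."""
    return (sensitivity / epsilon) * math.sqrt(2.0 * math.log(1.25 / delta))

# Illustrative parameter values, not taken from the repository's config:
print(gaussian_sigma(epsilon=1.0, delta=1e-5))
```

Note that halving `epsilon` doubles `sigma`, while `delta` only enters under the square-root logarithm, so the noise scale is far less sensitive to it.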