
Support Pytorch MaxP Feature/ptmaxp #184

Merged: 51 commits, merged on Aug 6, 2022
Changes from 1 commit
Commits (51)
5f6ab05
first version of benchmark.eval with ir-measures
crystina-z Sep 2, 2021
018e5ca
benchmark.eval add relevance level support
crystina-z Sep 8, 2021
b0a4502
minor fix
crystina-z Sep 8, 2021
b25bbd4
remove msmarco-eval
crystina-z Sep 8, 2021
90920e9
clean
crystina-z Sep 8, 2021
2691d76
change all measures into str repr to avoid black problem
crystina-z Sep 8, 2021
ee3a0ef
skip evaluation if there is no matching qids
crystina-z Sep 9, 2021
f24bd3d
speed up training data prep - use set rather than list for train-qids…
crystina-z Sep 16, 2021
f915e45
add pt-maxp (train 30k + rerank top100: MRR@10=0.329)
crystina-z Sep 16, 2021
edace93
adapt config msmarco for pt monobert
crystina-z Sep 16, 2021
8cf1ef4
remove tqdm
crystina-z Sep 18, 2021
bd3d3aa
add decay into msmarco config
crystina-z Sep 18, 2021
2d41e28
fix import
crystina-z Sep 19, 2021
305ff92
add notes to ptmaxp
crystina-z Sep 25, 2021
16d6bdc
add shape for CE loss
crystina-z Sep 25, 2021
9c796c4
change sampling logic of pairsampler - sample one pos and neg at once…
crystina-z Sep 25, 2021
9d891d4
shuffle loaded tfrecord dataset
crystina-z Sep 25, 2021
18c144e
MSMARCO reproduction logs - nima
nimasadri11 Sep 26, 2021
ce4444d
Merge pull request #1 from nimasadri11/master
crystina-z Sep 26, 2021
ce392ac
tf amp: use both / None to align with pt
crystina-z Oct 2, 2021
23dcb3f
ms marco prepro doc; MRR@10=0.352 for pt-maxp; MRR@10=0.354 for tf-ma…
crystina-z Oct 2, 2021
addcb98
merge
crystina-z Oct 2, 2021
133de84
cross entropy; use avg rather than sum
crystina-z Oct 2, 2021
24cee86
support firstp, sump, avgp (same score on msp-v1)
crystina-z Oct 10, 2021
b5e7448
config for pt-maxp (rob04)
crystina-z Oct 10, 2021
9137bec
support eval dev and external runfile using external ckpt (dir)
crystina-z Oct 19, 2021
e788e9a
Update repro log for MS MARCO passage ranking task
leungjch Oct 20, 2021
1c570c3
Merge pull request #2 from leungjch/justin/update-repro-oct-19
crystina-z Oct 20, 2021
5d9fe65
Update msmarco reproduction log
edanerg Nov 5, 2021
c1bce9b
Fix markdown
edanerg Nov 5, 2021
65f0117
Merge branch 'feature/eval+ptmaxp' of github.com:crystina-z/capreolus…
crystina-z Nov 13, 2021
b730b98
add training flag to id2vec() to control different data format during…
crystina-z Nov 14, 2021
f2039ac
cleanup pt-maxp; MRR@10=0.352
crystina-z Nov 14, 2021
581ac27
Merge pull request #3 from AlexWang000/feature/eval+ptmaxp
crystina-z Jan 21, 2022
78d54be
revert the files that involve changing evaluation s.t. the PR isn't…
crystina-z May 8, 2022
7a7de77
merge with master
crystina-z May 8, 2022
3db0ff9
clean
crystina-z May 9, 2022
a87bfe7
adapt lce-passage extractor to the new extractor framework
crystina-z May 9, 2022
ef0f73d
make default msmarco-lce config a "small" version
crystina-z May 9, 2022
2edbb47
update repro doc
crystina-z May 9, 2022
38407df
update config msmarco
crystina-z May 10, 2022
10e0dc6
clean
crystina-z May 10, 2022
7a1ec64
first attempt to solve issue when warmup==epoch==1
crystina-z May 11, 2022
c263868
allow extractor to pad queries to the specified length
crystina-z May 11, 2022
db5e1ee
newline at the end of file
crystina-z May 11, 2022
ae536a5
black
crystina-z May 11, 2022
ea7e04a
dead code
crystina-z May 11, 2022
cdd90f3
bugfix
crystina-z May 11, 2022
95fd1d4
change the id2vec test case so that the testing n-passage is 1
crystina-z May 11, 2022
30f3096
revert quick.md
crystina-z May 12, 2022
db0e405
for birch extractor; move the create_tf_train_feature and parse_tf_tr…
crystina-z May 12, 2022
first attempt to solve issue when warmup==epoch==1
crystina-z committed May 11, 2022
commit 7a1ec644b44ef4de0119a885d4fafa5e134a0cb1
capreolus/trainer/pytorch.py (15 changes: 10 additions & 5 deletions)
@@ -73,7 +73,7 @@ def build(self):
         torch.manual_seed(self.config["seed"])
         torch.cuda.manual_seed_all(self.config["seed"])
 
-    def single_train_iteration(self, reranker, train_dataloader):
+    def single_train_iteration(self, reranker, train_dataloader, cur_iter):
         """Train model for one iteration using instances from train_dataloader.
 
         Args:
@@ -86,6 +86,7 @@ def single_train_iteration(self, reranker, train_dataloader):
         """
 
         iter_loss = []
+        cur_step = cur_iter * self.n_batch_per_iter
         batches_since_update = 0
         batches_per_step = self.config["gradacc"]

@@ -112,9 +113,12 @@
                 self.optimizer.zero_grad()
 
             if (bi + 1) % self.n_batch_per_iter == 0:
-                # REF-TODO: save scheduler state along with optimizer
-                self.lr_scheduler.step()
+                # # REF-TODO: save scheduler state along with optimizer
+                # self.lr_scheduler.step()
+                # hacky: use step instead of the internally calculated epoch to support step-wise lr update
+                self.lr_scheduler.step(epoch=cur_step)
                 break
 
+            cur_step += 1
 
         return torch.stack(iter_loss).mean()

crystina-z (Collaborator, Author) commented on `self.lr_scheduler.step(epoch=cur_step)`:

It's a bit hacky here: by default, lr_scheduler.step takes in the epoch. Changing it here because when we pass epoch=0 into our lr_multiplier and warmupiter is also 1, the lr would be almost 0 for the entire first epoch.

@@ -210,7 +214,8 @@ def train(self, reranker, train_dataset, train_output_path, dev_data, dev_output
 
         # REF-TODO how to handle interactions between fastforward and schedule? --> just save its state
         self.lr_scheduler = torch.optim.lr_scheduler.LambdaLR(
-            self.optimizer, lambda epoch: self.lr_multiplier(step=epoch * self.n_batch_per_iter)
+            # self.optimizer, lambda epoch: self.lr_multiplier(step=epoch * self.n_batch_per_iter)
+            self.optimizer, lambda step: self.lr_multiplier(step=step)
         )
 
         if self.config["softmaxloss"]:
@@ -254,7 +259,7 @@ def train(self, reranker, train_dataset, train_output_path, dev_data, dev_output
             model.train()
 
             iter_start_time = time.time()
-            iter_loss_tensor = self.single_train_iteration(reranker, train_dataloader)
+            iter_loss_tensor = self.single_train_iteration(reranker, train_dataloader, cur_iter=niter)
             logger.info("A single iteration takes {}".format(time.time() - iter_start_time))
             train_loss.append(iter_loss_tensor.item())
             logger.info("iter = %d loss = %f", niter, train_loss[-1])