Hello!

It seems there is no train dataset in the repo. Should I just run `run_dataset_preprocessing.sh` to generate the training data myself?

Also, where is the test data for the paired bootstrap test described in the paper? Do I need to sample it myself?

Finally, what does the code below do? Why "add oracle sentences to training data"? Does Table 5 in the paper use this code to get more training data?
```python
# add oracle sentences to training data
if "train" in args.split:
    chunks_output_list.append(chunks_output_dict)
```
Thanks!
> It seems that there is no train dataset. Should I just run "run_dataset_preprocessing.sh" to get train data by myself?

Yes, to get the chunk-level training data you need to run that script.
> and where is the test data--"a paired bootstrap test" described in the paper? Do I need to sample data by myself?
Yes.
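For anyone else who needs to do this sampling themselves: a paired bootstrap test resamples test examples with replacement and counts how often one system's aggregate metric fails to beat the other's. A minimal general-purpose sketch (not the repo's actual script; function and argument names are illustrative, and the per-example scores could be, e.g., ROUGE values for two systems on the same test set):

```python
import random

def paired_bootstrap_test(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Approximate p-value that system A does NOT outperform system B.

    scores_a / scores_b: per-example metric scores for two systems,
    aligned on the same test set (hence "paired").
    """
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    n = len(scores_a)
    losses = 0
    for _ in range(n_resamples):
        # Resample test-set indices with replacement.
        idx = [rng.randrange(n) for _ in range(n)]
        # Compare the two systems on the same resampled examples.
        delta = sum(scores_a[i] - scores_b[i] for i in idx)
        if delta <= 0:
            losses += 1
    return losses / n_resamples
```

A small returned value (e.g. below 0.05) suggests system A's advantage is unlikely to be a sampling artifact.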
> and what's the function of the code below, why "add oracle sentences to training data". Does the Table 5 in the paper use the code below to get more train data?
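As general background (not code from this repo): "oracle" sentences in extractive summarization are usually found by greedily selecting the source sentences that most increase overlap with the reference summary; appending them only when `"train" in args.split` keeps this extra supervision out of the dev/test splits. A minimal sketch using a stand-in unigram-F1 metric (the repo's actual metric and selection procedure may differ):

```python
def unigram_f1(candidate, reference):
    """Stand-in overlap metric: F1 over unigram types."""
    cand, ref = set(candidate.split()), set(reference.split())
    if not cand or not ref:
        return 0.0
    overlap = len(cand & ref)
    p, r = overlap / len(cand), overlap / len(ref)
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def greedy_oracle(sentences, reference, max_sents=3):
    """Greedily add the sentence that most improves the score vs. the reference."""
    selected, best = [], 0.0
    while len(selected) < max_sents:
        gains = []
        for i, s in enumerate(sentences):
            if i in selected:
                continue
            trial = " ".join(sentences[j] for j in sorted(selected + [i]))
            gains.append((unigram_f1(trial, reference), i))
        score, i = max(gains)
        if score <= best:  # stop when no sentence improves the score
            break
        selected.append(i)
        best = score
    return sorted(selected)
```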