Hi. Thank you for the great work and for publishing the benchmark. I am experimenting with the TF10 task and found that there are more than 4M data points in the public dataset and more than 8M in the hidden dataset. Do you know if this is a bug? These numbers are even larger than the total number of possible configurations (4^10 ~ 1M).
Hi @tung-nd, do you have any clue yet? It seems both TF8 and TF10 have duplicated inputs. For TF8 I can safely remove the duplicates because the outputs are exactly the same, but for TF10 the duplicates clearly have different outputs (a sketch of the deduplication I'm doing for TF8 is below).
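A minimal sketch of that deduplication, assuming `x` is the (N, 8) integer-encoded sequence array and `y` the (N, 1) score array loaded from the design-bench task (names are illustrative, not part of the library API):

```python
import numpy as np

# Keep the first occurrence of each unique sequence. For TF8 this is safe
# because duplicated inputs carry identical outputs.
_, unique_idx = np.unique(x, axis=0, return_index=True)
unique_idx = np.sort(unique_idx)
x_dedup, y_dedup = x[unique_idx], y[unique_idx]
```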
Thanks for bringing this to my attention! After inspecting the TFBind10 dataset, it appears that each 10-mer sequence is evaluated 4 times to compute the ddG score, and in the current benchmark each trial was stored as a separate datapoint.
Given this repetition, the 4 trials for each sequence should be averaged and treated as a single datapoint so that there is no overlap between the training and testing datasets. I'm working on a patch for this in the form of a TFBind10-Exact-v1 task.
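For anyone who wants to work around this before the patch lands, a minimal sketch of the averaging, assuming `x` is the (N, 10) integer-encoded sequence array and `y` the (N, 1) ddG array (variable names are hypothetical, not the actual patch):

```python
import numpy as np

# Group duplicate 10-mer sequences and average their ddG scores so that
# each unique sequence becomes a single datapoint.
seqs, inverse = np.unique(x, axis=0, return_inverse=True)
counts = np.bincount(inverse)
sums = np.zeros(len(seqs))
np.add.at(sums, inverse, y.squeeze(-1))
x_avg = seqs
y_avg = (sums / counts).reshape(-1, 1)
```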
The original task with duplicate datapoints will continue to be served through TFBind10-Exact-v0, which is the current id for that task in design-bench.