
Wikitext - [WIP] #150

Open · wants to merge 30 commits into base: development

Conversation

ayushi-3536

No description provided.

ayushi-3536 and others added 9 commits May 17, 2022 14:50
- added dependencies, benchmark
- added token generation and model training code from MO-ASHA in dependencies
- return prediction time as the evaluation time
- changed perplexity --> log_perplexity for the objective (MO-ASHA uses log perplexity)
- changed error --> accuracy
- added tqdm
- report train and eval time separately in the objective function
- code formatting
- added test file
- added recipe and container file
…al encoding doesn't work for an odd number, therefore log seems like the perfect solution

- removed logs
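The perplexity → log-perplexity change described in the commits above can be sketched as follows. This is a minimal illustration, not the PR's actual code; the function names are hypothetical:

```python
import math

def log_perplexity(total_nll: float, num_tokens: int) -> float:
    """Mean negative log-likelihood per token, i.e. log(perplexity).

    Reporting the log avoids the overflow that exp() can hit for poorly
    trained models, which is one reason MO-ASHA optimizes this value.
    """
    return total_nll / num_tokens

def perplexity(total_nll: float, num_tokens: int) -> float:
    """Plain perplexity, recoverable from the log form via exp()."""
    return math.exp(log_perplexity(total_nll, num_tokens))
```

For example, a total NLL of 700 nats over 100 tokens gives a log-perplexity of 7.0, while the plain perplexity is already around 1100.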
* Update GitHub Actions workflow and drop support for Singularity < 3.7
@PhMueller (Collaborator) left a comment:

I left some comments. Could you please have a look at them? Thanks.

Review threads on:
- hpobench/benchmarks/mo/lm_benchmark.py
- hpobench/util/data_manager.py
ayushi-3536 and others added 6 commits May 24, 2022 14:57
* Add yahpo_gym with help from PhMueller

Co-authored-by: PhMueller <[email protected]>
Update the Nasbench201 benchmark to support Multi-Objective queries.

If you want to use the *single objective* Nasbench201 benchmark, you can query the SO version of this benchmark.
Although we have not changed the benchmark logic, you can still use container v0.0.5 in your experiments to reproduce results from the old version of this benchmark.
We add the benchmark from the MO-ASHA paper by Schmucker et al.

It is an MO benchmark that trains an MLP on the Adult data set.
Added MO CNN benchmarks from the Bag of Baselines paper

We deviate from the original benchmark in two points:
* we return only the training time as cost, instead of the total elapsed time
* we return `1 - accuracy` instead of `-100 * accuracy` as the objective to minimize, to achieve better output scaling
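The rescaled objective described above can be sketched as follows. This is a minimal illustration under the stated deviations; the function and dictionary key names are hypothetical, not HPOBench's actual API:

```python
def mo_objectives(accuracy: float, train_time_s: float) -> dict:
    """Pack the two objectives for minimization.

    The old formulation returned -100 * accuracy; the new one returns
    1 - accuracy (the misclassification rate), keeping it in [0, 1].
    The cost is the training time only, not the total elapsed time.
    """
    return {
        "misclassification_rate": 1.0 - accuracy,
        "cost": train_time_s,
    }
```

Keeping the objective in [0, 1] makes scales comparable across benchmarks, which matters when multi-objective optimizers normalize or combine objectives.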

Co-authored-by: ayushi-3536 <[email protected]>
Co-authored-by: Philipp Müller <[email protected]>
@PhMueller (Collaborator) left a comment:

It would be cool if you could go through the comments. Thanks.

Review threads on:
- hpobench/benchmarks/mo/lm_benchmark.py
- extra_requirements/lm_benchmark.json
- hpobench/container/benchmarks/mo/lm_benchmark.py
class TransformerModel(nn.Module):
"""Container module with an encoder, a transformer module, and a decoder."""

def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5, bptt=35, rng=None):
A Collaborator left a comment:

signature if possible

- add dependency version