
LFX Mentorship 2023 01-Mar-May Challenge - for #48 #53

Closed

luosiqi opened this issue Feb 22, 2023 · 0 comments

luosiqi commented Feb 22, 2023

Introduction

For those who want to apply for the LFX mentorship for #48, this is the selection test for the application. This LFX mentorship aims to build lifelong learning benchmarking on KubeEdge Ianvs, a distributed synergy AI benchmarking platform. Based on Ianvs, we designed this challenge to evaluate the candidates.

Requirements

Each applicant for this LFX Mentorship can attempt the following two tasks and will receive a total score accumulated according to completeness. At the end, we will publish the top five applicants and their total scores. The applicant with the highest score will become the mentee of this LFX Mentorship project. The titles of all task outputs, such as pull requests (PRs), should be prefixed with LFX Mentorship.

Task 1

Content

  1. Build a public dataset benchmarking website to present the example dataset [cityscapes](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/robo_dog_delivery/cityscapes.zip).
    • The applicant might want to design the website in a style similar to the example website of [coda](https://coda-dataset.github.io/index.html).
    • In this task, to reduce the applicant's burden, we provide a clean and re-organized dataset based on the existing public [CITYSCAPES](https://www.cityscapes-dataset.com/), merely for selection purposes. Note that another, much more complicated new dataset will be provided to the mentee after the mentorship starts.
  2. The benchmarking website should exhibit the contents listed in Table 1.
  3. Submit a PR that includes a public link to the dataset benchmarking website and the corresponding dataset introduction.
    • We suggest naming the domain of the website after a personal account (e.g., jack123-LFX.github.io for applicant Jack123).
| Page | Contents |
| --- | --- |
| Home page | Dataset overview |
| | Lifelong learning algorithm overview |
| | Data sample display |
| Documentation page | Dataset partition description |
| | Data statistics |
| | Data format |
| | Data annotation |
| Download page | Instructions and links |
| Benchmark page | Various algorithm and metric results |

Table 1. Task 1 overview
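For the "Data statistics" and "Data sample display" items in Table 1, the numbers and sample thumbnails can be generated directly from the provided archive. Below is a minimal Python sketch for counting images per split after extraction; the archive layout (PNG images grouped under split folders such as train/val) is an assumption about the re-organized cityscapes.zip and should be adjusted to the actual contents.

```python
import zipfile
from collections import Counter
from pathlib import Path

ARCHIVE = Path("cityscapes.zip")   # the dataset zip linked above
DATA_DIR = Path("cityscapes")      # extraction target

# Extract the archive once.
if not DATA_DIR.exists():
    with zipfile.ZipFile(ARCHIVE) as zf:
        zf.extractall(DATA_DIR)

# Count images per top-level folder; the split names (e.g. train/val)
# are assumed from the archive layout, not confirmed in this issue.
counts = Counter()
for img in DATA_DIR.rglob("*.png"):
    counts[img.relative_to(DATA_DIR).parts[0]] += 1

for split, n in sorted(counts.items()):
    print(f"{split}: {n} images")
```

The printed counts can be copied into the Documentation page, and a few of the extracted images can serve as the data samples on the Home page.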

Resources

Task 2

Content

  1. Create a new example on KubeEdge Ianvs based on the semantic segmentation dataset [cityscapes](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/robo_dog_delivery/cityscapes.zip), for [single task learning](https://ianvs.readthedocs.io/en/latest/proposals/algorithms/single-task-learning/fpn.html), [incremental learning](https://ianvs.readthedocs.io/en/latest/proposals/algorithms/incremental-learning/basicIL-fpn.html) or [lifelong learning](https://github.com/kubeedge/ianvs/tree/feature-lifelong-n/examples/scene-based-unknown-task-recognition/lifelong_learning_bench).
    • The example mainly consists of a new baseline algorithm (not yet available on Ianvs) which can run on Ianvs with [cityscapes](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/robo_dog_delivery/cityscapes.zip).
    • The baseline algorithm can be a new unseen task detection algorithm, processing algorithm, or base model.
    • Reviewers will use the Mean Intersection over Union (mIoU) to evaluate the algorithm performance (a minimal mIoU sketch follows this list).
    • Note that if the applicant wants to use [single-task learning](https://ianvs.readthedocs.io/en/latest/proposals/algorithms/single-task-learning/fpn.html) or [incremental learning](https://ianvs.readthedocs.io/en/latest/proposals/algorithms/incremental-learning/basicIL-fpn.html), s/he needs to replace the original object-detection dataset [pcb-aoi](https://github.com/kubeedge/ianvs/tree/main/examples/pcb-aoi) with the targeted semantic-segmentation dataset [cityscapes](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/robo_dog_delivery/cityscapes.zip), and replace the original object-detection model FPN with a semantic-segmentation model, e.g., RFNet. An applicant who tackles [lifelong learning](https://github.com/kubeedge/ianvs/tree/feature-lifelong-n/examples/scene-based-unknown-task-recognition/lifelong_learning_bench) does not need to do so, because the dataset and base model are both already prepared.
  2. For each algorithm paradigm, submit an experiment report as a PR that includes the algorithm design, experiment results, and a README document.
    • The README document should give reviewers instructions for testing and verifying the submitted example. An example is available at the [unseen task recognition readme document](https://github.com/kubeedge/ianvs/blob/feature-lifelong-n/examples/scene-based-unknown-task-recognition/lifelong_learning_bench/Readme.md).
    • An example of the algorithm design is available at the [unseen task recognition proposal](https://github.com/kubeedge/ianvs/blob/main/docs/proposals/algorithms/lifelong-learning/Unknown_Task_Recognition_Algorithm_Reproduction_based_on_Lifelong_Learning_of_Ianvs.md#5-design-details).
    • An example of the experiment results is available at [the leaderboard of single task learning](https://ianvs.readthedocs.io/en/latest/leaderboards/leaderboard-in-industrial-defect-detection-of-PCB-AoI/leaderboard-of-single-task-learning.html).
  3. Submit a PR with the code of this new example.
    • The code organization can follow that of [pcb-aoi](https://github.com/kubeedge/ianvs/tree/main/examples/pcb-aoi).
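Since mIoU is the metric reviewers will use for ranking, a minimal sketch of how it can be computed for a semantic-segmentation prediction is given below. The class count of 19 and the ignore label 255 follow the common Cityscapes convention and are assumptions here, not requirements stated in this issue; an actual submission should wire the metric into Ianvs the same way the referenced examples do.

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes=19, ignore_label=255):
    """Mean Intersection over Union between two integer label maps.

    num_classes=19 and ignore_label=255 follow the usual Cityscapes
    convention (an assumption; adjust to the provided dataset).
    """
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    valid = y_true != ignore_label
    y_true, y_pred = y_true[valid], y_pred[valid]

    # Confusion matrix accumulated over all valid pixels.
    conf = np.bincount(
        num_classes * y_true + y_pred, minlength=num_classes ** 2
    ).reshape(num_classes, num_classes)

    intersection = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - intersection
    iou = intersection / np.maximum(union, 1)   # avoid division by zero
    return float(iou[union > 0].mean())         # average over classes present

# Example: two 2x2 label maps that agree on three of four pixels,
# one of which is ignored in the ground truth.
gt = np.array([[0, 1], [1, 255]])
pred = np.array([[0, 1], [0, 0]])
print(round(mean_iou(gt, pred), 3))  # 0.5
```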

Resources

Rating

Task 1

All items to be completed in Task 1 are listed in Table 2; the item scores are accumulated into the total score of this task.

| Page | Item | Score |
| --- | --- | --- |
| | Set up a basic frontend framework | 10 |
| | The frontend pages can be accessed publicly | 10 |
| Home page | Dataset overview | 5 |
| | Lifelong learning algorithm overview | 5 |
| | Data sample display | 5 |
| Documentation page | Dataset partition description | 5 |
| | Data statistics | 5 |
| | Data format | 5 |
| | Data annotation | 5 |
| Download page | Instructions and links | 5 |
| Benchmark page | Various algorithm and metric results | 20 |

Table 2. Task 1 scoring rules

Task 2

  1. Completion of different algorithm paradigms earns different scores, as shown in Table 3.
  2. For examples under the same algorithm paradigm, an applicant will obtain 20 extra points only if his/her example ranks best. When ranking, reviewers will use the Mean Intersection over Union (mIoU) to evaluate the algorithm performance.
    • That is, only the applicant ranking top 1 gets the extra points. Good luck!
  3. Each applicant can implement multiple examples with different algorithm paradigms, but only the algorithm paradigm with the highest score will be counted.
  4. For examples that cannot be run successfully directly from the submitted code and the README instructions, the total score of Task 2 will be 0. So be careful with the code and docs!
| Item | Score |
| --- | --- |
| Lifelong learning | 50 |
| Incremental learning | 30 |
| Single task learning | 10 |
| Highest metric result | 20 |

Table 3. Task 2 scoring rules

Deadline

According to [the timeline of LFX mentorship 2023 01-Mar-May](https://github.com/cncf/mentoring/tree/main/lfx-mentorship/2023/01-Mar-May), the admission decision deadline is March 7th. Since we need time for the internal review and decision, the final date for PR submissions of the pretest is March 5th, 8:00 AM PDT.

luosiqi closed this as completed Feb 22, 2023