For those who want to apply for the LFX Mentorship of #48, this is the selection test for the application. This mentorship aims to build lifelong-learning benchmarking on KubeEdge-Ianvs, a distributed synergy AI benchmarking platform. Based on Ianvs, we designed this challenge to evaluate the candidates.
Requirements
Each applicant of the LFX Mentorship may attempt the following two tasks; the total score accumulates according to completeness. In the end, we will publish the top five applicants and their total scores, and the applicant with the highest score will become the mentee of this LFX Mentorship project. The titles of all task outputs, such as pull requests (PRs), should be prefixed with LFX Mentorship.
Task 1
Content
Build a public dataset benchmarking website to present the example dataset [cityscapes](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/robo_dog_delivery/cityscapes.zip).
The applicant might want to design the website in a style similar to the example website of [coda](https://coda-dataset.github.io/index.html).
In this task, to reduce the applicant's workload, we provide a clean, re-organized dataset based on the existing public [CITYSCAPES](https://www.cityscapes-dataset.com/), merely for selection purposes. Note that another, much more complicated new dataset will be provided to the mentee after the mentorship starts.
This benchmarking website should exhibit the contents listed in Table 1.
Submit a PR that includes a public link to the dataset benchmarking website and the corresponding dataset introduction.
We suggest that the domain name of the website be named after a personal account (e.g., jack123-LFX.github.io for applicant Jack123).
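The website's "Data statistics" section needs per-split sample counts. A minimal sketch, assuming the extracted dataset is organized as one directory per split containing PNG images (this layout and the function name are illustrative assumptions, not the zip's documented structure):

```python
from collections import Counter
from pathlib import Path

def split_statistics(root):
    """Count PNG samples per split directory (e.g. train/val/test)."""
    counts = Counter()
    for split_dir in sorted(Path(root).iterdir()):
        if split_dir.is_dir():
            counts[split_dir.name] = sum(1 for _ in split_dir.rglob("*.png"))
    return dict(counts)
```

The resulting dictionary can be rendered directly into the statistics table or charts on the website.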
Task 2
Content
Create a new example on KubeEdge-Ianvs based on the semantic segmentation dataset [cityscapes](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/robo_dog_delivery/cityscapes.zip), for [single task learning](https://ianvs.readthedocs.io/en/latest/proposals/algorithms/single-task-learning/fpn.html), [incremental learning](https://ianvs.readthedocs.io/en/latest/proposals/algorithms/incremental-learning/basicIL-fpn.html), or [lifelong learning](https://github.com/kubeedge/ianvs/tree/feature-lifelong-n/examples/scene-based-unknown-task-recognition/lifelong_learning_bench).
The example mainly includes a new baseline algorithm (one that does not yet exist on Ianvs) which can run on Ianvs with [cityscapes](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/robo_dog_delivery/cityscapes.zip).
The baseline algorithm can be a new unseen task detection algorithm, processing algorithm, or base model.
Reviewers will use Mean Intersection over Union (mIoU) to evaluate algorithm performance.
Note that if the applicant wants to use [single-task learning](https://ianvs.readthedocs.io/en/latest/proposals/algorithms/single-task-learning/fpn.html) or [incremental learning](https://ianvs.readthedocs.io/en/latest/proposals/algorithms/incremental-learning/basicIL-fpn.html), s/he needs to replace the original object-detection dataset [pcb-aoi](https://github.com/kubeedge/ianvs/tree/main/examples/pcb-aoi) with the targeted semantic-segmentation dataset [cityscapes](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/robo_dog_delivery/cityscapes.zip), and replace the original object-detection model FPN with a semantic-segmentation model, e.g., RFNet. An applicant who tackles [lifelong learning](https://github.com/kubeedge/ianvs/tree/feature-lifelong-n/examples/scene-based-unknown-task-recognition/lifelong_learning_bench) does not need to do this, because the dataset and base model are already prepared.
For each algorithm paradigm, submit an experiment report as a PR that includes the algorithm design, experiment results, and a README document.
The README document should give reviewers instructions for testing and verifying the submitted example. An example is available at [unseen task recognition readme document](https://github.com/kubeedge/ianvs/blob/feature-lifelong-n/examples/scene-based-unknown-task-recognition/lifelong_learning_bench/Readme.md).
An example of the algorithm design is available at [unseen task recognition proposal](https://github.com/kubeedge/ianvs/blob/main/docs/proposals/algorithms/lifelong-learning/Unknown_Task_Recognition_Algorithm_Reproduction_based_on_Lifelong_Learning_of_Ianvs.md#5-design-details).
An example of the experiment results is available at [the leaderboard of single task learning](https://ianvs.readthedocs.io/en/latest/leaderboards/leaderboard-in-industrial-defect-detection-of-PCB-AoI/leaderboard-of-single-task-learning.html).
Submit a PR with the code of this new example.
The organization of the code can follow [pcb-aoi](https://github.com/kubeedge/ianvs/tree/main/examples/pcb-aoi).
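Since reviewers rank submissions by mIoU, here is a minimal sketch of how mIoU is commonly computed from flat label arrays. The function name and the convention of skipping classes absent from both prediction and ground truth are assumptions, not Ianvs' built-in metric implementation:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection over Union over classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both arrays
            ious.append(inter / union)
    return float(np.mean(ious))
```

In practice, per-class intersections and unions are accumulated across the whole validation split before averaging, rather than per image.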
Rating
Task 1
All the items to be completed in Task 1 are listed in Table 2; item scores accumulate into the total score of this task.
| Item | Sub-item | Score |
| --- | --- | --- |
| Set up a basic frontend framework | | 10 |
| The frontend pages can be accessed publicly | | 10 |
| Home page | Dataset overview | 5 |
| Home page | Lifelong learning algorithm overview | 5 |
| Home page | Data sample display | 5 |
| Documentation page | Dataset partition description | 5 |
| Documentation page | Data statistics | 5 |
| Documentation page | Data format | 5 |
| Documentation page | Data annotation | 5 |
| Download page | Instructions and links | 5 |
| Benchmark page | Various algorithm and metric results | 20 |

Table 2. Task 1 scoring rules
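For the "Data sample display" item, segmentation previews are often rendered by blending a colorized mask over the image. A sketch under the assumption of a toy three-class palette (illustrative only, not the official Cityscapes colors):

```python
import numpy as np

# Toy palette: class id -> RGB color (illustrative, not the Cityscapes palette).
PALETTE = np.array([[128, 64, 128],   # e.g. road
                    [244, 35, 232],   # e.g. sidewalk
                    [70, 70, 70]],    # e.g. building
                   dtype=np.float32)

def overlay_mask(image, mask, alpha=0.5):
    """Blend per-pixel class colors over an RGB image.

    image: (H, W, 3) uint8 array; mask: (H, W) integer class ids.
    """
    color = PALETTE[mask]                          # (H, W, 3) class colors
    blended = (1.0 - alpha) * image + alpha * color
    return blended.astype(np.uint8)
```

The blended array can be saved as a PNG thumbnail for the website's sample gallery.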
Task 2
Completing different algorithm paradigms earns different scores, as shown in Table 3.
For examples under the same algorithm paradigm, an applicant will obtain 20 extra points only if his/her example ranks best. When ranking, reviewers will use Mean Intersection over Union (mIoU) to evaluate algorithm performance.
That is, only the top-ranked applicant gets the extra points. Good luck!
Each applicant can implement multiple examples with different algorithm paradigms, but only the algorithm paradigm with the highest score will be counted.
For examples that cannot be run successfully using the submitted code and the README instructions, the total score in Task 2 will be 0. So be careful with the code and docs!
Table 3. Task 2 scoring rules
Deadline
According to [the timeline of LFX mentorship 2023 01-Mar-May](https://github.com/cncf/mentoring/tree/main/lfx-mentorship/2023/01-Mar-May), the admission decision deadline is March 7th. Since we need time for internal review and the final decision, the deadline for PR submissions of the pretest is March 5th, 8:00 AM PDT.