merge dj-install
BeachWang committed Dec 12, 2024
2 parents 788a212 + 02f8dda commit b4d0798
Showing 17 changed files with 211 additions and 134 deletions.
5 changes: 4 additions & 1 deletion .github/workflows/deploy_sphinx_docs.yml
Original file line number Diff line number Diff line change
Expand Up @@ -12,13 +12,16 @@ on:
jobs:
pages:
runs-on: ubuntu-20.04
strategy:
matrix:
python-version: [ "3.9", "3.10" ]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Python ${{ matrix.python-version }}
uses: actions/setup-python@master
with:
python_version: ${{ matrix.python-version }}
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
Expand Down
6 changes: 3 additions & 3 deletions .github/workflows/perf-bench.yml
Original file line number Diff line number Diff line change
Expand Up @@ -16,8 +16,8 @@ env:
ACTIONS_ALLOW_USE_UNSECURE_NODE_VERSION: true

jobs:
unittest-single:
runs-on: [self-hosted, linux]
perf_bench:
runs-on: [GPU, unittest]
environment: Testing
steps:
- uses: actions/checkout@v3
Expand All @@ -42,7 +42,7 @@ jobs:
- name: Run performance benchmark standalone
working-directory: dj-${{ github.run_id }}/.github/workflows/docker
run: |
docker compose exec ray-head python tests/benchmark_performance/run.sh ${{ secrets.INTERNAL_WANDB_URL }} ${{ secrets.INTERNAL_WANDB_API_KEY }}
docker compose exec ray-head bash tests/benchmark_performance/run.sh ${{ secrets.INTERNAL_WANDB_URL }} ${{ secrets.INTERNAL_WANDB_API_KEY }}
- name: Remove docker compose
working-directory: dj-${{ github.run_id }}/.github/workflows/docker
Expand Down
16 changes: 16 additions & 0 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -197,6 +197,22 @@ The dependency options are listed below:
| `.[tools]` | Install dependencies for dedicated tools, such as quality classifiers. |
| `.[sandbox]` | Install all dependencies for sandbox. |

- Install dependencies for specific OPs

With the growth of the number of OPs, the dependencies of all OPs become very heavy. Instead of installing all of them with the command `pip install -v -e .[sci]`, we provide two lighter alternatives:

- Automatic Minimal Dependency Installation: while Data-Juicer is running, minimal dependencies are installed automatically. This allows for immediate execution, but may lead to dependency conflicts.

- Manual Minimal Dependency Installation: to manually install the minimal dependencies tailored to a specific execution configuration, run one of the following commands:
```shell
# only for installation from source
python tools/dj_install.py --config path_to_your_data-juicer_config_file

# use command line tool
dj-install --config path_to_your_data-juicer_config_file
```
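The minimal-dependency installation described above can be sketched as follows. This is a hypothetical simplification, not the actual `dj-install` implementation: the helper names (`collect_packages`, `install`) and the two-entry mapping excerpt are illustrative assumptions; the real tool reads the full mapping in `data_juicer/utils/auto_install_mapping.py`.

```python
# Sketch of what dj-install does conceptually (hypothetical helpers,
# not the actual Data-Juicer code): collect the OPs used in a config's
# process list, look up their extra packages, and pip-install only those.
import subprocess
import sys

# Minimal excerpt of the op -> packages mapping
# (see data_juicer/utils/auto_install_mapping.py for the full table).
OPS_TO_PKG = {
    'language_id_score_filter': ['fasttext-wheel'],
    'clean_html_mapper': ['selectolax'],
}


def collect_packages(process_list):
    """Union of extra packages needed by the OPs in a config's process list."""
    pkgs = set()
    for op in process_list:
        # each process entry is either an op name or a {op_name: args} dict
        op_name = next(iter(op)) if isinstance(op, dict) else op
        pkgs.update(OPS_TO_PKG.get(op_name, []))
    return sorted(pkgs)


def install(process_list, dry_run=True):
    """Install only the packages the given process list actually needs."""
    pkgs = collect_packages(process_list)
    if pkgs and not dry_run:
        subprocess.check_call([sys.executable, '-m', 'pip', 'install', *pkgs])
    return pkgs


# A config's `process` section parses to a list of {op_name: args} entries.
print(install([{'language_id_score_filter': {'lang': 'en'}},
               {'clean_html_mapper': None}]))
```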

### Using pip

- Run the following command to install the latest released `data_juicer` using `pip`:
Expand Down
15 changes: 15 additions & 0 deletions README_ZH.md
Original file line number Diff line number Diff line change
Expand Up @@ -178,6 +178,21 @@ pip install -v -e .[tools] # 安装部分工具库的依赖
| `.[tools]` | 安装专用工具库(如质量分类器)所需的依赖项 |
| `.[sandbox]` | 安装沙盒实验室的基础依赖 |

* 只安装部分算子依赖

随着OP数量的增长,所有OP的依赖变得很重。为此,我们提供了两个替代的、更轻量的选项,作为使用命令`pip install -v -e .[sci]`安装所有依赖的替代:

* 自动最小依赖安装:在执行Data-Juicer的过程中,将自动安装最小依赖。也就是说你可以直接执行,但这种方式可能会导致一些依赖冲突。

* 手动最小依赖安装:可以通过如下指令手动安装适合特定执行配置的最小依赖:
```shell
# 适用于从源码安装
python tools/dj_install.py --config path_to_your_data-juicer_config_file

# 使用命令行工具
dj-install --config path_to_your_data-juicer_config_file
```

### 使用 pip 安装

* 运行以下命令用 `pip` 安装 `data_juicer` 的最新发布版本:
Expand Down
3 changes: 3 additions & 0 deletions configs/config_all.yaml
Original file line number Diff line number Diff line change
Expand Up @@ -224,6 +224,7 @@ process:
radius: 2 # radius of blur kernel
- image_tagging_mapper: # Mapper to generate image tags.
tag_field_name: '__dj__image_tags__' # the field name to store the tags. It's "__dj__image_tags__" in default.
mem_required: '9GB'
- nlpaug_en_mapper: # simply augment texts in English based on the nlpaug library
sequential: false # whether combine all augmentation methods to a sequence. If it's True, a sample will be augmented by all opened augmentation methods sequentially. If it's False, each opened augmentation method would generate its augmented samples independently.
aug_num: 1 # number of augmented samples to be generated. If `sequential` is True, there will be total aug_num augmented samples generated. If it's False, there will be (aug_num * #opened_aug_method) augmented samples generated.
Expand Down Expand Up @@ -409,6 +410,7 @@ process:
frame_sampling_method: 'all_keyframes' # sampling method of extracting frame images from the videos. Should be one of ["all_keyframes", "uniform"]. The former one extracts all key frames and the latter one extract specified number of frames uniformly from the video. Default: "all_keyframes".
frame_num: 3 # the number of frames to be extracted uniformly from the video. Only works when frame_sampling_method is "uniform". If it's 1, only the middle frame will be extracted. If it's 2, only the first and the last frames will be extracted. If it's larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration.
tag_field_name: '__dj__video_frame_tags__' # the field name to store the tags. It's "__dj__video_frame_tags__" in default.
mem_required: '9GB'
- whitespace_normalization_mapper: # normalize different kinds of whitespaces to English whitespace.

# Filter ops
Expand Down Expand Up @@ -641,6 +643,7 @@ process:
frame_num: 3 # the number of frames to be extracted uniformly from the video. Only works when frame_sampling_method is "uniform". If it's 1, only the middle frame will be extracted. If it's 2, only the first and the last frames will be extracted. If it's larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration.
tag_field_name: '__dj__video_frame_tags__' # the field name to store the tags. It's "__dj__video_frame_tags__" in default.
any_or_all: any # keep this sample when any/all videos meet the filter condition
mem_required: '9GB'
- words_num_filter: # filter text with number of words out of specific range
lang: en # sample in which language
tokenization: false # whether to use model to tokenize documents
Expand Down
2 changes: 1 addition & 1 deletion data_juicer/__init__.py
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
__version__ = '1.0.0'
__version__ = '1.0.1'

import os
import subprocess
Expand Down
194 changes: 84 additions & 110 deletions data_juicer/utils/auto_install_mapping.py
Original file line number Diff line number Diff line change
Expand Up @@ -10,116 +10,90 @@
'simhash': ['simhash-pybind'],
}

# Packages to corresponding ops that require them
PKG_TO_OPS = {
'torch': [
'image_aesthetics_filter', 'image_nsfw_filter',
'image_text_matching_filter', 'image_text_similarity_filter',
'image_watermark_filter', 'phrase_grounding_recall_filter',
'video_aesthetics_filter', 'video_frames_text_similarity_filter',
'video_nsfw_filter', 'video_tagging_from_frames_filter',
'video_watermark_filter', 'generate_qa_from_text_mapper',
'generate_qa_from_examples_mapper', 'image_captioning_mapper',
'image_diffusion_mapper', 'image_tagging_mapper',
'optimize_query_mapper', 'optimize_response_mapper',
'optimize_qa_mapper', 'video_captioning_from_frames_mapper',
'video_captioning_from_summarizer_mapper',
'video_captioning_from_video_mapper',
'video_tagging_from_audio_mapper', 'video_tagging_from_frames_mapper'
# Extra packages required by each op
OPS_TO_PKG = {
'video_aesthetics_filter':
['simple-aesthetics-predictor', 'torch', 'transformers'],
'document_simhash_deduplicator': ['simhash-pybind'],
'nlpcda_zh_mapper': ['nlpcda'],
'image_aesthetics_filter':
['simple-aesthetics-predictor', 'torch', 'transformers'],
'video_nsfw_filter': ['torch', 'transformers'],
'video_face_blur_mapper': ['opencv-python'],
'stopwords_filter': ['sentencepiece'],
'fix_unicode_mapper': ['ftfy'],
'token_num_filter': ['transformers'],
'optimize_qa_mapper': ['torch', 'transformers', 'vllm'],
'video_motion_score_filter': ['opencv-python'],
'image_tagging_mapper': ['ram', 'torch'],
'video_resize_aspect_ratio_mapper': ['ffmpeg-python'],
'video_captioning_from_audio_mapper': [
'accelerate', 'einops', 'tiktoken', 'transformers',
'transformers_stream_generator'
],
'torchaudio': [
'video_captioning_from_summarizer_mapper',
'video_tagging_from_audio_mapper'
'clean_html_mapper': ['selectolax'],
'video_tagging_from_audio_mapper': ['torch', 'torchaudio', 'transformers'],
'image_deduplicator': ['imagededup'],
'image_diffusion_mapper':
['diffusers', 'simhash-pybind', 'torch', 'transformers'],
'image_text_similarity_filter': ['torch', 'transformers'],
'alphanumeric_filter': ['transformers'],
'image_nsfw_filter': ['torch', 'transformers'],
'image_watermark_filter': ['torch', 'transformers'],
'ray_image_deduplicator': ['imagededup'],
'video_captioning_from_frames_mapper':
['simhash-pybind', 'torch', 'transformers'],
'video_tagging_from_frames_filter': ['torch'],
'video_resize_resolution_mapper': ['ffmpeg-python'],
'optimize_query_mapper': ['torch', 'transformers', 'vllm'],
'sentence_split_mapper': ['nltk'],
'image_text_matching_filter': ['torch', 'transformers'],
'phrase_grounding_recall_filter': ['nltk', 'torch', 'transformers'],
'video_split_by_scene_mapper': ['scenedetect[opencv]'],
'image_face_blur_mapper': ['opencv-python'],
'image_face_ratio_filter': ['opencv-python'],
'document_minhash_deduplicator': ['scipy'],
'flagged_words_filter': ['sentencepiece'],
'language_id_score_filter': ['fasttext-wheel'],
'words_num_filter': ['sentencepiece'],
'chinese_convert_mapper': ['opencc'],
'video_frames_text_similarity_filter': ['torch', 'transformers'],
'generate_qa_from_text_mapper': ['torch', 'transformers', 'vllm'],
'video_ffmpeg_wrapped_mapper': ['ffmpeg-python'],
'image_captioning_mapper': ['simhash-pybind', 'torch', 'transformers'],
'video_ocr_area_ratio_filter': ['easyocr'],
'video_captioning_from_video_mapper':
['simhash-pybind', 'torch', 'transformers'],
'video_remove_watermark_mapper': ['opencv-python'],
'text_action_filter': ['spacy-pkuseg'],
'nlpaug_en_mapper': ['nlpaug'],
'word_repetition_filter': ['sentencepiece'],
'video_watermark_filter': ['torch'],
'video_captioning_from_summarizer_mapper': [
'accelerate', 'einops', 'simhash-pybind', 'tiktoken', 'torch',
'torchaudio', 'transformers', 'transformers_stream_generator'
],
'easyocr': ['video_ocr_area_ratio_filter'],
'fasttext-wheel': ['language_id_score_filter'],
'kenlm': ['perplexity_filter'],
'sentencepiece': [
'flagged_words_filter', 'perplexity_filter', 'stopwords_filter',
'word_repetition_filter', 'words_num_filter'
],
'scipy': ['document_minhash_deduplicator'],
'ftfy': ['fix_unicode_mapper'],
'simhash-pybind': [
'document_simhash_deduplicator', 'image_captioning_mapper',
'image_diffusion_mapper', 'video_captioning_from_frames_mapper',
'video_captioning_from_summarizer_mapper',
'video_captioning_from_video_mapper'
],
'selectolax': ['clean_html_mapper'],
'nlpaug': ['nlpaug_en_mapper'],
'nlpcda': ['nlpcda'],
'nltk': ['phrase_grounding_recall_filter', 'sentence_split_mapper'],
'transformers': [
'alphanumeric_filter', 'image_aesthetics_filter', 'image_nsfw_filter',
'image_text_matching_filter', 'image_text_similarity_filter',
'image_watermark_filter', 'phrase_grounding_recall_filter',
'token_num_filter', 'video_aesthetics_filter',
'video_frames_text_similarity_filter', 'video_nsfw_filter',
'generate_qa_from_text_mapper', 'generate_qa_from_examples_mapper',
'image_captioning_mapper', 'image_diffusion_mapper',
'optimize_query_mapper', 'optimize_response_mapper',
'optimize_qa_mapper', 'video_captioning_from_audio_mapper',
'video_captioning_from_frames_mapper',
'video_captioning_from_summarizer_mapper',
'video_captioning_from_video_mapper',
'video_tagging_from_audio_mapper', 'text_chunk_mapper',
'entity_attribute_aggregator', 'most_relavant_entities_aggregator',
'nested_aggregator'
],
'transformers_stream_generator': [
'video_captioning_from_audio_mapper',
'video_captioning_from_summarizer_mapper'
],
'einops': [
'video_captioning_from_audio_mapper',
'video_captioning_from_summarizer_mapper'
],
'accelerate': [
'video_captioning_from_audio_mapper',
'video_captioning_from_summarizer_mapper'
],
'tiktoken': [
'video_captioning_from_audio_mapper',
'video_captioning_from_summarizer_mapper'
],
'opencc': ['chinese_convert_mapper'],
'imagededup': ['image_deduplicator', 'ray_image_deduplicator'],
'spacy-pkuseg': ['text_action_filter', 'text_entity_dependency_filter'],
'diffusers': ['image_diffusion_mapper'],
'simple-aesthetics-predictor':
['image_aesthetics_filter', 'video_aesthetics_filter'],
'scenedetect[opencv]': ['video_split_by_scene_mapper'],
'ffmpeg-python': [
'audio_ffmpeg_wrapped_mapper', 'video_ffmpeg_wrapped_mapper',
'video_resize_aspect_ratio_mapper', 'video_resize_resolution_mapper'
],
'opencv-python': [
'image_face_ratio_filter', 'video_motion_score_filter',
'image_face_blur_mapper', 'video_face_blur_mapper',
'video_remove_watermark_mapper'
],
'vllm': [
'generate_qa_from_text_mapper',
'generate_qa_from_examples_mapper',
'optimize_query_mapper',
'optimize_response_mapper',
'optimize_qa_mapper',
],
'rouge': ['generate_qa_from_examples_mapper'],
'ram': ['image_tagging_mapper', 'video_tagging_from_frames_mapper'],
'dashscope': [
'text_chunk_mapper', 'entity_attribute_aggregator',
'most_relavant_entities_aggregator', 'nested_aggregator'
],
'openai': [
'calibrate_qa_mapper', 'calibrate_query_mapper',
'calibrate_response_mapper', 'extract_entity_attribute_mapper',
'extract_entity_relation_mapper', 'extract_event_mapper',
'extract_keyword_mapper', 'extract_nickname_mapper',
'extract_support_text_mapper', 'pair_preference_mapper',
'relation_identity_mapper', 'text_chunk_mapper',
'entity_attribute_aggregator', 'most_relavant_entities_aggregator',
'nested_aggregator'
]
'audio_ffmpeg_wrapped_mapper': ['ffmpeg-python'],
'perplexity_filter': ['kenlm', 'sentencepiece'],
'generate_qa_from_examples_mapper':
['rouge', 'torch', 'transformers', 'vllm'],
'video_tagging_from_frames_mapper': ['ram', 'torch'],
'text_entity_dependency_filter': ['spacy-pkuseg'],
'optimize_response_mapper': ['torch', 'transformers', 'vllm'],
'text_chunk_mapper': ['transformers', 'dashscope', 'openai'],
'entity_attribute_aggregator': ['transformers', 'dashscope', 'openai'],
'most_relavant_entities_aggregator':
['transformers', 'dashscope', 'openai'],
'nested_aggregator': ['transformers', 'dashscope', 'openai'],
'calibrate_qa_mapper': ['openai'],
'calibrate_query_mapper': ['openai'],
'calibrate_response_mapper': ['openai'],
'extract_entity_attribute_mapper': ['openai'],
'extract_entity_relation_mapper': ['openai'],
'extract_event_mapper': ['openai'],
'extract_keyword_mapper': ['openai'],
'extract_nickname_mapper': ['openai'],
'extract_support_text_mapper': ['openai'],
'pair_preference_mapper': ['openai'],
'relation_identity_mapper': ['openai'],
}
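This change inverts the table from package-to-ops to op-to-packages. If a package-centric view is ever needed again (e.g. to see which OPs a given dependency serves), it can be derived mechanically rather than maintained by hand. A small sketch, assuming only that the mapping has the `{op: [packages]}` shape shown above:

```python
# Rebuild a package -> ops view from the new op -> packages table.
from collections import defaultdict


def invert_mapping(ops_to_pkg):
    """Invert {op: [pkgs]} into {pkg: [ops]}, with ops sorted for stability."""
    pkg_to_ops = defaultdict(list)
    for op, pkgs in ops_to_pkg.items():
        for pkg in pkgs:
            pkg_to_ops[pkg].append(op)
    return {pkg: sorted(ops) for pkg, ops in pkg_to_ops.items()}


print(invert_mapping({'clean_html_mapper': ['selectolax'],
                      'stopwords_filter': ['sentencepiece'],
                      'words_num_filter': ['sentencepiece']}))
```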
8 changes: 5 additions & 3 deletions docs/DeveloperGuide.md
Original file line number Diff line number Diff line change
Expand Up @@ -209,7 +209,9 @@ __all__ = [
]
```

4. Now you can use this new OP with custom arguments in your own config files!
4. When an OP has package dependencies listed in `environments/science_requires.txt`, add the corresponding packages to the `OPS_TO_PKG` dictionary in `data_juicer/utils/auto_install_mapping.py` to support dependency installation at the OP level.
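For example, if the `TextLengthFilter` OP from this guide depended on `sentencepiece`, its entry would look like this (an illustrative fragment; the dependency choice here is hypothetical):

```python
# In data_juicer/utils/auto_install_mapping.py, register the new OP's
# extra dependencies so per-OP installation can find them:
OPS_TO_PKG = {
    # ... existing entries ...
    'text_length_filter': ['sentencepiece'],
}
```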

5. Now you can use this new OP with custom arguments in your own config files!

```yaml
# other configs
Expand All @@ -222,7 +224,7 @@ process:
max_len: 1000
```
5. (Strongly Recommend) It's better to add corresponding tests for your own OPs. For `TextLengthFilter` above, you would like to add `test_text_length_filter.py` into `tests/ops/filter/` directory as below.
6. (Strongly Recommended) It's better to add corresponding tests for your own OPs. For the `TextLengthFilter` above, add a `test_text_length_filter.py` to the `tests/ops/filter/` directory as below.

```python
import unittest
Expand All @@ -244,7 +246,7 @@ if __name__ == '__main__':
unittest.main()
```

6. (Strongly Recommend) In order to facilitate the use of other users, we also need to update this new OP information to
7. (Strongly Recommended) To make the new OP easy for other users to use, we also need to update its information in
the corresponding documents, including the following docs:
1. `configs/config_all.yaml`: this complete config file contains a list of all OPs and their arguments, serving as an
important document for users to refer to all available OPs. Therefore, after adding the new OP, we need to add it to the process
Expand Down
8 changes: 5 additions & 3 deletions docs/DeveloperGuide_ZH.md
Original file line number Diff line number Diff line change
Expand Up @@ -202,7 +202,9 @@ __all__ = [
]
```

4. 全部完成!现在您可以在自己的配置文件中使用新添加的算子:
4. 算子有`environments/science_requires.txt`中列举的包依赖时,需要在`data_juicer/utils/auto_install_mapping.py`里的`OPS_TO_PKG`中添加对应的依赖包,以支持算子粒度的依赖安装。

5. 全部完成!现在您可以在自己的配置文件中使用新添加的算子:

```yaml
# other configs
Expand All @@ -215,7 +217,7 @@ process:
max_len: 1000
```
5. (强烈推荐)最好为新添加的算子进行单元测试。对于上面的 `TextLengthFilter` 算子,建议在 `tests/ops/filter/` 中实现如 `test_text_length_filter.py` 的测试文件:
6. (强烈推荐)最好为新添加的算子进行单元测试。对于上面的 `TextLengthFilter` 算子,建议在 `tests/ops/filter/` 中实现如 `test_text_length_filter.py` 的测试文件:

```python
import unittest
Expand All @@ -238,7 +240,7 @@ if __name__ == '__main__':
unittest.main()
```

6. (强烈推荐)为了方便其他用户使用,我们还需要将新增的算子信息更新到相应的文档中,具体包括如下文档:
7. (强烈推荐)为了方便其他用户使用,我们还需要将新增的算子信息更新到相应的文档中,具体包括如下文档:
1. `configs/config_all.yaml`:该全集配置文件保存了所有算子及参数的一个列表,作为用户参考可用算子的一个重要文档。因此,在新增算子后,需要将其添加到该文档process列表里(按算子类型分组并按字母序排序):

```yaml
Expand Down
4 changes: 4 additions & 0 deletions environments/minimal_requires.txt
Original file line number Diff line number Diff line change
Expand Up @@ -4,7 +4,11 @@ pandas
numpy
av==13.1.0
soundfile
# need to install two dependencies by librosa to avoid lazy_loader error
librosa>=0.10
samplerate
resampy
# need to install two dependencies by librosa to avoid lazy_loader error
loguru
tabulate
tqdm
Expand Down