[Question][GitLab] Most GitLab Projects Not Selectable #8181

Open
yu501954162 opened this issue Nov 4, 2024 · 29 comments
Labels
component/config-ui (This issue or PR relates to config-ui) · severity/p2 (This bug doesn’t affect the functionality or isn’t evident) · type/question (This issue is a question)

Comments

@yu501954162

I'm running version v1.0.0-beta3@0b891d5. Can you advise why almost all of my GitLab projects in the "Add Data Scope" section are not selectable? I have tried different GitLab users (at the highest permission levels) and cannot determine why this is occurring. I am running GitLab Community Edition v17.1.2.
[Screenshot: screenshot-20241104-180610, showing the "Add Data Scope" project list]

yu501954162 added the type/question label on Nov 4, 2024

dosubot bot commented Nov 4, 2024

Hello @yu501954162! I'm here to help you with any bugs, questions, or contributions. Let's tackle this GitLab issue together.

Most of your GitLab projects might not be selectable in the "Add Data Scope" section because repositories with guest permissions or those that are archived are excluded. Ensure that your role in the GitLab projects is not set to Guest, as repositories with only guest permissions are not shown in the list. Additionally, check if any of the projects are archived, as they will also be excluded from selection [1].
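
If you want to double-check this outside DevLake, you can ask the GitLab API for the projects that meet those criteria (not archived, access level above Guest). This is a plain GitLab REST call, not a DevLake command; replace the host and token with your own:

curl --header "PRIVATE-TOKEN: <your_personal_access_token>" \
  "https://your-gitlab.example.com/api/v4/projects?membership=true&archived=false&min_access_level=20&per_page=100"

Projects that show up in your full project list but not in this filtered response are the ones that would be excluded from the "Add Data Scope" list.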


dosubot added the component/config-ui and severity/p2 labels on Nov 4, 2024
@yu501954162
Author

Hello @dosu. GitLab and Jira projects have slow data collection speeds: with over 1,000 data scopes, collection takes about 3 days. Is there any configuration that can improve DevLake's collection efficiency and shorten the data collection time?


dosubot bot commented Nov 5, 2024

To improve data collection efficiency and reduce the collection time for GitLab and Jira projects in DevLake, you can adjust the "Fixed Rate Limit" setting for both platforms. For GitLab, increasing the rate limit speeds up data collection; the maximum for GitLab Cloud is 120,000 requests/hour. Ensure you do not exceed the rate limit allowed by your GitLab instance. Additionally, deselecting GitLab entities that you do not need to collect can accelerate the collection speed. You can also set the sync frequency to determine how often data is collected.

For Jira, you can adjust the rate limit to increase or lower the speed of data collection. If you encounter a 403 error, lower the rate limit. Contact your Jira Server admin to get or set the maximum rate limit of your Jira instance. Unselecting certain Jira entities that you do not need to collect can also help speed up the process. Configuring the data time range and sync frequency can further optimize data collection [1][2].
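
If you prefer to script the change, the fixed rate limit can also be updated through DevLake's REST API. The endpoint path and the rateLimitPerHour field below are assumptions based on the connection settings exposed by the Config UI, so verify them against your DevLake version and adjust the host, port, and connection ID to your deployment:

curl -X PATCH "http://localhost:4000/api/plugins/gitlab/connections/1" \
  --header "Content-Type: application/json" \
  --data '{"rateLimitPerHour": 20000}'

The same pattern would apply to a Jira connection via the jira plugin path.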

@yu501954162
Author

@dosu What are the recommended resources and DevLake configurations for collecting data scopes with such a large amount of data?


dosubot bot commented Nov 5, 2024

To efficiently collect data scopes with a large amount of data in DevLake, consider the following configurations:

  1. Sync Policies:

    • Time Filter: Select the desired time range for syncing data to optimize the collection process.
    • Frequency: Set the sync frequency or specify a cron code for a custom schedule.
    • Running Policy: Enable "Skip failed tasks" to prevent data loss and allow the pipeline to continue running even if some tasks fail. You can rerun the failed tasks later.
  2. Data Collection Modes:

    • Use Collect Data (Default) mode for recurring pipelines, which supports incremental refresh for tools and entities that allow it.
    • Use Collect Data in Full Refresh Mode if you need to delete and re-collect all existing data within the designated time range.
    • Re-transform Data: This mode applies the latest transformation rules to existing data without collecting new data.
  3. Configuration Steps:

    • Add Data Connections: Configure the endpoint and authentication details, and add data scopes like Git repositories or issue boards.
    • Create a Project: Associate connections with the project and set the synchronization policy, including sync frequency, time range, and skip-on-fail options.
    • Start Data Collection: Choose the desired mode for collecting data.

These configurations help manage and optimize the data collection process, especially with large datasets [1][2][3][4][5].
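
As a rough sketch, these sync-policy settings come together on the blueprint. The field names below (cronConfig, timeAfter, skipOnFail) are taken from the blueprint API and may differ slightly between DevLake versions; the values are only examples:

{
    "cronConfig": "0 1 * * *",
    "timeAfter": "2024-01-01T00:00:00Z",
    "skipOnFail": true
}

Here "0 1 * * *" runs the blueprint daily at 01:00, timeAfter limits collection to data after Jan 1, 2024, and skipOnFail lets the pipeline keep running when individual tasks fail.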

@yu501954162
Author

@dosu Can DevLake be configured to run pipelines for different projects in parallel?


dosubot bot commented Nov 8, 2024

Yes, DevLake can run pipelines for different projects in parallel. To enable or tune parallel data collection, adjust the PIPELINE_MAX_PARALLEL configuration setting. By default it is set to 1, meaning only one pipeline runs at a time; increasing this value allows more pipelines to run concurrently [1].

@yu501954162
Author

@dosu What are the recommended memory, CPU, and disk resources for DevLake to collect large amounts of data?


dosubot bot commented Nov 8, 2024

I wasn't able to find specific recommended hardware specifications for memory, CPU, and disk space for running Apache DevLake in a production environment within the repository. However, for handling large amounts of data, it's generally advisable to ensure that your system has sufficient resources to accommodate the data processing and storage needs. This typically means having a robust multi-core CPU, ample RAM (at least 16GB or more), and sufficient disk space, preferably on SSDs, to handle the data volume and ensure smooth operation. Adjusting these resources based on your specific data size and processing requirements can help optimize performance.

@yu501954162
Author

@dosu Where can I change the PIPELINE_MAX_PARALLEL configuration?


dosubot bot commented Nov 8, 2024

You can change the configuration of PIPELINE_MAX_PARALLEL in DevLake by setting it in the configuration file or as an environment variable. The configuration file is located at devops/releases/lake-v0.21.0/env.example [1]. Make sure the value is a positive integer, as a negative value will cause a panic, and a value of zero will mean no limit on parallel pipelines [2].
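
For example, with the standard docker-compose deployment you would set the value in the .env file next to docker-compose.yml (the value 4 below is only an illustration):

PIPELINE_MAX_PARALLEL=4

After editing the file, recreate the devlake container (for example with docker compose up -d devlake) so the new value takes effect.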

@yu501954162
Author

@dosu What is the data storage cycle in DevLake, and how is it configured?


dosubot bot commented Nov 8, 2024

The cycle and configuration method for storing data in DevLake involves the following steps:

  1. Add Data Connections: Configure the endpoint and authentication details to connect to the source data. Add data scope, such as Git repositories, issue boards, or CI/CD pipelines, to determine what data should be collected. Optionally, define specific data entities or transformation rules within the data scope.

  2. Collect Data in a Project: Create a project, associate connections with the project, and select specific data scopes. Set the synchronization policy, which includes specifying the sync frequency, time range, and the skip-on-fail option. Start data collection by choosing the desired mode: "Collect Data (Default)", "Collect Data in Full Refresh Mode", or "Re-transform Data".

  3. Check the Data in Grafana Dashboards: View the collected data using Grafana Dashboards by clicking on the "Dashboards" button in the top-right corner of the Config UI.

The synchronization policy allows you to set a time filter, frequency, and running policy. The "Skip failed tasks" option is enabled by default to prevent data loss during large data collections. You can rerun failed tasks later from the blueprint's detail page [1][2][3][4][5].

@yu501954162
Author

@dosu Any suggestions for DevLake performance optimization?


dosubot bot commented Nov 8, 2024

To optimize DevLake's performance, consider the following suggestions:

  1. Data Collection Configuration:

    • Use the Time Filter to limit the data collection to a specific time range.
    • Adjust the Frequency of data synchronization to balance between data freshness and system load.
    • Enable the "Skip failed tasks" option to allow the pipeline to continue running even if some tasks fail, which helps in managing large data volumes efficiently [1].
  2. Parallel Processing:

    • Increase the PIPELINE_MAX_PARALLEL setting to allow multiple pipelines to run concurrently, enhancing data collection speed [2].
  3. Resource Allocation:

    • Ensure your system has sufficient CPU, memory, and disk resources. A robust multi-core CPU, at least 16GB of RAM, and SSD storage are recommended for handling large datasets.
  4. Data Collection Modes:

    • Use Collect Data (Default) mode for incremental refreshes where supported.
    • Opt for Collect Data in Full Refresh Mode if you need to re-collect all data within a specific range.
    • Use Re-transform Data to apply new transformation rules without collecting new data [1].
  5. Rate Limits:

    • Adjust rate limits for data sources like GitLab and Jira to optimize data collection speed without exceeding API limits.

Implementing these strategies can help improve the performance and efficiency of DevLake in handling large datasets.

@yu501954162
Author

@dosu Where do I configure the Data Collection Modes?


dosubot bot commented Nov 8, 2024

You can configure the Data Collection Modes in DevLake through the Config UI. To access the Config UI, visit http://localhost:4000. Once there, you can choose the desired mode for collecting data as part of Step 2.4 in the data collection process. DevLake supports three data collection modes: "Collect Data (Default)," "Collect Data in Full Refresh Mode," and "Re-transform Data" [1][2][3].

@yu501954162
Author

@dosu What is the difference between v1.0.0 and v1.0.1?


dosubot bot commented Nov 12, 2024

Could you provide more details about the specific software or tool you are referring to with versions v1.0.0 and v1.0.1? This will help me give you a more accurate comparison.

@yu501954162
Author

@dosu, how do I run tasks in parallel within a stage?


dosubot bot commented Nov 12, 2024

Tasks can be run in parallel within a stage in DevLake. A stage is a collection of tasks performed by data plugins, and these tasks are executed in parallel within any stage. This parallel execution allows for efficient processing of data collection, extraction, conversion, and enrichment jobs. The tasks from different plugins can be executed concurrently within the same stage [1][2][3].

@yu501954162
Author

@dosu, how do I configure tasks to run in parallel?


dosubot bot commented Nov 12, 2024

To configure tasks to run in parallel in DevLake, you can utilize the pipeline structure, which allows tasks within the same stage to be executed in parallel. Here's how it works:

  1. Pipeline Structure: A pipeline is organized as a two-dimensional array of tasks. Tasks within the same stage (e.g., stage1, stage2) are executed in parallel.

  2. Task Execution: Tasks in each stage run concurrently, and the execution of tasks in subsequent stages depends on the completion of tasks in previous stages.

  3. Subtask Execution: Within each plugin, subtasks are executed sequentially after the parallel execution of tasks in the same stage.

This setup allows for efficient parallel processing of tasks while maintaining dependencies across different stages [1].

@yu501954162
Author

@dosu, how do I configure this in the configuration file?


dosubot bot commented Nov 12, 2024

To configure tasks to run in parallel within a stage in the DevLake configuration file, you need to define a single stage with multiple tasks in a 2D array format. Each stage is represented by an array, and tasks within the same stage run in parallel. Here is an example configuration for running a GitLab task and a Jira task in parallel within a single stage:

[
    [
        {
            "plugin": "jira",
            "options": {
                "boardId": 8,
                "connectionId": 1
            }
        },
        {
            "plugin": "gitlab",
            "options": {
                "connectionId": 1,
                "projectId": 116***98
            }
        }
    ]
]

In this configuration, both the Jira and GitLab tasks will run in parallel because they are defined within the same stage (the same inner array) [1].
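
If you want to run a plan like this outside of a blueprint, it can be submitted as an ad-hoc pipeline through the REST API. The payload shape below (a name plus the plan array) follows the pipelines endpoint, but double-check it against your DevLake version; the projectId is a placeholder:

curl -X POST "http://localhost:4000/api/pipelines" \
  --header "Content-Type: application/json" \
  --data '{
    "name": "jira-and-gitlab-in-parallel",
    "plan": [
        [
            { "plugin": "jira", "options": { "boardId": 8, "connectionId": 1 } },
            { "plugin": "gitlab", "options": { "connectionId": 1, "projectId": 12345 } }
        ]
    ]
  }'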

@yu501954162
Author

@dosu
After data collection, the message field in the commit table shows garbled text for Chinese characters.


dosubot bot commented Nov 14, 2024

The message field in the commit table is displaying garbled text for Chinese characters because the migration script modifies the message column to use LONGTEXT CHARACTER SET binary. This character set does not support proper encoding for Chinese characters, leading to garbled text. The binary character set treats the text as raw bytes, which is not suitable for multi-byte character sets like those used for Chinese characters [1].
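
If you need to repair the column manually, one possible fix is to convert it to a Unicode character set. Back up the database first; the table and column names below follow the commits table mentioned above, and utf8mb4 is an assumption about the desired encoding:

ALTER TABLE commits
  MODIFY COLUMN message LONGTEXT
  CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;

Depending on how the existing rows were written, you may still need to re-collect the affected repositories, and a later migration could overwrite the column definition again.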

@yu501954162
Author

@dosu GitLab's "convert users" and "extract users" subtasks take a long time.


dosubot bot commented Nov 18, 2024

To optimize the performance of the "convert users" and "extract users" processes in the GitLab plugin of DevLake, you can adjust the rate limit configuration. By default, DevLake uses a dynamic rate limit of around 12,000 requests per hour for GitLab, but you can increase this to a maximum of 120,000 requests per hour for GitLab Cloud, depending on your specific needs and constraints [1]. This adjustment can help speed up data collection and processing times.
