diff --git a/docs/deployment/deploy-quick.md b/docs/deployment/deploy-quick.md
index 93d959146f8..71ebace6f7c 100644
--- a/docs/deployment/deploy-quick.md
+++ b/docs/deployment/deploy-quick.md
@@ -1,30 +1,30 @@
---
-title: Quick Deployment
+title: Stand-alone deployment
sidebar_position: 1
---
-## 1. Preparing for the first installation
+## 1. First-time installation preparations
-### 1.1 Linux Server
+### 1.1 Linux server
-**Hardware requirements**
-Install nearly 10 linkis microservices with at least 3G memory. The size of the jvm -Xmx memory started by the default configuration of each microservice is 512M (if the memory is not enough, you can try to reduce it to 256/128M, and you can also increase it if the memory is sufficient)
+**Hardware Requirements**
+Linkis deploys about 6 microservices and requires at least 3G of memory. By default each microservice starts with a JVM -Xmx of 512M (if memory is tight you can try lowering it to 256M/128M; if memory is plentiful you can also increase it).
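+
+A quick way to check the currently available memory before installing (a plain Linux command, shown only as a convenience):
+```shell script
+# show total/used/free memory in human-readable units
+free -h
+```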
### 1.2 Add deployment user
->Deployment user: the startup user of the linkis core process, and this user will be the administrator privilege by default. The corresponding administrator login password will be generated during the deployment process, located in `conf/linkis-mg-gateway .properties` file
-Linkis supports specifying the user who submits and executes. The linkis main process service will switch to the corresponding user through `sudo -u ${linkis-user}`, and then execute the corresponding engine start command, so the user to which the engine `linkis-engine` process belongs is the executor of the task (so the deployment The user needs to have sudo permissions, and it is password-free)
+>Deployment user: the user that starts the linkis core processes; this user is the administrator by default. The corresponding administrator login password is generated during deployment and stored in the `conf/linkis-mg-gateway.properties` file
+Linkis supports specifying the user who submits and executes a task. The linkis main process service switches to the corresponding user through `sudo -u ${linkis-user}` and then executes the corresponding engine start command, so the user owning the `linkis-engine` process is the executor of the task (therefore the deployment user needs passwordless sudo permission).
-Take hadoop user as an example:
+Take the hadoop user as an example (many configuration items in linkis default to the hadoop user; first-time installers are recommended to use the hadoop user, otherwise many unexpected errors may occur during installation):
-First check whether there is already a hadoop user in the system. If it already exists, you can directly authorize it, if not, create a user first, and then authorize.
+First check whether a hadoop user already exists in the system. If it does, grant it sudo permission directly; if not, create the user first and then grant the permission.
-Check if a hadoop user already exists
+Check if hadoop user already exists
```shell script
$ id hadoop
uid=2001(hadoop) gid=2001(hadoop) groups=2001(hadoop)
-````
+```
If it does not exist, you need to create a hadoop user and join the hadoop user group
```shell script
@@ -32,7 +32,7 @@ $ sudo useradd hadoop -g hadoop
$ vi /etc/sudoers
#Secret-free configuration
hadoop ALL=(ALL) NOPASSWD: ALL
-````
+```
The following operations are performed under the hadoop user
@@ -42,55 +42,55 @@ hadoop ALL=(ALL) NOPASSWD: NOPASSWD: ALL
### 2.1 Installation package preparation
-- Method 1: From the official website [download address](https://linkis.apache.org/download/main): https://linkis.apache.org/download/main
-, download the corresponding The installation package (project installation package and management console installation package)
-- Method 2: Compile the project installation package and management console according to [Linkis Compile and Package](../development/build) and [Front-end Management Console Compile](../development/build-console) Installation package
+- Method 1: From the official website [download page](https://linkis.apache.org/zh-CN/download/main): https://linkis.apache.org/zh-CN/download/main, download the corresponding installation packages (project installation package and management console installation package).
+- Method 2: Compile the project installation package and console installation package according to [Linkis Compilation and Packaging](../development/build) and [Front-end Console Compilation](../development/build-console).
-After uploading the installation package `apache-linkis-x.x.x-bin.tar.gz`, decompress the installation package
+After uploading the installation package `apache-linkis-xxx-bin.tar.gz`, decompress the installation package
```shell script
-$ tar -xvf apache-linkis-x.x.x-bin.tar.gz
-````
+$ tar -xvf apache-linkis-xxx-bin.tar.gz
+```
-The unzipped directory structure is as follows
+The directory structure after decompression is as follows
```shell script
--rw-r--r-- 1 hadoop hadoop 518192043 Jun 20 09:50 apache-linkis-1.3.1-bin.tar.gz
-drwxrwxr-x 2 hadoop hadoop 4096 Jun 20 09:56 bin //Script to perform environment check and install
-drwxrwxr-x 2 hadoop hadoop 4096 Jun 20 09:56 deploy-config // Environment configuration information such as DB that depends on deployment
-drwxrwxr-x 4 hadoop hadoop 4096 Jun 20 09:56 docker
-drwxrwxr-x 4 hadoop hadoop 4096 Jun 20 09:56 helm
--rwxrwxr-x 1 hadoop hadoop 84732 Jan 22 2020 LICENSE
-drwxr-xr-x 2 hadoop hadoop 20480 Jun 20 09:56 licenses
-drwxrwxr-x 7 hadoop hadoop 4096 Jun 20 09:56 linkis-package // The actual package, including lib/service startup script tool/db initialization script/microservice configuration file, etc.
--rwxrwxr-x 1 hadoop hadoop 119503 Jan 22 2020 NOTICE
--rw-r--r-- 1 hadoop hadoop 11959 Jan 22 2020 README_CN.md
--rw-r--r-- 1 hadoop hadoop 12587 Jan 22 2020 README.md
-
-````
-
-### 2.2 Configure database
+-rw-r--r-- 1 hadoop hadoop 518192043 Jun 20 09:50 apache-linkis-xxx-bin.tar.gz
+drwxrwxr-x 2 hadoop hadoop 4096 Jun 20 09:56 bin // scripts for environment check and installation
+drwxrwxr-x 2 hadoop hadoop 4096 Jun 20 09:56 deploy-config // environment configuration such as the DB that the deployment depends on
+drwxrwxr-x 4 hadoop hadoop 4096 Jun 20 09:56 docker
+drwxrwxr-x 4 hadoop hadoop 4096 Jun 20 09:56 helm
+-rwxrwxr-x 1 hadoop hadoop 84732 Jan 22 2020 LICENSE
+drwxr-xr-x 2 hadoop hadoop 20480 Jun 20 09:56 licenses
+drwxrwxr-x 7 hadoop hadoop 4096 Jun 20 09:56 linkis-package // actual software package, including lib/service startup script tool/db initialization script/microservice configuration file, etc.
+-rwxrwxr-x 1 hadoop hadoop 119503 Jan 22 2020 NOTICE
+-rw-r--r-- 1 hadoop hadoop 11959 Jan 22 2020 README_CN.md
+-rw-r--r-- 1 hadoop hadoop 12587 Jan 22 2020 README.md
+
+```
+
+### 2.2 Configure database information
`vim deploy-config/linkis-env.sh`
```shell script
-# Select the type of Linkis business database, default is mysql.
-# If using PostgreSQL, please change it to postgresql.
-# Note: The configuration is only applicable to Linkis version 1.4.0 or higher.
+# Select linkis business database type, default mysql
+# If using postgresql, please change to postgresql
+# Note: The current configuration only applies to linkis>=1.4.0
dbType=mysql
```
`vim deploy-config/db.sh`
```shell script
-# Database information of Linkis' own business - mysql
+# Linkis's own business database information - mysql
MYSQL_HOST=xx.xx.xx.xx
MYSQL_PORT=3306
MYSQL_DB=linkis_test
MYSQL_USER=test
MYSQL_PASSWORD=xxxxx
-# Database information of Linkis' own business - postgresql
-# Note: The configurations is only applicable to Linkis version 1.4.0 or higher.
+# Linkis's own business database information - postgresql
+# Note: The following configuration is only applicable to linkis>=1.4.0
PG_HOST=xx.xx.xx.xx
PG_PORT=5432
PG_DB=linkis_test
@@ -98,68 +98,68 @@ PG_SCHEMA=linkis_test
PG_USER=test
PG_PASSWORD=123456
-# Provide the DB information of the Hive metadata database. If the hive engine is not involved (or just a simple trial), you can not configure it
-#Mainly used with scripts, if not configured, it will try to obtain it through the configuration file in $HIVE_CONF_DIR by default
-HIVE_META_URL="jdbc:mysql://10.10.10.10:3306/hive_meta_demo?useUnicode=true&characterEncoding=UTF-8"
-HIVE_META_USER=demo # User of HiveMeta Metabase
-HIVE_META_PASSWORD=demo123 # HiveMeta metabase password
-````
+# Provide the DB information of the Hive metadata database. If the hive engine is not involved (or you only want a simple trial), this does not need to be configured
+# Mainly used together with scriptis; if not configured, Linkis will try to obtain it from the configuration files in $HIVE_CONF_DIR by default
+HIVE_META_URL="jdbc:mysql://10.10.10.10:3306/hive_meta_demo?useUnicode=true&characterEncoding=UTF-8"
+HIVE_META_USER=demo # User of the HiveMeta metadata database
+HIVE_META_PASSWORD=demo123 # Password of the HiveMeta metadata database
+```
### 2.3 Configure basic variables
-The file is located at `deploy-config/linkis-env.sh`
+The file is located at `deploy-config/linkis-env.sh`.
-#### deploy user
+#### Deploy User
```shell script
deployUser=hadoop #The user who executes the deployment is the user created in step 1.2
-````
+```
-#### base directory configuration (optional)
-:::caution note
-Determine whether you need to adjust according to the actual situation, you can choose to use the default value
+#### Basic directory configuration (optional)
+:::caution Caution
+Decide whether these need to be adjusted according to your actual situation; you can keep the default values
:::
```shell script
-# Specify the directory path used by the user, which is generally used to store the user's script files and log files, and is the user's workspace. The corresponding configuration file configuration item is wds.linkis.filesystem.root.path(linkis.properties)
+# Specify the directory path used by the user, generally used to store the user's script files, log files, etc.; it is the user's workspace. The corresponding configuration item is wds.linkis.filesystem.root.path (linkis.properties)
WORKSPACE_USER_ROOT_PATH=file:///tmp/linkis
-# File paths such as result set logs, used to store the result set files of the Job wds.linkis.resultSet.store.path(linkis-cg-entrance.properties) //If not configured, use the configuration of HDFS_USER_ROOT_PATH
+# Path for storing the result set and log files of a Job: wds.linkis.resultSet.store.path (linkis-cg-entrance.properties) // If not configured, the value of HDFS_USER_ROOT_PATH is used
RESULT_SET_ROOT_PATH=file:///tmp/linkis
-# File path such as result set log, used to store the result set file of Job wds.linkis.filesystem.hdfs.root.path(linkis.properties)
+# Path for storing the result set and log files of a Job on HDFS: wds.linkis.filesystem.hdfs.root.path (linkis.properties)
HDFS_USER_ROOT_PATH=hdfs:///tmp/linkis
-# Store the working path of the execution engine. You need to deploy a local directory with write permissions for the user wds.linkis.engineconn.root.dir(linkis-cg-engineconnmanager.properties)
+# Working directory of the execution engines; a local directory where the deployment user has write permission is required: wds.linkis.engineconn.root.dir (linkis-cg-engineconnmanager.properties)
ENGINECONN_ROOT_PATH=/appcom/tmp
-````
+```
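+
+If the local directory configured above does not exist yet, a minimal sketch of creating it and granting the deployment user write permission (path and user follow the defaults above; adjust them to your environment):
+```shell script
+# create the engine working directory and hand it over to the deployment user
+sudo mkdir -p /appcom/tmp
+sudo chown -R hadoop:hadoop /appcom/tmp
+```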
#### Yarn's ResourceManager address
-:::caution note
+:::caution Caution
If you need to use the Spark engine, you need to configure
:::
```shell script
-#You can confirm whether it can be accessed normally by visiting the http://xx.xx.xx.xx:8088/ws/v1/cluster/scheduler interface
+#You can check whether it is reachable by visiting the http://xx.xx.xx.xx:8088/ws/v1/cluster/scheduler interface
YARN_RESTFUL_URL=http://xx.xx.xx.xx:8088
-````
-When executing spark tasks, you need to use the ResourceManager of yarn. By default, linkis does not enable permission verification. If the ResourceManager has password permission verification enabled, please install and deploy it.
-Modify the database table `linkis_cg_rm_external_resource_provider` to insert yarn data information. For details, please refer to [Check whether the yarn address is configured correctly] (#811-Check whether the yarn address is configured correctly)
+```
+When executing spark tasks you need yarn's ResourceManager. By default Linkis does not enable permission verification; if the ResourceManager has password-based permission verification enabled, then after installation and deployment
+modify the database table `linkis_cg_rm_external_resource_provider` to insert the yarn information. For details, refer to [Check whether the yarn address is configured correctly](#811-check-whether-the-yarn-address-is-configured-correctly).
#### Basic component environment information
-:::caution note
-It can be configured through the user's system environment variables. If configured through the system environment variables, the deploy-config/linkis-env.sh configuration file can be directly commented out without configuration.
+:::caution Caution
+These can be configured through the user's system environment variables. If they are already configured as system environment variables, they can simply be commented out in the deploy-config/linkis-env.sh configuration file.
:::
```shell script
##If you do not use Hive, Spark and other engines and do not rely on Hadoop, you do not need to configure the following environment variables
-#HADOOP
+#HADOOP
HADOOP_HOME=/appcom/Install/hadoop
HADOOP_CONF_DIR=/appcom/config/hadoop-config
@@ -170,45 +170,45 @@ HIVE_CONF_DIR=/appcom/config/hive-config
#Spark
SPARK_HOME=/appcom/Install/spark
SPARK_CONF_DIR=/appcom/config/spark-config
-````
+```
#### LDAP login configuration (optional)
-:::caution note
-The default is to use a static user and password. The static user is the deployment user. The static password will generate a random password string during deployment and store it in `${LINKIS_HOME}/conf/linkis-mg-gateway.properties`(>=1.0.3 Version)
+:::caution Caution
+The default is to use a static user and password. The static user is the deployment user; the static password is a random string generated during deployment and stored in `${LINKIS_HOME}/conf/linkis-mg-gateway.properties` (version >= 1.0.3).
:::
```shell script
-#LDAP configuration, Linkis only supports deployment user login by default. If you need to support multi-user login, you can use LDAP. You need to configure the following parameters:
+#LDAP configuration. By default Linkis only supports login with the deployment user; to support multi-user login you can use LDAP by configuring the following parameters:
#LDAP_URL=ldap://localhost:1389/
#LDAP_BASEDN=dc=webank,dc=com
-````
+```
#### JVM memory configuration (optional)
->The microservice starts the jvm memory configuration, which can be adjusted according to the actual situation of the machine. If the machine memory resources are few, you can try to adjust it to 256/128M
+>JVM memory configuration for microservice startup, which can be adjusted according to the actual machine. If the machine has limited memory, you can try reducing it to 256M/128M.
```shell script
## java application default jvm memory
export SERVER_HEAP_SIZE="512M"
-````
+```
#### Installation directory configuration (optional)
-> Linkis will eventually be installed in this directory. If it is not configured, it will be in the same level directory as the current installation package by default.
+> Linkis will eventually be installed in this directory. If not configured, it defaults to a directory at the same level as the current installation package.
```shell script
##The decompression directory and the installation directory need to be inconsistent
LINKIS_HOME=/appcom/Install/LinkisInstall
-````
+```
-#### No HDFS mode deployment (optional >1.1.2 version support hold)
+#### No HDFS mode deployment (optional, supported in versions > 1.1.2)
-> Deploy Linkis services in an environment without HDFS to facilitate more lightweight learning and debugging. Deploying in HDFS mode does not support tasks such as hive/spark/flink engines
+> Deploy the Linkis services in an environment without HDFS for lighter-weight learning, trial and debugging. When deployed without HDFS, tasks of engines such as hive/spark/flink are not supported.
-Modify the `linkis-env.sh` file and modify the following
+Modify the `linkis-env.sh` file as follows
```bash
-#Use the [file://] path pattern instead of the [hdfs://] pattern
+#Use [file://] path pattern instead of [hdfs://] pattern
WORKSPACE_USER_ROOT_PATH=file:///tmp/linkis/
HDFS_USER_ROOT_PATH=file:///tmp/linkis
RESULT_SET_ROOT_PATH=file:///tmp/linkis
@@ -220,17 +220,27 @@ export ENABLE_SPARK=false
#### kerberos authentication (optional)
-> By default, kerberos authentication is disabled on Linkis. If kerberos authentication is enabled in the hive cluster, you need to set the following parameters:
-
-Modify the `linkis-env.sh` file and modify the following
+> Linkis does not enable kerberos authentication by default. If the hive cluster in use has kerberos authentication enabled, the following parameters need to be configured.
+
+Modify the `linkis-env.sh` file as follows
```bash
#HADOOP
HADOOP_KERBEROS_ENABLE=true
HADOOP_KEYTAB_PATH=/appcom/keytab/
```
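+
+As a quick sanity check (the keytab file name and principal below are placeholders, not values generated by Linkis), you can confirm the keytab under HADOOP_KEYTAB_PATH is usable before starting the services:
+```bash
+# obtain a ticket with the keytab (principal and file name are examples only)
+kinit -kt /appcom/keytab/hadoop.keytab hadoop
+# list the cached ticket to confirm authentication succeeded
+klist
+```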
-#### Notice
+### 2.4 Configure Token
+The file is located in `bin/install.sh`
+
+Since version 1.3.2, Linkis generates Token values as 32-bit random strings to improve system security. For details, refer to [Token Change Description](https://linkis.apache.org/zh-CN/docs/1.3.2/feature/update-token/).
+
+With randomly generated Tokens, you will run into many Token verification failures the first time you integrate with [other WDS components](https://github.com/WeDataSphere/DataSphereStudio/blob/master/README-ZH.md). For a first-time installation it is recommended not to use randomly generated Tokens; set the following configuration to true.
+
+```
+DEBUG_MODE=true
+```
+
+### 2.5 Precautions
**Full installation**
@@ -240,9 +250,20 @@ For the full installation of the new version of Linkis, the install.sh script wi
When the version is upgraded, the database Token is not modified, so there is no need to modify the configuration file and application Token.
-**Token expiration problem**
+**Token expiration issue**
+
+When the Token is invalid or has expired, check whether it is configured correctly. You can query Tokens through the management console ==> Basic Data Management ==> Token Management.
-There is problem of token is not valid or stale, you can check whether the Token is configured correctly, and you can query the Token through the management console.
+**Python version issue**
+After Linkis is upgraded to 1.4.0, the default Spark version is upgraded to 3.x, which is not compatible with python2. Therefore, if you need to use pyspark, you need to make the following modifications.
+1. Map python2 commands to python3
+```
+sudo ln -snf /usr/bin/python3 /usr/bin/python2
+```
+2. Add the following configuration to the Spark engine connector configuration file $LINKIS_HOME/lib/linkis-engineconn-plugins/spark/dist/3.2.1/conf/linkis-engineconn.properties to specify the python installation path
+```
+pyspark.python3.path=/usr/bin/python3
+```
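+
+To confirm the python2 -> python3 mapping took effect (a simple check, not Linkis-specific):
+```
+python2 --version   # should now report a Python 3.x version
+```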
## 3. Install and start
@@ -250,78 +271,67 @@ There is problem of token is not valid or stale, you can check whether the Token
```bash
sh bin/install.sh
-````
+```
-The install.sh script will ask you if you need to initialize the database and import metadata. If you choose to initialize, the table data in the database will be emptied and reinitialized.
+The install.sh script will ask you if you want to initialize the database and import metadata. If you choose to initialize, the table data in the database will be cleared and reinitialized.
-**Empty database must be selected for the first installation**
+**You must choose to clear the database for the first installation**
-:::tip Note
-- If an error occurs, and it is unclear what command to execute to report the error, you can add the -x parameter `sh -x bin/install.sh` to print out the shell script execution process log, which is convenient for locating the problem
-- Permission problem: `mkdir: cannot create directory 'xxxx': Permission denied`, please confirm whether the deployment user has read and write permissions for the path
+:::tip note
+- If an error occurs and it is not clear which command reported it, you can add the -x parameter (`sh -x bin/install.sh`) to print the shell script execution trace, which makes it easier to locate the problem.
+- Permission problem: `mkdir: cannot create directory 'xxxx': Permission denied`, please confirm whether the deployment user has read and write permissions for this path.
:::
The prompt for successful execution is as follows:
```shell script
-`Congratulations! You have installed Linkis 1.0.3 successfully, please use sh /data/Install/linkis/sbin/linkis-start-all.sh to start it!
+`Congratulations! You have installed Linkis xxx successfully, please use sh /data/Install/linkis/sbin/linkis-start-all.sh to start it!
Your default account password is [hadoop/5e8e312b4]`
-````
+```
### 3.2 Add mysql driver package
-:::caution note
-Because the mysql-connector-java driver is under the GPL2.0 protocol, it does not meet the license policy of the Apache open source protocol. Therefore, starting from version 1.0.3, the official deployment package of the Apache version provided by default is no mysql-connector-java-x.x.x.jar (**If it is installed through the integrated family bucket material package, you do not need to add it manually**), you need to add dependencies to the corresponding lib package by yourself during installation and deployment. You can check whether it exists in the corresponding directory, if not, you need to add
+:::caution Caution
+Because the mysql-connector-java driver is licensed under GPL 2.0, it does not comply with the Apache license policy. Therefore, starting from version 1.0.3, the official Apache release package does not include the mysql-connector-java-x.x.x.jar dependency by default (**if you install via the integrated all-in-one bundle package, you do not need to add it manually**); you need to add it to the corresponding lib directories yourself during installation and deployment. You can check whether it exists in the corresponding directory; if not, add it.
:::
-To download the mysql driver, take version 8.0.28 as an example: [download link](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar)
+Download the mysql driver, taking version 8.0.28 as an example: [download link](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar)
Copy the mysql driver package to the lib package
-````
+```
cp mysql-connector-java-8.0.28.jar ${LINKIS_HOME}/lib/linkis-spring-cloud-services/linkis-mg-gateway/
cp mysql-connector-java-8.0.28.jar ${LINKIS_HOME}/lib/linkis-commons/public-module/
-````
-
-### 3.3 Add postgresql driver package (Optional)
-
-If you choose to use postgresql as the business database, you need to manually add the postgresql driver.
-
-To download the postgresql driver, take version 42.5.4 as an example: [download link](https://repo1.maven.org/maven2/org/postgresql/postgresql/42.5.4/postgresql-42.5.4.jar)
-
+```
+### 3.3 Add postgresql driver package (optional)
+If you choose to use postgresql as the business database, you need to manually add the postgresql driver.
+Download the postgresql driver, taking version 42.5.4 as an example: [download link](https://repo1.maven.org/maven2/org/postgresql/postgresql/42.5.4/postgresql-42.5.4.jar)
Copy the postgresql driver package to the lib package
-````
+```
cp postgresql-42.5.4.jar ${LINKIS_HOME}/lib/linkis-spring-cloud-services/linkis-mg-gateway/
cp postgresql-42.5.4.jar ${LINKIS_HOME}/lib/linkis-commons/public-module/
-````
-
-### 3.4 Configuration Adjustment (Optional)
+```
+### 3.4 Configuration adjustment (optional)
> The following operations are related to the dependent environment. According to the actual situation, determine whether the operation is required
-#### 3.4.1 kerberos authentication
-If the hive cluster used has kerberos mode authentication enabled, modify the configuration `${LINKIS_HOME}/conf/linkis.properties` (<=1.1.3) file
-```shell script
-#Append the following configuration
-echo "wds.linkis.keytab.enable=true" >> linkis.properties
-````
-#### 3.4.2 Yarn Authentication
+#### 3.4.1 Yarn authentication
-When executing spark tasks, you need to use the ResourceManager of yarn, which is controlled by the configuration item `YARN_RESTFUL_URL=http://xx.xx.xx.xx:8088 `.
-During installation and deployment, the `YARN_RESTFUL_URL=http://xx.xx.xx.xx:8088` information will be updated to the database table `linkis_cg_rm_external_resource_provider`. By default, access to yarn resources does not require permission verification.
-If password authentication is enabled in yarn's ResourceManager, please modify the yarn data information generated in the database table `linkis_cg_rm_external_resource_provider` after installation and deployment.
-For details, please refer to [Check whether the yarn address is configured correctly] (#811-Check whether the yarn address is configured correctly)
+When executing spark tasks, you need to use the ResourceManager of yarn, which is controlled by the configuration item `YARN_RESTFUL_URL=http://xx.xx.xx.xx:8088`.
+During installation and deployment, the `YARN_RESTFUL_URL=http://xx.xx.xx.xx:8088` information is written to the database table `linkis_cg_rm_external_resource_provider`. By default, access to yarn resources does not require permission verification.
+If yarn's ResourceManager has password authentication enabled, modify the yarn information generated in the database table `linkis_cg_rm_external_resource_provider` after installation and deployment.
+For details, refer to [Check whether the yarn address is configured correctly](#811-check-whether-the-yarn-address-is-configured-correctly).
#### 3.4.2 session
-If you are upgrading to Linkis. Deploy DSS or other projects at the same time, but the dependent linkis version introduced in other software is <1.1.1 (mainly in the lib package, the linkis-module-x.x.x.jar package of the dependent Linkis is <1.1.1), you need to modify the linkis located in ` ${LINKIS_HOME}/conf/linkis.properties` file
+If you are upgrading Linkis and deploying DSS or other projects at the same time, but the linkis version introduced by the other software is < 1.1.1 (mainly, the linkis-module-x.x.x.jar of Linkis in its lib package is < 1.1.1), you need to modify the `${LINKIS_HOME}/conf/linkis.properties` file.
```shell
echo "wds.linkis.session.ticket.key=bdp-user-ticket-id" >> linkis.properties
-````
+```
-#### S3 mode (optional)
-> Currently, it is possible to store engine execution logs and results to S3 in Linkis.
+#### 3.4.3 S3 mode
+> Currently supports storing engine execution logs and results to the S3 file system
>
-> Note: Linkis has not adapted permissions for S3, so it is not possible to grant authorization for it.
+> Note: linkis has not adapted its permission model to S3, so it cannot perform authorization operations on S3 data
-`vim linkis.properties`
+`vim $LINKIS_HOME/conf/linkis.properties`
```shell script
# s3 file system
linkis.storage.s3.access.key=xxx
@@ -331,7 +341,7 @@ linkis.storage.s3.region=xxx
linkis.storage.s3.bucket=xxx
```
-`vim linkis-cg-entrance.properties`
+`vim $LINKIS_HOME/conf/linkis-cg-entrance.properties`
```shell script
wds.linkis.entrance.config.log.path=s3:///linkis/logs
wds.linkis.resultSet.store.path=s3:///linkis/results
@@ -340,34 +350,33 @@ wds.linkis.resultSet.store.path=s3:///linkis/results
### 3.5 Start the service
```shell script
sh sbin/linkis-start-all.sh
-````
+```
-### 3.6 Modification of post-installation configuration
-After the installation is complete, if you need to modify the configuration (because of port conflicts or some configuration problems, you need to adjust the configuration), you can re-execute the installation, or modify the configuration `${LINKIS_HOME}/conf/*properties` file of the corresponding service, Restart the corresponding service, such as: `sh sbin/linkis-daemon.sh start ps-publicservice`
+### 3.6 Modification of configuration after installation
+After the installation is complete, if you need to modify the configuration (for example, to adjust for port conflicts or other configuration problems), you can re-run the installation, or edit the `${LINKIS_HOME}/conf/*.properties` file of the corresponding service and restart that service, for example: `sh sbin/linkis-daemon.sh start ps-publicservice`.
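+
+A minimal sketch (the service name is illustrative) of restarting a single microservice after editing its properties file:
+```shell script
+# restart one service so the changed configuration takes effect
+sh sbin/linkis-daemon.sh restart cg-entrance
+```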
### 3.7 Check whether the service starts normally
Visit the eureka service page (http://eurekaip:20303),
-The Linkis will start 6 microservices by default, and the linkis-cg-engineconn service in the figure below will be started only for running tasks
+By default, 6 Linkis microservices will be started, and the linkis-cg-engineconn service in the figure below will only be started when a task is running.
![Linkis1.0_Eureka](./images/eureka.png)
```shell script
-LINKIS-CG-ENGINECONNMANAGER Engine Management Services
-LINKIS-CG-ENTRANCE Computing Governance Entry Service
-LINKIS-CG-LINKISMANAGER Computing Governance Management Service
-LINKIS-MG-EUREKA Microservice registry service
-LINKIS-MG-GATEWAY gateway service
+LINKIS-CG-ENGINECONNMANAGER Engine Management Service
+LINKIS-CG-ENTRANCE Computing Governance Entrance Service
+LINKIS-CG-LINKISMANAGER Computing Governance Management Service
+LINKIS-MG-EUREKA Microservice Registry Service
+LINKIS-MG-GATEWAY Gateway Service
LINKIS-PS-PUBLICSERVICE Public Service
-````
-
-Note: LINKIS-PS-CS, LINKIS-PS-DATA-SOURCE-MANAGER、LINKIS-PS-METADATAMANAGER services have been merged into LINKIS-PS-PUBLICSERVICE in Linkis 1.3.1 and merge LINKIS-CG-ENGINEPLUGIN services into LINKIS-CG-LINKISMANAGER.
+```
-If any services are not started, you can view detailed exception logs in the corresponding log/${service name}.log file.
+Note: In Linkis 1.3.1, the LINKIS-PS-CS, LINKIS-PS-DATA-SOURCE-MANAGER and LINKIS-PS-METADATAMANAGER services have been merged into LINKIS-PS-PUBLICSERVICE, and the LINKIS-CG-ENGINEPLUGIN service has been merged into LINKIS-CG-LINKISMANAGER.
+If any service is not started, you can check the detailed exception log in the corresponding log/${service name}.log file.
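+
+For example (the service name is illustrative), to inspect a service that failed to start:
+```shell script
+# show the tail of the corresponding service log
+tail -n 200 log/linkis-cg-linkismanager.log
+```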
### 3.8 Configure Token
-The original default Token of Linkis is fixed and the length is too short, which has security risks. Therefore, Linkis 1.3.2 changes the original fixed Token to random generation and increases the Token length.
+Linkis's original default Token is fixed and the length is too short, posing security risks. Therefore, Linkis 1.3.2 changes the original fixed Token to random generation, and increases the length of the Token.
New Token format: application abbreviation - 32-bit random number, such as BML-928a721518014ba4a28735ec2a0da799.
@@ -387,7 +396,7 @@ Log in to the management console -> basic data management -> token management
When the Linkis service itself uses Token, the Token in the configuration file must be consistent with the Token in the database. Match by applying the short name prefix.
-$LINKIS_HOME/conf/linkis.properites file Token configuration
+$LINKIS_HOME/conf/linkis.properties file Token configuration
```
linkis.configuration.linkisclient.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
@@ -411,37 +420,37 @@ wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
When other applications use Token, they need to modify their Token configuration to be consistent with the Token in the database.
-## 4. Install the web frontend
+## 4. Install the web front end
The web side uses nginx as the static resource server, and the access request process is:
-`Linkis console request->nginx ip:port->linkis-gateway ip:port->other services`
+`Linkis management console request->nginx ip:port->linkis-gateway ip:port->other services`
-### 4.1 Download the front-end installation package and unzip it
+### 4.1 Download the front-end installation package and decompress it
```shell script
-tar -xvf apache-linkis-x.x.x-incubating-web-bin.tar.gz
-````
+tar -xvf apache-linkis-xxx-web-bin.tar.gz
+```
-### 4.2 Modify the configuration config.sh
+### 4.2 Modify configuration config.sh
```shell script
-#Access the port of the console
+#Access the port of the management console
linkis_port="8188"
-#linkis-mg-gatewayService Address
+#linkis-mg-gateway service address
linkis_url="http://localhost:9020"
-````
+```
### 4.3 Execute the deployment script
```shell script
-# nginx requires sudo privileges to install
+# nginx needs sudo permission to install
sudo sh install.sh
-````
-After installation, linkis' nginx configuration file is by default in `/etc/nginx/conf.d/linkis.conf`
-nginx log files are in `/var/log/nginx/access.log` and `/var/log/nginx/error.log`
-An example of the nginx configuration file of the generated linkis console is as follows:
-````nginx
+```
+After installation, the nginx configuration file of linkis is in `/etc/nginx/conf.d/linkis.conf` by default
+The log files of nginx are in `/var/log/nginx/access.log` and `/var/log/nginx/error.log`
+An example of the generated nginx configuration file of the linkis management console is as follows:
+```nginx
server {
- listen 8188;# access port If the port is occupied, it needs to be modified
+ listen 8188;# If the access port is occupied, it needs to be modified
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
@@ -450,14 +459,14 @@ An example of the nginx configuration file of the generated linkis console is as
index index.html index.html;
}
location /ws {
- proxy_pass http://localhost:9020;#Address of backend Linkis
+ proxy_pass http://localhost:9020;#The address of the backend Linkis
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection upgrade;
}
location /api {
- proxy_pass http://localhost:9020; #Address of backend Linkis
+ proxy_pass http://localhost:9020; #The address of the backend Linkis
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header x_real_ipP $remote_addr;
@@ -479,32 +488,32 @@ An example of the nginx configuration file of the generated linkis console is as
root /usr/share/nginx/html;
}
}
-````
+```
If you need to modify the port or static resource directory, etc., please modify the `/etc/nginx/conf.d/linkis.conf` file and execute the `sudo nginx -s reload` command
-:::caution note
-- At present, the visualis function is not integrated. During the installation process, if you are prompted that the installation of linkis/visualis fails, you can ignore it
-- Check whether nginx starts normally: check whether the nginx process exists `ps -ef |grep nginx`
-- Check if nginx is configured correctly `sudo nginx -T`
-- If the port is occupied, you can modify the service port `/etc/nginx/conf.d/linkis.conf`listen port value started by nginx, save it and restart it
-- If interface 502 appears in the access management console, or `Unexpected token < in JSON at position 0` is abnormal, please confirm whether linkis-mg-gateway starts normally. If it starts normally, check the linkis-mg-gateway configured in the nginx configuration file Is the service address correct?
+:::caution Caution
+- At present, the visualis function is not integrated. During the installation process, if you are prompted to fail to install linkis/visualis, you can ignore it.
+- Check whether nginx starts normally: check whether the nginx process exists `ps -ef |grep nginx`.
+- Check whether the configuration of nginx is correct `sudo nginx -T`.
+- If the port is occupied, you can modify the service port `/etc/nginx/conf.d/linkis.conf`listen port value started by nginx, save and restart.
+- If there is an interface 502 when accessing the management console, or `Unexpected token < in JSON at position 0` is abnormal, please confirm whether the linkis-mg-gateway is started normally. If it is started normally, check the linkis-mg-gateway configured in the nginx configuration file Whether the service address is correct.
:::
-### 4.4 Login to the console
+### 4.4 Log in to the management console
Browser login `http://xx.xx.xx.xx:8188/#/login`
-Username/password can be found in `${LINKIS_HOME}/conf/linkis-mg-gateway.properties`
+The username/password can be found in `${LINKIS_HOME}/conf/linkis-mg-gateway.properties`.
```shell script
-wds.linkis.admin.user= #User
-wds.linkis.admin.password= #Password
+wds.linkis.admin.user= #user
+wds.linkis.admin.password= #password
-````
+```
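+
+A convenient way to print both values (a plain grep, assuming LINKIS_HOME points at the installation directory):
+```shell script
+grep "wds.linkis.admin" ${LINKIS_HOME}/conf/linkis-mg-gateway.properties
+```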
-## 5. Verify basic functionality
->Verify the corresponding engine tasks according to actual needs
+## 5. Verify basic functions
+> Verify the corresponding engine tasks according to actual needs
-````
-#The version number of the engineType of the engine must match the actual version. The following example is the default version number
+```
+#The version number appended to the engine's engineType must match the actual version. The following examples use the default version numbers
#shell engine tasks
sh bin/linkis-cli -submitUser hadoop -engineType shell-1 -codeType shell -code "whoami"
@@ -514,44 +523,49 @@ sh bin/linkis-cli -submitUser hadoop -engineType hive-3.1.3 -codeType hql -code
#spark engine tasks
sh bin/linkis-cli -submitUser hadoop -engineType spark-3.2.1 -codeType sql -code "show tables"
-#python engine task
+#python engine tasks
sh bin/linkis-cli -submitUser hadoop -engineType python-python2 -codeType python -code 'print("hello, world!")'
-````
-If the verification fails, please refer to [Step 8] for troubleshooting
+```
+If the verification fails, please refer to [Step 8] for troubleshooting.
-## 6 Installation of development tool IDE (Scriptis) (optional)
-After installing the Scripti tool, it can support writing SQL, Pyspark, HiveQL and other scripts online on the web page,For detailed instructions, see [Installation and Deployment of Tool Scriptis](integrated/install-scriptis)
+## 6. Installation of development tool IDE (Scriptis) (optional)
+After installing the Scriptis tool, you can write SQL, Pyspark, HiveQL and other scripts online on the web page. For detailed instructions, see [Tool Scriptis Installation and Deployment](integrated/install-scriptis).
-## 7. Supported Engines
+## 7. Supported engines
-### 7.1 Engine Adaptation List
+### 7.1 Engine adaptation list
-Please note: The separate installation package of Linkis only contains four engines by default: Python/Shell/Hive/Spark. If there are other engines (such as jdbc/flink/sqoop and other engines) usage scenarios, you can install them manually. For details, please refer to [ EngineConnPlugin Engine Plugin Installation Documentation](install-engineconn).
+Please note: the separate installation package of Linkis only includes Python, Shell, Hive, and Spark by default. If there are other engine usage scenarios (such as jdbc/flink/sqoop, etc.), you can install them manually. For details, please refer to [EngineConnPlugin Engine Plugin installation documentation](install-engineconn).
-The list of supported engines that have been adapted in this version is as follows:
+The list of supported engines adapted to this version is as follows:
-| Engine type | Adaptation | Does the official installation package contain |
+| Engine type | Adaptation | Included in the official installation package |
|---------------|-------------------|------|
-| Python | >=1.0.0 Adapted | Included |
-| Shell | >=1.0.0 Adapted | Included |
-| Hive | >=1.0.0 Adapted | Included |
-| Spark | >=1.0.0 Adapted | Included |
-| Pipeline | >=1.0.0 Adapted | **Excludes** |
-| JDBC | >=1.0.0 Adapted | **Excludes** |
-| Flink | >=1.0.0 already adapted | **Not included** |
-| OpenLooKeng | >=1.1.1 has been adapted | **Not included** |
-| Sqoop | >=1.1.2 Adapted | **Excludes** |
+| Python | >=1.0.0 Adapted | Included |
+| Shell | >=1.0.0 Adapted | Included |
+| Hive | >=1.0.0 Adapted | Included |
+| Spark | >=1.0.0 Adapted | Included |
+| Pipeline | >=1.0.0 Adapted | **Not included** |
+| JDBC | >=1.0.0 Adapted | **Not included** |
+| Flink | >=1.0.0 Adapted | **Not included** |
+| openLooKeng | >=1.1.1 Adapted | **Not included** |
+| Sqoop | >=1.1.2 Adapted | **Not included** |
+| Trino | >=1.3.2 Adapted | **Not included** |
+| Presto | >=1.3.2 Adapted | **Not included** |
+| Elasticsearch | >=1.3.2 Adapted | **Not included** |
+| Seatunnel | >=1.3.2 Adapted | **Not included** |
+| Impala | >=1.4.0 Adapted | **Not included** |
-### 7.2 View the deployed engine
+### 7.2 View deployed engines
#### Method 1: View the engine lib package directory
-````
+```
$ tree linkis-package/lib/linkis-engineconn-plugins/ -L 3
linkis-package/lib/linkis-engineconn-plugins/
-├── hive
+├── hive
│ ├── dist
│ │ └── 3.1.3 #version is 3.1.3 engineType is hive-3.1.3
│ └── plugin
@@ -571,27 +585,28 @@ linkis-package/lib/linkis-engineconn-plugins/
│ └── 3.2.1
└── plugin
└── 3.2.1
-````
+```
#### Method 2: View the database table of linkis
```shell script
select * from linkis_cg_engine_conn_plugin_bml_resources
-````
+```
+
+## 8. Troubleshooting guidelines for common abnormal problems
+### 8.1. Yarn queue check
-## 8. Troubleshooting Guidelines for Common Abnormal Problems
-### 8.1. Yarn Queue Check
+>If you need to use the spark/hive/flink engine
->If you need to use spark/hive/flink engine
+After logging in, check whether the yarn queue resources can be displayed normally (click the button in the lower right corner of the page) (you need to install the front end first).
-After logging in, check whether the yarn queue resources can be displayed normally (click the button in the lower right corner of the page) (the front end needs to be installed first)
-Normally as shown below:
+Normal as shown in the figure below:
![yarn-normal](images/yarn-normal.png)
If it cannot be displayed: You can adjust it according to the following guidelines
#### 8.1.1 Check whether the yarn address is configured correctly
-Database table `linkis_cg_rm_external_resource_provider` `
+Database table `linkis_cg_rm_external_resource_provider`
Insert yarn data information
```sql
INSERT INTO `linkis_cg_rm_external_resource_provider`
@@ -600,167 +615,172 @@ INSERT INTO `linkis_cg_rm_external_resource_provider`
'{\r\n"rmWebAddress": "http://xx.xx.xx.xx:8088",\r\n"hadoopVersion": "3.3.4",\r\n"authorEnable":false, \r\n"user":"hadoop",\r\n"pwd":"123456"\r\n}'
);
-config field properties
+Description of the config field attributes
-"rmWebAddress": "http://xx.xx.xx.xx:8088", #need to bring http and port
+"rmWebAddress": "http://xx.xx.xx.xx:8088", #Need to bring http and port
"hadoopVersion": "3.3.4",
"authorEnable":true, //Whether authentication is required You can verify the username and password by visiting http://xx.xx.xx.xx:8088 in the browser
-"user":"user",//username
-"pwd":"pwd"//Password
+"user": "user", //username
+"pwd": "pwd"//password
-````
-After the update, because the cache is used in the program, if you want to take effect immediately, you need to restart the linkis-cg-linkismanager service
+```
+After the update, because the program caches this information, you need to restart the linkis-cg-linkismanager service if you want it to take effect immediately.
```shell script
sh sbin/linkis-daemon.sh restart cg-linkismanager
-````
+```
#### 8.1.2 Check whether the yarn queue exists
-Exception information: `desc: queue ide is not exists in YARN.` indicates that the configured yarn queue does not exist and needs to be adjusted
+Exception information: `desc: queue ide is not exists in YARN.` indicates that the configured yarn queue does not exist and needs to be adjusted.
-Modification method: `linkis management console/parameter configuration> global settings>yarn queue name [wds.linkis.rm.yarnqueue]`, modify a yarn queue that can be used, and the yarn queue to be used can be found at `rmWebAddress:http:// xx.xx.xx.xx:8088/cluster/scheduler`
+Modification method: `linkis management console / parameter configuration > global settings > yarn queue name [wds.linkis.rm.yarnqueue]`, change it to a yarn queue that can be used; the available yarn queues can be viewed at `rmWebAddress: http://xx.xx.xx.xx:8088/cluster/scheduler`.
View available yarn queues
- View yarn queue address: http://ip:8888/cluster/scheduler
-### 8.2 Check whether the engine material resource is uploaded successfully
+### 8.2 Check whether the engine material resources are uploaded successfully
```sql
-#Login to the linkis database
+#Log in to the linkis database
select * from linkis_cg_engine_conn_plugin_bml_resources
-````
+```
-The normal is as follows:
+Normally as follows:
![bml](images/bml.png)
-Check whether the material record of the engine exists (if there is an update, check whether the update time is correct).
+Check whether the material record of the engine exists (if there is an update, check whether the update time is correct)
-- If it does not exist or is not updated, first try to manually refresh the material resource (for details, see [Engine Material Resource Refresh](install-engineconn#23-Engine Refresh)).
-- Check the specific reasons for material failure through `log/linkis-cg-engineplugin.log` log. In many cases, it may be caused by the lack of permissions in the hdfs directory
-- Check whether the gateway address configuration is correct. The configuration item `wds.linkis.gateway.url` of `conf/linkis.properties`
+- If it does not exist or is not updated, first try to manually refresh the material resource (see [Engine Material Resource Refresh](install-engineconn#23-engine-refresh) for details).
+- Check `log/linkis-cg-linkismanager.log` for the specific reason the material upload failed; in many cases it is caused by missing permissions on the hdfs directory.
+- Check whether the gateway address is configured correctly: the configuration item `wds.linkis.gateway.url` in `conf/linkis.properties`.
-The material resources of the engine are uploaded to the hdfs directory by default as `/apps-data/${deployUser}/bml`
+By default, the engine's material resources are uploaded to the hdfs directory `/apps-data/${deployUser}/bml`.
```shell script
hdfs dfs -ls /apps-data/hadoop/bml
#If there is no such directory, please manually create the directory and grant ${deployUser} read and write permissions
hdfs dfs -mkdir /apps-data
-hdfs dfs -chown hadoop:hadoop/apps-data
-````
+hdfs dfs -chown hadoop:hadoop /apps-data
+```
### 8.3 Login password problem
-By default, linkis uses a static user and password. The static user is the deployment user. The static password will randomly generate a password string during deployment and store it in
-`${LINKIS_HOME}/conf/linkis-mg-gateway.properties` (>=1.0.3 version)
+Linkis uses a static user and password by default. The static user is the deployment user; the static password is a random string generated during deployment and stored in
+
+`${LINKIS_HOME}/conf/linkis-mg-gateway.properties` (>=version 1.0.3).
### 8.4 version compatibility issues
-The engine supported by linkis by default, the compatibility with dss can be viewed [this document](https://github.com/apache/linkis/blob/master/README.md)
+The engines supported by linkis by default, and their compatibility with dss, can be viewed in [this document](https://github.com/apache/linkis/blob/master/README.md).
-### 8.5 How to locate the server exception log
+### 8.5 How to locate server-side exception logs
-Linkis has many microservices. If you are unfamiliar with the system, sometimes you cannot locate the specific module that has an exception. You can search through the global log.
+Linkis has many microservices. If you are not familiar with the system, sometimes you cannot locate the specific module that has an exception. You can search through the global log.
```shell script
-tail -f log/* |grep -5n exception (or tail -f log/* |grep -5n ERROR)
-less log/* |grep -5n exception (or less log/* |grep -5n ERROR)
-````
+tail -f log/* |grep -5n exception (or tail -f log/* |grep -5n ERROR)
+less log/* |grep -5n exception (or less log/* |grep -5n ERROR)
+```
-### 8.6 Exception troubleshooting of execution engine tasks
+### 8.6 Execution engine task exception troubleshooting
-** step1: Find the startup deployment directory of the engine **
+**step1: Find the startup deployment directory of the engine**
-- Method 1: If it is displayed in the execution log, you can view it on the management console as shown below:
- ![engine-log](images/engine-log.png)
-- Method 2: If it is not found in method 1, you can find the parameter `wds.linkis.engineconn.root.dir` configured in `conf/linkis-cg-engineconnmanager.properties`, which is the directory where the engine is started and deployed. Subdirectories are segregated by the user executing the engine
+- Method 1: If it is displayed in the execution log, you can view it on the management console as shown below:
+![engine-log](images/engine-log.png)
+- Method 2: If not found in method 1, you can find the `wds.linkis.engineconn.root.dir` parameter configured in `conf/linkis-cg-engineconnmanager.properties`; this value is the directory where engines are started and deployed. Subdirectories are separated by the user executing the engine
```shell script
-# If you don't know the taskid, you can select it after sorting by time ll -rt /appcom/tmp/${executed user}/${date}/${engine}/
-cd /appcom/tmp/${executed user}/${date}/${engine}/${taskId}
-````
+# If you don't know the taskid, you can list by time and pick one: ll -rt /appcom/tmp/${executed user}/${date}/${engine}/
+cd /appcom/tmp/${executed user}/${date}/${engine}/${taskId}
+```
The directory is roughly as follows
```shell script
-conf -> /appcom/tmp/engineConnPublickDir/6a09d5fb-81dd-41af-a58b-9cb5d5d81b5a/v000002/conf #engine configuration file
-engineConnExec.sh #Generated engine startup script
-lib -> /appcom/tmp/engineConnPublickDir/45bf0e6b-0fa5-47da-9532-c2a9f3ec764d/v000003/lib #Engine dependent packages
-logs #Engine startup and execution related logs
-````
+conf -> /appcom/tmp/engineConnPublicDir/6a09d5fb-81dd-41af-a58b-9cb5d5d81b5a/v000002/conf #engine configuration file
+engineConnExec.sh #generated engine startup script
+lib -> /appcom/tmp/engineConnPublicDir/45bf0e6b-0fa5-47da-9532-c2a9f3ec764d/v000003/lib #engine-dependent packages
+logs #Related logs of engine startup execution
+```
-** step2: View the log of the engine **
+**step2: Check the log of the engine**
```shell script
-less logs/stdout
-````
-
-**step3: try to execute the script manually (if needed)**
-Debugging can be done by trying to execute the script manually
-````
-sh -x engineConnExec.sh
-````
-
-### 8.7 How to modify the port of the registry eureka
-Sometimes when the eureka port is occupied by other services and the default eureka port cannot be used, the eureka port needs to be modified. Here, the modification of the eureka port is divided into two situations: before the installation is performed and after the installation is performed.
-1. Modify the eureka port of the registry before performing the installation
-````
-1. Enter the decompression directory of apache-linkis-x.x.x-incubating-bin.tar.gz
+less logs/stdout
+```
+
+**step3: Try to execute the script manually (if needed)**
+You can debug by trying to execute the script manually
+```
+sh -x engineConnExec.sh
+```
+
+### 8.7 How to modify the port of the registration center eureka
+Sometimes when the eureka port is occupied by other services and the default eureka port cannot be used, it is necessary to modify the eureka port. Here, the modification of the eureka port is divided into two cases: before the installation and after the installation.
+
+1. Modify the eureka port of the registration center before performing the installation
+```
+1. Enter the decompression directory of apache-linkis-xxx-bin.tar.gz
2. Execute vi deploy-config/linkis-env.sh
3. Modify EUREKA_PORT=20303 to EUREKA_PORT=port number
-````
-2. Modify the eureka port of the registry after the installation is performed
-````
-1. Go to the ${LINKIS_HOME}/conf directory
+```
+2. Modify the registry eureka port after installation
+```
+1. Enter the ${LINKIS_HOME}/conf directory
-2. Execute grep -r 20303 ./* , the query result is as follows:
+2. Execute grep -r 20303 ./* , the query results are as follows:
./application-eureka.yml: port: 20303
./application-eureka.yml: defaultZone: http://ip:20303/eureka/
./application-linkis.yml: defaultZone: http://ip:20303/eureka/
./linkis-env.sh:EUREKA_PORT=20303
./linkis.properties:wds.linkis.eureka.defaultZone=http://ip:20303/eureka/
-3. Change the port in the corresponding location to the new port, and restart all services sh restart sbin/linkis-start-all.sh
-````
+3. Change the port at the corresponding location to the new port, and restart all services: sh sbin/linkis-start-all.sh
+```
-### 8.8 Notes on CDH adaptation version
+### 8.8 Notes for CDH adaptation version
-CDH itself is not the official standard hive/spark package used. When adapting, it is best to modify the hive/spark version dependencies in the source code of linkis to recompile and deploy.
-For details, please refer to the CDH adaptation blog post
-[[Linkis1.0 - Installation and Stepping in the CDH5 Environment]](https://mp.weixin.qq.com/s/__QxC1NoLQFwme1yljy-Nw)
-[[DSS1.0.0+Linkis1.0.2——Trial record in CDH5 environment]](https://mp.weixin.qq.com/s/9Pl9P0hizDWbbTBf1yzGJA)
-[[DSS1.0.0 and Linkis1.0.2——Summary of JDBC engine related issues]](https://mp.weixin.qq.com/s/vcFge4BNiEuW-7OC3P-yaw)
-[[DSS1.0.0 and Linkis1.0.2——Summary of Flink engine related issues]](https://mp.weixin.qq.com/s/VxZ16IPMd1CvcrvHFuU4RQ)
+The hive/spark packages used by CDH are not the official standard ones. When adapting, it is best to modify the hive/spark version dependencies in the linkis source code and recompile before deploying.
+For details, please refer to the CDH adaptation blog post
+[[Linkis1.0——Installation and stepping in the CDH5 environment]](https://mp.weixin.qq.com/s/__QxC1NoLQFwme1yljy-Nw)
+[[DSS1.0.0+Linkis1.0.2——Trial record in CDH5 environment]](https://mp.weixin.qq.com/s/9Pl9P0hizDWbbTBf1yzGJA)
+[[DSS1.0.0 and Linkis1.0.2 - Summary of JDBC engine-related issues]](https://mp.weixin.qq.com/s/vcFge4BNiEuW-7OC3P-yaw)
+[[DSS1.0.0 and Linkis1.0.2——Summary of issues related to Flink engine]](https://mp.weixin.qq.com/s/VxZ16IPMd1CvcrvHFuU4RQ)
### 8.9 Debugging of Http interface
-- Method 1 can enable [Login-Free Mode Guide] (/docs/latest/api/login-api/#2 Login-Free Configuration)
-- In method 2 postman, the request header brings the cookie value of the successful login
+- Method 1: enable login-free mode, see the [Login-Free Mode Guide](/docs/latest/api/login-api/#2-login-free-configuration)
+- Method 2: in postman, add the cookie value of a successful login to the request header
The cookie value can be obtained after successful login on the browser side
![bml](images/bml-cookie.png)
```shell script
Cookie: bdp-user-ticket-id=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-````
-- Method 3 Add a static Token to the http request header
+```
+- Method 3: add a static Token to the http request header
Token is configured in conf/linkis.properties
Such as: TEST-AUTH=hadoop,root,user01
```shell script
Token-Code: TEST-AUTH
-Token-User:hadoop
-````
+Token-User: hadoop
+```
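+
+A hedged example of attaching the static Token with curl (the gateway address and API path are placeholders, not a real endpoint):
+```shell script
+curl -H "Token-Code: TEST-AUTH" -H "Token-User: hadoop" "http://<gateway-ip>:<gateway-port>/api/<interface-path>"
+```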
### 8.10 Troubleshooting process for abnormal problems
-First, follow the above steps to check whether the service/environment, etc. are all started normally
-Troubleshoot basic problems according to some of the scenarios listed above
-[QA documentation](https://docs.qq.com/doc/DSGZhdnpMV3lTUUxq) Find out if there is a solution, link: https://docs.qq.com/doc/DSGZhdnpMV3lTUUxq
-See if you can find a solution by searching the content in the issue
-![issues](images/issues.png)
-Through the official website document search, for some problems, you can search for keywords through the official website, such as searching for "deployment". (If 404 appears, please refresh your browser)
+First, check whether the service/environment is started normally according to the above steps, and then check the basic problems according to some scenarios listed above.
+
+Check the [QA document](https://docs.qq.com/doc/DSGZhdnpMV3lTUUxq) for an existing solution, link: https://docs.qq.com/doc/DSGZhdnpMV3lTUUxq
+See if you can find a solution by searching the project issues.
+![issues](images/issues.png)
+Search the official website documentation: for some problems you can search by keyword on the official website, for example "deployment". (If a 404 appears, refresh the browser.)
![search](images/search.png)
+
## 9. How to obtain relevant information
-Linkis official website documents are constantly improving, you can view/keyword search related documents on this official website.
-Related blog post links
-- Linkis technical blog collection https://github.com/apache/linkis/issues/1233
-- Technical blog post on the official account https://mp.weixin.qq.com/mp/homepage?__biz=MzI4MDkxNzUxMg==&hid=1&sn=088cbf2bbed1c80d003c5865bc92ace8&scene=18
-- Official website documentation https://linkis.apache.org/docs/latest/about/introduction
-- bili technology sharing video https://space.bilibili.com/598542776?spm_id_from=333.788.b_765f7570696e666f.2
+The Linkis official website documentation is constantly being improved; you can browse and keyword-search the related documents on the official website.
+
+Related blog posts are linked below.
+- Linkis technical blog collection https://github.com/apache/linkis/issues/1233
+- Technical blog posts on the WeChat public account https://mp.weixin.qq.com/mp/homepage?__biz=MzI4MDkxNzUxMg==&hid=1&sn=088cbf2bbed1c80d003c5865bc92ace8&scene=18
+- Official website documentation https://linkis.apache.org/zh-CN/docs/latest/about/introduction
+- Bilibili technology sharing videos https://space.bilibili.com/598542776?spm_id_from=333.788.b_765f7570696e666f.2
+
diff --git a/docs/feature/other.md b/docs/feature/other.md
index 293c79620cc..aeb42806873 100644
--- a/docs/feature/other.md
+++ b/docs/feature/other.md
@@ -3,28 +3,26 @@ title: Description of other features
sidebar_position: 0.6
---
-## 1. Linkis 1.4.0 other feature upgrade instructions
-
-### 1.1 Do not kill EC when ECM restarts
+## 1. Do not kill EC when ECM restarts
When the ECM restarts, there is an option not to kill the engine, but to take over the existing surviving engine. Makes the Engine Connection Manager (ECM) service stateless.
-### 1.2 Remove json4s dependency
+## 2. Remove json4s dependency
Different versions of spark depend on different json4s versions, which is not conducive to the support of multiple versions of spark. We need to reduce this json4s dependency and remove json4s from linkis.
For example: spark2.4 needs json4s v3.5.3, spark3.2 needs json4s v3.7.0-M11.
-### 1.3 EngineConn module definition depends on engine version
-The version definition of the engine is in `EngineConn` by default. Once the relevant version changes, it needs to be modified in many places. We can put the relevant version definition in the top-level pom file. When compiling a specified engine module, it needs to be compiled in the project root directory, and use `-pl` to compile the specific engine module, for example:
+## 3. EngineConn module definition depends on engine version
+By default, the engine version is defined inside `EngineConn`, so once a version changes it has to be modified in many places. We can instead put the version definitions in the top-level pom file. When compiling a specified engine module, the build needs to run in the project root directory, using `-pl` to select the specific engine module, for example:
```
mvn install package -pl linkis-engineconn-plugins/spark -Dspark.version=3.2.1
```
The version of the engine can be specified by the -D parameter of mvn compilation, such as -Dspark.version=xxx, -Dpresto.version=0.235
-At present, all underlying engine versions have been moved to the top-level pom file. When compiling a specified engine module, it needs to be compiled in the project root directory, and `-pl` is used to compile the specific engine module.
+At present, all underlying engine versions have been moved to the top-level pom file. To compile a specific engine module, run the build from the project root directory and use `-pl` to select that module.
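+As a minimal illustrative sketch combining the two flags described above (the presto module path and version number are example values reusing the parameters mentioned earlier; adjust them to the engine you are actually compiling):
+```
+mvn install package -pl linkis-engineconn-plugins/presto -Dpresto.version=0.235
+```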
-### 1.4 Linkis Main Version Number Modification Instructions
+## 4. Linkis main version number modification instructions
Linkis will no longer be upgraded by minor version after version 1.3.2. The next version will be 1.4.0, and the version number will be 1.5.0, 1.6.0 and so on. When encountering a major defect in a released version that needs to be fixed, it will pull a minor version to fix the defect, such as 1.4.1.
-## 1.5 LInkis code submission main branch description
+## 5. Linkis code submission main branch instructions
The modified code of Linkis 1.3.2 and earlier versions is merged into the dev branch by default. In fact, the development community of Apache Linkis is very active, and new development requirements or repair functions will be submitted to the dev branch, but when users visit the Linkis code base, the master branch is displayed by default. Since we only release a new version every quarter, it seems that the community is not very active from the perspective of the master branch. Therefore, we decided to merge the code submitted by developers into the master branch by default starting from version 1.4.0.
diff --git a/docs/feature/overview.md b/docs/feature/overview.md
index e7a06984fd0..34160261e3a 100644
--- a/docs/feature/overview.md
+++ b/docs/feature/overview.md
@@ -5,12 +5,12 @@ sidebar_position: 0.1
- [Base engine dependencies, compatibility, default version optimization](./base-engine-compatibilty.md)
- [Hive engine connector supports concurrent tasks](./hive-engine-support-concurrent.md)
-- [Support more data sources](./spark-etl.md)
-- [linkis-storage supports S3 file systems (Experimental version)](../deployment/deploy-quick#s3-mode-optional)
-- [Add postgresql database support (Experimental version)](../deployment/deploy-quick#22-configure-database)
-- [Add impala engine support(Experimental version)](../engine-usage/impala.md)
+- [Support more data sources](../user-guide/datasource-manual#31-jdbc-datasource)
- [Spark ETL enhancements](./spark-etl.md)
- [Generate SQL from data source](./datasource-generate-sql.md)
+- [linkis-storage supports S3 file system (experimental version)](../deployment/deploy-quick#343-s3-mode)
+- [Add postgresql database support (experimental version)](../deployment/deploy-quick#22-configuration database information)
+- [Add impala engine support (experimental version)](../engine-usage/impala.md)
- [Other feature description](./other.md)
- [version of Release-Notes](/download/release-notes-1.4.0)
@@ -25,7 +25,8 @@ sidebar_position: 0.1
| mg-eureka | new | eureka.instance.lease-expiration-duration-in-seconds | 12 | eureka waits for the next heartbeat timeout (seconds)|
| EC-shell | Modify | wds.linkis.engineconn.support.parallelism | true | Whether to enable parallel execution of shell tasks |
| EC-shell | Modify | linkis.engineconn.shell.concurrent.limit | 15 | Concurrent number of shell tasks |
-| Entrance | Modify | linkis.entrance.auto.clean.dirty.data.enable | true | Whether to clean dirty data during startup |
+| Entrance | Modify | linkis.entrance.auto.clean.dirty.data.enable | true | Whether to clean dirty data at startup |
+
## Database table changes
diff --git a/docs/feature/spark-etl.md b/docs/feature/spark-etl.md
index 965f0509752..2151cb74cf9 100644
--- a/docs/feature/spark-etl.md
+++ b/docs/feature/spark-etl.md
@@ -61,7 +61,7 @@ sh ./bin/linkis-cli -engineType spark-3.2.1 -codeType data_calc -code "{\"plugin
```
### 4.3 Synchronization json script description of each data source
-### 4.3.1 jdbc
+#### 4.3.1 jdbc
Configuration instructions
```text
@@ -127,7 +127,7 @@ kingbase8-8.6.0.jar
postgresql-42.3.8.jar
```
-### 4.3.2 file
+#### 4.3.2 file
Configuration instructions
@@ -173,7 +173,7 @@ Need to add new jar
spark-excel-2.12.17-3.2.2_2.12-3.2.2_0.18.1.jar
```
-### 4.3.3 redis
+#### 4.3.3 redis
```text
sourceTable: source table,
@@ -225,7 +225,7 @@ commons-pool2-2.8.1.jar
spark-redis_2.12-2.6.0.jar
```
-### 4.3.4 kafka
+#### 4.3.4 kafka
Configuration instructions
```text
@@ -302,7 +302,7 @@ spark-sql-kafka-0-10_2.12-3.2.1.jar
spark-token-provider-kafka-0-10_2.12-3.2.1.jar
```
-###elasticsearch
+#### 4.3.5 elasticsearch
Configuration instructions
```text
@@ -380,7 +380,7 @@ Need to add new jar
elasticsearch-spark-30_2.12-7.17.7.jar
```
-###mongo
+#### 4.3.6 mongo
Configuration instructions
```text
@@ -461,7 +461,7 @@ mongodb-driver-core-3.12.8.jar
mongodb-driver-sync-3.12.8.jar
```
-###delta
+#### 4.3.7 delta
Configuration instructions
```text
@@ -539,7 +539,7 @@ delta-core_2.12-2.0.2.jar
delta-storage-2.0.2.jar
```
-###hudi
+#### 4.3.8 hudi
Configuration instructions
```text
diff --git a/docusaurus.config.js b/docusaurus.config.js
index 2d8a79bb029..75df9a84662 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -41,10 +41,10 @@ const darkCodeTheme = require('prism-react-renderer/themes/dracula');
editUrl: 'https://github.com/apache/linkis-website/edit/dev/',
versions: {
current: {
- path: '1.4.0',
- label: 'Next(1.4.0)'
+ path: '1.5.0',
+ label: 'Next(1.5.0)'
},
- '1.3.2': {
+ '1.4.0': {
path: 'latest',
},
}
@@ -161,12 +161,13 @@ const darkCodeTheme = require('prism-react-renderer/themes/dracula');
label: 'Doc',
position: 'right',
items: [
- {label: '1.3.2', to: '/docs/latest/about/introduction'},
+ {label: '1.4.0', to: '/docs/latest/about/introduction'},
+ {label: '1.3.2', to: '/docs/1.3.2/about/introduction'},
{label: '1.3.1', to: '/docs/1.3.1/about/introduction'},
{label: '1.3.0', to: '/docs/1.3.0/introduction'},
{label: '1.2.0', to: '/docs/1.2.0/introduction'},
{label: '1.1.1', to: '/docs/1.1.1/introduction'},
- {label: 'Next(1.4.0)', to: '/docs/1.4.0/about/introduction'},
+ {label: 'Next(1.5.0)', to: '/docs/1.5.0/about/introduction'},
{label: 'All Version', to: '/versions'}
]
},
@@ -356,7 +357,7 @@ const darkCodeTheme = require('prism-react-renderer/themes/dracula');
createRedirects(existingPath) {
if (existingPath.includes('/latest')) {
return [
- existingPath.replace('/latest', '/1.3.2'),
+ existingPath.replace('/latest', '/1.4.0'),
];
}
return undefined; // Return a false value: no redirect created
diff --git a/download/main.md b/download/main.md
index 9bef235afc9..9ad742cdd0f 100644
--- a/download/main.md
+++ b/download/main.md
@@ -9,6 +9,7 @@ Use the links below to download the Apache Linkis Releases, the latest release i
| Version | Release Date | Source | Binary | Web Binary | Release Notes |
|----------------------------------------------|--------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|
+| 1.4.0 | 2023-08-05 | [[Source](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-src.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-src.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-src.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-bin.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-bin.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-web-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-web-bin.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-web-bin.tar.gz.sha512)] | [Release-Notes](release-notes-1.4.0.md) |
| 1.3.2 | 2023-04-03 | [[Source](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-src.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-src.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-src.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-bin.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-bin.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-web-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-web-bin.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-web-bin.tar.gz.sha512)] | [Release-Notes](release-notes-1.3.2.md) |
| 1.3.1 | 2023-01-18 | [[Source](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-src.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-src.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-src.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-bin.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-bin.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-web-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-web-bin.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-web-bin.tar.gz.sha512)] | [Release-Notes](release-notes-1.3.1.md) |
| 1.3.0 | 2022-10-25 | [[Source](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-src.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-src.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-src.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-bin.tar.gz.asc) ][[SHA512](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-bin.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-web-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-web-bin.tar.gz.asc )] [[SHA512](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-web-bin.tar.gz.sha512)] | [Release-Notes](release-notes-1.3.0.md) |
diff --git a/download/release-notes-1.4.0.md b/download/release-notes-1.4.0.md
index dbfe015fcc9..f8f945a4229 100644
--- a/download/release-notes-1.4.0.md
+++ b/download/release-notes-1.4.0.md
@@ -3,9 +3,9 @@ title: Release Notes 1.4.0
sidebar_position: 0.14
---
-Apache Linkis 1.4.0 includes all [Project Linkis-1.3.4](https://github.com/apache/linkis/projects/26)
+Apache Linkis 1.4.0 includes all [Project Linkis-1.4.0](https://github.com/apache/linkis/projects/26)
-Linkis version 1.4.0 mainly adds the following functions: upgrade the default versions of hadoop, spark, and hive to 3.x; reduce the compatibility issues of different versions of the basic engine; Hive EC supports concurrent submission of tasks; ECM service does not kill EC when restarting; linkis-storage supports S3 and OSS file systems; supports more data sources, such as: tidb, starrocks, Gaussdb, etc.; increases postgresql database support; and enhances Spark ETL functions, supports Excel, Redis, Mongo, Elasticsearch, etc.; The version number upgrade rules and the code submission default merge branch have been modified.
+Linkis 1.4.0 mainly adds the following features: the default versions of the adapted Hadoop, Hive and Spark are upgraded to 3.x (Hadoop 2.7.2 -> 3.3.4, Hive 2.3.3 -> 3.1.3, Spark 2.4.3 -> 3.2.1), and the versions can be controlled via compilation parameters, reducing the difficulty of adapting to non-default base engine versions; Hive EC supports running tasks in concurrent mode, which can greatly reduce machine resource usage and improve the concurrency of hive tasks; the ECM service no longer kills ECs when restarting, providing support for graceful restart; task log and result set storage adds support for the S3 and OSS file system modes; the data source service adds support for more data sources, such as tidb, starrocks, Gaussdb, etc.; services support deployment with a postgresql database (experimental); Impala engine support is added (experimental); and Spark ETL functions are enhanced, supporting Excel, Redis, Mongo, Elasticsearch, etc.
The main functions are as follows:
@@ -14,9 +14,11 @@ The main functions are as follows:
- Reduce the compatibility issues of different versions of the base engine
- Support Hive EC to execute tasks concurrently
- Support not kill EC when restarting ECM service
-- linkis-storage supports S3 and OSS file systems
+
- Support more data sources, such as: tidb, starrocks, Gaussdb, etc.
-- Add postgresql database support
+- Add postgresql database support (experimental)
+- linkis-storage supports S3 and OSS filesystems (experimental)
+- Added Impala engine connector support (experimental)
- Enhancements to Spark ETL
- Version number upgrade rules and submitted code default merge branch modification
@@ -24,18 +26,18 @@ abbreviation:
- ORCHESTRATOR: Linkis Orchestrator
- COMMON: Linkis Common
- ENTRANCE: Linkis Entrance
--EC: Engineconn
+- EC: Engineconn
- ECM: EngineConnManager
- ECP: EngineConnPlugin
- DMS: Data Source Manager Service
- MDS: MetaData Manager Service
-- LM: Linkis Manager
-- PS: Linkis Public Service
-- PE: Linkis Public Enhancement
+- LM: Linkis Manager
+- PS: Linkis Public Service
+- PE: Linkis Public Enhancement
- RPC: Linkis Common RPC
- CG: Linkis Computation Governance
- DEPLOY: Linkis Deployment
-- WEB: Linkis Web
+- WEB: Linkis Web
- GATEWAY: Linkis Gateway
- EP: Engine Plugin
@@ -43,24 +45,16 @@ abbreviation:
## new features
- \[EC][LINKIS-4263](https://github.com/apache/linkis/pull/4263) upgrade the default version of Hadoop, Spark, Hive to 3.x
- \[EC-Hive][LINKIS-4359](https://github.com/apache/linkis/pull/4359) Hive EC supports concurrent tasks
-- \[COMMON][LINKIS-4424](https://github.com/apache/linkis/pull/4424) linkis-storage supports OSS file system
- \[COMMON][LINKIS-4435](https://github.com/apache/linkis/pull/4435) linkis-storage supports S3 file system
-- \[EC-Impala][LINKIS-4458](https://github.com/apache/linkis/pull/4458) Add Impala EC plugin support
-- \[ECM][LINKIS-4452](https://github.com/apache/linkis/pull/4452) Do not kill EC when ECM restarts
-- \[EC][LINKIS-4460](https://github.com/apache/linkis/pull/4460) Linkis supports multiple clusters
+- \[ECM][LINKIS-4452](https://github.com/apache/linkis/pull/4452) Make ECM stateless and do not kill EC when ECM restarts
- \[COMMON][LINKIS-4524](https://github.com/apache/linkis/pull/4524) supports postgresql database
-- \[DMS][LINKIS-4486](https://github.com/apache/linkis/pull/4486) data source model supports Tidb data source
-- \[DMS][LINKIS-4496](https://github.com/apache/linkis/pull/4496) data source module supports Starrocks data source
-- \[DMS][LINKIS-4513](https://github.com/apache/linkis/pull/4513) data source model supports Gaussdb data source
-- \[DMS][LINKIS-](https://github.com/apache/linkis/pull/4581) data source model supports OceanBase data source
-- \[EC-Spark][LINKIS-4568](https://github.com/apache/linkis/pull/4568) Spark JDBC supports dm and kingbase databases
+- \[DMS][LINKIS-4486](https://github.com/apache/linkis/pull/4486) supports Tidb data source
+- \[EC-Spark][LINKIS-4568](https://github.com/apache/linkis/pull/4568) Spark JDBC supports dm database
- \[EC-Spark][LINKIS-4539](https://github.com/apache/linkis/pull/4539) Spark etl supports excel
- \[EC-Spark][LINKIS-4534](https://github.com/apache/linkis/pull/4534) Spark etl supports redis
-- \[EC-Spark][LINKIS-4564](https://github.com/apache/linkis/pull/4564) Spark etl supports RocketMQ
- \[EC-Spark][LINKIS-4560](https://github.com/apache/linkis/pull/4560) Spark etl supports mongo and es
-- \[EC-Spark][LINKIS-4569](https://github.com/apache/linkis/pull/4569) Spark etl supports solr
- \[EC-Spark][LINKIS-4563](https://github.com/apache/linkis/pull/4563) Spark etl supports kafka
-- \[EC-Spark][LINKIS-4538](https://github.com/apache/linkis/pull/4538) Spark etl supports data lake
+- \[EC-Spark][LINKIS-4538](https://github.com/apache/linkis/pull/4538) Spark etl supports data lake (hudi, delta)
## Enhancement points
@@ -68,6 +62,8 @@ abbreviation:
- \[COMMON][LINKIS-4425](https://github.com/apache/linkis/pull/4425) code optimization, delete useless code
- \[COMMON][LINKIS-4368](https://github.com/apache/linkis/pull/4368) code optimization, remove json4s dependency
- \[COMMON][LINKIS-4357](https://github.com/apache/linkis/pull/4357) file upload interface optimization
+- \[COMMON][LINKIS-4678](https://github.com/apache/linkis/pull/4678) Optimize the Linkis JDBC Driver to support connecting to different types of engines and tasks
+- \[COMMON][LINKIS-4554](https://github.com/apache/linkis/pull/4554) Add task trace logging to make it easier to locate problems by the unique task ID
- \[ECM][LINKIS-4449](https://github.com/apache/linkis/pull/4449) ECM code optimization
- \[EC][LINKIS-4341](https://github.com/apache/linkis/pull/4341) Optimize the code logic of CustomerDelimitedJSONSerDe
- \[EC-Openlookeng][LINKIS-](https://github.com/apache/linkis/pull/4474) Openlookeng EC code conversion to Java
@@ -89,15 +85,14 @@ abbreviation:
## Repair function
- \[EC-Hive][LINKIS-4246](https://github.com/apache/linkis/pull/4246) The Hive engine version number supports hyphens, such as hive3.1.2-cdh5.12.0
- \[COMMON][LINKIS-4438](https://github.com/apache/linkis/pull/4438) fixed nohup startup error
-- \[EC][LINKIS-4429](https://github.com/apache/linkis/pull/4429) fix CPU average load calculation bug
+- \[EC][LINKIS-4429](https://github.com/apache/linkis/pull/4429) Fix CPU average load calculation bug
- \[PE][LINKIS-4457](https://github.com/apache/linkis/pull/4457) fix parameter validation issue configured by admin console
- \[DMS][LINKIS-4500](https://github.com/apache/linkis/pull/4500) Fixed type conversion failure between client and data source
- \[COMMON][LINKIS-4480](https://github.com/apache/linkis/pull/4480) fixed build default configuration file with jdk17
- \[CG][LINKIS-4663](https://github.com/apache/linkis/pull/4663) Fix the problem that engine reuse may throw NPE
-- \[LM][LINKIS-4652](https://github.com/apache/linkis/pull/4652) fixed the problem of creating engine node throwing NPE
-- \[][LINKIS-](https://github.com/apache/linkis/pull/)
-- \[][LINKIS-](https://github.com/apache/linkis/pull/)
+- \[LM][LINKIS-4652](https://github.com/apache/linkis/pull/4652) fixed the problem that creating engine node throws NPE
## Acknowledgments
-The release of Apache Linkis 1.4.0 is inseparable from the contributors of the Linkis community, thanks to all community contributors, casionone,MrFengqin,zhangwejun,Zhao,ahaoyao,duhanmin,guoshupei,shixiutao,CharlieYan24,peacewong,GuoPhilipse,aiceflower,waynecookie,jacktao007,chenghuichen,ws00428637,ChengJie1053,dependabot,jackxu2011,sjgllgh,rarexixi,pjfanning,v-kkhuang,binbinCheng,stdnt-xiao,mayinrain.
\ No newline at end of file
+The release of Apache Linkis 1.4.0 is inseparable from the contributors of the Linkis community. Thanks to all community contributors, including but not limited to the following Contributors (in no particular order):
+casionone,MrFengqin,zhangwejun,Zhao,ahaoyao,duhanmin,guoshupei,shixiutao,CharlieYan24,peacewong,GuoPhilipse,aiceflower,waynecookie,jacktao007,chenghuichen,ws00428637,ChengJie1053,dependabot,jackxu2011,sjgllgh,rarexixi,pjfanning,v-kkhuang,binbinCheng,stdnt-xiao,mayinrain.
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/main.md b/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/main.md
index 464b9135930..33723259de4 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/main.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/main.md
@@ -8,6 +8,7 @@ sidebar_position: 0
| 版本 | 发布时间 | 源码 | 项目安装包 | 管理台安装包 | Release Notes |
|----------------------------------------------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|
+| 1.4.0 | 2023-08-05 | [[Source](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-src.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-src.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-src.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-bin.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-bin.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-web-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-web-bin.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/1.4.0/apache-linkis-1.4.0-web-bin.tar.gz.sha512)] | [Release-Notes](release-notes-1.4.0.md) |
| 1.3.2 | 2023-04-03 | [[Source](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-src.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-src.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-src.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-bin.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-bin.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-web-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-web-bin.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/1.3.2/apache-linkis-1.3.2-web-bin.tar.gz.sha512)] | [Release-Notes](release-notes-1.3.2.md) |
| 1.3.1 | 2023-01-18 | [[Source](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-src.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-src.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-src.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-bin.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-bin.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-web-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-web-bin.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/linkis/release-1.3.1/apache-linkis-1.3.1-web-bin.tar.gz.sha512)] | [Release-Notes](release-notes-1.3.1.md) |
| 1.3.0 | 2022-10-25 | [[Source](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-src.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-src.tar.gz.asc)] [[SHA512](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-src.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-bin.tar.gz.asc) ][[SHA512](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-bin.tar.gz.sha512)] | [[Binary](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-web-bin.tar.gz)] [[Sign](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-web-bin.tar.gz.asc )] [[SHA512](https://www.apache.org/dyn/closer.lua/incubator/linkis/release-1.3.0/apache-linkis-1.3.0-incubating-web-bin.tar.gz.sha512)] | [Release-Notes](release-notes-1.3.0.md) |
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/release-notes-1.4.0.md b/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/release-notes-1.4.0.md
index 8711b0ec906..d96ee0bb71a 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/release-notes-1.4.0.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/release-notes-1.4.0.md
@@ -5,7 +5,7 @@ sidebar_position: 0.14
Apache Linkis 1.4.0 包括所有 [Project Linkis-1.4.0](https://github.com/apache/linkis/projects/26)
-Linkis 1.4.0 版本,主要增加了如下功能:将 hadoop、spark、hive 默认版本升级为3.x;减少基础引擎不同版本兼容性问题;Hive EC 支持并发提交任务;ECM 服务重启时不 kill EC;linkis-storage 支持 S3 和 OSS 文件系统;支持更多的数据源,如:tidb、starrocks、Gaussdb等;增加 postgresql 数据库支持;以及对Spark ETL 功能增强,支持 Excel、Redis、Mongo、Elasticsearch等;同时对版本号升级规则及代码提交默认合并分支做了修改。
+Linkis 1.4.0 版本,主要新增如下特性功能:适配的 Hadoop、Hive、Spark 默认版本升级为3.x (Hadoop2.7.2-3.3.4, Hive2.3.3-3.1.3,spark2.4.3-3.2.1 补充下具体的版本信息), 并支持编译参数控制版本,以降低改造适配非默认基础引擎版本的难度;Hive EC 支持并发模式运行任务,可大幅降低机器资源使用,提高hive任务并发;ECM 服务重启时不 kill EC,为优雅重启提供支持;任务日志结果集的存储,新增对S3 和 OSS 文件系统模式的支持;数据源服务新增对,如:tidb、starrocks、Gaussdb等的支持;服务支持适配postgresql 数据库模式部署(实验性);新增Impala引擎支持(实验性);以及对Spark ETL 功能增强,支持 Excel、Redis、Mongo、Elasticsearch等;
主要功能如下:
@@ -14,9 +14,11 @@ Linkis 1.4.0 版本,主要增加了如下功能:将 hadoop、spark、hive
- 减少基础引擎不同版本兼容性问题
- 支持 Hive EC 并发执行任务
- 支持 ECM 服务重启时不 kill EC
-- linkis-storage 支持 S3 和 OSS 文件系统
+
- 支持更多的数据源,如:tidb、starrocks、Gaussdb等
-- 增加 postgresql 数据库支持
+- 增加 postgresql 数据库支持(实验性)
+- linkis-storage 支持 S3 和 OSS 文件系统(实验性)
+- 新增 Impala 引擎连接器支持(实验性)
- 对Spark ETL 功能增强
- 版本号升级规则及提交代码默认合并分支修改
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current.json b/i18n/zh-CN/docusaurus-plugin-content-docs/current.json
index 86fa068d54f..f715ad86bcf 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current.json
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current.json
@@ -1,6 +1,6 @@
{
"version.label": {
- "message": "Next(1.4.0)",
+ "message": "Next(1.5.0)",
"description": "The label for version current"
},
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
index 328c4429d25..35489d955c4 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
@@ -8,7 +8,7 @@ sidebar_position: 1
### 1.1 Linux服务器
**硬件要求**
-安装linkis 微服务近10个,至少3G内存。每个微服务默认配置启动的jvm -Xmx 内存大小为 512M(内存不够的情况下,可以尝试调小至256/128M,内存足够情况下也可以调大)。
+安装linkis 微服务近6个,至少3G内存。每个微服务默认配置启动的jvm -Xmx 内存大小为 512M(内存不够的情况下,可以尝试调小至256/128M,内存足够情况下也可以调大)。
### 1.2 添加部署用户
@@ -16,7 +16,7 @@ sidebar_position: 1
>部署用户: linkis核心进程的启动用户,同时此用户会默认作为管理员权限,部署过程中会生成对应的管理员登录密码,位于`conf/linkis-mg-gateway.properties`文件中
Linkis支持指定提交、执行的用户。linkis主要进程服务会通过`sudo -u ${linkis-user}` 切换到对应用户下,然后执行对应的引擎启动命令,所以引擎`linkis-engine`进程归属的用户是任务的执行者(因此部署用户需要有sudo权限,而且是免密的)。
-以hadoop用户为例:
+以hadoop用户为例(linkis中很多配置用户默认都使用hadoop用户,建议初次安装者使用hadoop用户,否则在安装过程中可能会遇到很多意想不到的错误):
先查看系统中是否已经有 hadoop 用户,若已经存在,则直接授权即可,若不存在,先创建用户,再授权。
@@ -54,7 +54,7 @@ $ tar -xvf apache-linkis-x.x.x-bin.tar.gz
解压后的目录结构如下
```shell script
--rw-r--r-- 1 hadoop hadoop 518192043 Jun 20 09:50 apache-linkis-1.3.1-bin.tar.gz
+-rw-r--r-- 1 hadoop hadoop 518192043 Jun 20 09:50 apache-linkis-x.x.x-bin.tar.gz
drwxrwxr-x 2 hadoop hadoop 4096 Jun 20 09:56 bin //执行环境检查和安装的脚本
drwxrwxr-x 2 hadoop hadoop 4096 Jun 20 09:56 deploy-config // 部署时依赖的DB等环境配置信息
drwxrwxr-x 4 hadoop hadoop 4096 Jun 20 09:56 docker
@@ -229,7 +229,18 @@ HADOOP_KERBEROS_ENABLE=true
HADOOP_KEYTAB_PATH=/appcom/keytab/
```
-#### 注意事项
+### 2.4 配置 Token
+文件位于 `bin/install.sh`
+
+Linkis 1.3.2 版本为保证系统安全性已将 Token 值改为32位随机生成,具体可查看[Token变更说明](https://linkis.apache.org/zh-CN/docs/1.3.2/feature/update-token/)。
+
+使用随机生成Token,初次与[WDS其它组件](https://github.com/WeDataSphere/DataSphereStudio/blob/master/README-ZH.md)对接时会遇到很多 Token 验证失败的问题,建议初次安装时不使用随机生成Token,修改如下配置为 true 即可。
+
+```
+DEBUG_MODE=true
+```
+
+### 2.5 注意事项
**全量安装**
@@ -241,7 +252,7 @@ HADOOP_KEYTAB_PATH=/appcom/keytab/
**Token 过期问题**
-当遇到 Token 令牌无效或已过期问题时可以检查 Token 是否配置正确,可通过管理台查询 Token。
+当遇到 Token 令牌无效或已过期问题时可以检查 Token 是否配置正确,可通过管理台 ==> 基础数据管理 ==> 令牌管理,查询 Token。
**Python 版本问题**
Linkis 升级为 1.4.0 后默认 Spark 版本升级为 3.x,无法兼容 python2。因此如果需要使用 pyspark 功能需要做如下修改。
@@ -273,7 +284,7 @@ install.sh脚本会询问您是否需要初始化数据库并导入元数据。
执行成功提示如下:
```shell script
-`Congratulations! You have installed Linkis 1.0.3 successfully, please use sh /data/Install/linkis/sbin/linkis-start-all.sh to start it!
+`Congratulations! You have installed Linkis x.x.x successfully, please use sh /data/Install/linkis/sbin/linkis-start-all.sh to start it!
Your default account password is [hadoop/5e8e312b4]`
```
@@ -539,6 +550,11 @@ sh bin/linkis-cli -submitUser hadoop -engineType python-python2 -codeType pyth
| Flink | >=1.0.0 已适配 | **不包含** |
| openLooKeng | >=1.1.1 已适配 | **不包含** |
| Sqoop | >=1.1.2 已适配 | **不包含** |
+| Trino | >=1.3.2 已适配 | **不包含** |
+| Presto | >=1.3.2 已适配 | **不包含** |
+| Elasticsearch | >=1.3.2 已适配 | **不包含** |
+| Seatunnel | >=1.3.2 已适配 | **不包含** |
+| Impala | >=1.4.0 已适配 | **不包含** |
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/other.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/other.md
index 9dafd70c4f1..db4ddc1c284 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/other.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/other.md
@@ -3,16 +3,14 @@ title: 其它特性说明
sidebar_position: 0.6
---
-## 1. Linkis 1.4.0 其它特性升级说明
-
-### 1.1 ECM 重启时不 kill EC
+## 1. ECM 重启时不 kill EC
当ECM重新启动时,可以选择不杀死引擎,而是可以接管现有的存活引擎。使引擎连接管理器 (ECM) 服务无状态。
-### 1.2 移除 json4s 依赖
+## 2. 移除 json4s 依赖
spark 不同版本依赖不同的json4s版本,不利于spark多版本的支持,我们需要减少这个json4s依赖,从linkis中移除了json4s.
比如: spark2.4 需要json4s v3.5.3, spark3.2需要json4s v3.7.0-M11。
-### 1.3 EngineConn模块定义依赖引擎版本
+## 3. EngineConn模块定义依赖引擎版本
引擎的版本定义默认在 `EngineConn`中,一旦相关版本变更,需要修改多处,我们可以把相关的版本定义统一放到顶层pom文件中。编译指定引擎模块时,需要在项目根目录编译,并使用`-pl`来编译具体的引擎模块,比如:
```
mvn install package -pl linkis-engineconn-plugins/spark -Dspark.version=3.2.1
@@ -20,12 +18,12 @@ mvn install package -pl linkis-engineconn-plugins/spark -Dspark.version=3.2.1
引擎的版本可以通过mvn编译-D参数来指定,比如 -Dspark.version=xxx 、 -Dpresto.version=0.235
目前所有的底层引擎版本新都已经移到顶层pom文件中,编译指定引擎模块时,需要在项目根目录编译,并使用`-pl`来编译具体的引擎模块。
-### 1.4 Linkis 主版本号修改说明
+## 4. Linkis 主版本号修改说明
Linkis 从 1.3.2 版本后将不再按小版本升级,下一个版本为 1.4.0,再往后升级时版本号为1.5.0,1.6.0 以此类推。当遇到某个发布版本有重大缺陷需要修复时会拉取小版本修复缺陷,如 1.4.1 。
-## 1.5 LInkis 代码提交主分支说明
+## 5. Linkis 代码提交主分支说明
Linkis 1.3.2 及之前版本修改代码默认是合并到 dev 分支。实际上 Apache Linkis 的开发社区很活跃,对于新开发的需求或修复功能都会提交到 dev 分支,但是用户访问 Linkis 代码库的时候默认显示的是 master 分支。由于我们一个季度才会发布一个新版本,从 master 分支来看显得社区活跃的不高。因此我们决定从 1.4.0 版本开始,将开发者提交的代码默认合并到 master 分支。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/overview.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/overview.md
index b2ad3c08c61..2755345ad75 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/overview.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/overview.md
@@ -5,12 +5,12 @@ sidebar_position: 0.1
- [基础引擎依赖性、兼容性、默认版本优化](./base-engine-compatibilty.md)
- [Hive 引擎连接器支持并发任务](./hive-engine-support-concurrent.md)
-- [支持更多的数据源](./spark-etl.md)
+- [支持更多的数据源](../user-guide/datasource-manual#31-jdbc-数据源)
+- [Spark ETL 功能增强](./spark-etl.md)
+- [根据数据源生成SQL](./datasource-generate-sql.md)
- [linkis-storage 支持 S3 文件系统(实验版本)](../deployment/deploy-quick#343-s3-模式)
- [增加 postgresql 数据库支持(实验版本)](../deployment/deploy-quick#22-配置数据库信息)
- [增加 impala 引擎支持(实验版本)](../engine-usage/impala.md)
-- [Spark ETL 功能增强](./spark-etl.md)
-- [根据数据源生成SQL](./datasource-generate-sql.md)
- [其它特性说明](./other.md)
- [版本的 Release-Notes](/download/release-notes-1.4.0)
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/spark-etl.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/spark-etl.md
index 40c2a646851..6f486f4515a 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/spark-etl.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/spark-etl.md
@@ -61,7 +61,7 @@ sh ./bin/linkis-cli -engineType spark-3.2.1 -codeType data_calc -code "{\"plugi
```
### 4.3 各数据源同步 json 脚本说明
-### 4.3.1 jdbc
+#### 4.3.1 jdbc
配置说明
```text
@@ -127,7 +127,7 @@ kingbase8-8.6.0.jar
postgresql-42.3.8.jar
```
-### 4.3.2 file
+#### 4.3.2 file
配置说明
@@ -173,7 +173,7 @@ json code
spark-excel-2.12.17-3.2.2_2.12-3.2.2_0.18.1.jar
```
-### 4.3.3 redis
+#### 4.3.3 redis
```text
sourceTable: 源表,
@@ -225,7 +225,7 @@ commons-pool2-2.8.1.jar
spark-redis_2.12-2.6.0.jar
```
-### 4.3.4 kafka
+#### 4.3.4 kafka
配置说明
```text
@@ -302,7 +302,7 @@ spark-sql-kafka-0-10_2.12-3.2.1.jar
spark-token-provider-kafka-0-10_2.12-3.2.1.jar
```
-### elasticsearch
+#### 4.3.5 elasticsearch
配置说明
```text
@@ -380,7 +380,7 @@ index: elasticsearch索引名称
elasticsearch-spark-30_2.12-7.17.7.jar
```
-### mongo
+#### 4.3.6 mongo
配置说明
```text
@@ -461,7 +461,7 @@ mongodb-driver-core-3.12.8.jar
mongodb-driver-sync-3.12.8.jar
```
-### delta
+#### 4.3.7 delta
配置说明
```text
@@ -539,7 +539,7 @@ delta-core_2.12-2.0.2.jar
delta-storage-2.0.2.jar
```
-### hudi
+#### 4.3.8 hudi
配置说明
```text
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0.json b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0.json
new file mode 100644
index 00000000000..658c5831f83
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0.json
@@ -0,0 +1,163 @@
+{
+ "version.label": {
+ "message": "1.4.0",
+ "description": "The label for version current"
+ },
+
+ "sidebar.tutorialSidebar.category.About Linkis": {
+ "message": "关于 Linkis"
+ },
+
+ "sidebar.tutorialSidebar.category.Quick Experience": {
+ "message": "快速体验",
+ "description": "The label for category Quick in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.Deployment": {
+ "message": "部署指南",
+ "description": "The label for category advanced Deployment in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.User Guide": {
+ "message": "使用指南",
+ "description": "The label for category User Guide in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.Engine Usage": {
+ "message": "引擎使用",
+ "description": "The label for category Engine Usage in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.Tuning And Troubleshooting": {
+ "message": "调优排障",
+ "description": "The label for category Tuning And Troubleshooting in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.Error Guide": {
+ "message": "错误码",
+ "description": "The label for category Error Guide in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.API Docs": {
+ "message": "API文档",
+ "description": "The label for category API Docs in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.Table Structure": {
+ "message": "表结构",
+ "description": "The label for category Table Structure in sidebar tutorialSidebar"
+ },
+
+ "sidebar.tutorialSidebar.category.Architecture": {
+ "message": "架构设计",
+ "description": "The label for category Architecture in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.Commons": {
+ "message": "公共依赖模块",
+ "description": "The label for category Commons in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.Computation Governance Services": {
+ "message": "计算治理模块",
+ "description": "The label for category Computation Governance Services in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.Engine": {
+ "message": "引擎服务",
+ "description": "The label for category Engine Services in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.Linkis Manager": {
+ "message": "Manager架构",
+ "description": "The label for category Linkis Manager in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.Public Enhancement Services": {
+ "message": "公共增强模块",
+ "description": "The label for category Public Enhancement Services in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.Context Service": {
+ "message": "上下文服务",
+ "description": "The label for category Public Enhancement Services in sidebar tutorialSidebar"
+ },
+
+ "sidebar.tutorialSidebar.category.Microservice Governance Services": {
+ "message": "微服务实例模块",
+ "description": "The label for category Microservice Governance Services in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.Orchestrator": {
+ "message": "编排器架构",
+ "description": "The label for category Orchestrator Services in sidebar tutorialSidebar"
+ },
+
+ "sidebar.tutorialSidebar.category.Upgrade Guide": {
+ "message": "升级指南",
+ "description": "The label for category Upgrade Guide in sidebar tutorialSidebar"
+ },
+
+ "sidebar.tutorialSidebar.category.Development": {
+ "message": "开发指南",
+ "description": "The label for category Development Doc in sidebar tutorialSidebar"
+ },
+ "sidebar.tutorialSidebar.category.Development Specification": {
+ "message": "开发规范",
+ "description": "The label for category Development Specification in sidebar tutorialSidebar"
+ },
+
+ "sidebar.tutorialSidebar.category.Components": {
+ "message": "组件介绍",
+ "description": "The label for category Components in sidebar tutorialSidebar"
+ },
+
+ "sidebar.tutorialSidebar.category.Engine Plugin Management Service": {
+ "message": "引擎插件管理服务",
+ "description": "Engine Plugin Management Service"
+ },
+ "sidebar.tutorialSidebar.category.Computing Governance Portal Service": {
+ "message": "计算治理入口服务",
+ "description": "Computing Governance Portal Service"
+ },
+ "sidebar.tutorialSidebar.category.Computing Governance Management Services": {
+ "message": "计算治理管理服务",
+ "description": "Computing Governance Management Services"
+ },
+ "sidebar.tutorialSidebar.category.Public Service": {
+ "message": "公共服务",
+ "description": "Public Service"
+ },
+ "sidebar.tutorialSidebar.category.Quick Start": {
+ "message": "快速上手",
+ "description": "quick start"
+ },
+ "sidebar.tutorialSidebar.category.Integrated": {
+ "message": "集成",
+ "description": "integrated"
+ },
+ "sidebar.tutorialSidebar.category.Console Manual": {
+ "message": "管理台的使用",
+ "description": "console manual"
+ },
+ "sidebar.tutorialSidebar.category.Security Authentication": {
+ "message": "安全认证"
+ },
+ "sidebar.tutorialSidebar.category.Service Architecture": {
+ "message": "微服务架构",
+ "description": "linkis service architecture"
+ },
+ "sidebar.tutorialSidebar.category.Feature": {
+ "message": "关键特性架构",
+ "description": "key feature architechture"
+ },
+ "sidebar.tutorialSidebar.category.Control Panel": {
+ "message": "管理台的使用",
+ "description": "control panel usage"
+ },
+ "sidebar.tutorialSidebar.category.Advice Configuration": {
+ "message": "建议配置",
+ "description": "Linkis advice configuration"
+ },
+ "sidebar.tutorialSidebar.category.LinkisManger Services": {
+ "message": "LinkisManger 服务",
+ "description": "LinkisManger Services"
+ },
+ "sidebar.tutorialSidebar.category.Entrance Services": {
+ "message": "Entrance 服务",
+ "description": "Entrance Services"
+ },
+ "sidebar.tutorialSidebar.category.Version Feature": {
+ "message": "版本特性",
+ "description": "Version Feature"
+ }
+
+
+
+}
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/_category_.json b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/_category_.json
new file mode 100644
index 00000000000..2c333deaa77
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "关于 Linkis",
+ "position": 1.0
+}
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/configuration.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/configuration.md
new file mode 100644
index 00000000000..fa6358f012b
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/configuration.md
@@ -0,0 +1,179 @@
+---
+title: 建议配置
+sidebar_position: 0.2
+---
+
+## 1 软硬件环境建议配置
+
+Linkis 在上层应用程序和底层引擎之间构建了一层计算中间件。 作为一款开源分布式计算中间件,可以很好地部署和运行在 Intel 架构服务器及主流虚拟化环境下,并支持主流的Linux操作系统环境
+
+### 1.1 Linux 操作系统版本要求
+
+| 操作系统 | 版本 |
+| --- | --- |
+| Red Hat Enterprise Linux | 7.0 及以上 |
+| CentOS | 7.0 及以上 |
+| Oracle Enterprise Linux | 7.0 及以上 |
+| Ubuntu LTS | 16.04 及以上 |
+
+> **注意:** 以上 Linux 操作系统可运行在物理服务器以及 VMware、KVM、XEN 主流虚拟化环境上
+
+### 1.2 服务器建议配置
+
+Linkis 支持运行在 Intel x86-64 架构的 64 位通用硬件服务器平台。对生产环境的服务器硬件配置有以下建议:
+
+#### 生产环境
+
+| **CPU** | **内存** | **硬盘类型** | **网络** | **实例数量** |
+| --- | --- | --- | --- | --- |
+| 16核+ | 32GB+ | SAS | 千兆网卡 | 1+ |
+
+> **注意:**
+>
+> - 以上建议配置为部署 Linkis的最低配置,生产环境强烈推荐使用更高的配置
+> - 硬盘大小配置建议 50GB+ ,系统盘和数据盘分开
+
+### 1.3 软件要求
+
+Linkis二进制包基于以下软件版本进行编译:
+
+| 组件 | 版本 | 说明 |
+| --- | --- | --- |
+| Hadoop | 3.3.4 | |
+| Hive | 3.1.3 | |
+| Spark | 3.2.1 | |
+| Flink | 1.12.2 | |
+| openLooKeng | 1.5.0 | |
+| Sqoop | 1.4.6 | |
+| ElasticSearch | 7.6.2 | |
+| Presto | 0.234 | |
+| Python | Python2 | |
+
+> **注意:**
+> 如果本地安装组件版本与上述不一致,需要进行修改对应组件版本,自行编译二进制包进行安装。
+
+### 1.4 客户端 Web 浏览器要求
+
+Linkis推荐 Chrome 73版本进行前端访问
+
+
+## 2 常用场景
+
+### 2.1 开启测试模式
+开发过程需要免密接口,可在`linkis.properties`替换或追加此配置
+
+| 参数名 | 默认值 | 描述 |
+| ------------------------- | ------- | -----------------------------------------------------------|
+| wds.linkis.test.mode | false | 是否打开调试模式,如果设置为 true,所有微服务都支持免密登录,且所有EngineConn打开远程调试端口 |
+| wds.linkis.test.user | hadoop | 当wds.linkis.test.mode=true时,免密登录的默认登录用户 |
+
+![](./images/test-mode.png)
+
+
+### 2.2 登录用户设置
+Apache Linkis 默认使用配置文件来管理admin用户,可以在`linkis-mg-gateway.properties`替换或追加此配置。如需多用户可接入LDAP实现。
+
+| 参数名 | 默认值 | 描述 |
+| ------------------------- | ------- | -----------------------------------------------------------|
+| wds.linkis.admin.user | hadoop | 管理员用户名 |
+| wds.linkis.admin.password | 123456 | 管理员用户密码 |
+
+![](./images/login-user.png)
+
+
+### 2.3 LDAP设置
+Apache Linkis 可以通过参数接入LDAP实现多用户管理,可以在`linkis-mg-gateway.properties`替换或追加此配置。
+
+| 参数名 | 默认值 | 描述 |
+| ------------------------- | ------- | -----------------------------------------------------------|
+| wds.linkis.ldap.proxy.url | 无 | LDAP URL地址 |
+| wds.linkis.ldap.proxy.baseDN | 无 | LDAP baseDN地址 |
+| wds.linkis.ldap.proxy.userNameFormat | 无 | |
+
+![](./images/ldap.png)
+
+### 2.4 关闭资源检查
+Apache Linkis 提交任务时有时会调试异常,如:资源不足;可以在`linkis-cg-linkismanager.properties`替换或追加此配置。
+
+| 参数名 | 默认值 | 描述 |
+| ------------------------- | ------- | -----------------------------------------------------------|
+| wds.linkis.manager.rm.request.enable | true | 资源检查 |
+
+![](./images/resource-enable.png)
+
+### 2.5 开启引擎调试
+Apache Linkis EC可以开启调试模式,可以在`linkis-cg-linkismanager.properties`替换或追加此配置。
+
+| 参数名 | 默认值 | 描述 |
+| ------------------------- | ------- | -----------------------------------------------------------|
+| wds.linkis.engineconn.debug.enable | true | 是否开启引擎调试 |
+
+![](./images/engine-debug.png)
+
+### 2.6 Hive元数据配置
+Apache Linkis 的public-service服务需要读取hive的元数据;可以在`linkis-ps-publicservice.properties`替换或追加此配置。
+
+| 参数名 | 默认值 | 描述 |
+| ------------------------- | ------- | -----------------------------------------------------------|
+| hive.meta.url | 无 | HiveMetaStore数据库的URL。 |
+| hive.meta.user | 无 | HiveMetaStore数据库的user |
+| hive.meta.password | 无 | HiveMetaStore数据库的password |
+
+![](./images/hive-meta.png)
+
+### 2.7 Linkis 数据库配置
+Apache Linkis 访问默认使用Mysql作为数据存储,可以在`linkis.properties`替换或追加此配置。
+
+| 参数名 | 默认值 | 描述 |
+| ------------------------- | ------- | -----------------------------------------------------------|
+| wds.linkis.server.mybatis.datasource.url | 无 | 数据库连接字符串,例如:jdbc:mysql://127.0.0.1:3306/dss?characterEncoding=UTF-8 |
+| wds.linkis.server.mybatis.datasource.username | 无 | 数据库用户名,例如:root |
+| wds.linkis.server.mybatis.datasource.password | 无 | 数据库密码,例如:root |
+
+![](./images/linkis-db.png)
+
+### 2.8 Linkis Session 缓存配置
+Apache Linkis 支持使用redis进行session的共享;可以在`linkis.properties`替换或追加此配置。
+
+| 参数名 | 默认值 | 描述 |
+| ------------------------- | ------- | -----------------------------------------------------------|
+| linkis.session.redis.cache.enabled | None | 是否开启 |
+| linkis.session.redis.host | 127.0.0.1 | 主机名 |
+| linkis.session.redis.port | 6379 | 端口,例如 |
+| linkis.session.redis.password | None | 密码 |
+
+![](./images/redis.png)
+
+### 2.9 Linkis 模块开发配置
+Apache Linkis 开发时可通过此参数,自定义加载模块的数据库、Rest接口、实体对象;可以在`linkis-ps-publicservice.properties`进行修改,多个模块之间使用逗号分割。
+
+| 参数名 | 默认值 | 描述 |
+| ------------------------- | ------- | -----------------------------------------------------------|
+| wds.linkis.server.restful.scan.packages | 无 | restful 扫描包,例如:org.apache.linkis.basedatamanager.server.restful |
+| wds.linkis.server.mybatis.mapperLocations | 无 | mybatis mapper文件路径,例如: classpath*:org/apache/linkis/basedatamanager/server/dao/mapper/*.xml|
+| wds.linkis.server.mybatis.typeAliasesPackage | 无 | 实体别名扫描包,例如:org.apache.linkis.basedatamanager.server.domain |
+| wds.linkis.server.mybatis.BasePackage | 无 | 数据库dao层扫描,例如:org.apache.linkis.basedatamanager.server.dao |
+
+![](./images/deverlop-conf.png)
+
+### 2.10 Linkis 模块开发配置
+Apache Linkis 开发时可通过此参数,自定义加载模块的路由;可以在`linkis.properties`进行修改,多个模块之间使用逗号分割。
+
+| 参数名 | 默认值 | 描述 |
+| ------------------------- | ------- | -----------------------------------------------------------|
+| wds.linkis.gateway.conf.publicservice.list | cs,contextservice,data-source-manager,metadataQuery,metadatamanager,query,jobhistory,application,configuration,filesystem,udf,variable,microservice,errorcode,bml,datasource,basedata-manager | publicservice服务支持路由的模块 |
+
+![](./images/list-conf.png)
+
+### 2.11 Linkis 文件系统及物料存放路径
+Apache Linkis 开发时可通过此参数,自定义加载模块的路由;可以在`linkis.properties`进行修改,多个模块之间使用逗号分割。
+
+| 参数名 | 默认值 | 描述 |
+| ------------------------- | ------- | -----------------------------------------------------------|
+| wds.linkis.filesystem.root.path | file:///tmp/linkis/ | 本地用户目录,需在该目录下建立以用户名为名称的文件夹 |
+| wds.linkis.filesystem.hdfs.root.path | hdfs:///tmp/ | HDFS用户目录 |
+| wds.linkis.bml.is.hdfs | true | 是否启用hdfs |
+| wds.linkis.bml.hdfs.prefix | /apps-data | hdfs路径 |
+| wds.linkis.bml.local.prefix | /apps-data | 本地路径 |
+
+![](./images/fs-conf.png)
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/glossary.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/glossary.md
new file mode 100644
index 00000000000..76ed35f4abc
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/glossary.md
@@ -0,0 +1,105 @@
+---
+title: 名词解释和服务介绍
+sidebar_position: 0.3
+---
+
+## 1.名词解释
+
+Linkis 基于微服务架构开发,其服务可以分为3类服务群(组):计算治理服务组、公共增强服务组和微服务治理服务组。
+- 计算治理服务组(Computation Governance Services):处理任务的核心服务,支持计算任务/请求处理流程的4个主要阶段(提交->准备->执行->结果);
+- 公共增强服务组(Public Enhancement Services):提供基础的支撑服务,包括上下文服务、引擎/udf物料的管理服务、历史任务等公共服务及数据源的管理服务等;
+- 微服务治理服务组(Microservice Governance Services):定制化的Spring Cloud Gateway、Eureka。提供微服务的基础底座
+
+下面将对这三组服务的关键名词和服务进行介绍:
+
+### 1.1 关键模块名词
+
+首先我们了解下关键模块的名词
+
+| 简称 | 全称 | 主要功能 |
+|-------- |------------------------- |---------------------|
+| MG/mg | Microservice Governance | 微服务治理 |
+| CG/cg | Computation Governance | 计算治理 |
+| EC/ec | EngineConn | 引擎连接器 |
+| - | Engine | 底层计算存储引擎,如spark、hive、shell |
+| ECM/ecm | EngineConnManager | 引擎连接器的管理 |
+| ECP/ecp | EngineConnPlugin | 引擎连接器插件 |
+| RM/rm | ResourceManager | 资源管理器,用于管控任务资源和用户资源使用和控制 |
+| AM/am | AppManager | 应用管理器,用于管控EngineConn和ECM服务 |
+| LM/lm | LinkisManager | Linkis管理器服务,包含了:RM、AM、LabelManager等模块 |
+| PES/pes | Public Enhancement Services | 公共增强服务 |
+| - | Orchestrator | 编排器,用于Linkis任务编排,任务多活、混算、AB等策略支持 |
+| UJES | Unified Job Execute Service | 统一作业执行服务 |
+| DDL/ddl | Data Definition Language | 数据库定义语言 |
+| DML/dml | Data Manipulation Language | 数据操纵语言 |
+
+### 1.2 任务关键名词
+
+- JobRequest: 任务请求,对应Client提交给Linkis的任务,包含任务的执行内容、用户、标签等信息
+- RuntimeMap: 任务运行时参数,任务的运行时参数,任务级别生效,如放置多数据源的数据源信息
+- StartupMap: 引擎连接器启动参数,用于EngineConn连机器启动的参数,EngineConn进程生效,如设置spark.executor.memory=4G
+- UserCreator: 任务创建者信息:包含用户信息User和Client提交的应用信息Creator,用于任务和资源的租户隔离
+- submitUser: 任务提交用户
+- executeUser: 任务真实执行用户
+- JobSource: 任务来源信息,记录任务的IP或者脚本地址
+- errorCode: 错误码,任务错误码信息
+- JobHistory: 任务历史持久化模块,提供任务的历史信息查询
+- ResultSet: 结果集,任务对应的结果集,默认以.dolphin文件后缀进行保存
+- JobInfo: 任务运行时信息,如日志、进度、资源信息等
+- Resource: 资源信息,每个任务都会消耗资源
+- RequestTask: EngineConn的最小执行单元,传输给EngineConn执行的任务单元
+
+
+
+## 2. 服务介绍
+
+本节主要对Linkis的服务进行介绍,了解Linkis启动后会有哪些服务,以及服务的作用。
+
+### 2.1 服务列表
+
+Linkis启动后各个服务群(组)下包括的微服务如下:
+
+| 归属的微服务群(组) | 服务名 | 主要功能 |
+| ---- | ---- | ---- |
+| MGS | linkis-mg-eureka | 负责服务注册与发现,上游其他组件也会复用linkis的注册中心,如dss|
+| MGS | linkis-mg-gateway | 作为Linkis的网关入口,主要承担了请求转发、用户访问认证 |
+| CGS | linkis-cg-entrance | 任务提交入口是用来负责计算任务的接收、调度、转发执行请求、生命周期管理的服务,并且能把计算结果、日志、进度返回给调用方 |
+| CGS | linkis-cg-linkismanager|提供了AppManager(应用管理)、ResourceManager(资源管理)、LabelManager(标签管理)、引擎连接器插件管理的能力 |
+| CGS | linkis-cg-engineconnmanager | EngineConn的管理器,提供引擎的生命周期管理 |
+| CGS | linkis-cg-engineconn| 引擎连接器服务,是与底层计算存储引擎(Hive/Spark)的实际连接的服务,包含了与实际引擎的会话信息。对于底层计算存储引擎来说 它充当了一个客户端的角色,由任务触发启动|
+| PES | linkis-ps-publicservice|公共增强服务组模块服务,为其他微服务模块提供统一配置管理、上下文服务、BML物料库、数据源管理、微服务管理和历史任务查询等功能 |
+
+启动后开源看到的所有服务如下图:
+![Linkis_Eureka](/Images/deployment/Linkis_combined_eureka.png)
+
+### 2.1 公共增强服务详解
+公共增强服务组(PES)在1.3.1版本后默认将相关模块服务合并为一个服务linkis-ps-publicservice提供相关功能,当然如果你希望分开部署也支持的。您只需要将对应模块的服务打包部署即可。
+合并后的公共增强服务,主要包含了以下功能:
+
+| 简称 | 全称 | 主要功能 |
+|-------- |------------------------- |---------------------|
+| CS/cs | Context Service | 上下文服务,用于任务间传递结果集、变量、文件等 |
+| UDF/udf | UDF | UDF管理模块,提供UDF和函数的管理功能,支持共享和版本控制 |
+| variable | Variable | 全局自定义模块,提供全局自定变量的管理功能 |
+| script | Script-dev | 脚本文件操作服务,提供脚本编辑保存、脚本目录管理功能 |
+| jobHistory | JobHistory | 任务历史持久化模块,提供任务的历史信息查询 |
+| BML/bml | BigData Material library | 大数据物料库 |
+| - | Configuration | 配置管理,提供配置参数的管理和查看的功能 |
+| - | instance-label | 微服务管理服务,提供微服务和路由标签的映射管理功能 |
+| - | error-code | 错误码管理,提供通过错误码管理的功能 |
+| DMS/dms | Data Source Manager Service | 数据源管理服务 |
+| MDS/mds | MetaData Manager Service | 元数据管理服务 |
+| - | linkis-metadata | 提供Hive元数据信息查看功能,后续将会合并到到MDS |
+| - | basedata-manager | 基础数据管理,用于管理Linkis自身的基础元数据信息 |
+
+## 3 模块介绍
+本节主要对Linkis的大模块和功能进行主要介绍
+
+- linkis-commons: linkis的公共模块,包含了公共的工具类模块、RPC模块、微服务基础等模块
+- linkis-computation-governance: 计算治理模块,包含了计算治理多个服务的模块:Entrance、LinkisManager、EngineConnManager、EngineConn等
+- linkis-engineconn-plugins: 引擎连接器插件模块,包含了所有的引擎连接器插件实现
+- linkis-extensions: Linkis的扩展增强模块,不是必要功能模块,现在主要包含了文件代理操作的IO模块
+- linkis-orchestrator: 编排模块,用于Linkis任务编排,任务多活、混算、AB等高级策略支持
+- linkis-public-enhancements: 公共增强模块,包含了所有的公共服务用于给到linkis内部和上层应用组件进行调用
+- linkis-spring-cloud-services: spring cloud相关的服务模块,包含了gateway、注册中心等
+- linkis-web: 前端模块
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/deverlop-conf.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/deverlop-conf.png
new file mode 100644
index 00000000000..3d5fc8af601
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/deverlop-conf.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/engine-debug.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/engine-debug.png
new file mode 100644
index 00000000000..788bd2b58f0
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/engine-debug.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/fs-conf.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/fs-conf.png
new file mode 100644
index 00000000000..85c4234a9b4
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/fs-conf.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/hive-meta.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/hive-meta.png
new file mode 100644
index 00000000000..50c02906a77
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/hive-meta.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/ldap.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/ldap.png
new file mode 100644
index 00000000000..9625ae20be0
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/ldap.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/linkis-db.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/linkis-db.png
new file mode 100644
index 00000000000..35f7f5573df
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/linkis-db.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/list-conf.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/list-conf.png
new file mode 100644
index 00000000000..d19c194a023
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/list-conf.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/login-user.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/login-user.png
new file mode 100644
index 00000000000..477c634f1d4
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/login-user.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/redis.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/redis.png
new file mode 100644
index 00000000000..3a064640613
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/redis.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/resource-enable.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/resource-enable.png
new file mode 100644
index 00000000000..973fcee8409
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/resource-enable.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/test-mode.png b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/test-mode.png
new file mode 100644
index 00000000000..3466b1b8857
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/images/test-mode.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/introduction.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/introduction.md
new file mode 100644
index 00000000000..d28a99ba891
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/about/introduction.md
@@ -0,0 +1,115 @@
+---
+title: 简述
+sidebar_position: 0
+---
+## 关于 Linkis
+
+Linkis 在上层应用程序和底层引擎之间构建了一层计算中间件。通过使用Linkis 提供的REST/WebSocket/JDBC 等标准接口,上层应用可以方便地连接访问MySQL/Spark/Hive/Presto/Flink 等底层引擎,同时实现统一变量、脚本、用户定义函数和资源文件等用户资源的跨上层应用互通,以及通过REST标准接口提供了数据源管理和数据源对应的元数据查询服务。
+作为计算中间件,Linkis 提供了强大的连通、复用、编排、扩展和治理管控能力。通过将应用层和引擎层解耦,简化了复杂的网络调用关系,降低了整体复杂度,同时节约了整体开发和维护成本。
+Linkis 自2019年开源发布以来,已累计积累了700多家试用企业和1000多位沙盒试验用户,涉及金融、电信、制造、互联网等多个行业。许多公司已经将Linkis 作为大数据平台底层计算存储引擎的统一入口,和计算请求/任务的治理管控利器。
+
+![没有Linkis 之前](/Images-zh/before_linkis_cn.png)
+
+![有了Linkis 之后](/Images-zh/after_linkis_cn.png)
+
+## 核心特点
+- **丰富的底层计算存储引擎支持**:Spark、Hive、Python、Shell、Flink、JDBC、Pipeline、Sqoop、OpenLooKeng、Presto、ElasticSearch、Trino、SeaTunnel 等;
+- **丰富的语言支持**:SparkSQL、HiveSQL、Python、Shell、Pyspark、Scala、JSON 和 Java 等;
+- **强大的计算治理能力**: 能够提供基于多级标签的任务路由、负载均衡、多租户、流量控制、资源控制等能力;
+- **全栈计算存储引擎架构支持**: 能够接收、执行和管理针对各种计算存储引擎的任务和请求,包括离线批量任务、交互式查询任务、实时流式任务和数据湖任务;
+- **统一上下文服务**:支持跨用户、系统、计算引擎去关联管理用户和系统的资源文件(JAR、ZIP、Properties 等),结果集、参数变量、函数、UDF等,一处设置,处处自动引用;
+- **统一物料**: 提供了系统和用户级物料管理,可分享和流转,跨用户、跨系统共享物料;
+- **统一数据源管理**: 提供了Hive、ElasticSearch、Mysql、Kafka、MongoDB 等类型数据源信息的增删查改、版本控制、连接测试和对应数据源的元数据信息查询能力;
+- **错误码能力**:提供了任务常见错误的错误码和解决方案,方便用户自助定位问题;
+
+## 支持的引擎类型
+| **引擎名** | **支持底层组件版本 (默认依赖版本)** | **Linkis 1.X 版本要求** | **是否默认包含在发布包中** | **说明** |
+|:---- |:---- |:---- |:---- |:---- |
+|Spark|Apache 2.0.0~2.4.7, CDH >= 5.4.0, (默认Apache Spark 2.4.3)|\>=1.0.0_rc1|是|Spark EngineConn, 支持SQL, Scala, Pyspark 和R 代码。|
+|Hive|Apache >= 1.0.0, CDH >= 5.4.0, (默认Apache Hive 2.3.3)|\>=1.0.0_rc1|是|Hive EngineConn, 支持HiveQL 代码。|
+|Python|Python >= 2.6, (默认Python2*)|\>=1.0.0_rc1|是|Python EngineConn, 支持python 代码。|
+|Shell|Bash >= 2.0|\>=1.0.0_rc1|是|Shell EngineConn, 支持Bash shell 代码。|
+|JDBC|MySQL >= 5.0, Hive >=1.2.1, (默认Hive-jdbc 2.3.4)|\>=1.0.0_rc1|否|JDBC EngineConn, 已支持Mysql,Oracle,KingBase,PostgreSQL,SqlServer,DB2,Greenplum,DM,Doris,ClickHouse,TiDB,Starrocks,GaussDB和OceanBase, 可快速扩展支持其他有JDBC Driver 包的引擎, 如SQLite|
+|Flink |Flink >= 1.12.2, (默认Apache Flink 1.12.2)|\>=1.0.2|否|Flink EngineConn, 支持FlinkSQL 代码,也支持以Flink Jar 形式启动一个新的Yarn 应用程序。|
+|Pipeline|-|\>=1.0.2|否|Pipeline EngineConn, 支持文件的导入和导出。|
+|openLooKeng|openLooKeng >= 1.5.0, (默认openLookEng 1.5.0)|\>=1.1.1|否|openLooKeng EngineConn, 支持用Sql查询数据虚拟化引擎openLooKeng。|
+|Sqoop| Sqoop >= 1.4.6, (默认Apache Sqoop 1.4.6)|\>=1.1.2|否|Sqoop EngineConn, 支持 数据迁移工具 Sqoop 引擎。|
+|Presto|Presto >= 0.180|\>=1.2.0|否|Presto EngineConn, 支持Presto SQL 代码。|
+|ElasticSearch|ElasticSearch >=6.0|\>=1.2.0|否|ElasticSearch EngineConn, 支持SQL 和DSL 代码。|
+|Trino | Trino >=371 | >=1.3.1 | 否 | Trino EngineConn, 支持Trino SQL 代码 |
+|Seatunnel | Seatunnel >=2.1.2 | >=1.3.1 | 否 | Seatunnel EngineConn, 支持Seatunnel SQL 代码 |
+
+
+
+## 下载
+请前往[Linkis releases 页面](https://linkis.apache.org/zh-CN/download/main) 下载Linkis 已编译的部署安装包或源码包。
+
+## 安装部署
+
+请参考[编译指南](../development/build.md)来编译Linkis源代码。
+请参考[安装部署文档](../deployment/deploy-quick.md) 来部署Linkis 。
+
+## 示例和使用指引
+- [各引擎使用指引](../engine-usage/overview.md)
+- [API 文档](../api/overview.md)
+
+## 文档
+完整的Linkis文档代码存放在[linkis-website仓库中](https://github.com/apache/linkis-website)
+
+## 架构概要
+Linkis 基于微服务架构开发,其服务可以分为3类:计算治理服务、公共增强服务和微服务治理服务。
+- 计算治理服务,支持计算任务/请求处理流程的3个主要阶段:提交->准备->执行。
+- 公共增强服务,包括上下文服务、物料管理服务及数据源服务等。
+- 微服务治理服务,包括定制化的Spring Cloud Gateway、Eureka、Open Feign。
+
+下面是Linkis的架构概要图,更多详细架构文档请见 [Linkis/Architecture](../architecture/overview.md)。
+![architecture](/Images/Linkis_1.0_architecture.png)
+
+基于Linkis 计算中间件,我们在大数据平台套件[WeDataSphere](https://github.com/WeBankFinTech/WeDataSphere) 中构建了许多应用和工具系统,下面是目前可用的开源项目。
+
+![wedatasphere_stack_Linkis](/Images/wedatasphere_stack_Linkis.png)
+
+- [**DataSphere Studio** - 数据应用集成开发框架](https://github.com/WeBankFinTech/DataSphereStudio)
+
+- [**Scriptis** - 数据研发IDE工具](https://github.com/WeBankFinTech/Scriptis)
+
+- [**Visualis** - 数据可视化工具](https://github.com/WeBankFinTech/Visualis)
+
+- [**Schedulis** - 工作流调度工具](https://github.com/WeBankFinTech/Schedulis)
+
+- [**Qualitis** - 数据质量工具](https://github.com/WeBankFinTech/Qualitis)
+
+- [**MLLabis** - 容器化机器学习notebook 开发环境](https://github.com/WeBankFinTech/prophecis)
+
+更多项目开源准备中,敬请期待。
+
+## 贡献
+我们非常欢迎和期待更多的贡献者参与共建Linkis, 不论是代码、文档或是其他能够帮助到社区的贡献形式。
+
+代码和文档相关的贡献请参照[贡献指引](/community/how-to-contribute)。
+
+
+## 联系我们
+
+**方式1 邮件列表**
+
+|名称|描述|订阅|取消订阅|存档|
+|:-----|:--------|:-----|:------|:-----|
+| [dev@linkis.apache.org](mailto:dev@linkis.apache.org) | 社区活动信息 | [订阅](mailto:dev-subscribe@linkis.apache.org) | [取消订阅](mailto:dev-unsubscribe@linkis.apache.org) | [存档](http://mail-archives.apache.org/mod_mbox/linkis-dev) |
+
+**方式2 Issue**
+
+通过github提交[issue](https://github.com/apache/linkis/issues/new/choose),以便跟踪处理和经验沉淀共享
+
+**方式3 微信助手**
+
+|微信小助手|微信公众号|
+|:---|---|
+|||
+
+
+Meetup 视频 [Bilibili](https://space.bilibili.com/598542776?from=search&seid=14344213924133040656)。
+
+## 谁在使用 Linkis
+我们创建了[一个 issue](https://github.com/apache/linkis/issues/23) 以便用户反馈和记录谁在使用Linkis。
+Linkis 自2019年开源发布以来,累计已有700多家试用企业和1000+沙盒试验用户,涉及金融、电信、制造、互联网等多个行业。
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/api/_category_.json b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/api/_category_.json
new file mode 100644
index 00000000000..02d23f945ae
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/api/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "API 文档",
+ "position": 7
+}
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/api/http/_category_.json b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/api/http/_category_.json
new file mode 100644
index 00000000000..803138a2024
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/api/http/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Http API",
+ "position": 6
+}
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/api/http/linkis-cg-engineplugin-api/_category_.json b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/api/http/linkis-cg-engineplugin-api/_category_.json
new file mode 100644
index 00000000000..8f625f20e9e
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/api/http/linkis-cg-engineplugin-api/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "引擎插件管理服务",
+ "position": 4
+}
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/api/http/linkis-cg-engineplugin-api/engine-plugin-api.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/api/http/linkis-cg-engineplugin-api/engine-plugin-api.md
new file mode 100644
index 00000000000..1c83e70e2a6
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/api/http/linkis-cg-engineplugin-api/engine-plugin-api.md
@@ -0,0 +1,576 @@
+---
+title: 引擎插件API
+sidebar_position: 3
+---
+
+**EnginePluginRestful 类**
+
+## 刷新
+
+
+**接口地址**:`/api/rest_j/v1/engineplugin/refresh`
+
+
+**请求方式**:`GET`
+
+
+**请求数据类型**:`application/x-www-form-urlencoded`
+
+
+**响应数据类型**:`*/*`
+
+
+**接口描述**:
+| \--columns | sqoop.args.columns | Columns to export to table |
+| \--direct | sqoop.args.direct | Use direct export fast path |
+| \--export-dir | sqoop.args.export.dir | HDFS source path for the export |
+| \-m,--num-mappers | sqoop.args.num.mappers | Use 'n' map tasks to export in parallel |
+| \--mapreduce-job-name | sqoop.args.mapreduce.job.name | Set name for generated mapreduce job |
+| \--staging-table | sqoop.args.staging.table | Intermediate staging table |
+| \--table | sqoop.args.table | Table to populate |
+| \--update-key | sqoop.args.update.key | Update records by specified key column |
+| \--update-mode | sqoop.args.update.mode | Specifies how updates are performed when new rows are found with non-matching keys in database |
+| \--validate | sqoop.args.validate | Validate the copy using the configured validator |
+| \--validation-failurehandler | sqoop.args.validation.failurehandler | Fully qualified class name for ValidationFailureHandler |
+| \--validation-threshold | sqoop.args.validation.threshold | Fully qualified class name for ValidationThreshold |
+| \--validator | sqoop.args.validator | Fully qualified class name for the Validator |
+| | | |
+### 4.3 导入控制参数
+| 参数 | key | 说明 |
+| --------------------------------------------------------------------------------------------------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
+| \--append | sqoop.args.append | Imports data in append mode |
+| \--as-avrodatafile | sqoop.args.as.avrodatafile | Imports data to Avro data files |
+| \--as-parquetfile | sqoop.args.as.parquetfile | Imports data to Parquet files |
+| \--as-sequencefile | sqoop.args.as.sequencefile | Imports data to SequenceFiles |
+| \--as-textfile | sqoop.args.as.textfile | Imports data as plain text (default) |
+| \--autoreset-to-one-mapper | sqoop.args.autoreset.to.one.mapper | Reset the number of mappers to one mapper if no split key available |
+| \--boundary-query | sqoop.args.boundary.query | Set boundary query for retrieving max and min value of the primary key |
+| \--case-insensitive | sqoop.args.case.insensitive | Database is case insensitive; split where condition is transformed to lower case |
+| \--columns | sqoop.args.columns | Columns to import from table |
+| \--compression-codec | sqoop.args.compression.codec | Compression codec to use for import |
+| \--delete-target-dir | sqoop.args.delete.target.dir | Imports data in delete mode |
+| \--direct | sqoop.args.direct | Use direct import fast path |
+| \--direct-split-size | sqoop.args.direct.split.size | Split the input stream every 'n' bytes when importing in direct mode |
+| \-e,--query | sqoop.args.query | Import results of SQL 'statement' |
+| \--fetch-size | sqoop.args.fetch.size | Set number 'n' of rows to fetch from the database when more rows are needed |
+| \--inline-lob-limit | sqoop.args.inline.lob.limit | Set the maximum size for an inline LOB |
+| \-m,--num-mappers | sqoop.args.num.mappers | Use 'n' map tasks to import in parallel |
+| \--mapreduce-job-name | sqoop.args.mapreduce.job.name | Set name for generated mapreduce job |
+| \--merge-key | sqoop.args.merge.key | Key column to use to join results |
+| \--split-by | sqoop.args.split.by | Column of the table used to split work units |
+| \--table | sqoop.args.table | Table to read |
+| \--target-dir | sqoop.args.target.dir | HDFS plain table destination |
+| \--validate | sqoop.args.validate | Validate the copy using the configured validator |
+| \--validation-failurehandler | sqoop.args.validation.failurehandler | Fully qualified class name for ValidationFailureHandler |
+| \--validation-threshold | sqoop.args.validation.threshold | Fully qualified class name for ValidationThreshold |
+| \--validator | sqoop.args.validator | Fully qualified class name for the Validator |
+| \--warehouse-dir | sqoop.args.warehouse.dir | HDFS parent for table destination |
+| \--where | sqoop.args.where | WHERE clause to use during import |
+| \-z,--compress | sqoop.args.compress | Enable compression |
+| | | |
+
+### 4.4 增量导入参数
+
+| 参数 | key | 说明 |
+| --------------------------------------------------------------------------------------------------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
+| \--check-column | sqoop.args.check.column | Source column to check for incremental change |
+| \--incremental | sqoop.args.incremental | Define an incremental import of type 'append' or 'lastmodified' |
+| \--last-value | sqoop.args.last.value | Last imported value in the incremental check column |
+| | | |
+
+### 4.5 输出行格式化参数
+| 参数 | key | 说明 |
+| --------------------------------------------------------------------------------------------------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
+| \--enclosed-by | sqoop.args.enclosed.by | Sets a required field enclosing character |
+| \--escaped-by | sqoop.args.escaped.by | Sets the escape character |
+| \--fields-terminated-by | sqoop.args.fields.terminated.by | Sets the field separator character |
+| \--lines-terminated-by | sqoop.args.lines.terminated.by | Sets the end-of-line character |
+| \--mysql-delimiters | sqoop.args.mysql.delimiters | Uses MySQL's default delimiter set: fields: , lines: \\n escaped-by: \\ optionally-enclosed-by: ' |
+| \--optionally-enclosed-by | sqoop.args.optionally.enclosed.by | Sets a field enclosing character |
+| | | |
+
+### 4.6 输入解析参数
+
+| 参数 | key | 说明 |
+| --------------------------------------------------------------------------------------------------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
+| \--input-enclosed-by | sqoop.args.input.enclosed.by | Sets a required field encloser |
+| \--input-escaped-by | sqoop.args.input.escaped.by | Sets the input escape character |
+| \--input-fields-terminated-by | sqoop.args.input.fields.terminated.by | Sets the input field separator |
+| \--input-lines-terminated-by | sqoop.args.input.lines.terminated.by | Sets the input end-of-line char |
+| \--input-optionally-enclosed-by | sqoop.args.input.optionally.enclosed.by | Sets a field enclosing character |
+| | | |
+
+ ### 4.7 Hive 参数
+
+| 参数 | key | 说明 |
+| --------------------------------------------------------------------------------------------------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
+| \--create-hive-table | sqoop.args.create.hive.table | Fail if the target hive table exists |
+| \--hive-database | sqoop.args.hive.database | Sets the database name to use when importing to hive |
+| \--hive-delims-replacement | sqoop.args.hive.delims.replacement | Replace Hive record \\0x01 and row delimiters (\\n\\r) from imported string fields with user-defined string |
+| \--hive-drop-import-delims | sqoop.args.hive.drop.import.delims | Drop Hive record \\0x01 and row delimiters (\\n\\r) from imported string fields |
+| \--hive-home | sqoop.args.hive.home | Override $HIVE\_HOME |
+| \--hive-import | sqoop.args.hive.import | Import tables into Hive (Uses Hive's default delimiters if none are set.) |
+| \--hive-overwrite | sqoop.args.hive.overwrite | Overwrite existing data in the Hive table |
+| \--hive-partition-key | sqoop.args.hive.partition.key | Sets the partition key to use when importing to hive |
+| \--hive-partition-value | sqoop.args.hive.partition.value | Sets the partition value to use when importing to hive |
+| \--hive-table | sqoop.args.hive.table | Sets the table name to use when importing to hive |
+| \--map-column-hive | sqoop.args.map.column.hive | Override mapping for specific column to hive types. |
+
+
+### 4.8 HBase 参数
+
+| 参数 | key | 说明 |
+| --------------------------------------------------------------------------------------------------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
+| \--column-family | sqoop.args.column.family | Sets the target column family for the import |
+| \--hbase-bulkload | sqoop.args.hbase.bulkload | Enables HBase bulk loading |
+| \--hbase-create-table | sqoop.args.hbase.create.table | If specified, create missing HBase tables |
+| \--hbase-row-key | sqoop.args.hbase.row.key | Specifies which input column to use as the row key |
+| \--hbase-table | sqoop.args.hbase.table | Import to the specified table in HBase |
+| | | |
+
+### 4.9 HCatalog 参数
+
+| 参数 | key | 说明 |
+| --------------------------------------------------------------------------------------------------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
+| \--hcatalog-database | sqoop.args.hcatalog.database | HCatalog database name |
+| \--hcatalog-home | sqoop.args.hcatalog.home | Override $HCAT\_HOME |
+| \--hcatalog-partition-keys | sqoop.args.hcatalog.partition.keys | Sets the partition keys to use when importing to hive |
+| \--hcatalog-partition-values | sqoop.args.hcatalog.partition.values | Sets the partition values to use when importing to hive |
+| \--hcatalog-table | sqoop.args.hcatalog.table | HCatalog table name |
+| \--hive-home | sqoop.args.hive.home | Override $HIVE\_HOME |
+| \--hive-partition-key | sqoop.args.hive.partition.key | Sets the partition key to use when importing to hive |
+| \--hive-partition-value | sqoop.args.hive.partition.value | Sets the partition value to use when importing to hive |
+| \--map-column-hive | sqoop.args.map.column.hive | Override mapping for specific column to hive types. |
+| | | |
+| HCatalog import specific options: | | |
+| \--create-hcatalog-table | sqoop.args.create.hcatalog.table | Create HCatalog before import |
+| \--hcatalog-storage-stanza | sqoop.args.hcatalog.storage.stanza | HCatalog storage stanza for table creation |
+| | |
+### 4.10 Accumulo 参数
+
+| 参数 | key | 说明 |
+| --------------------------------------------------------------------------------------------------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
+| \--accumulo-batch-size | sqoop.args.accumulo.batch.size | Batch size in bytes |
+| \--accumulo-column-family | sqoop.args.accumulo.column.family | Sets the target column family for the import |
+| \--accumulo-create-table | sqoop.args.accumulo.create.table | If specified, create missing Accumulo tables |
+| \--accumulo-instance | sqoop.args.accumulo.instance | Accumulo instance name. |
+| \--accumulo-max-latency | sqoop.args.accumulo.max.latency | Max write latency in milliseconds |
+| \--accumulo-password | sqoop.args.accumulo.password | Accumulo password. |
+| \--accumulo-row-key | sqoop.args.accumulo.row.key | Specifies which input column to use as the row key |
+| \--accumulo-table | sqoop.args.accumulo.table | Import to the specified table in Accumulo |
+
+**Linkis 常用标签**
+
+|标签键|标签值|说明|
+|:-|:-|:-|
+|engineType| spark-2.4.3 | 指定引擎类型和版本|
+|userCreator| user + "-AppName" | 指定运行的用户和您的APPName|
+|codeType| sql | 指定运行的脚本类型|
+|jobRunningTimeout| 10 | job运行10s没完成自动发起Kill,单位为s|
+|jobQueuingTimeout| 10| job排队超过10s没完成自动发起Kill,单位为s|
+|jobRetryTimeout| 10000| job因为资源等原因失败重试的等待时间,单位为ms,如果因为队列资源不足的失败,会默认按间隔发起10次重试|
+|tenant| hduser02| 租户标签,设置前需要和BDP沟通需要单独机器进行隔离,则任务会被路由到单独的机器|
+
+
+## 1. 引入依赖模块
+```xml
+<dependency>
+  <groupId>org.apache.linkis</groupId>
+  <artifactId>linkis-computation-client</artifactId>
+  <version>${linkis.version}</version>
+</dependency>
+```
+如:
+```xml
+<dependency>
+  <groupId>org.apache.linkis</groupId>
+  <artifactId>linkis-computation-client</artifactId>
+  <version>1.0.3</version>
+</dependency>
+```
+
+## 2. Java测试代码
+建立Java的测试类LinkisClientTest,具体接口含义可以见注释:
+```java
+package org.apache.linkis.client.test;
+
+import org.apache.linkis.common.utils.Utils;
+import org.apache.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy;
+import org.apache.linkis.httpclient.dws.config.DWSClientConfig;
+import org.apache.linkis.httpclient.dws.config.DWSClientConfigBuilder;
+import org.apache.linkis.manager.label.constant.LabelKeyConstant;
+import org.apache.linkis.protocol.constants.TaskConstant;
+import org.apache.linkis.ujes.client.UJESClient;
+import org.apache.linkis.ujes.client.UJESClientImpl;
+import org.apache.linkis.ujes.client.request.JobSubmitAction;
+import org.apache.linkis.ujes.client.request.JobExecuteAction;
+import org.apache.linkis.ujes.client.request.ResultSetAction;
+import org.apache.linkis.ujes.client.response.*;
+import org.apache.commons.io.IOUtils;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+public class LinkisClientTest {
+
+ // 1. build config: linkis gateway url
+ private static DWSClientConfig clientConfig = ((DWSClientConfigBuilder) (DWSClientConfigBuilder.newBuilder()
+ .addServerUrl("http://127.0.0.1:9001/") //set linkis-mg-gateway url: http://{ip}:{port}
+ .connectionTimeout(30000) //connectionTimeOut
+ .discoveryEnabled(false) //disable discovery
+ .discoveryFrequency(1, TimeUnit.MINUTES) // discovery frequency
+ .loadbalancerEnabled(true) // enable loadbalance
+ .maxConnectionSize(5) // set max Connection
+ .retryEnabled(false) // set retry
+ .readTimeout(30000) //set read timeout
+ .setAuthenticationStrategy(new StaticAuthenticationStrategy()) //AuthenticationStrategy: Linkis authentication supports static and token
+ .setAuthTokenKey("hadoop") // set submit user
+ .setAuthTokenValue("123456"))) // set passwd or token (setAuthTokenValue("test"))
+ .setDWSVersion("v1") //linkis rest version v1
+ .build();
+
+ // 2. new Client(Linkis Client) by clientConfig
+ private static UJESClient client = new UJESClientImpl(clientConfig);
+
+ public static void main(String[] args) {
+
+ String user = "hadoop"; // 用户需要和AuthTokenKey的值保持一致
+ String executeCode = "df=spark.sql(\"show tables\")\n" +
+ "show(df)"; // code support:sql/hql/py/scala
+ try {
+
+ System.out.println("user : " + user + ", code : [" + executeCode + "]");
+ // 3. build job and execute
+ JobExecuteResult jobExecuteResult = toSubmit(user, executeCode);
+ System.out.println("execId: " + jobExecuteResult.getExecID() + ", taskId: " + jobExecuteResult.taskID());
+ // 4. get job info
+ JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
+ int sleepTimeMills = 1000;
+ int logFromLen = 0;
+ int logSize = 100;
+ while (!jobInfoResult.isCompleted()) {
+ // 5. get progress and log
+ JobProgressResult progress = client.progress(jobExecuteResult);
+ System.out.println("progress: " + progress.getProgress());
+ JobLogResult logRes = client.log(jobExecuteResult, logFromLen, logSize);
+ logFromLen = logRes.fromLine();
+ // 0: info 1: warn 2: error 3: all
+ System.out.println(logRes.log().get(3));
+ Utils.sleepQuietly(sleepTimeMills);
+ jobInfoResult = client.getJobInfo(jobExecuteResult);
+ }
+
+ JobInfoResult jobInfo = client.getJobInfo(jobExecuteResult);
+ // 6. Get the result set list (if the user submits multiple SQLs at a time,
+ // multiple result sets will be generated)
+ String resultSet = jobInfo.getResultSetList(client)[0];
+ // 7. get resultContent
+ ResultSetResult resultSetResult = client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build());
+ System.out.println("metadata: " + resultSetResult.getMetadata()); // column name type
+ System.out.println("res: " + resultSetResult.getFileContent()); //row data
+ } catch (Exception e) {
+ e.printStackTrace();// please use log
+ IOUtils.closeQuietly(client);
+ }
+ IOUtils.closeQuietly(client);
+ }
+
+
+ private static JobExecuteResult toSubmit(String user, String code) {
+ // 1. build params
+ // set label map :EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
+ Map<String, Object> labels = new HashMap<String, Object>();
+ labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // required engineType Label
+ labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName");// required execute user and creator eg:hadoop-IDE
+ labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // required codeType
+ // set start up map :engineConn start params
+ Map<String, Object> startupMap = new HashMap<String, Object>(16);
+ // Support setting engine native parameters,For example: parameters of engines such as spark/hive
+ startupMap.put("spark.executor.instances", 2);
+ // setting linkis params
+ startupMap.put("wds.linkis.rm.yarnqueue", "dws");
+
+ // 2. build jobSubmitAction
+ JobSubmitAction jobSubmitAction = JobSubmitAction.builder()
+ .addExecuteCode(code)
+ .setStartupParams(startupMap)
+ .setUser(user) //submit user
+ .addExecuteUser(user) // execute user
+ .setLabels(labels)
+ .build();
+ // 3. to execute
+ return client.submit(jobSubmitAction);
+ }
+}
+```
+
+运行上述的代码即可以完成任务提交/执行/日志/结果集获取等
+
+## 3. Scala测试代码
+
+```scala
+package org.apache.linkis.client.test
+
+import org.apache.commons.io.IOUtils
+import org.apache.commons.lang3.StringUtils
+import org.apache.linkis.common.utils.Utils
+import org.apache.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy
+import org.apache.linkis.httpclient.dws.config.DWSClientConfigBuilder
+import org.apache.linkis.manager.label.constant.LabelKeyConstant
+import org.apache.linkis.ujes.client.request._
+import org.apache.linkis.ujes.client.response._
+import java.util
+import java.util.concurrent.TimeUnit
+
+import org.apache.linkis.ujes.client.UJESClient
+
+object LinkisClientTest {
+ // 1. build config: linkis gateway url
+ val clientConfig = DWSClientConfigBuilder.newBuilder()
+ .addServerUrl("http://127.0.0.1:8088/") //set linkis-mg-gateway url: http://{ip}:{port}
+ .connectionTimeout(30000) //connectionTimeOut
+ .discoveryEnabled(false) //disable discovery
+ .discoveryFrequency(1, TimeUnit.MINUTES) // discovery frequency
+ .loadbalancerEnabled(true) // enable loadbalance
+ .maxConnectionSize(5) // set max Connection
+ .retryEnabled(false) // set retry
+ .readTimeout(30000) //set read timeout
+ .setAuthenticationStrategy(new StaticAuthenticationStrategy()) //AuthenticationStrategy: Linkis authentication supports static and token
+ .setAuthTokenKey("hadoop") // set submit user
+ .setAuthTokenValue("hadoop") // set passwd or token (setAuthTokenValue("BML-AUTH"))
+ .setDWSVersion("v1") //linkis rest version v1
+ .build();
+
+ // 2. new Client(Linkis Client) by clientConfig
+ val client = UJESClient(clientConfig)
+
+ def main(args: Array[String]): Unit = {
+ val user = "hadoop" // execute user 用户需要和AuthTokenKey的值保持一致
+ val executeCode = "df=spark.sql(\"show tables\")\n" +
+ "show(df)"; // code support:sql/hql/py/scala
+ try {
+ // 3. build job and execute
+ println("user : " + user + ", code : [" + executeCode + "]")
+ // 推荐使用submit,支持传递任务label
+ val jobExecuteResult = toSubmit(user, executeCode)
+ println("execId: " + jobExecuteResult.getExecID + ", taskId: " + jobExecuteResult.taskID)
+ // 4. get job info
+ var jobInfoResult = client.getJobInfo(jobExecuteResult)
+ var logFromLen = 0
+ val logSize = 100
+ val sleepTimeMills: Int = 1000
+ while (!jobInfoResult.isCompleted) {
+ // 5. get progress and log
+ val progress = client.progress(jobExecuteResult)
+ println("progress: " + progress.getProgress)
+ val logObj = client.log(jobExecuteResult, logFromLen, logSize)
+ logFromLen = logObj.fromLine
+ val logArray = logObj.getLog
+ // 0: info 1: warn 2: error 3: all
+ if (logArray != null && logArray.size >= 4 && StringUtils.isNotEmpty(logArray.get(3))) {
+ println(s"log: ${logArray.get(3)}")
+ }
+ Utils.sleepQuietly(sleepTimeMills)
+ jobInfoResult = client.getJobInfo(jobExecuteResult)
+ }
+ if (!jobInfoResult.isSucceed) {
+ println("Failed to execute job: " + jobInfoResult.getMessage)
+ throw new Exception(jobInfoResult.getMessage)
+ }
+
+ // 6. Get the result set list (if the user submits multiple SQLs at a time,
+ // multiple result sets will be generated)
+ val jobInfo = client.getJobInfo(jobExecuteResult)
+ val resultSetList = jobInfoResult.getResultSetList(client)
+ println("All result set list:")
+ resultSetList.foreach(println)
+ val oneResultSet = jobInfo.getResultSetList(client).head
+ // 7. get resultContent
+ val resultSetResult: ResultSetResult = client.resultSet(ResultSetAction.builder.setPath(oneResultSet).setUser(jobExecuteResult.getUser).build)
+ println("metadata: " + resultSetResult.getMetadata) // column name type
+ println("res: " + resultSetResult.getFileContent) //row data
+ } catch {
+ case e: Exception => {
+ e.printStackTrace() //please use log
+ }
+ }
+ IOUtils.closeQuietly(client)
+ }
+
+
+ def toSubmit(user: String, code: String): JobExecuteResult = {
+ // 1. build params
+ // set label map :EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
+ val labels: util.Map[String, AnyRef] = new util.HashMap[String, AnyRef]
+ labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // required engineType Label
+ labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName"); // 请求的用户和应用名,两个参数都不能少,其中APPName不能带有"-"建议替换为"_"
+ labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // 指定脚本类型
+
+ val startupMap = new java.util.HashMap[String, AnyRef]()
+ // Support setting engine native parameters,For example: parameters of engines such as spark/hive
+ val instances: Integer = 2
+ startupMap.put("spark.executor.instances", instances)
+ // setting linkis params
+ startupMap.put("wds.linkis.rm.yarnqueue", "default")
+ // 2. build jobSubmitAction
+ val jobSubmitAction = JobSubmitAction.builder
+ .addExecuteCode(code)
+ .setStartupParams(startupMap)
+ .setUser(user) //submit user
+ .addExecuteUser(user) //execute user
+ .setLabels(labels)
+ .build
+ // 3. to execute
+ client.submit(jobSubmitAction)
+ }
+}
+```
+
+## 4. Once SDK 使用
+Linkis-cli客户端支持提交Once类型的任务,引擎进程启动后只运行一次任务,任务结束后自动销毁
+
+OnceEngineConn 通过 LinkisManagerClient 调用 LinkisManager 的 createEngineConn 接口,并将代码发送到用户创建的引擎,然后引擎开始执行
+
+
+## Once模式使用:
+
+1.首先创建一个新的 maven 项目或者在项目中引入以下依赖项
+
+```xml
+<dependency>
+  <groupId>org.apache.linkis</groupId>
+  <artifactId>linkis-computation-client</artifactId>
+  <version>${linkis.version}</version>
+</dependency>
+```
+2.编写一个测试类
+使用Client的前提条件:
+
+```plain
+1.配置正确可用的gateway地址:
+LinkisJobClient.config().setDefaultServerUrl("http://ip:9001");
+2.将引擎参数,配置项,执行code写在code里面:
+ String code = "env {\n"
+ + " spark.app.name = \"SeaTunnel\"\n"
+ + " spark.executor.instances = 2\n"
+ + " spark.executor.cores = 1\n"
+ + " spark.executor.memory = \"1g\"\n"
+ + "}\n"
+ + "\n"
+ + "source {\n"
+ + " Fake {\n"
+ + " result_table_name = \"my_dataset\"\n"
+ + " }\n"
+ + "\n"
+ + "}\n"
+ + "\n"
+ + "transform {\n"
+ + "}\n"
+ + "\n"
+ + "sink {\n"
+ + " Console {}\n"
+ + "}";
+3.创建Once模式对象 SubmittableSimpleOnceJob:
+SubmittableSimpleOnceJob onceJob = LinkisJobClient.once()
+        .simple()
+        .builder()
+        .setCreateService("seatunnel-Test")
+        .setMaxSubmitTime(300000)   // 超时时间
+        .addLabel(LabelKeyUtils.ENGINE_TYPE_LABEL_KEY(), "seatunnel-2.1.2")   // 引擎标签
+        .addLabel(LabelKeyUtils.USER_CREATOR_LABEL_KEY(), "hadoop-seatunnel") // 用户标签
+        .addLabel(LabelKeyUtils.ENGINE_CONN_MODE_LABEL_KEY(), "once")         // 引擎模式标签
+        .addStartupParam(Configuration.IS_TEST_MODE().key(), true)            // 是否开启测试模式
+        .addExecuteUser("hadoop")            // 执行用户
+        .addJobContent("runType", "spark")   // 执行引擎
+        .addJobContent("code", code)         // 执行代码
+        .addJobContent("master", "local[4]")
+        .addJobContent("deploy-mode", "client")
+        .addSource("jobName", "OnceJobTest") // 任务名称
+        .build();
+```
+## 测试类示例代码:
+
+```plain
+package org.apache.linkis.ujes.client
+
+import org.apache.linkis.common.utils.Utils
+import java.util.concurrent.TimeUnit
+import java.util
+import org.apache.linkis.computation.client.LinkisJobBuilder
+import org.apache.linkis.computation.client.once.simple.{SimpleOnceJob, SimpleOnceJobBuilder, SubmittableSimpleOnceJob}
+import org.apache.linkis.computation.client.operator.impl.{EngineConnLogOperator, EngineConnMetricsOperator, EngineConnProgressOperator}
+import org.apache.linkis.computation.client.utils.LabelKeyUtils
+import scala.collection.JavaConverters._
+@Deprecated
+object SqoopOnceJobTest extends App {
+ LinkisJobBuilder.setDefaultServerUrl("http://gateway地址:9001")
+ val logPath = "C:\\Users\\resources\\log4j.properties"
+ System.setProperty("log4j.configurationFile", logPath)
+ val startUpMap = new util.HashMap[String, AnyRef]
+ startUpMap.put("wds.linkis.engineconn.java.driver.memory", "1g")
+ val builder = SimpleOnceJob.builder().setCreateService("Linkis-Client")
+ .addLabel(LabelKeyUtils.ENGINE_TYPE_LABEL_KEY, "sqoop-1.4.6")
+ .addLabel(LabelKeyUtils.USER_CREATOR_LABEL_KEY, "hadoop-Client")
+ .addLabel(LabelKeyUtils.ENGINE_CONN_MODE_LABEL_KEY, "once")
+ .setStartupParams(startUpMap)
+ .setMaxSubmitTime(30000)
+ .addExecuteUser("hadoop")
+ val onceJob = importJob(builder)
+ val time = System.currentTimeMillis()
+ onceJob.submit()
+ println(onceJob.getId)
+ val logOperator = onceJob.getOperator(EngineConnLogOperator.OPERATOR_NAME).asInstanceOf[EngineConnLogOperator]
+ println(onceJob.getECMServiceInstance)
+ logOperator.setFromLine(0)
+ logOperator.setECMServiceInstance(onceJob.getECMServiceInstance)
+ logOperator.setEngineConnType("sqoop")
+ logOperator.setIgnoreKeywords("[main],[SpringContextShutdownHook]")
+ var progressOperator = onceJob.getOperator(EngineConnProgressOperator.OPERATOR_NAME).asInstanceOf[EngineConnProgressOperator]
+ var metricOperator = onceJob.getOperator(EngineConnMetricsOperator.OPERATOR_NAME).asInstanceOf[EngineConnMetricsOperator]
+ var end = false
+ var rowBefore = 1
+ while (!end || rowBefore > 0) {
+ if (onceJob.isCompleted) {
+ end = true
+ metricOperator = null
+ }
+ logOperator.setPageSize(100)
+ Utils.tryQuietly {
+ val logs = logOperator.apply()
+ logs.logs.asScala.foreach(log => {
+ println(log)
+ })
+ rowBefore = logs.logs.size
+ }
+ Thread.sleep(3000)
+ Option(metricOperator).foreach(operator => {
+ if (!onceJob.isCompleted) {
+ println(s"Metric监控: ${operator.apply()}")
+ println(s"进度: ${progressOperator.apply()}")
+ }
+ })
+ }
+ onceJob.isCompleted
+ onceJob.waitForCompleted()
+ println(onceJob.getStatus)
+ println(TimeUnit.SECONDS.convert(System.currentTimeMillis() - time, TimeUnit.MILLISECONDS) + "s")
+ System.exit(0)
+
+ def importJob(jobBuilder: SimpleOnceJobBuilder): SubmittableSimpleOnceJob = {
+ jobBuilder
+ .addJobContent("sqoop.env.mapreduce.job.queuename", "queue_1003_01")
+ .addJobContent("sqoop.mode", "import")
+ .addJobContent("sqoop.args.connect", "jdbc:mysql://数据库地址/库名")
+ .addJobContent("sqoop.args.username", "数据库账户")
+ .addJobContent("sqoop.args.password", "数据库密码")
+ .addJobContent("sqoop.args.query", "select * from linkis_ps_udf_manager where 1=1 and $CONDITIONS")
+ // 表一定要存在,$CONDITIONS 不可缺少
+ .addJobContent("sqoop.args.hcatalog.database", "janicegong_ind")
+ .addJobContent("sqoop.args.hcatalog.table", "linkis_ps_udf_manager_sync2")
+ .addJobContent("sqoop.args.hcatalog.partition.keys", "ds")
+ .addJobContent("sqoop.args.hcatalog.partition.values", "20220708")
+ .addJobContent("sqoop.args.num.mappers", "1")
+ .build()
+ }
+ def exportJob(jobBuilder: SimpleOnceJobBuilder): SubmittableSimpleOnceJob = {
+ jobBuilder
+ .addJobContent("sqoop.env.mapreduce.job.queuename", "queue_1003_01")
+ .addJobContent("sqoop.mode", "import")
+ .addJobContent("sqoop.args.connect", "jdbc:mysql://数据库地址/库名")
+ .addJobContent("sqoop.args.username", "数据库账户")
+ .addJobContent("sqoop.args.password", "数据库密码")
+ .addJobContent("sqoop.args.query", "select * from linkis_ps_udf_manager where 1=1 and $CONDITIONS")
+ // 表一定要存在,$CONDITIONS 不可缺少
+ .addJobContent("sqoop.args.hcatalog.database", "janicegong_ind")
+ .addJobContent("sqoop.args.hcatalog.table", "linkis_ps_udf_manager_sync2")
+ .addJobContent("sqoop.args.hcatalog.partition.keys", "ds")
+ .addJobContent("sqoop.args.hcatalog.partition.values", "20220708")
+ .addJobContent("sqoop.args.num.mappers", "1")
+ .build
+ }
+}
+```
+3.测试程序完成,引擎会自动销毁,不用手动清除
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/user-guide/udf-function.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/user-guide/udf-function.md
new file mode 100644
index 00000000000..975aad8e934
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.4.0/user-guide/udf-function.md
@@ -0,0 +1,175 @@
+---
+title: UDF功能
+sidebar_position: 5
+---
+
+> 详细介绍一下如何使用UDF功能
+
+## 1.UDF创建的整体步骤说明
+### 1 通用类型的UDF函数
+整体步骤说明
+- 在本地按UDF函数格式 编写udf 函数 ,并打包称jar包文件
+- 在【Scriptis >> 工作空间】上传至工作空间对应的目录
+- 在 【管理台>>UDF函数】 创建udf (默认加载)
+- 在任务代码中使用(对于新起的引擎才生效)
+
+**Step1 本地编写jar包**
+
+Hive UDF示例:
+1. 引入 hive 依赖
+```xml
+<dependency>
+  <groupId>org.apache.hive</groupId>
+  <artifactId>hive-exec</artifactId>
+  <version>3.1.3</version>
+</dependency>
+```
+2. 编写UDF 类
+```java
+import org.apache.hadoop.hive.ql.exec.UDF;
+
+public class UDFExample extends UDF {
+ public Integer evaluate(Integer value) {
+ return value == null ? null : value + 1;
+ }
+}
+```
+
+3. 编译打包
+```shell
+mvn package
+```
+
+**Step2【Scriptis >> 工作空间】上传jar包**
+选择对应的文件夹 鼠标右键 选择上传
+
+![](/Images/udf/udf_14.png)
+
+**Step3【管理台>>UDF函数】 创建UDF**
+- 函数名称:符合规则即可,如test_udf_jar 在sql等脚本中使用
+- 函数类型:通用
+- 脚本路径:选择jar包存放的共享目录路径 如 ../../../wds_functions_1_0_0.jar
+- 注册格式:包名+类名,如 com.webank.wedatasphere.willink.bdp.udf.ToUpperCase
+- 使用格式:输入类型与返回类型,需与jar包里定义一致
+- 分类:下拉选择;或者输入自定义目录(会在个人函数下新建目标一级目录)
+
+![](/Images/udf/udf_15.png)
+
+注意:新建的UDF函数默认会被加载,可以在【Scriptis >> UDF函数】页面查看到,方便大家在Scriptis任务编辑时查看;勾选中的UDF函数表明会被加载使用
+
+![](/Images/udf/udf_16.png)
+
+**Step4 使用该udf函数**
+
+在任务中使用上述步骤创建的UDF函数
+函数名为 【创建UDF】 函数名称
+在pyspark中:
+print (sqlContext.sql("select test_udf_jar(name1) from stacyyan_ind.result_sort_1_20200226").collect())
+
+### 2 Spark类型的UDF函数
+整体步骤说明
+- 在【Scriptis >> 工作空间】在需要的目录下新建Spark脚本文件
+- 在 【管理台>>UDF函数】 创建udf (默认加载)
+- 在任务代码中使用(对于新起的引擎才生效)
+
+**Step1 dss-scriptis-新建scala脚本**
+
+![](/Images/udf/udf_17.png)
+
+def helloWorld(str: String): String = "hello, " + str
+
+**Step2 创建UDF**
+- 函数名称:符合规则即可,如test_udf_scala
+- 函数类型:spark
+- 脚本路径:../../../b
+- 注册格式:需与定义的函数名严格保持一致,如helloWorld
+- 使用格式:输入类型与返回类型,需与定义一致
+- 分类:下拉选择dss-scriptis-UDF函数-个人函数下存在的一级目录;或者输入自定义目录(会在个人函数下新建目标一级目录)
+
+![](/Images/udf/udf_18.png)
+
+
+**Step3 使用该udf函数**
+
+在任务中 使用上述步骤创建新的udf 函数
+函数名为 【创建UDF】 函数名称
+- 在scala中
+ val s=sqlContext.sql("select test_udf_scala(name1)
+ from stacyyan_ind.result_sort_1_20200226")
+ show(s)
+- 在pyspark中
+ print(sqlContext.sql("select test_udf_scala(name1)
+ from stacyyan_ind.result_sort_1_20200226").collect());
+- 在sql中
+ select test_udf_scala(name1) from stacyyan_ind.result_sort_1_20200226;
+
+### 3 python函数
+整体步骤说明
+- 在【Scriptis >> 工作空间】在需要的目录下新建Python脚本文件
+- 在 【管理台>>UDF函数】 创建udf (默认加载)
+- 在任务代码中使用(对于新起的引擎才生效)
+
+**Step1 dss-scriptis-新建pyspark脚本**
+
+![](/Images/udf/udf_19.png)
+
+def addation(a, b):
+    return a + b
+**Step2 创建UDF**
+- 函数名称:符合规则即可,如test_udf_py
+- 函数类型:spark
+- 脚本路径:../../../a
+- 注册格式:需定义的函数名严格保持一致,如addation
+- 使用格式:输入类型与返回类型,需与定义一致
+- 分类:下拉选择dss-scriptis-UDF函数-个人函数下存在的一级目录;或者输入自定义目录(会在个人函数下新建目标一级目录)
+
+![](/Images/udf/udf_20.png)
+
+**Step3 使用该udf函数**
+在任务中 使用上述步骤创建新的udf 函数
+函数名为 【创建UDF】 函数名称
+- 在pyspark中
+ print(sqlContext.sql("select test_udf_py(pv,impression) from neiljianliu_ind.alias where entityid=504059 limit 50").collect());
+- 在sql中
+ select test_udf_py(pv,impression) from neiljianliu_ind.alias where entityid=504059 limit 50
+
+### 4 scala函数
+整体步骤说明
+- 在【Scriptis >> 工作空间】在需要的目录下新建Spark Scala脚本文件
+- 在 【管理台>>UDF函数】 创建udf (默认加载)
+- 在任务代码中使用(对于新起的引擎才生效)
+
+**Step1 dss-scriptis-新建scala脚本**
+def hellozdy(str:String):String = "hellozdy,haha " + str
+
+**Step2 创建函数**
+- 函数名称:需与定义的函数名严格保持一致,如hellozdy
+- 函数类型:自定义函数
+- 脚本路径:../../../d
+- 使用格式:输入类型与返回类型,需与定义一致
+- 分类:下拉选择dss-scriptis-方法函数-个人函数下存在的一级目录;或者输入自定义目录(会在个人函数下新建目标一级目录)
+
+**Step3 使用该函数**
+在任务中 使用上述步骤创建新的udf 函数
+函数名为 【创建UDF】 函数名称
+val a = hellozdy("abcd");
+print(a)
+
+### 5 常见的使用问题
+#### 5.1 UDF函数加载失败
+"FAILED: SemanticException [Error 10011]: Invalid function xxxx"
+
+![](/Images/udf/udf_10.png)
+
+- 首先检查UDF函数配置是否正确:
+
+ ![](/Images/udf/udf_11.png)
+
+- 注册格式即为函数路径名称:
+
+ ![](/Images/udf/udf_12.png)
+
+- 检查scriptis-udf函数-查看加载的函数是否勾选,当函数未勾选时,引擎启动时将不会加载udf
+
+ ![](/Images/udf/udf_13.png)
+
+- 检查引擎是否已加载UDF,如果未加载,请重新另起一个引擎或者重启当前引擎
+ 备注:只有当引擎初始化时,才会加载UDF,中途新增UDF,当前引擎将无法感知并且无法进行加载
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/about/_category_.json b/versioned_docs/version-1.4.0/about/_category_.json
new file mode 100644
index 00000000000..6b41c038f08
--- /dev/null
+++ b/versioned_docs/version-1.4.0/about/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "About Linkis",
+ "position": 1.0
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/about/configuration.md b/versioned_docs/version-1.4.0/about/configuration.md
new file mode 100644
index 00000000000..63b0ea42e26
--- /dev/null
+++ b/versioned_docs/version-1.4.0/about/configuration.md
@@ -0,0 +1,180 @@
+---
+title: Recommended Configuration
+sidebar_position: 3
+---
+
+
+## 1. Recommended configuration of hardware and software environment
+
+Linkis builds a layer of computing middleware between the upper application and the underlying engine. As an open source distributed computing middleware, it can be well deployed and run on Intel architecture servers and mainstream virtualization environments, and supports mainstream Linux operating system environments
+
+### 1.1. Linux operating system version requirements
+
+| OS | Version |
+| --- | --- |
+| Red Hat Enterprise Linux | 7.0 and above |
+| CentOS | 7.0 and above |
+| Oracle Enterprise Linux | 7.0 and above |
+| Ubuntu LTS | 16.04 and above |
+
+> **Note:** The above Linux operating systems can run on physical servers and mainstream virtualization environments such as VMware, KVM, and XEN
+
+### 1.2. Server recommended configuration
+
+Linkis supports 64-bit general-purpose hardware server platforms running on the Intel x86-64 architecture. The following recommendations are made for the server hardware configuration of the production environment:
+
+#### Production Environment
+
+| **CPU** | **Memory** | **Disk type** | **Network** | **Number of instances** |
+| --- | --- | --- | --- | --- |
+| 16 cores + | 32GB + | SAS | Gigabit network card | 1+ |
+
+> **Note:**
+>
+> - The above recommended configuration is the minimum configuration for deploying Linkis, and a higher configuration is strongly recommended for production environments
+> - The hard disk size configuration is recommended to be 50GB+, and the system disk and data disk are separated
+
+### 1.3. Software requirements
+
+Linkis binary packages are compiled based on the following software versions:
+
+| Component | Version | Description |
+| --- | --- | --- |
+| Hadoop | 3.3.4 | |
+| Hive | 3.1.3 | |
+| Spark | 3.2.1 | |
+| Flink | 1.12.2 | |
+| openLooKeng | 1.5.0 | |
+| Sqoop | 1.4.6 | |
+| ElasticSearch | 7.6.2 | |
+| Presto | 0.234 | |
+| Python | Python2 | |
+
+> **Note:**
+> If the locally installed component version is inconsistent with the above, you need to modify the corresponding component version and compile the binary package yourself for installation.
+
+### 1.4. Client web browser requirements
+
+Linkis recommends Chrome version 73 or later for front-end access
+
+
+## 2. Common scenarios
+
+### 2.1 Open test mode
+During development a password-free interface is often needed; the following settings can be replaced or appended in `linkis.properties`.
+
+| parameter name | default value | description |
+| ------------------------- | ------- | ------------------------------------------------------------------|
+| wds.linkis.test.mode | false | Whether to enable debugging mode. If set to true, all microservices support password-free login and all EngineConns open remote debugging ports |
+| wds.linkis.test.user | hadoop | The default user for password-free login when wds.linkis.test.mode=true |
+
+![](./images/test-mode.png)
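+
+For reference, a minimal sketch of the corresponding `linkis.properties` entries (illustrative values only):
+
+```properties
+# Enable password-free test mode (for debugging only, do not use in production)
+wds.linkis.test.mode=true
+# Default user for password-free login when test mode is enabled
+wds.linkis.test.user=hadoop
+```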
+
+
+### 2.2 Login user settings
+Apache Linkis manages the admin user through configuration files by default; these settings can be replaced or appended in `linkis-mg-gateway.properties`. For multi-user access, use the LDAP settings described in the next section.
+
+| parameter name | default value | description |
+| ------------------------- | ------- | ------------------------------------------------------------------|
+| wds.linkis.admin.user | hadoop | Admin username |
+| wds.linkis.admin.password | 123456 | Admin user password |
+
+![](./images/login-user.png)
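+
+A minimal sketch of the corresponding `linkis-mg-gateway.properties` entries, using the defaults from the table above:
+
+```properties
+# Admin account used for console login
+wds.linkis.admin.user=hadoop
+wds.linkis.admin.password=123456
+```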
+
+
+### 2.3 LDAP Settings
+Apache Linkis can access LDAP through parameters to achieve multi-user management, and this configuration can be replaced or added in `linkis-mg-gateway.properties`.
+
+| parameter name | default value | description |
+| ------------------------- | ------- | ------------------------------------------------------------------|
+| wds.linkis.ldap.proxy.url | None | LDAP URL address |
+| wds.linkis.ldap.proxy.baseDN | None | LDAP baseDN address |
+| wds.linkis.ldap.proxy.userNameFormat | None | |
+
+![](./images/ldap.png)
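+
+A minimal sketch of the corresponding `linkis-mg-gateway.properties` entries; the URL, baseDN and name format below are placeholders and must be replaced with your own LDAP settings:
+
+```properties
+# LDAP access (placeholder values)
+wds.linkis.ldap.proxy.url=ldap://127.0.0.1:389/
+wds.linkis.ldap.proxy.baseDN=dc=example,dc=com
+wds.linkis.ldap.proxy.userNameFormat=cn=%s,ou=user,dc=example,dc=com
+```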
+
+### 2.4 Turn off resource checking
+When debugging, task submission may fail with exceptions such as insufficient resources; resource checking can be turned off by replacing or appending this configuration in `linkis-cg-linkismanager.properties`.
+
+| parameter name | default value | description |
+| ------------------------- | ------- | ------------------------------------------------------------------|
+| wds.linkis.manager.rm.request.enable | true | Whether to enable resource checking |
+
+![](./images/resource-enable.png)
+
+### 2.5 Enable engine debugging
+Apache Linkis EC can enable debugging mode, and this configuration can be replaced or added in `linkis-cg-linkismanager.properties`.
+
+| parameter name | default value | description |
+| ------------------------- | ------- | ------------------------------------------------------------------|
+| wds.linkis.engineconn.debug.enable | true | Whether to enable engine debugging |
+
+![](./images/engine-debug.png)
+
+### 2.6 Hive metadata configuration
+The public-service service of Apache Linkis needs to read hive metadata; this configuration can be replaced or appended in `linkis-ps-publicservice.properties`.
+
+| parameter name | default value | description |
+| ------------------------- | ------- | ------------------------------------------------------------------|
+| hive.meta.url | None | The URL of the HiveMetaStore database. |
+| hive.meta.user | none | user of the HiveMetaStore database |
+| hive.meta.password | None | password for the HiveMetaStore database |
+
+![](./images/hive-meta.png)
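+
+A minimal sketch of the corresponding `linkis-ps-publicservice.properties` entries; the connection values are placeholders for your HiveMetaStore database:
+
+```properties
+# HiveMetaStore database connection (placeholder values)
+hive.meta.url=jdbc:mysql://127.0.0.1:3306/hive_meta?characterEncoding=UTF-8
+hive.meta.user=hive
+hive.meta.password=hive
+```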
+
+### 2.7 Linkis database configuration
+Apache Linkis uses MySQL as its data storage by default; this configuration can be replaced or appended in `linkis.properties`.
+
+| parameter name | default value | description |
+| ------------------------- | ------- | ------------------------------------------------------------------|
+| wds.linkis.server.mybatis.datasource.url | None | Database connection string, for example: jdbc:mysql://127.0.0.1:3306/dss?characterEncoding=UTF-8 |
+| wds.linkis.server.mybatis.datasource.username | None | Database user name, for example: root |
+| wds.linkis.server.mybatis.datasource.password | None | Database password, for example: root |
+
+![](./images/linkis-db.png)
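+
+A minimal sketch of the corresponding `linkis.properties` entries; the database name and credentials are placeholders:
+
+```properties
+# Linkis MySQL storage (placeholder values)
+wds.linkis.server.mybatis.datasource.url=jdbc:mysql://127.0.0.1:3306/linkis?characterEncoding=UTF-8
+wds.linkis.server.mybatis.datasource.username=root
+wds.linkis.server.mybatis.datasource.password=root
+```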
+
+### 2.8 Linkis Session cache configuration
+Apache Linkis supports using redis for session sharing; this configuration can be replaced or appended in `linkis.properties`.
+
+| parameter name | default value | description |
+| ------------------------- | ------- | ------------------------------------------------------------------|
+| linkis.session.redis.cache.enabled | None | Whether to enable Redis session sharing |
+| linkis.session.redis.host | 127.0.0.1 | Redis hostname |
+| linkis.session.redis.port | 6379 | Redis port |
+| linkis.session.redis.password | None | password |
+
+![](./images/redis.png)
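+
+A minimal sketch of the corresponding `linkis.properties` entries; host, port and password are placeholders for your Redis instance:
+
+```properties
+# Redis-based session sharing (placeholder values)
+linkis.session.redis.cache.enabled=true
+linkis.session.redis.host=127.0.0.1
+linkis.session.redis.port=6379
+linkis.session.redis.password=your-redis-password
+```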
+
+### 2.9 Linkis module development configuration
+When developing Apache Linkis, you can use this parameter to customize the database, Rest interface, and entity objects of the loading module; you can modify it in `linkis-ps-publicservice.properties`, and use commas to separate multiple modules.
+
+| parameter name | default value | description |
+| ------------------------- | ------- | ------------------------------------------------------------------|
+| wds.linkis.server.restful.scan.packages | None | restful scan packages, for example: org.apache.linkis.basedatamanager.server.restful |
+| wds.linkis.server.mybatis.mapperLocations | None | Mybatis mapper file path, for example: classpath*:org/apache/linkis/basedatamanager/server/dao/mapper/*.xml|
+| wds.linkis.server.mybatis.typeAliasesPackage | None | Entity alias scanning package, for example: org.apache.linkis.basedatamanager.server.domain |
+| wds.linkis.server.mybatis.BasePackage | None | Database dao layer scan, for example: org.apache.linkis.basedatamanager.server.dao |
+
+![](./images/deverlop-conf.png)
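+
+A minimal sketch of the corresponding `linkis-ps-publicservice.properties` entries, using the example packages from the table above (separate multiple modules with commas):
+
+```properties
+# Module development settings (example packages)
+wds.linkis.server.restful.scan.packages=org.apache.linkis.basedatamanager.server.restful
+wds.linkis.server.mybatis.mapperLocations=classpath*:org/apache/linkis/basedatamanager/server/dao/mapper/*.xml
+wds.linkis.server.mybatis.typeAliasesPackage=org.apache.linkis.basedatamanager.server.domain
+wds.linkis.server.mybatis.BasePackage=org.apache.linkis.basedatamanager.server.dao
+```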
+
+### 2.10 Linkis gateway routing configuration
+This parameter can be used to customize the route of loading modules during Apache Linkis development; it can be modified in `linkis.properties`, and commas are used to separate multiple modules.
+
+| parameter name | default value | description |
+| ------------------------- | ------- | ------------------------------------------------------------------|
+| wds.linkis.gateway.conf.publicservice.list | cs,contextservice,data-source-manager,metadataQuery,metadatamanager,query,jobhistory,application,configuration,filesystem,udf,variable,microservice,errorcode,bml,datasource,basedata-manager | Routing modules supported by the publicservice service |
+
+![](./images/list-conf.png)
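+
+A minimal sketch of the corresponding `linkis.properties` entry, using the default module list from the table above:
+
+```properties
+# Modules routed through linkis-ps-publicservice (comma separated)
+wds.linkis.gateway.conf.publicservice.list=cs,contextservice,data-source-manager,metadataQuery,metadatamanager,query,jobhistory,application,configuration,filesystem,udf,variable,microservice,errorcode,bml,datasource,basedata-manager
+```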
+
+### 2.11 Linkis file system and material storage path
+These parameters customize the storage paths used by the Linkis file system and the BML material library; they can be modified in `linkis.properties`.
+
+| parameter name | default value | description |
+| ------------------------- | ------- | ------------------------------------------------------------------|
+| wds.linkis.filesystem.root.path | file:///tmp/linkis/ | Local user directory, a folder named after the user name needs to be created under this directory |
+| wds.linkis.filesystem.hdfs.root.path | hdfs:///tmp/ | HDFS user directory |
+| wds.linkis.bml.is.hdfs | true | Whether to enable hdfs |
+| wds.linkis.bml.hdfs.prefix | /apps-data | hdfs path |
+| wds.linkis.bml.local.prefix | /apps-data | local path |
+
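+A minimal sketch of the corresponding `linkis.properties` entries, using the defaults from the table above:
+
+```properties
+# File system and material (BML) storage paths
+wds.linkis.filesystem.root.path=file:///tmp/linkis/
+wds.linkis.filesystem.hdfs.root.path=hdfs:///tmp/
+wds.linkis.bml.is.hdfs=true
+wds.linkis.bml.hdfs.prefix=/apps-data
+wds.linkis.bml.local.prefix=/apps-data
+```
+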
+![](./images/fs-conf.png)
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/about/glossary.md b/versioned_docs/version-1.4.0/about/glossary.md
new file mode 100644
index 00000000000..28fa28c1cca
--- /dev/null
+++ b/versioned_docs/version-1.4.0/about/glossary.md
@@ -0,0 +1,103 @@
+---
+title: Glossary
+sidebar_position: 4
+---
+
+## 1. Glossary
+
+Linkis is developed based on the microservice architecture, and its services can be divided into 3 types of service groups (groups): computing governance service group, public enhancement service group and microservice governance service group.
+- Computation Governance Services: The core service for processing tasks, supporting the 4 main stages of the computing task/request processing flow (submit->prepare->execute->result);
+- Public Enhancement Services: Provide basic support services, including context services, engine/udf material management services, job history and other public services and data source management services;
+- Microservice Governance Services: Customized Spring Cloud Gateway, Eureka. Provides a base for microservices
+
+The following will introduce the key Glossary and services of these three groups of services:
+
+### 1.1 Key module nouns
+
+| Abbreviation | Name | Main Functions |
+|--------- |------------------------- |---------------------|
+| MG/mg | Microservice Governance | Microservice Governance |
+| CG/cg | Computation Governance | Computation Governance |
+| EC/ec | EngineConn | Engine Connector |
+| - | Engine | The underlying computing storage engine, such as spark, hive, shell |
+| ECM/ecm | EngineConnManager | Management of Engine Connectors |
+| ECP/ecp | EngineConnPlugin | Engine Connector Plugin |
+| RM/rm | ResourceManager | Resource manager for managing task resource and user resource usage and control |
+| AM/am | AppManager | Application Manager to manage EngineConn and ECM services |
+| LM/lm | LinkisManager | Linkis manager service, including: RM, AM, LabelManager and other modules |
+| PES/pes | Public Enhancement Services | Public Enhancement Services |
+| - | Orchestrator | Orchestrator, used for Linkis task orchestration, task multi-active, mixed calculation, AB and other policy support |
+| UJES | Unified Job Execute Service | Unified Job Execute Service |
+| DDL/ddl | Data Definition Language | Database Definition Language |
+| DML/dml | Data Manipulation Language | Data Manipulation Language |
+
+### 1.2 Key task nouns
+
+- JobRequest: job request, corresponding to the job submitted by the Client to Linkis, including the execution content, user, label and other information of the job
+- RuntimeMap: task runtime parameters that take effect at the task level, such as the data source information placed for multi-data-source tasks
+- StartupMap: engine connector startup parameters, used when starting the EngineConn and taking effect for the EngineConn process, such as setting spark.executor.memory=4G
+- UserCreator: Task creator information: contains user information User and Client submitted application information Creator, used for tenant isolation of tasks and resources
+- submitUser: task submit user
+- executeUser: the real execution user of the task
+- JobSource: Job source information, record the IP or script address of the job
+- errorCode: error code, task error code information
+- JobHistory: task history persistence module, providing historical information query of tasks
+- ResultSet: the result set corresponding to the task, saved with the .dolphin file suffix by default
+- JobInfo: Job runtime information, such as logs, progress, resource information, etc.
+- Resource: resource information, each task consumes resources
+- RequestTask: The smallest execution unit of EngineConn, the task unit transmitted to EngineConn for execution
+
+
+
+## 2. Service Introduction
+
+This section mainly introduces the services of Linkis, what services will be available after Linkis is started, and the functions of the services.
+
+### 2.1 Service List
+
+After Linkis is started, the microservices included in each service group (group) are as follows:
+
+| Belonging to the microservice group (group) | Service name | Main functions |
+| ---- | ---- | ---- |
+| MGS | linkis-mg-eureka | Responsible for service registration and discovery, other upstream components will also reuse the linkis registry, such as dss|
+| MGS | linkis-mg-gateway | As the gateway entrance of Linkis, it is mainly responsible for request forwarding and user access authentication |
+| CGS | linkis-cg-entrance | The task submission entry is a service responsible for receiving, scheduling, forwarding execution requests, and life cycle management of computing tasks, and can return calculation results, logs, and progress to the caller |
+| CGS | linkis-cg-linkismanager| Provides AppManager (application management), ResourceManager (resource management), LabelManager (label management), Engine connector plug-in manager capabilities |
+| CGS | linkis-cg-engineconnmanager | Manager for EngineConn, providing lifecycle management of engines |
+| CGS | linkis-cg-engineconn| The engine connector service is the actual connection service with the underlying computing storage engine (Hive/Spark), including session information with the actual engine. For the underlying computing storage engine, it acts as a client and is triggered and started by tasks|
+| PES | linkis-ps-publicservice|Public Enhanced Service Group Module Service, which provides functions such as unified configuration management, context service, BML material library, data source management, microservice management, and historical task query for other microservice modules |
+
+The services of the open-source version visible after startup are as follows:
+![Linkis_Eureka](/Images/deployment/Linkis_combined_eureka.png)
+
+### 2.2 Public enhancement services in detail
+After version 1.3.1, the Public Enhancement Service group (PES) merges the related module services into a single service, linkis-ps-publicservice, by default. Separate deployment is also supported; you only need to package and deploy the service of the corresponding module.
+The combined public enhanced service mainly includes the following functions:
+
+| Abbreviation | Service Name | Main Functions |
+|--------- |------------------------- |----------------------|
+| CS/cs | Context Service | Context Service, used to transfer result sets, variables, files, etc. between tasks |
+| UDF/udf | UDF | UDF management module, provides management functions for UDF and functions, supports sharing and version control |
+| variable | Variable | Global custom module, providing management functions for global custom variables |
+| script | Script-dev | Script file operation service, providing script editing and saving, script directory management functions |
+| jobHistory | JobHistory | Task history persistence module, providing historical information query of tasks |
+| BML/bml | BigData Material Library | Material library service, providing storage and management of engine/user material files |
+| - | Configuration | Configuration management, providing management and viewing of configuration parameters |
+| - | instance-label | Microservice management service, providing mapping management functions for microservices and routing labels |
+| - | error-code | Error code management, providing management and query of common error codes |
+| DMS/dms | Data Source Manager Service | Data Source Management Service |
+| MDS/mds | MetaData Manager Service | Metadata Management Service |
+| - | linkis-metadata | Provides Hive metadata information viewing function, which will be merged into MDS later |
+| - | basedata-manager | Basic data management, used to manage Linkis' own basic metadata information |
+
+## 3. Module Introduction
+This section mainly introduces the major modules and functions of Linkis.
+
+- linkis-commons: The public modules of linkis, including public tool modules, RPC modules, microservice foundation and other modules
+- linkis-computation-governance: Computing governance module, including modules for computing governance multiple services: Entrance, LinkisManager, EngineConnManager, EngineConn, etc.
+- linkis-engineconn-plugins: Engine connector plugin module, contains all engine connector plugin implementations
+- linkis-extensions: The extension enhancement module of Linkis, not a necessary function module, now mainly includes the IO module for file proxy operation
+- linkis-orchestrator: Orchestration module for Linkis task orchestration, advanced strategy support such as task multi-active, mixed calculation, AB, etc.
+- linkis-public-enhancements: public enhancement module, which contains all public services for invoking linkis internal and upper-layer application components
+- linkis-spring-cloud-services: Spring cloud related service modules, including gateway, registry, etc.
+- linkis-web: front-end module
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/about/images/deverlop-conf.png b/versioned_docs/version-1.4.0/about/images/deverlop-conf.png
new file mode 100644
index 00000000000..3d5fc8af601
Binary files /dev/null and b/versioned_docs/version-1.4.0/about/images/deverlop-conf.png differ
diff --git a/versioned_docs/version-1.4.0/about/images/engine-debug.png b/versioned_docs/version-1.4.0/about/images/engine-debug.png
new file mode 100644
index 00000000000..788bd2b58f0
Binary files /dev/null and b/versioned_docs/version-1.4.0/about/images/engine-debug.png differ
diff --git a/versioned_docs/version-1.4.0/about/images/fs-conf.png b/versioned_docs/version-1.4.0/about/images/fs-conf.png
new file mode 100644
index 00000000000..85c4234a9b4
Binary files /dev/null and b/versioned_docs/version-1.4.0/about/images/fs-conf.png differ
diff --git a/versioned_docs/version-1.4.0/about/images/hive-meta.png b/versioned_docs/version-1.4.0/about/images/hive-meta.png
new file mode 100644
index 00000000000..50c02906a77
Binary files /dev/null and b/versioned_docs/version-1.4.0/about/images/hive-meta.png differ
diff --git a/versioned_docs/version-1.4.0/about/images/ldap.png b/versioned_docs/version-1.4.0/about/images/ldap.png
new file mode 100644
index 00000000000..9625ae20be0
Binary files /dev/null and b/versioned_docs/version-1.4.0/about/images/ldap.png differ
diff --git a/versioned_docs/version-1.4.0/about/images/linkis-db.png b/versioned_docs/version-1.4.0/about/images/linkis-db.png
new file mode 100644
index 00000000000..35f7f5573df
Binary files /dev/null and b/versioned_docs/version-1.4.0/about/images/linkis-db.png differ
diff --git a/versioned_docs/version-1.4.0/about/images/linkis-intro-01.png b/versioned_docs/version-1.4.0/about/images/linkis-intro-01.png
new file mode 100644
index 00000000000..5c672c8a931
Binary files /dev/null and b/versioned_docs/version-1.4.0/about/images/linkis-intro-01.png differ
diff --git a/versioned_docs/version-1.4.0/about/images/linkis-intro-03.png b/versioned_docs/version-1.4.0/about/images/linkis-intro-03.png
new file mode 100644
index 00000000000..3ba32d84349
Binary files /dev/null and b/versioned_docs/version-1.4.0/about/images/linkis-intro-03.png differ
diff --git a/versioned_docs/version-1.4.0/about/images/list-conf.png b/versioned_docs/version-1.4.0/about/images/list-conf.png
new file mode 100644
index 00000000000..d19c194a023
Binary files /dev/null and b/versioned_docs/version-1.4.0/about/images/list-conf.png differ
diff --git a/versioned_docs/version-1.4.0/about/images/login-user.png b/versioned_docs/version-1.4.0/about/images/login-user.png
new file mode 100644
index 00000000000..477c634f1d4
Binary files /dev/null and b/versioned_docs/version-1.4.0/about/images/login-user.png differ
diff --git a/versioned_docs/version-1.4.0/about/images/redis.png b/versioned_docs/version-1.4.0/about/images/redis.png
new file mode 100644
index 00000000000..3a064640613
Binary files /dev/null and b/versioned_docs/version-1.4.0/about/images/redis.png differ
diff --git a/versioned_docs/version-1.4.0/about/images/resource-enable.png b/versioned_docs/version-1.4.0/about/images/resource-enable.png
new file mode 100644
index 00000000000..973fcee8409
Binary files /dev/null and b/versioned_docs/version-1.4.0/about/images/resource-enable.png differ
diff --git a/versioned_docs/version-1.4.0/about/images/test-mode.png b/versioned_docs/version-1.4.0/about/images/test-mode.png
new file mode 100644
index 00000000000..3466b1b8857
Binary files /dev/null and b/versioned_docs/version-1.4.0/about/images/test-mode.png differ
diff --git a/versioned_docs/version-1.4.0/about/introduction.md b/versioned_docs/version-1.4.0/about/introduction.md
new file mode 100644
index 00000000000..84f4a47267c
--- /dev/null
+++ b/versioned_docs/version-1.4.0/about/introduction.md
@@ -0,0 +1,113 @@
+---
+title: Introduction
+sidebar_position: 1
+---
+
+ Linkis builds a layer of computation middleware between upper-layer applications and underlying engines. By using the standard interfaces provided by Linkis, such as REST/WS/JDBC, upper-layer applications can easily access underlying engines such as MySQL/Spark/Hive/Presto/Flink, and share user resources like unified variables, scripts, UDFs, functions and resource files. At the same time, Linkis provides data source and metadata management services through standard REST interfaces.
+
+As a computation middleware, Linkis provides powerful connectivity, reuse, orchestration, expansion, and governance capabilities. By decoupling the application layer and the engine layer, it simplifies the complex network call relationship, and thus reduces the overall complexity and saves the development and maintenance costs as well.
+
+Since its first release in 2019, Linkis has accumulated more than **700** trial companies and **1000+** sandbox trial users, covering diverse industries such as finance, banking, telecommunications, manufacturing and internet companies. Many companies already use Linkis as the unified entrance to the underlying computation and storage engines of their big data platforms.
+
+
+![linkis-intro-01](images/linkis-intro-01.png)
+
+![linkis-intro-03](images/linkis-intro-03.png)
+
+## Features
+
+- **Support for diverse underlying computation storage engines** : Spark, Hive, Python, Shell, Flink, JDBC, Pipeline, Sqoop, OpenLooKeng, Presto, ElasticSearch, Trino, SeaTunnel, etc.;
+
+- **Support for diverse language** : SparkSQL, HiveSQL, Python, Shell, Pyspark, Scala, JSON and Java;
+
+- **Powerful computing governance capability** : It can provide task routing, load balancing, multi-tenant, traffic control, resource control and other capabilities based on multi-level labels;
+
+- **Support full stack computation/storage engine** : The ability to receive, execute and manage tasks and requests for various compute and storage engines, including offline batch tasks, interactive query tasks, real-time streaming tasks and data lake tasks;
+
+- **Unified context service** : supports cross-user, system and computing engine to associate and manage user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, UDFs, etc., one setting, automatic reference everywhere;
+
+- **Unified materials** : provides system-level and user-level material management; materials can be shared and transferred across users and systems;
+
+- **Unified data source management** : provides the ability to add, delete, check and change information of Hive, ElasticSearch, Mysql, Kafka, MongoDB and other data sources, version control, connection test, and query metadata information of corresponding data sources;
+
+- **Error code capability** : provides error codes and solutions for common errors of tasks, which is convenient for users to locate problems by themselves;
+
+
+## Supported engine types
+
+| **Engine name** | **Support underlying component version (default dependency version)** | **Linkis Version Requirements** | **Included in Release Package By Default** | **Description** |
+|:---- |:---- |:---- |:---- |:---- |
+|Spark|Apache 2.0.0~2.4.7, CDH >= 5.4.0, (default Apache Spark 2.4.3)|\>=1.0.3|Yes|Spark EngineConn, supports SQL, Scala, Pyspark and R code|
+|Hive|Apache >= 1.0.0, CDH >= 5.4.0, (default Apache Hive 2.3.3)|\>=1.0.3|Yes|Hive EngineConn, supports HiveQL code|
+|Python|Python >= 2.6, (default Python2*)|\>=1.0.3|Yes|Python EngineConn, supports python code|
+|Shell|Bash >= 2.0|\>=1.0.3|Yes|Shell EngineConn, supports Bash shell code|
+|JDBC|MySQL >= 5.0, Hive >=1.2.1, (default Hive-jdbc 2.3.4)|\>=1.0.3|No |JDBC EngineConn, already supports MySQL, Oracle, KingBase, PostgreSQL, SqlServer, DB2, Greenplum, DM, Doris, ClickHouse, TiDB, Starrocks, GaussDB and OceanBase, and can be quickly extended to support other engines with a JDBC driver package, such as SQLite|
+|Flink |Flink >= 1.12.2, (default Apache Flink 1.12.2)|\>=1.0.2|No |Flink EngineConn, supports FlinkSQL code, also supports starting a new Yarn in the form of Flink Jar Application|
+|Pipeline|-|\>=1.0.2|No|Pipeline EngineConn, supports file import and export|
+|openLooKeng|openLooKeng >= 1.5.0, (default openLookEng 1.5.0)|\>=1.1.1|No|openLooKeng EngineConn, supports querying data virtualization engine with Sql openLooKeng|
+|Sqoop| Sqoop >= 1.4.6, (default Apache Sqoop 1.4.6)|\>=1.1.2|No|Sqoop EngineConn, support data migration tool Sqoop engine|
+|Presto|Presto >= 0.180|\>=1.2.0|No|Presto EngineConn, supports Presto SQL code|
+|ElasticSearch|ElasticSearch >=6.0|\>=1.2.0|No|ElasticSearch EngineConn, supports SQL and DSL code|
+|Trino | Trino >=371 | >=1.3.1 | No | Trino EngineConn, supports Trino SQL code |
+|Seatunnel | Seatunnel >=2.1.2 | >=1.3.1 | No | Seatunnel EngineConn, supports Seatunnel SQL code |
+
+## Download
+
+Please go to the [Linkis releases page](https://github.com/apache/linkis/releases) to download a compiled distribution or a source code package of Linkis.
+
+## Compile and deploy
+Please follow [Compile Guide](../development/build.md) to compile Linkis from source code.
+Please refer to [Deployment_Documents](../deployment/deploy-quick.md) to do the deployment.
+
+## Examples and Guidance
+- [Engine Usage Guidelines](../engine-usage/overview.md)
+- [API Documentation](../api/overview.md)
+
+## Documentation
+
+The documentation of linkis is in [Linkis-WebSite](https://github.com/apache/linkis-website)
+
+## Architecture
+Linkis services could be divided into three categories: computation governance services, public enhancement services and microservice governance services.
+- The computation governance services, support the 3 major stages of processing a task/request: submission -> preparation -> execution.
+- The public enhancement services, including the material library service, context service, and data source service.
+- The microservice governance services, including Spring Cloud Gateway, Eureka and Open Feign.
+
+Below is the Linkis architecture diagram. You can find more detailed architecture docs in [Architecture](../architecture/overview.md).
+![architecture](/Images/Linkis_1.0_architecture.png)
+
+Based on Linkis the computation middleware, we've built a lot of applications and tools on top of it in the big data platform suite [WeDataSphere](https://github.com/WeBankFinTech/WeDataSphere). Below are the currently available open-source projects.
+
+![wedatasphere_stack_Linkis](/Images/wedatasphere_stack_Linkis.png)
+
+- [**DataSphere Studio** - Data Application Integration& Development Framework](https://github.com/WeBankFinTech/DataSphereStudio)
+
+- [**Scriptis** - Data Development IDE Tool](https://github.com/WeBankFinTech/Scriptis)
+
+- [**Visualis** - Data Visualization Tool](https://github.com/WeBankFinTech/Visualis)
+
+- [**Schedulis** - Workflow Task Scheduling Tool](https://github.com/WeBankFinTech/Schedulis)
+
+- [**Qualitis** - Data Quality Tool](https://github.com/WeBankFinTech/Qualitis)
+
+- [**MLLabis** - Machine Learning Notebook IDE](https://github.com/WeBankFinTech/prophecis)
+
+More projects upcoming, please stay tuned.
+
+## Contributing
+
+Contributions are always welcome; we need more contributors to build Linkis together, whether through code, documentation or other support that helps the community.
+For code and documentation contributions, please follow the [contribution guide](/community/how-to-contribute).
+
+## Contact Us
+
+For any questions or suggestions, please kindly submit an issue.
+You can scan the QR code below to join our WeChat group for a more immediate response.
+
+![introduction05](/Images/wedatasphere_contact_01.png)
+
+Meetup videos on [Bilibili](https://space.bilibili.com/598542776?from=search&seid=14344213924133040656).
+
+## Who is Using Linkis
+
+We opened [an issue](https://github.com/apache/linkis/issues/23) for users to feedback and record who is using Linkis.
+Since its first release in 2019, Linkis has accumulated more than **700** trial companies and **1000+** sandbox trial users, covering diverse industries such as finance, banking, telecommunications, manufacturing and internet companies.
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/api/_category_.json b/versioned_docs/version-1.4.0/api/_category_.json
new file mode 100644
index 00000000000..51de50a69b7
--- /dev/null
+++ b/versioned_docs/version-1.4.0/api/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "API",
+ "position": 7.0
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/api/http/_category_.json b/versioned_docs/version-1.4.0/api/http/_category_.json
new file mode 100644
index 00000000000..803138a2024
--- /dev/null
+++ b/versioned_docs/version-1.4.0/api/http/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Http API",
+ "position": 6
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/api/http/linkis-cg-engineplugin-api/_category_.json b/versioned_docs/version-1.4.0/api/http/linkis-cg-engineplugin-api/_category_.json
new file mode 100644
index 00000000000..b1c3a8501a2
--- /dev/null
+++ b/versioned_docs/version-1.4.0/api/http/linkis-cg-engineplugin-api/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Engine Plugin Management Service",
+ "position": 4
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/api/http/linkis-cg-engineplugin-api/engine-plugin-api.md b/versioned_docs/version-1.4.0/api/http/linkis-cg-engineplugin-api/engine-plugin-api.md
new file mode 100644
index 00000000000..7fdb9898744
--- /dev/null
+++ b/versioned_docs/version-1.4.0/api/http/linkis-cg-engineplugin-api/engine-plugin-api.md
@@ -0,0 +1,576 @@
+---
+title: Engine Plugin API
+sidebar_position: 5
+---
+
+**EnginePluginRestful class**
+
+## refresh
+
+
+**Interface address**:`/api/rest_j/v1/engineplugin/refresh`
+
+
+**Request method**: `GET`
+
+
+**Request data type**: `application/x-www-form-urlencoded`
+
+
+**Response data type**: `*/*`
+
+
+**Interface description**:
+
+**Request example**:
+````javascript
+{
+ em: {
+ serviceInstance: {
+ applicationName: "linkis-cg-engineconnmanager",
+ instance: "localhost110003:9102"
+ }
+ }
+}
+````
+
+**Request Parameters**:
+
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|applicationName|Engine tag name, which belongs to the value in serviceInstance|String|false|String|
+|em|The outermost layer of the input parameter|Map|false|Map|
+|emInstance|The name of the engine instance and the level of 'em' belong to the outermost layer|String|false|String|
+|engineType|The engine type belongs to the outermost level with the same level as 'em'|String|false|String|
+|instance|Instance name|String|false|String|
+|nodeStatus|The status is the outermost level with 'em', and the status has the following enumeration types 'Healthy', 'UnHealthy', 'WARN', 'StockAvailable', 'StockUnavailable'|String|false|String|
+|owner|The creator is at the same level as 'em' and belongs to the outermost layer|String|false|String|
+|serviceInstance|The input parameter belongs to 'em'|Map|false|Map|
+
+
+**Response Status**:
+
+
+| Status code | Description | schema |
+| -------- | -------- | ----- |
+|200|OK|Message|
+|201|Created|
+|401|Unauthorized|
+|403|Forbidden|
+|404|Not Found|
+
+
+**Response parameters**:
+
+
+| parameter name | parameter description | type | schema |
+| -------- | -------- | ----- |----- |
+|data|Dataset|object|
+|message|Description|string|
+|method|request url|string|
+|status|Status|integer(int32)|integer(int32)|
+
+
+**Sample Response**:
+````javascript
+{
+ "method": "/api/linkisManager/listEMEngines",
+ "status": 0,
+ "message": "OK",
+ "data": {
+ "engines": []
+ }
+}
+````
+## Engine user collection
+
+
+**Interface address**:`/api/rest_j/v1/linkisManager/listUserEngines`
+
+
+**Request method**: `GET`
+
+
+**Request data type**: `application/x-www-form-urlencoded`
+
+
+**Response data type**: `application/json`
+
+
+**Interface description**:
+
+
+
+**Request Parameters**:
+
+
+| Parameter name | Parameter description | Required | Request type | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|description|Description|false|String|String|
+|id|id|false|Long|Long|
+|isLoad|Whether to load|false|Boolean|Boolean|
+|path|Only store the last uploaded path of the user for prompting|false|String|String|
+|registerFormat|register execution address|false|String|String|
+|udfName|udfName|false|String|String|
+|udfType|udfType|false|Integer|Integer|
+|useFormat|Use Format|false|String|String|
+
+
+**Response Status**:
+
+
+| Status code | Description | schema |
+| -------- | -------- | ----- |
+|200|OK|Message|
+|201|Created|
+|401|Unauthorized|
+|403|Forbidden|
+|404|Not Found|
+
+
+**Response parameters**:
+
+
+| parameter name | parameter description | type | schema |
+| -------- | -------- | ----- |----- |
+|data|Dataset|object|
+|message|Description|string|
+|method|request url|string|
+|status|Status|integer(int32)|integer(int32)|
+
+
+**Sample Response**:
+````javascript
+{
+"data": {},
+"message": "",
+"method": "",
+"status": 0
+}
+````
+
+
+## Get user directory
+
+
+**Interface address**: `/api/rest_j/v1/udf/userDirectory`
+
+
+**Request method**: `GET`
+
+
+**Request data type**: `application/x-www-form-urlencoded`
+
+
+**Response data type**: `*/*`
+
+
+**Interface description**:
+Get the first-level classification of the user's personal functions
+
+
+
+**Request Parameters**:
+
+
+| Parameter name | Parameter description | Required | Request type | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|category|Get the user directory of the specified collection type, if the type is UDF, get the user directory under this type |false|string|string|
+
+
+**Response Status**:
+
+
+| Status code | Description | schema |
+| -------- | -------- | ----- |
+|200|OK|Message|
+|401|Unauthorized|
+|403|Forbidden|
+|404|Not Found|
+
+
+**Response parameters**:
+
+
+| parameter name | parameter description | type | schema |
+| -------- | -------- | ----- |----- |
+|data|Dataset|object|
+|message|Description|string|
+|method|request url|string|
+|status|Status|integer(int32)|integer(int32)|
+
+
+**Sample Response**:
+````javascript
+{
+"data": {},
+"message": "",
+"method": "",
+"status": 0
+}
+````
+
+
+## version list
+
+
+**Interface address**:`/api/rest_j/v1/udf/versionList`
+
+
+**Request method**: `GET`
+
+
+**Request data type**: `application/x-www-form-urlencoded`
+
+
+**Response data type**: `*/*`
+
+
+**Interface description**:
+View the version list
+
+
+
+**Request Parameters**:
+
+
+| Parameter name | Parameter description | Required | Request type | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|udfId|udfId|false|integer|integer(int64)|
+
+
+**Response Status**:
+
+
+| Status code | Description | schema |
+| -------- | -------- | ----- |
+|200|OK|Message|
+|401|Unauthorized|
+|403|Forbidden|
+|404|Not Found|
+
+
+**Response parameters**:
+
+
+| parameter name | parameter description | type | schema |
+| -------- | -------- | ----- |----- |
+|data|Dataset|object|
+|message|Description|string|
+|method|request url|string|
+|status|Status|integer(int32)|integer(int32)|
+
+
+**Sample Response**:
+````javascript
+{
+"data": {},
+"message": "",
+"method": "",
+"status": 0
+}
+````
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/api/jdbc-api.md b/versioned_docs/version-1.4.0/api/jdbc-api.md
new file mode 100644
index 00000000000..e3e5ec640bf
--- /dev/null
+++ b/versioned_docs/version-1.4.0/api/jdbc-api.md
@@ -0,0 +1,59 @@
+---
+title: Task JDBC API
+sidebar_position: 4
+---
+
+# Task Submission And Execution Of JDBC API
+
+### 1. Introduce Dependent Modules
+The first way depends on the JDBC module in the pom:
+```xml
+<dependency>
+  <groupId>org.apache.linkis</groupId>
+  <artifactId>linkis-jdbc-driver</artifactId>
+  <version>${linkis.version}</version>
+</dependency>
+```
+**Note:** The module has not been deployed to the central repository. You need to execute `mvn install -Dmaven.test.skip=true` in the linkis-computation-governance/linkis-jdbc-driver directory for local installation.
+
+**The second way is through packaging and compilation:**
+1. Enter the linkis-jdbc-driver directory in the Linkis project and enter the command in the terminal to package `mvn assembly:assembly -Dmaven.test.skip=true`
+The packaging instruction skips the running of the unit test and the compilation of the test code, and packages the dependencies required by the JDBC module into the Jar package.
+2. After the packaging is complete, two Jar packages will be generated in the target directory of JDBC. The one with dependencies in the Jar package name is the Jar package we need.
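+
+A short shell sketch of the two build options described above, run from the `linkis-computation-governance/linkis-jdbc-driver` directory of the Linkis source tree:
+
+```bash
+cd linkis-computation-governance/linkis-jdbc-driver
+
+# Option 1: install linkis-jdbc-driver into the local Maven repository
+mvn install -Dmaven.test.skip=true
+
+# Option 2: package a Jar that bundles all dependencies (generated under target/)
+mvn assembly:assembly -Dmaven.test.skip=true
+```
+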
+### 2. Create A Test Class
+Create a Java test class LinkisJDBCTest; the meaning of each step can be seen in the comments:
+```java
+package org.apache.linkis.jdbc.test;
+
+import java.sql.*;
+
+public class LinkisJDBCTest {
+
+ public static void main(String[] args) throws SQLException, ClassNotFoundException {
+
+ //1. load driver:org.apache.linkis.ujes.jdbc.UJESSQLDriver
+ Class.forName("org.apache.linkis.ujes.jdbc.UJESSQLDriver");
+
+ //2. Get Connection:jdbc:linkis://gatewayIP:gatewayPort/dbName?EngineType=hive&creator=test, user/password
+ Connection connection = DriverManager.getConnection("jdbc:linkis://127.0.0.1:9001/default?EngineType=hive&creator=test","hadoop","hadoop");
+ //3. Create statement
+ Statement st= connection.createStatement();
+ ResultSet rs=st.executeQuery("show tables");
+ //4.get result
+ while (rs.next()) {
+ ResultSetMetaData metaData = rs.getMetaData();
+ for (int i = 1; i <= metaData.getColumnCount(); i++) {
+ System.out.print(metaData.getColumnName(i) + ":" +metaData.getColumnTypeName(i)+": "+ rs.getObject(i) + " ");
+ }
+ System.out.println();
+ }
+ //close resource
+ rs.close();
+ st.close();
+ connection.close();
+ }
+}
+```
+
+1. Where EngineType is the specified corresponding engine type: supports Spark/hive/presto/shell, etc.
+2. Creator is the specified corresponding application type, which is used for resource isolation between applications
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/api/linkis-task-operator.md b/versioned_docs/version-1.4.0/api/linkis-task-operator.md
new file mode 100644
index 00000000000..4f9afd104c6
--- /dev/null
+++ b/versioned_docs/version-1.4.0/api/linkis-task-operator.md
@@ -0,0 +1,426 @@
+---
+title: Task Rest API
+sidebar_position: 3
+---
+
+# Linkis Task submission and execution Rest API
+
+- The return of the Linkis Restful interface follows the following standard return format:
+
+```json
+{
+ "method": "",
+ "status": 0,
+ "message": "",
+ "data": {}
+}
+```
+
+**Convention**:
+
+ - method: Returns the requested Restful API URI, which is mainly used in WebSocket mode.
+ - status: return status information, where: -1 means no login, 0 means success, 1 means error, 2 means verification failed, 3 means no access to the interface.
+ - data: return specific data.
+ - message: return the requested prompt message. If the status is not 0, the message returned is an error message, and the data may have a stack field, which returns specific stack information.
+
+For more information about the Linkis Restful interface specification, please refer to: [Linkis Restful Interface Specification](../development/development-specification/api.md)
+
+### 1. Submit task
+
+
+- Interface `/api/rest_j/v1/entrance/submit`
+
+- Submission method `POST`
+
+- Request Parameters
+
+```json
+{
+ "executionContent": {
+ "code": "show tables",
+ "runType": "sql"
+ },
+ "params": {
+ "variable": {// task variable
+ "testvar": "hello"
+ },
+ "configuration": {
+ "runtime": {// task runtime params
+ "jdbc.url": "XX"
+ },
+ "startup": { // ec start up params
+ "spark.executor.cores": "4"
+ }
+ }
+ },
+ "source": { //task source information
+ "scriptPath": "file:///tmp/hadoop/test.sql"
+ },
+ "labels": {
+ "engineType": "spark-2.4.3",
+ "userCreator": "hadoop-IDE"
+ }
+}
+```
+
+- Sample Response
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/submit",
+ "status": 0,
+ "message": "Request executed successfully",
+ "data": {
+ "execID": "030418IDEhivelocalhost010004:10087IDE_hadoop_21",
+ "taskID": "123"
+ }
+}
+```
+
+- execID is the unique identification execution ID generated for the task after the user task is submitted to Linkis. It is of type String. This ID is only useful when the task is running, similar to the concept of PID. The design of ExecID is `(requestApplicationName length)(executeAppName length)(Instance length)${requestApplicationName}${executeApplicationName}${entranceInstance information ip+port}${requestApplicationName}_${umUser}_${index}`
+
+- taskID is the unique ID that represents the task submitted by the user. This ID is generated by the database self-increment and is of Long type
+
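+A hedged shell sketch of calling the submit interface with curl. It assumes you have already logged in through the Login API and saved the session cookie to a local file (`cookie.txt` is just an illustrative name), and that the gateway listens on `127.0.0.1:9001`; adjust both to your deployment.
+
+```bash
+# Submit a SQL task (sketch; cookie file, gateway address and labels are illustrative)
+curl -s -b cookie.txt -H "Content-Type: application/json" \
+  -X POST "http://127.0.0.1:9001/api/rest_j/v1/entrance/submit" \
+  -d '{
+        "executionContent": {"code": "show tables", "runType": "sql"},
+        "labels": {"engineType": "spark-2.4.3", "userCreator": "hadoop-IDE"}
+      }'
+# The execID/taskID in the response are then used with the status/log/kill interfaces below.
+```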
+
+### 2. Get Status
+
+- Interface `/api/rest_j/v1/entrance/${execID}/status`
+
+- Submission method `GET`
+
+- Sample Response
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/{execID}/status",
+ "status": 0,
+ "message": "Get status successful",
+ "data": {
+ "execID": "${execID}",
+ "status": "Running"
+ }
+}
+```
+
+### 3. Get Logs
+
+- Interface `/api/rest_j/v1/entrance/${execID}/log?fromLine=${fromLine}&size=${size}`
+
+- Submission method `GET`
+
+- The request parameter fromLine refers to the line number to start reading from, and size refers to the number of log lines returned by this request
+
+- Sample Response, where the returned fromLine needs to be used as a parameter for the next request of this interface
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/${execID}/log",
+ "status": 0,
+ "message": "Return log information",
+ "data": {
+ "execID": "${execID}",
+ "log": ["error log","warn log","info log", "all log"],
+ "fromLine": 56
+ }
+}
+```
+
+### 4. Get Progress and resource
+
+- Interface `/api/rest_j/v1/entrance/${execID}/progressWithResource`
+
+- Submission method `GET`
+
+- Sample Response
+
+```json
+{
+ "method": "/api/entrance/exec_id018017linkis-cg-entrance127.0.0.1:9205IDE_hadoop_spark_2/progressWithResource",
+ "status": 0,
+ "message": "OK",
+ "data": {
+ "yarnMetrics": {
+ "yarnResource": [
+ {
+ "queueMemory": 9663676416,
+ "queueCores": 6,
+ "queueInstances": 0,
+ "jobStatus": "COMPLETED",
+ "applicationId": "application_1655364300926_69504",
+ "queue": "default"
+ }
+ ],
+ "memoryPercent": 0.009,
+ "memoryRGB": "green",
+ "coreRGB": "green",
+ "corePercent": 0.02
+ },
+ "progress": 0.5,
+ "progressInfo": [
+ {
+ "succeedTasks": 4,
+ "failedTasks": 0,
+ "id": "jobId-1(linkis-spark-mix-code-1946915)",
+ "totalTasks": 6,
+ "runningTasks": 0
+ }
+ ],
+ "execID": "exec_id018017linkis-cg-entrance127.0.0.1:9205IDE_hadoop_spark_2"
+ }
+}
+```
+
+### 5. Kill Task
+
+- Interface `/api/rest_j/v1/entrance/${execID}/kill`
+
+- Submission method `POST`
+
+- Sample Response
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/{execID}/kill",
+ "status": 0,
+ "message": "OK",
+ "data": {
+ "execID":"${execID}"
+ }
+}
+```
+
+### 6. Get task info
+
+- Interface `/api/rest_j/v1/jobhistory/{id}/get`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|id|task id|path|true|string||
+
+
+- Sample Response
+
+````json
+{
+ "method": null,
+ "status": 0,
+ "message": "OK",
+ "data": {
+ "task": {
+ "taskID": 1,
+ "instance": "xxx",
+ "execId": "exec-id-xxx",
+ "umUser": "test",
+ "engineInstance": "xxx",
+ "progress": "10%",
+ "logPath": "hdfs://xxx/xxx/xxx",
+ "resultLocation": "hdfs://xxx/xxx/xxx",
+ "status": "FAILED",
+ "createdTime": "2019-01-01 00:00:00",
+ "updatedTime": "2019-01-01 01:00:00",
+ "engineType": "spark",
+ "errorCode": 100,
+ "errDesc": "Task Failed with error code 100",
+ "executeApplicationName": "hello world",
+ "requestApplicationName": "hello world",
+ "runType": "xxx",
+ "paramJson": "{\"xxx\":\"xxx\"}",
+ "costTime": 10000,
+ "strongerExecId": "execId-xxx",
+ "sourceJson": "{\"xxx\":\"xxx\"}"
+ }
+ }
+}
+````
+
+### 7. Get result set info
+
+Support for multiple result sets
+
+- Interface `/api/rest_j/v1/filesystem/getDirFileTrees`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|path|result directory |query|true|string||
+
+
+- Sample Response
+
+````json
+{
+ "method": "/api/filesystem/getDirFileTrees",
+ "status": 0,
+ "message": "OK",
+ "data": {
+ "dirFileTrees": {
+ "name": "1946923",
+ "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923",
+ "properties": null,
+ "children": [
+ {
+ "name": "_0.dolphin",
+ "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923/_0.dolphin",//result set 1
+ "properties": {
+ "size": "7900",
+ "modifytime": "1657113288360"
+ },
+ "children": null,
+ "isLeaf": true,
+ "parentPath": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923"
+ },
+ {
+ "name": "_1.dolphin",
+ "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923/_1.dolphin",//result set 2
+ "properties": {
+ "size": "7900",
+ "modifytime": "1657113288614"
+ },
+ "children": null,
+ "isLeaf": true,
+ "parentPath": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923"
+ }
+ ],
+ "isLeaf": false,
+ "parentPath": null
+ }
+ }
+}
+````
+
+### 8. Get result content
+
+- Interface `/api/rest_j/v1/filesystem/openFile`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|path|result path|query|true|string||
+|charset|Charset|query|false|string||
+|page|page number|query|false|ref||
+|pageSize|page size|query|false|ref||
+
+
+- Sample Response
+
+````json
+{
+ "method": "/api/filesystem/openFile",
+ "status": 0,
+ "message": "OK",
+ "data": {
+ "metadata": [
+ {
+ "columnName": "count(1)",
+ "comment": "NULL",
+ "dataType": "long"
+ }
+ ],
+ "totalPage": 0,
+ "totalLine": 1,
+ "page": 1,
+ "type": "2",
+ "fileContent": [
+ [
+ "28"
+ ]
+ ]
+ }
+}
+````
+
+
+### 9. Get Result by stream
+
+Get the result as a CSV or Excel file
+
+- Interface `/api/rest_j/v1/filesystem/resultsetToExcel`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|autoFormat|Auto |query|false|boolean||
+|charset|charset|query|false|string||
+|csvSeerator|csv Separator|query|false|string||
+|limit|row limit|query|false|ref||
+|nullValue|null value|query|false|string||
+|outputFileName|Output file name|query|false|string||
+|outputFileType|Output file type csv or excel|query|false|string||
+|path|result path|query|false|string||
+|quoteRetouchEnable| Whether to quote modification|query|false|boolean||
+|sheetName|sheet name|query|false|string||
+
+
+- Response
+
+````json
+binary stream
+````
+
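+A hedged curl sketch of downloading a result set as a CSV file through this interface, using the parameters listed above. The result path, output file name and gateway address are illustrative, and authentication via a previously saved session cookie is assumed:
+
+```bash
+# Download a result set as CSV (sketch; path, file names and address are illustrative)
+curl -s -b cookie.txt -o result.csv \
+  "http://127.0.0.1:9001/api/rest_j/v1/filesystem/resultsetToExcel?path=hdfs:///tmp/hadoop/linkis/.../_0.dolphin&outputFileType=csv&outputFileName=result&charset=utf-8"
+```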
+
+### 10. Compatible with 0.x task submission interface
+
+- Interface `/api/rest_j/v1/entrance/execute`
+
+- Submission method `POST`
+
+
+- Request Parameters
+
+
+```json
+{
+ "executeApplicationName": "hive", //Engine type
+ "requestApplicationName": "dss", //Client service type
+ "executionCode": "show tables",
+ "params": {
+ "variable": {// task variable
+ "testvar": "hello"
+ },
+ "configuration": {
+ "runtime": {// task runtime params
+ "jdbc.url": "XX"
+ },
+ "startup": { // ec start up params
+ "spark.executor.cores": "4"
+ }
+ }
+ },
+ "source": { //task source information
+ "scriptPath": "file:///tmp/hadoop/test.sql"
+ },
+ "labels": {
+ "engineType": "spark-2.4.3",
+ "userCreator": "hadoop-IDE"
+ },
+ "runType": "hql", //The type of script to run
+ "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
+}
+```
+
+- Sample Response
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/execute",
+ "status": 0,
+ "message": "Request executed successfully",
+ "data": {
+ "execID": "030418IDEhivelocalhost010004:10087IDE_hadoop_21",
+ "taskID": "123"
+ }
+}
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/api/login-api.md b/versioned_docs/version-1.4.0/api/login-api.md
new file mode 100644
index 00000000000..26c29f0dd67
--- /dev/null
+++ b/versioned_docs/version-1.4.0/api/login-api.md
@@ -0,0 +1,130 @@
+---
+title: Login API
+sidebar_position: 1
+---
+
+# Login Document
+## 1. Docking With LDAP Service
+
+Enter the /conf/linkis-spring-cloud-services/linkis-mg-gateway directory and execute the command:
+```bash
+ vim linkis-server.properties
+```
+
+Add LDAP related configuration:
+```bash
+wds.linkis.ldap.proxy.url=ldap://127.0.0.1:1389/ #LDAP service URL
+wds.linkis.ldap.proxy.baseDN=dc=webank,dc=com #Configuration of LDAP service
+```
+
+## 2. How To Open The Test Mode To Achieve Login-Free
+
+Enter the /conf/linkis-spring-cloud-services/linkis-mg-gateway directory and execute the command:
+```bash
+ vim linkis-server.properties
+```
+
+
+Turn on the test mode and the parameters are as follows:
+```bash
+ wds.linkis.test.mode=true # Open test mode
+ wds.linkis.test.user=hadoop # Specify which user to delegate all requests to in test mode
+```
+
+## 3. Login Interface Summary
+We provide the following login-related interfaces:
+ - Login In
+
+ - Login Out
+
+ - Heart Beat
+
+
+## 4. Interface details
+
+- The return of the Linkis Restful interface follows the following standard return format:
+
+```json
+{
+ "method": "",
+ "status": 0,
+ "message": "",
+ "data": {}
+}
+```
+
+**Protocol**:
+
+- method: Returns the requested Restful API URI, which is mainly used in WebSocket mode.
+- status: returns status information, where: -1 means no login, 0 means success, 1 means error, 2 means verification failed, 3 means no access to the interface.
+- data: return specific data.
+- message: return the requested prompt message. If the status is not 0, the message returns an error message, and the data may have a stack field, which returns specific stack information.
+
+For more information about the Linkis Restful interface specification, please refer to: [Linkis Restful Interface Specification](../development/development-specification/api)
+
+### 1). Login In
+
+- Interface `/api/rest_j/v1/user/login`
+
+- Submission method `POST`
+
+```json
+ {
+ "userName": "",
+ "password": ""
+ }
+```
+
+- Response example
+
+```json
+ {
+ "method": null,
+ "status": 0,
+ "message": "login successful(登录成功)!",
+ "data": {
+ "isAdmin": false,
+ "userName": ""
+ }
+ }
+```
+
+Among them:
+
+- isAdmin: Linkis only has admin users and non-admin users. The only privilege of admin users is to support viewing the historical tasks of all users in the Linkis management console.
+
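+A minimal shell sketch of logging in with curl and saving the session cookie for later REST calls. The gateway address, user name and password here are placeholders; replace them with your own deployment values:
+
+```bash
+# Log in and store the session cookie in cookie.txt (address and credentials are placeholders)
+curl -s -c cookie.txt -H "Content-Type: application/json" \
+  -X POST "http://127.0.0.1:9001/api/rest_j/v1/user/login" \
+  -d '{"userName": "hadoop", "password": "your_password"}'
+
+# Later requests can reuse the saved cookie, e.g. the heartbeat interface:
+curl -s -b cookie.txt -X POST "http://127.0.0.1:9001/api/rest_j/v1/user/heartbeat"
+```
+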
+### 2). Login Out
+
+- Interface `/api/rest_j/v1/user/logout`
+
+- Submission method `POST`
+
+ No parameters
+
+- Response example
+
+```json
+ {
+ "method": "/api/rest_j/v1/user/logout",
+ "status": 0,
+ "message": "Logout successful(退出登录成功)!"
+ }
+```
+
+### 3). Heart Beat
+
+- Interface `/api/rest_j/v1/user/heartbeat`
+
+- Submission method `POST`
+
+ No parameters
+
+- Response example
+
+```json
+ {
+ "method": "/api/rest_j/v1/user/heartbeat",
+ "status": 0,
+ "message": "Maintain heartbeat success(维系心跳成功)!"
+ }
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/api/overview.md b/versioned_docs/version-1.4.0/api/overview.md
new file mode 100644
index 00000000000..3f08075bdec
--- /dev/null
+++ b/versioned_docs/version-1.4.0/api/overview.md
@@ -0,0 +1,13 @@
+---
+title: Overview
+sidebar_position: 0
+---
+
+## 1. Document description
+Linkis 1.0 has been refactored and optimized on the basis of Linkis 0.x, and it is also compatible with the 0.x interfaces. However, in order to prevent compatibility problems when using version 1.0, you need to read the following documents carefully:
+
+1. When using Linkis 1.0 for customized development, you need to use Linkis's authorization and authentication interface. Please read the [Login API Document](login-api.md) carefully.
+
+2. Linkis 1.0 provides a JDBC interface. If you need to access Linkis through JDBC, please read [Task Submit and Execute JDBC API Document](jdbc-api.md).
+
+3. Linkis 1.0 provides a Rest interface. If you need to develop upper-layer applications on top of Linkis, please read [Task Submit and Execute Rest API Document](linkis-task-operator.md).
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/_category_.json b/versioned_docs/version-1.4.0/architecture/_category_.json
new file mode 100644
index 00000000000..2daa77685e2
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Architecture",
+ "position": 8.0
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/_category_.json b/versioned_docs/version-1.4.0/architecture/feature/_category_.json
new file mode 100644
index 00000000000..5775546574d
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Feature",
+ "position": 5.1
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/commons/_category_.json b/versioned_docs/version-1.4.0/architecture/feature/commons/_category_.json
new file mode 100644
index 00000000000..71bc86501fe
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/commons/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Commons",
+ "position": 1
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/commons/proxy-user.md b/versioned_docs/version-1.4.0/architecture/feature/commons/proxy-user.md
new file mode 100644
index 00000000000..2753359e658
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/commons/proxy-user.md
@@ -0,0 +1,37 @@
+---
+title: Proxy User Mode
+sidebar_position: 6
+---
+
+## 1 Background
+At present, when linkis is executing the task submitted by the user, the main process service of linkis will switch to the corresponding user through sudo -u ${submit user}, and then execute the corresponding engine startup command.
+This requires creating a corresponding system user for each ${submit user} in advance, and configuring relevant environment variables.
+For each new user, a series of environment initialization steps is required; frequent user changes increase operation and maintenance costs, and when there are many users, resources cannot be configured for individual users or managed well. If the tasks of login user A can be proxied to a designated proxy user for execution, the execution entry can be unified and the need to initialize an environment for every user can be avoided.
+
+## 2 Basic Concepts
+- Login user: the user who directly logs in to the system through the user name and password
+- Proxy user: the user who actually performs the operation on behalf of the login user; related operations of the login user are carried out as this proxy user
+
+## 3 Goals achieved
+- Login user A can choose a proxy user and decide which proxy user to proxy to
+- Login user A can delegate tasks to proxy user B for execution
+- When login user A is proxied to proxy user B, user A can view B's execution records, task results and other data
+- A proxy user can proxy multiple login users at the same time, but a login user can only be associated with a certain proxy user at the same time
+
+## 4 General implementation idea
+
+Modify the existing cookie handling of the interfaces so that both the login user and the proxy user can be parsed out of the cookie
+```html
+The key of the proxy user's cookie is: linkis_user_session_proxy_ticket_id_v1
+The key of the login user's cookie is: linkis_user_session_ticket_id_v1
+```
+
+- The relevant interface of linkis needs to be able to identify the proxy user information based on the original UserName obtained, and use the proxy user to perform various operations. And record the audit log, including the user's task execution operation, download operation
+- When the task is submitted for execution, the entry service needs to modify the executing user to be the proxy user
+
+## 5 Things to Consider & Note
+
+- Users are divided into proxy users and non-proxy users. Users of proxy type cannot perform proxying to other users again.
+- It is necessary to control the list of login users and system users who can be proxied, to prohibit arbitrary proxying and avoid uncontrollable permissions. It is best to support configuration through database tables, so that changes take effect directly without restarting the service
+- Separately record log files containing proxy user operations, such as proxy execution, function update, etc. All proxy user operations of PublicService are recorded in the log, which is convenient for auditing
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/commons/rpc.md b/versioned_docs/version-1.4.0/architecture/feature/commons/rpc.md
new file mode 100644
index 00000000000..2917cc5612d
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/commons/rpc.md
@@ -0,0 +1,21 @@
+---
+title: RPC Module
+sidebar_position: 2
+---
+
+## 1.Overview
+Feign-based HTTP calls between microservices only allow a microservice A instance to randomly select a service instance of microservice B according to simple rules; if that instance of B wants to return information to the caller asynchronously, this is simply impossible to achieve.
+At the same time, because Feign only supports simple service selection rules, it cannot forward the request to the specified microservice instance, and cannot broadcast a request to all instances of the recipient microservice.
+
+## 2. Architecture description
+### 2.1 Architecture design diagram
+![Linkis RPC architecture diagram](/Images/Architecture/Commons/linkis-rpc.png)
+### 2.2 Module description
+The functions of the main modules are introduced as follows:
+* Eureka: service registration center, used for managing services and service discovery.
+* Sender: Service request interface, the sender uses Sender to request service from the receiver.
+* Receiver: The service request receives the corresponding interface, and the receiver responds to the service through this interface.
+* Interceptor: Sender passes the user's request to the interceptor, which intercepts the request and performs additional processing on it: the broadcast interceptor is used to broadcast requests, the retry interceptor retries the processing of failed requests, the cache interceptor reads and caches simple, unchanging requests, and the default interceptor provides the default implementation.
+* Decoder, Encoder: used for request encoding and decoding.
+* Feign: is a lightweight framework for http request calls, a declarative WebService client program, used for Linkis-RPC bottom communication.
+* Listener: monitor module, mainly used to monitor broadcast requests.
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/commons/variable.md b/versioned_docs/version-1.4.0/architecture/feature/commons/variable.md
new file mode 100644
index 00000000000..a2708c32dc9
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/commons/variable.md
@@ -0,0 +1,94 @@
+---
+title: Custom Variable Design
+sidebar_position: 1
+---
+
+## 1. Overview
+### Requirements
+1. Users hope that Linkis can provide some public variables that are replaced during execution. For example, a user runs the same SQL in batches every day and needs to specify the partition time of the previous day; writing this purely in SQL is complicated, while using a system-provided run_date variable is very convenient.
+2. The user hopes that Linkis supports date pattern calculation, supports writing variables such as &{YYYY-MM-DD} in the code to calculate time variables
+3. The user wants to define variables by himself, such as setting a float variable, and then use it in the code
+
+### Target
+1. Support variable replacement of task code
+2. Support custom variables, support users to define custom variables in scripts and task parameters submitted to Linkis, support simple +, - and other calculations
+3. Preset system variables: run_date, run_month, run_today and other system variables
+4. Support date pattern variable, support +, - operation of pattern
+
+## 2. Overall Design
+ During the execution of a Linkis task, custom variable substitution is carried out in Entrance, mainly by an Entrance interceptor before the task is submitted for execution. The interceptor parses the system variables and custom variables used in the code, substitutes their initial values (including custom variable values passed in with the task), and produces the final executable code.
+
+### 2.1 Technical Architecture
+ The overall structure of custom variables is as follows. After the task is submitted, it passes through the variable replacement interceptor, which first parses all the variables and expressions used in the code, then replaces them with the initial values of system and user-defined variables, and finally submits the parsed code to EngineConn for execution. The code received by the underlying engine therefore already has its variables replaced.
+
+![arc](/Images/Architecture/Commons/var_arc.png)
+
+Remarks: Because the functions of variable and parsing are more general, the extraction tool class is defined in linkis-commons: org.apache.linkis.common.utils.VariableUtils
+
+### 2.2 Business Architecture
+ This feature mainly implements the parsing, calculation and replacement of variables, involving the Entrance module of Linkis for code interception and the variable substitution utilities defined in the Linkis-commons module:
+
+| Component Name | Level 1 Module | Level 2 Module | Function Point |
+|---|---|---|---|
+| Linkis | CG | Entrance|Intercept task code and call Linkis-common's VariableUtils for code replacement|
+| Linkis | Linkis-commons | linkis-common|Provide variable, analysis, calculation tool class VariableUtils|
+
+## 3. Module design
+### 3.1 Core Execution Process
+[Input] The input is the code and the code type (python/sql/scala/sh).
+[Processing flow] After receiving the task, Entrance first enters the interceptor chain and starts the variable interceptor to complete the parsing, replacement and calculation of variables.
+The overall timing diagram is as follows:
+
+![time](/Images/Architecture/Commons/var_time.png)
+
+What needs to be explained here is:
+1. Custom variables and system variables are written as ${}, such as ${run_date}
+2. Date pattern variables are written as &{}; for example, the value of &{yyyy-01-01} is 2022-01-01.
+ The two forms are separated to prevent strings in custom variables from being misinterpreted as pattern characters. For example, if a custom variable y=1 is defined, it could otherwise be mistaken for a year pattern.
+
+
+### 3.2 Specific details:
+1. run_date is a date variable that comes with the core, and supports user-defined dates. If not specified, it defaults to the day before the current system time.
+2. Definition of other derived built-in date variables: other date built-in variables are calculated relative to run_date. Once run_date changes, the values of other variables will also change automatically. Other date variables do not support setting initial values and can only be modified by modifying run_date .
+3. The built-in variables support richer usage scenarios: ${run_date-1} is the day before run_date; ${run_month_begin-1} is the first day of the month before run_month_begin, where -1 means minus one month.
+4. Pattern-type variables are also calculated based on run_date, and support + and - offset operations before replacement
+
+### 3.3 Variable scope
+Custom variables have a scope in Linkis. The priority is: variables defined in the script > variables defined in the task parameters > the built-in run_date variable. Task parameters are defined as follows:
+```
+## restful
+{
+ "executionContent": {"code": "select \"${f-1}\";", "runType": "sql"},
+ "params": {
+ "variable": {f: "20.1"},
+ "configuration": {
+ "runtime": {
+ "linkis.openlookeng.url":"http://127.0.0.1:9090"
+ }
+ }
+ },
+ "source": {"scriptPath": "file:///mnt/bdp/hadoop/1.sql"},
+ "labels": {
+ "engineType": "spark-2.4.3",
+ "userCreator": "hadoop-IDE"
+ }
+}
+## java SDK
+JobSubmitAction.builder
+ .addExecuteCode(code)
+ .setStartupParams(startupMap)
+ .setUser(user) //submit user
+ .addExecuteUser(user) //execute user
+ .setLabels(labels)
+ .setVariableMap(varMap) //setVar
+ .build
+```
+
+## 4. Interface design:
+The main tools are:
+```
+VariableUtils:
+def replace(replaceStr: String): String replaces the variable in the code and returns the replaced code
+def replace(replaceStr: String, variables: util.Map[String, Any]): String supports passing in the value of a custom variable for replacement
+def replace(code: String, runtType: String, variables: util.Map[String, String]): String supports incoming code types, and performs replacement parsing according to different types
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/_category_.json b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/_category_.json
new file mode 100644
index 00000000000..5058aeccbad
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Computation Governance Services",
+ "position": 2
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine-conn-manager.md b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine-conn-manager.md
new file mode 100644
index 00000000000..383a614f78c
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine-conn-manager.md
@@ -0,0 +1,50 @@
+---
+title: EngineConnManager Design
+sidebar_position: 2
+---
+
+EngineConnManager architecture design
+-------------------------
+
+EngineConnManager (ECM): EngineConn's manager, provides engine lifecycle management, and reports load information and its own health status to RM.
+### ECM architecture
+
+![](/Images/Architecture/engine/ecm-01.png)
+
+### Introduction to the second-level module
+
+**Linkis-engineconn-linux-launch**
+
+The engine launcher, whose core class is LinuxProcessEngineConnLauch, provides the instructions used to execute the engine launch commands.
+
+**Linkis-engineconn-manager-core**
+
+The core module of ECM. It includes the top-level interfaces for the ECM health report and EngineConn health report functions, defines the relevant metrics of the ECM service, and contains the core methods for constructing the EngineConn process.
+
+| Core top-level interface/class | Core function |
+|------------------------------------|--------------------------------------------------------------------------|
+| EngineConn | Defines the properties of EngineConn, including its methods and parameters |
+| EngineConnLaunch | Defines the start and stop methods of EngineConn |
+| ECMEvent | Defines ECM-related events |
+| ECMEventListener | Defines ECM-related event listeners |
+| ECMEventListenerBus | Defines the listener bus of ECM |
+| ECMMetrics | Defines the metrics information of ECM |
+| ECMHealthReport | Defines the health report information of ECM |
+| NodeHealthReport | Defines the health report information of the node |
+
+**Linkis-engineconn-manager-server**
+
+The server side of ECM. It defines top-level interfaces and implementation classes such as the ECM health information processing service, ECM metrics processing service, ECM registration service, EngineConn start service, EngineConn stop service, and EngineConn callback service, which are mainly used for the life cycle management of the ECM itself and its EngineConns, health information reporting, heartbeat sending, and so on.
+The core services and features of the module are as follows:
+
+| Core service | Core function |
+|---------------------------------|-------------------------------------------------|
+| EngineConnLaunchService | Contains the core methods for generating an EngineConn and starting its process |
+| BmlResourceLocallizationService | Downloads BML engine-related resources and generates the localized file directory |
+| ECMHealthService | Reports its own health heartbeat to AM periodically |
+| ECMMetricsService | Reports its own metrics status to AM periodically |
+| EngineConnKillSerivce | Provides functions related to stopping an engine |
+| EngineConnListService | Provides caching and management of engines |
+| EngineConnCallBackService | Provides the engine callback function |
+
+
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/_category_.json b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/_category_.json
new file mode 100644
index 00000000000..98fe7a2eea9
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "EngineConn",
+ "position": 5
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/add-an-engine-conn.md b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/add-an-engine-conn.md
new file mode 100644
index 00000000000..34505b1ba4b
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/add-an-engine-conn.md
@@ -0,0 +1,109 @@
+---
+title: EngineConn Startup Process
+sidebar_position: 4
+---
+# How to add an EngineConn
+
+Adding an EngineConn is one of the core processes of the computing task preparation phase of Linkis computation governance. It mainly includes the following steps: first, the client side (Entrance or a user client) initiates a request for a new EngineConn to LinkisManager; then LinkisManager initiates a request to EngineConnManager to start the EngineConn based on the demands and label rules; finally, LinkisManager returns the usable EngineConn to the client side.
+
+Based on the figure below, let's explain the whole process in detail:
+
+![Process of adding a EngineConn](/Images/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png)
+
+## 1. LinkisManager receives the request from the client side
+
+**Glossary:**
+
+- LinkisManager: The management center of Linkis computing governance capabilities. Its main responsibilities are:
+ 1. Based on multi-level combined tags, provide users with available EngineConn after complex routing, resource management and load balancing.
+
+ 2. Provide EC and ECM full life cycle management capabilities.
+
+ 3. Provide users with multi-Yarn cluster resource management functions based on multi-level combined tags. It is mainly divided into three modules: AppManager, ResourceManager and LabelManager , which can support multi-active deployment and have the characteristics of high availability and easy expansion.
+
+After the AM module receives the client's new-EngineConn request, it first checks the request parameters to determine their validity. Second, it selects the most suitable EngineConnManager (ECM) through complex rules for the subsequent EngineConn startup. Next, it applies to RM for the resources needed to start the EngineConn. Finally, it requests the ECM to create the EngineConn.
+
+The four steps will be described in detail below.
+
+### 1. Request parameter verification
+
+After the AM module receives the engine creation request, it checks the parameters. First, it checks the permissions of the requesting user and the creating user, and then it checks the labels attached to the request. Since labels are used in the subsequent creation process of AM to find the ECM and to record resource information, you need to ensure that the necessary labels are present. At this stage, the request must carry a UserCreatorLabel (for example: hadoop-IDE) and an EngineTypeLabel (for example: spark-2.4.3).
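+
+For illustration, a minimal sketch of how these two labels are attached on the client side, mirroring the `labels` block of the JSON example in the custom-variable document (the map keys are the ones shown there; the surrounding submit call is omitted):
+
+```scala
+import java.util
+
+// Minimal sketch: the two labels that must accompany a new-EngineConn request.
+object RequiredLabelsSketch {
+  def buildLabels(): util.Map[String, Any] = {
+    val labels = new util.HashMap[String, Any]()
+    labels.put("userCreator", "hadoop-IDE")  // UserCreatorLabel: executing user + creator system
+    labels.put("engineType", "spark-2.4.3")  // EngineTypeLabel: engine type + version
+    labels
+  }
+}
+```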
+
+### 2. Select an EngineConnManager (ECM)
+
+ECM selection uses the labels passed by the client to choose a suitable ECM service to start the EngineConn. In this step, the LabelManager first searches the registered ECMs with the labels passed by the client and returns them ordered by label matching degree. After the registered ECM list is obtained, selection rules are applied to these ECMs. At this stage, rules such as availability checks, resource surplus, and machine load have been implemented. After the rules are applied, the ECM with the best label match, the most idle resources, and the lowest load is returned.
+
+### 3. Apply for the resources required by the EngineConn
+
+1. After obtaining the assigned ECM, AM requests, by calling the EngineConnPluginServer service, how many resources the client's engine creation request will use. Here, the resource request is encapsulated, mainly including the labels, the EngineConn startup parameters passed by the client, and the user configuration parameters obtained from the Configuration module. The resource information is obtained by calling the ECP service through RPC.
+
+2. After the EngineConnPluginServer service receives the resource request, it first finds the corresponding engine label through the passed labels, and selects the EngineConnPlugin of the corresponding engine through the engine label. It then uses the EngineConnPlugin's resource generator to calculate the engine startup parameters passed in by the client, calculates the resources required to apply for a new EngineConn this time, and returns the result to LinkisManager.
+
+ **Glossary:**
+
+- EngineConnPlugin: The interface that Linkis requires to be implemented when connecting a new computing/storage engine. This interface mainly includes several capabilities that the EngineConn must provide during the startup process, including the EngineConn resource generator, the EngineConn startup command generator, and the EngineConn connector. Please refer to the Spark engine implementation class for a concrete implementation: [SparkEngineConnPlugin](https://github.com/apache/linkis/blob/master/linkis-engineconn-pluginsspark/src/main/scala/com/webank/wedatasphere/linkis/engineplugin/spark/SparkEngineConnPlugin.scala).
+- EngineConnPluginServer: It is a microservice that loads all the EngineConnPlugins and provides externally the required resource generation capabilities of EngineConn and EngineConn's startup command generation capabilities.
+- EngineConnResourceFactory: Calculates, from the parameters passed in, the total resources needed when the EngineConn starts.
+- EngineConnLaunchBuilder: Generates, from the incoming parameters, the startup command of the EngineConn, which is provided to the ECM to start the engine.
+3. After AM obtains the engine resources, it calls the RM service to apply for the resources. The RM service uses the incoming labels, the ECM, and the resources applied for this time to make a resource judgment. It first judges whether the resources of the client corresponding to the labels are sufficient, and then judges whether the resources of the ECM service are sufficient. If the resources are sufficient, the resource application is approved and the resources of the corresponding labels are added or subtracted accordingly.
+
+### 4. Request ECM for engine creation
+
+1. After completing the resource application for the engine, AM will encapsulate the engine startup request, send it to the corresponding ECM via RPC for service startup, and obtain the instance object of EngineConn.
+2. AM then determines, through the information reported by the EngineConn, whether the EngineConn has started successfully and become available. If so, the result is returned and the process of adding an engine ends.
+
+## 2. ECM initiates EngineConn
+
+**Glossary:**
+
+- EngineConnManager: EngineConn's manager. Provides engine life-cycle management, and at the same time reports load information and its own health status to RM.
+- EngineConnBuildRequest: The start engine command passed by LinkisManager to ECM, which encapsulates all tag information, required resources and some parameter configuration information of the engine.
+- EngineConnLaunchRequest: Contains the BML materials, environment variables, local environment variables required by the ECM, startup commands, and other information required to start an EngineConn, so that the ECM can build a complete EngineConn startup script from it.
+
+After ECM receives the EngineConnBuildRequest command passed by LinkisManager, it is mainly divided into three steps to start EngineConn:
+
+1. Request EngineConnPluginServer to obtain EngineConnLaunchRequest encapsulated by EngineConnPluginServer.
+2. Parse EngineConnLaunchRequest and encapsulate it into EngineConn startup script.
+3. Execute startup script to start EngineConn.
+
+### 2.1 EngineConnPluginServer encapsulates EngineConnLaunchRequest
+
+EngineConnPluginServer obtains, from the label information of the EngineConnBuildRequest, the EngineConn type and version that actually need to be started, gets the EngineConnPlugin of that type from its memory, and converts the EngineConnBuildRequest into an EngineConnLaunchRequest through the EngineConnLaunchBuilder of that EngineConnPlugin.
+
+### 2.2 Encapsulate EngineConn startup script
+
+After the ECM obtains the EngineConnLaunchRequest, it downloads the BML materials in the EngineConnLaunchRequest to the local machine and checks whether the necessary local environment variables required by the EngineConnLaunchRequest exist. After the verification is passed, the EngineConnLaunchRequest is encapsulated into an EngineConn startup script.
+
+### 2.3 Execute startup script
+
+Currently, ECM only supports Bash commands for Unix-like systems, that is, only Linux systems are supported for executing the startup script.
+
+Before startup, the sudo command is used to switch to the corresponding requesting user to execute the script, ensuring that the startup user (i.e., the JVM user) is the requesting user on the client side.
+
+After the startup script is executed, the ECM monitors the execution status and execution log of the script in real time. If the exit status is non-zero, it immediately reports an EngineConn startup failure to LinkisManager and the whole process ends; otherwise, it keeps monitoring the log and status of the startup script until the script execution completes.
+
+## 3. EngineConn initialization
+
+After the ECM executes EngineConn's startup script, the EngineConn microservice is officially launched.
+
+**Glossary:**
+
+- EngineConn microservice: Refers to the actual microservices that include an EngineConn and one or more Executors to provide computing power for computing tasks. When we talk about adding an EngineConn, we actually mean adding an EngineConn microservice.
+- EngineConn: The engine connector is the actual connection unit with the underlying computing storage engine, and contains the session information with the actual engine. The difference between it and Executor is that EngineConn only acts as a connection and a client, and does not actually perform calculations. For example, SparkEngineConn, its session information is SparkSession.
+- Executor: As a real computing storage scenario executor, it is the actual computing storage logic execution unit. It abstracts the various capabilities of EngineConn and provides multiple different architectural capabilities such as interactive execution, subscription execution, and responsive execution.
+
+The initialization of EngineConn microservices is generally divided into three stages:
+
+1. Initialize the EngineConn of the specific engine. First, the command-line parameters of the Java main method are used to encapsulate an EngineCreationContext that contains the relevant label information, startup information, and parameters, and the EngineConn is initialized through the EngineCreationContext to establish the connection between the EngineConn and the underlying engine. For example, SparkEngineConn initializes a SparkSession at this stage to establish a connection with a Spark application.
+2. Initialize the Executor. After the EngineConn is initialized, the corresponding Executor is initialized according to the actual usage scenario to provide service capabilities for subsequent users. For example, the SparkEngineConn in the interactive computing scenario initializes a series of Executors that can submit and execute SQL, PySpark, and Scala code, supporting the client in submitting SQL, PySpark, Scala and other code to the SparkEngineConn for execution.
+3. Report the heartbeat to LinkisManager regularly and wait for the EngineConn to exit. When the underlying engine corresponding to the EngineConn becomes abnormal, the maximum idle time is exceeded, the Executor finishes execution, or the user manually kills it, the EngineConn automatically ends and exits.
+
+----
+
+At this point, the process of adding a new EngineConn is basically complete. Finally, a summary:
+
+- The client initiates a request for adding EngineConn to LinkisManager.
+- LinkisManager checks the validity of the parameters, first selects the appropriate ECM according to the labels, then confirms the resources required for the new EngineConn according to the user's request, applies for resources from the RM module of LinkisManager, and, after the application is approved, requires the ECM to start a new EngineConn as requested.
+- ECM first requests EngineConnPluginServer to obtain an EngineConnLaunchRequest containing the BML materials, environment variables, local environment variables required by the ECM, startup commands, and other information needed to start an EngineConn, then encapsulates the EngineConn startup script, and finally executes the startup script to start the EngineConn.
+- EngineConn initializes the EngineConn of the specific engine, then initializes the corresponding Executor according to the actual usage scenario and provides service capabilities for subsequent users. Finally, it reports the heartbeat to LinkisManager regularly and waits for a normal end or termination by the user.
+
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/engine-conn-metrics.md b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/engine-conn-metrics.md
new file mode 100644
index 00000000000..739f085886a
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/engine-conn-metrics.md
@@ -0,0 +1,76 @@
+---
+title: EngineConn Metrics Reporting Feature
+sidebar_position: 6
+tags: [Feature]
+---
+
+
+## 1. Functional requirements
+### 1.1 Requirement Background
+The reported information lacked engine information, and the separate resource and progress reporting interfaces were redundant, which reduced performance. They need to be aligned, optimized, and adjusted, and the reporting protocol should be extended with additional information.
+
+### 1.2 Goals
+- Add an RPC protocol containing resources, progress, and additional information, supporting the reporting of all of these in one request
+- Refactor the existing resource and progress reporting links, combining the reporting of related information into one request
+
+## 2. Overall Design
+
+This requirement involves the `linkis-entrance`, `linkis-computation-orchestrator`, `linkis-orchestrator-ecm-plugin`, and `linkis-computation-engineconn` modules. The reporting information is added and refactored in the `computation-engineconn` module, and the information is parsed and stored on the Entrance side.
+
+### 2.1 Technical Architecture
+
+The engine information reporting architecture is shown in the figure. After the user submits a task to Entrance, Entrance applies to LinkisManager for an engine.
+After the engine is obtained, tasks are submitted to it, and it reports task information (resources, progress, status) periodically until the task finishes; Entrance returns the final result when the user queries it.
+For this change, the engine metrics information reported to Entrance needs to be written to the database;
+the resource and progress interface information is merged in the Orchestrator, and additional information such as metrics is added;
+on the ComputationEngineConn side of the interactive engine, the reported resource and progress information is merged, and engine statistics are additionally reported.
+
+![engineconn-mitrics-1.png](/Images-zh/Architecture/EngineConn/engineconn-mitrics-1.png)
+
+
+### 2.2 Business Architecture
+This feature involves the following function point modules:
+
+| First-level module | Second-level module | Function point |
+| :------------ | :------------ | :------------ |
+| Entrance | | Merge the resource and progress interfaces; parse the new engine metrics |
+| Orchestrator | orchestrator-core | Merge the resource and progress interfaces; handle TaskRunningInfo messages |
+| Orchestrator | orchestrator-plugin-ecm | Merge the resource and progress interfaces used to monitor engine information |
+| Orchestrator | computation-engineconn | Merge the reporting interfaces for resources and progress; additionally report engine instance metrics |
+
+
+## 3. Module Design
+### Core execution flow
+- \[Input] The input comes from the interactive engine `computation-engineconn`. When the engine executes a task, it reports the running information `TaskRunningInfo`, which includes the original `TaskProgressInfo` and `TaskResourceInfo`, plus the engine instance information and the number of tasks currently running on the engine (a simplified sketch of this payload follows the diagram below).
+- \[Processing] `orchestrator-plugin-ecm` is responsible for monitoring the reported information while the engine runs tasks; it receives the reported information and generates a `TaskRunningInfoEvent` asynchronous message,
+  which is sent to `OrchestratorAsyncListenerBus` for processing. The `TaskRunningInfoListener` registered to the `OrchestratorAsyncListener` receives the message, triggers the `listener` method, and calls back to the `TaskRunningInfo` callback method of the `Entrance` job.
+  The callback method parses the resource, progress, and engine `metrics` information in `TaskRunningInfo` and persists each of them.
+
+![engineconn-mitrics-2.png](/Images-zh/Architecture/EngineConn/engineconn-mitrics-2.png)
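+
+For orientation, the merged payload described above can be sketched as a simple data class (field names are illustrative and simplified; the real `TaskRunningInfo` protocol class in Linkis is authoritative):
+
+```scala
+// Simplified sketch of the merged running-info payload described above.
+// TaskProgressInfoSketch / TaskResourceInfoSketch stand in for the original per-task
+// progress and resource payloads; the last two fields carry the newly added engine metrics.
+final case class TaskProgressInfoSketch(id: String, progress: Float)
+final case class TaskResourceInfoSketch(resourceMap: Map[String, Any])
+
+final case class TaskRunningInfoSketch(
+    execId: String,
+    progressInfo: Seq[TaskProgressInfoSketch],
+    resourceInfo: TaskResourceInfoSketch,
+    engineInstance: String,      // which EngineConn instance reported
+    runningTaskNumber: Int       // number of tasks currently running on that engine
+)
+```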
+
+## 4. Data structure
+
+The RPC protocol `TaskRunningInfo` has been added for this requirement; no DB table has been added.
+
+## 5. Interface Design
+No external interface
+
+## 6. Non-functional design:
+### 6.1 Security
+The RPC interface uses internal authentication; no external security issues are involved
+
+### 6.2 Performance
+Two RPC interfaces are combined to reduce the number of reports and improve performance
+
+### 6.3 Capacity
+The metrics information is small; no capacity impact
+
+### 6.4 High Availability
+Not involved
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/engine-conn-plugin.md b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/engine-conn-plugin.md
new file mode 100644
index 00000000000..5fc84c21e18
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/engine-conn-plugin.md
@@ -0,0 +1,77 @@
+---
+title: EngineConnPlugin (ECP) Design
+sidebar_position: 3
+---
+
+
+EngineConnPlugin (ECP) architecture design
+===============================
+
+The engine connector plug-in is a mechanism for dynamically loading engine connectors while reducing the occurrence of version conflicts. It has the characteristics of convenient extension, fast refresh, and selective loading. To allow developers to freely extend Linkis's engines and to load engine dependencies dynamically so as to avoid version conflicts, the EngineConnPlugin was designed and developed, allowing new engines to be plugged into the execution life cycle of the computation middleware by implementing the established plug-in interfaces.
+The plug-in interface decomposes the definition of an engine, including parameter initialization, allocation of engine resources, construction of engine connections, and setting of the engine's default labels.
+
+ECP architecture diagram
+
+![](/Images/Architecture/linkis-engineConnPlugin-01.png)
+
+Introduction to the second-level module:
+==============
+
+EngineConn-Plugin-Server
+------------------------
+
+The engine connector plug-in service is the entry service that externally provides plug-in registration, plug-in management, and plug-in resource construction. An engine plug-in that is successfully registered and loaded contains the logic for resource allocation and startup parameter configuration. During engine initialization, other services such as EngineConnManager call the logic of the corresponding plug-in in the Plugin Server through RPC requests.
+
+| Core Class | Core Function |
+|----------------------------------|---------------------------------------|
+| EngineConnLaunchService | Responsible for building the engine connector launch request |
+| EngineConnResourceFactoryService | Responsible for generating engine resources |
+| EngineConnResourceService | Responsible for downloading the resource files used by the engine connector from BML |
+
+
+EngineConn-Plugin-Loader Engine Connector Plugin Loader
+---------------------------------------
+
+The engine connector plug-in loader is used to dynamically load engine connector plug-ins according to the request parameters, and supports caching. The loading process mainly consists of two parts: 1) plug-in resources such as the main program package and its dependency packages are loaded locally (not open); 2) plug-in resources are dynamically loaded from the local file system into the service process environment, for example loaded into the JVM via a class loader.
+
+| Core Class | Core Function |
+|---------------------------------|----------------------------------------------|
+| EngineConnPluginsResourceLoader | Load engine connector plug-in resources |
+| EngineConnPluginsLoader | Load the engine connector plug-in instance, or load an existing one from the cache |
+| EngineConnPluginClassLoader | Dynamically instantiate engine connector instance from jar |
+
+EngineConn-Plugin-Cache engine plug-in cache module
+----------------------------------------
+
+The engine connector plug-in cache is a cache service specifically used to cache loaded engine connectors, and supports read, update, and remove operations. A plug-in that has been loaded into the service process is cached together with its class loader to prevent repeated loading from affecting efficiency; at the same time, the cache module periodically notifies the loader to update the plug-in resources. If changes are found, the plug-in is reloaded and the cache is refreshed automatically.
+
+| Core Class | Core Function |
+|-----------------------------|------------------------------|
+| EngineConnPluginCache | Cache loaded engine connector instance |
+| RefreshPluginCacheContainer | Engine connector that refreshes the cache regularly |
+
+EngineConn-Plugin-Core: Engine connector plug-in core module
+---------------------------------------------
+
+The engine connector plug-in core module is the core module of the engine connector plug-in. It contains the implementation of the basic functions of the engine plug-in, such as the construction of the engine connector start command, the construction of the engine resource factory, and the implementation of the core interfaces of the engine connector plug-in (a conceptual sketch follows the table below).
+
+| Core Class | Core Function |
+|-------------------------|----------------------------------------------------------|
+| EngineConnLaunchBuilder | Build Engine Connector Launch Request |
+| EngineConnFactory | Create Engine Connector |
+| EngineConnPlugin | The interface that an engine connector plug-in implements, covering resource, command, and instance construction methods. |
+| EngineResourceFactory | Engine Resource Creation Factory |
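+
+To make the relationships in the table concrete, here is a conceptual sketch of the plug-in contract (trait and method names are illustrative stand-ins, not the exact Linkis interfaces):
+
+```scala
+// Conceptual sketch of the plug-in contract described above: a plug-in exposes its
+// default labels plus factories for resources, launch requests and engine connectors.
+// Types and method names are illustrative, not the exact Linkis signatures.
+trait EngineResourceFactorySketch {
+  def createEngineResource(startupParams: Map[String, String]): Map[String, Any]
+}
+
+trait EngineConnLaunchBuilderSketch {
+  def buildLaunchRequest(startupParams: Map[String, String]): String // e.g. a launch command
+}
+
+trait EngineConnFactorySketch {
+  def createEngineConn(creationContext: Map[String, Any]): AnyRef
+}
+
+trait EngineConnPluginSketch {
+  def init(params: Map[String, Any]): Unit
+  def getDefaultLabels: Seq[String]
+  def getEngineResourceFactory: EngineResourceFactorySketch
+  def getEngineConnLaunchBuilder: EngineConnLaunchBuilderSketch
+  def getEngineConnFactory: EngineConnFactorySketch
+}
+```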
+
+EngineConn-Plugins: Engine connection plugin collection
+-----------------------------------
+
+The engine connector plug-in collection holds the default engine connector plug-ins that have been implemented against the plug-in interface defined above. It provides the default engine connector implementations, such as jdbc, spark, python, shell, etc. Users can refer to the implemented cases and implement more engine connectors based on their own needs.
+
+| Core Class | Core Function |
+|---------------------|------------------|
+| engineplugin-jdbc | jdbc engine connector |
+| engineplugin-shell | Shell engine connector |
+| engineplugin-spark | spark engine connector |
+| engineplugin-python | python engine connector |
+
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/engine-conn.md b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/engine-conn.md
new file mode 100644
index 00000000000..f3581a3f25b
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/engine/engine-conn.md
@@ -0,0 +1,139 @@
+---
+title: EngineConn Design
+sidebar_position: 1
+---
+
+## 1. Overview
+
+EngineConn: the engine connector, which is used to connect to the underlying computing/storage engine to complete task execution, task information push, result return, and so on. It is the basis for Linkis to provide computing and storage capabilities.
+
+## 2. Overall Design
+
+The overall design idea of EngineConn is to obtain and store the session information of the underlying engine at startup, completing the connection between the EngineConn process and the underlying engine, and then to schedule tasks, through the Executor unit, to the underlying engine session stored in EngineConn for execution and to obtain execution-related information.
+
+### 2.1 Technical Architecture
+
+**Introduction to related terms:**
+
+**EngineConn:** Stores the session information of the underlying engine and completes the connection with it; for example, the Spark engine stores the SparkSession.
+
+**Executor:** The scheduling executor used to accept tasks passed by the caller (such as Entrance) and finally submit them to the underlying engine session for execution. Different kinds of tasks implement different Executor classes. The most widely used is the interactive ComputationExecutor, which accepts tasks and pushes task information to the caller in real time; the non-interactive ManageableOnceExecutor accepts only a single task and completes the submission and execution of the task that started the EngineConn.
+
+![arc](/Images/Architecture/engine/ec_arc_01.png)
+
+### 2.2 Business Architecture
+
+|Component name|First-level module|Second-level module|Function points|
+|:----|:----|:----|:----|
+|Linkis|EngineConn|linkis-engineconn-common|The common module of engine conn, which defines the most basic entity classes and interfaces in engine conn. |
+|Linkis|EngineConn|linkis-engineconn-core|The core module of the engine connector, which defines the interfaces involved in the core logic of EngineConn. |
+|Linkis|EngineConn|linkis-executor-core|The core module of the executor, which defines the core classes related to the executor. |
+|Linkis|EngineConn|linkis-accessible-executor|The underlying abstraction of the accessible Executor. You can interact with it through RPC requests to obtain its status, load, concurrency and other basic indicators Metrics data |
+|Linkis|EngineConn|linkis-computation-engineconn|Related classes that provide capabilities for interactive computing tasks. |
+
+## 3. Module design
+
+Input: The caller executes the task
+
+Output: return task information such as execution status, results, logs, etc.
+
+Key logic: the timing diagram of the key logic of task execution
+
+![time](/Images/Architecture/engine/ec_arc_02.png)
+
+Key Notes:
+
+1. If it is a serial Executor, after the EngineConn receives a task it marks itself Busy, cannot accept other tasks, and checks whether the lock carried by the task is consistent, to prevent the EngineConn from being submitted to by multiple callers at the same time. After the task finishes, it returns to the Unlock state (a simplified sketch of this state handling follows the list).
+2. If it is a parallel Executor, after the EngineConn receives a task it remains in the Unlock state and can continue to accept tasks. It is only marked Busy when the maximum number of concurrent tasks is reached or the machine metrics are abnormal.
+3. If it is a Once-type task, the EngineConn automatically executes the task after it starts, and the EngineConn process exits after the task is executed.
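+
+A simplified sketch of the Busy/Unlock handling described in notes 1 and 2 (state names follow the notes; the locking and concurrency details are illustrative, not the actual accessible-executor implementation):
+
+```scala
+// Simplified state handling for the two executor modes described above.
+// The real EngineConn spreads this logic across lock, status and concurrency services.
+sealed trait NodeStatus
+case object Unlock extends NodeStatus
+case object Busy extends NodeStatus
+
+// Serial executor: one task at a time, guarded by a task lock handed out beforehand.
+final class SerialExecutorSketch {
+  private val status = new java.util.concurrent.atomic.AtomicReference[NodeStatus](Unlock)
+  @volatile private var lockOwner: Option[String] = None
+
+  def grantLock(lock: String): Unit = lockOwner = Some(lock)
+
+  def submit(taskLock: String, run: () => Unit): Boolean = {
+    // Reject if the lock does not match, or another task already switched us to Busy.
+    if (!lockOwner.contains(taskLock)) return false
+    if (!status.compareAndSet(Unlock, Busy)) return false
+    try { run(); true } finally status.set(Unlock) // back to Unlock after the task finishes
+  }
+}
+
+// Parallel executor: stays Unlock until the concurrency limit is reached.
+final class ParallelExecutorSketch(maxConcurrency: Int) {
+  private val running = new java.util.concurrent.atomic.AtomicInteger(0)
+
+  def status: NodeStatus = if (running.get() >= maxConcurrency) Busy else Unlock
+
+  def submit(run: () => Unit): Boolean = {
+    if (running.incrementAndGet() > maxConcurrency) { running.decrementAndGet(); return false }
+    try { run(); true } finally running.decrementAndGet()
+  }
+}
+```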
+
+## 4. Data structure/storage design
+
+Not involved
+
+## 5. Interface design
+
+**Brief introduction of other classes:**
+
+linkis-engineconn-common is the common module of the engine connector. It defines the most basic entity classes and interfaces in the engine connector.
+
+|Core Service|Core Function|
+|:----|:----|
+|EngineCreationContext|Contains the context information of EngineConn during startup|
+|EngineConn|Contains the specific information of EngineConn, such as its type and the concrete connection information with the underlying computing/storage engine|
+|EngineExecution|Provides the creation logic of the Executor|
+|EngineConnHook|Defines the operations before and after each stage of engine startup|
+
+linkis-engineconn-core is the core module of the engine connector. It defines the interfaces involved in the core logic of EngineConn.
+
+|Core Classes|Core Functions|
+|:----|:----|
+|EngineConnManager|Provides related interfaces for creating and obtaining EngineConn|
+|ExecutorManager|Provides related interfaces for creating and obtaining Executor|
+|ShutdownHook|Defines actions during engine shutdown|
+|EngineConnServer|Startup class of EngineConn microservice|
+
+linkis-executor-core is the core module of the executor, which defines the core classes related to the executor. The executor is the real computing execution unit, which is responsible for submitting user code to EngineConn for execution.
+
+|Core Classes|Core Functions|
+|:----|:----|
+|Executor| is the actual computing logic execution unit, and provides top-level abstraction of various capabilities of the engine. |
+|EngineConnAsyncEvent| defines EngineConn related asynchronous events|
+|EngineConnSyncEvent| defines the synchronization event related to EngineConn|
+|EngineConnAsyncListener| defines EngineConn-related asynchronous event listeners|
+|EngineConnSyncListener| defines EngineConn-related synchronization event listeners|
+|EngineConnAsyncListenerBus|Defines the listener bus for EngineConn asynchronous events|
+|EngineConnSyncListenerBus|Defines the listener bus for EngineConn sync events|
+|ExecutorListenerBusContext| defines the context of the EngineConn event listener|
+|LabelService|Provide label reporting function|
+|ManagerService|Provides the function of information transfer with LinkisManager|
+
+linkis-accessible-executor: The underlying abstraction of the Executor that can be accessed. You can interact with it through RPC requests to obtain basic metrics such as its status, load, and concurrency.
+
+|Core Classes|Core Functions|
+|:----|:----|
+|LogCache|Provides the function of log caching|
+|AccessibleExecutor| An Executor that can be accessed and interacted with via RPC requests. |
+|NodeHealthyInfoManager|Manage Executor's health information|
+|NodeHeartbeatMsgManager|Manage Executor's heartbeat information|
+|NodeOverLoadInfoManager|Manage Executor load information|
+|Listener-related|Provides events related to Executor and corresponding listener definitions|
+|EngineConnTimedLock|Define Executor level lock|
+|AccessibleService|Provide the start-stop and status acquisition functions of Executor|
+|ExecutorHeartbeatService|Provides Executor's heartbeat-related functions|
+|LockService|Provides lock management functions|
+|LogService|Provides log management functions|
+|EngineConnCallback|Define the callback logic of EngineConn|
+
+Related classes that provide capabilities for interactive computing tasks.
+
+|Core Classes|Core Functions|
+|:----|:----|
+|EngineConnTask| defines interactive computing tasks submitted to EngineConn|
+|ComputationExecutor| Defines an interactive Executor, which has interactive capabilities such as status query and task kill, and by default can only execute one task at a time. |
+|ConcurrentComputationExecutor|Interactive synchronous concurrent Executor, inherited from ComputationExecutor, which supports executing multiple tasks at the same time|
+|AsyncConcurrentComputationExecutor|Interactive asynchronous concurrent Executor, inherited from ComputationExecutor, which supports executing multiple tasks at the same time; tasks do not occupy an execution thread and results are delivered by asynchronous notification|
+|TaskExecutionService|Provides management functions for interactive computing tasks|
+
+
+## 6. Non-functional design
+
+### 6.1 Security
+
+1. All information related to a task can only be queried by the submitting user
+2. The default startup user of the EngineConn process is the submitting user
+### 6.2 Performance
+
+An EngineConn that supports concurrency can run a large number of tasks at the same time. For example, a single Trino EngineConn can run more than 300 Trino tasks simultaneously.
+
+### 6.3 Capacity
+
+Not involved
+
+### 6.4 High Availability
+
+EngineConn is a process started on demand for each task, so it supports high availability.
+
+### 6.5 Data Quality
+
+Not involved
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/entrance.md b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/entrance.md
new file mode 100644
index 00000000000..ef1c4ccba83
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/entrance.md
@@ -0,0 +1,27 @@
+---
+title: Entrance Architecture Design
+sidebar_position: 2
+---
+
+The Linkis task submission entrance is used to receive, schedule, and forward execution requests and to manage the life cycle of computing tasks, and it can return calculation results, logs, and progress to the caller. It is split out from the native capabilities of the Entrance of Linkis 0.X.
+
+1. Entrance architecture diagram
+
+![](/Images/Architecture/linkis-entrance-01.png)
+
+**Introduction to the second-level module:**
+
+EntranceServer
+--------------
+
+EntranceServer, the computing task submission entrance service, is the core service of Entrance. It is responsible for the reception, scheduling, execution status tracking, and job life cycle management of Linkis execution tasks. It mainly converts task execution requests into schedulable Jobs, schedules them, applies for an Executor to execute them, and handles job status management, result set management, log management, and so on.
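+
+The flow described above can be condensed into an illustrative sketch (all types are simplified stand-ins for the classes listed in the table below):
+
+```scala
+// Illustrative outline of the Entrance flow described above:
+// request map -> Task -> schedulable Job -> scheduled execution -> persisted state/results.
+final case class TaskSketch(code: String, params: Map[String, Any])
+final case class JobSketch(id: String, task: TaskSketch)
+
+class EntranceServerSketch(
+    parse: Map[String, Any] => TaskSketch,     // EntranceParser role
+    intercept: TaskSketch => TaskSketch,       // EntranceInterceptor role (variables, checks, limits)
+    schedule: JobSketch => Unit,               // Scheduler role
+    persist: JobSketch => Unit                 // PersistenceManager role
+) {
+  def submit(request: Map[String, Any]): JobSketch = {
+    val task = intercept(parse(request))
+    val job  = JobSketch(java.util.UUID.randomUUID().toString, task)
+    persist(job)   // record the job before execution
+    schedule(job)  // hand the job to the scheduler / Executor
+    job
+  }
+}
+```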
+
+| Core Class | Core Function |
+|-------------------------|---------------------|
+| EntranceInterceptor | Entrance interceptor is used to supplement the information of the incoming parameter task, making the content of this task more complete. The supplementary information includes: database information supplement, custom variable replacement, code inspection, limit restrictions, etc. |
+| EntranceParser | The Entrance parser is used to parse the request parameter Map into Task, and it can also convert Task into schedulable Job, or convert Job into storable Task. |
+| EntranceExecutorManager | Entrance executor management creates an Executor for the execution of EntranceJob, maintains the relationship between Job and Executor, and supports the labeling capabilities requested by Job |
+| PersistenceManager | Persistence management is responsible for job-related persistence operations, such as the result set path, job status changes, progress, etc., stored in the database. |
+| ResultSetEngine | The result set engine is responsible for the storage of the result set after the job is run, and it is saved in the form of a file to HDFS or a local storage directory. |
+| LogManager | Log Management is responsible for the storage of job logs and the management of log error codes. |
+| Scheduler | The job scheduler is responsible for the scheduling and execution of all jobs, mainly through scheduling job queues. |
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-cli.md b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-cli.md
new file mode 100644
index 00000000000..9910e86f75f
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-cli.md
@@ -0,0 +1,36 @@
+---
+title: Linkis-Client Architecture
+sidebar_position: 3
+---
+
+Provides users with a lightweight client for submitting tasks to Linkis for execution.
+
+#### Linkis-Client architecture diagram
+
+![img](/Images/Architecture/linkis-client-01.png)
+
+
+
+#### Second-level module introduction
+
+##### Linkis-Computation-Client
+
+Provides an interface for users to submit execution tasks to Linkis in the form of an SDK (an illustrative sketch follows the table below).
+
+| Core Class | Core Function |
+| ---------- | -------------------------------------------------- |
+| Action | Defines the requested attributes, methods and parameters included |
+| Result | Defines the properties of the returned result, the methods and parameters included |
+| UJESClient | Responsible for request submission, execution, status, results and related parameters acquisition |
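+
+For illustration, the three roles above can be sketched as follows (these are simplified stand-ins rather than the real SDK classes; see the `JobSubmitAction` example in the custom-variable document for the actual builder-style Action):
+
+```scala
+// Illustrative sketch of the Action / Result / UJESClient roles listed above.
+trait ActionSketch { def user: String; def payload: Map[String, Any] }
+trait ResultSketch { def status: String; def data: Map[String, Any] }
+
+trait UJESClientSketch {
+  def submit(action: ActionSketch): ResultSketch   // submit the request for execution
+  def status(execId: String): ResultSketch         // poll the execution status
+  def resultSet(execId: String): ResultSketch      // fetch results once the task finishes
+}
+```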
+
+
+
+##### Linkis-Cli
+
+Provides a way for users to submit tasks to Linkis in the form of a shell command terminal.
+
+| Core Class | Core Function |
+| ----------- | -------------------------------------------------------------- |
+| Common | Defines the parent classes and interfaces of the instruction templates, the instruction parsing entity classes, and the task submission and execution links |
+| Core | Responsible for parsing input, task execution and defining output methods |
+| Application | Call linkis-computation-client to perform tasks, and pull logs and final results in real time |
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/_category_.json b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/_category_.json
new file mode 100644
index 00000000000..8bb53d93abc
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Linkis Manager",
+ "position": 4
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/app-manager.md b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/app-manager.md
new file mode 100644
index 00000000000..04040528e07
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/app-manager.md
@@ -0,0 +1,38 @@
+---
+title: App Manager
+sidebar_position: 1
+---
+
+## 1. Background
+The Entrance module of the old version of Linkis was responsible for too many responsibilities, its management of engines was weak, and it was hard to extend, so the AppManager module was extracted to take over the following responsibilities:
+1. Add the AM module to move the engine management function previously done by Entrance to the AM module.
+2. AM needs to support operating Engine, including: adding, multiplexing, recycling, preheating, switching and other functions.
+3. Need to connect to the Manager module to provide Engine management functions: including Engine status maintenance, engine list maintenance, engine information, etc.
+4. AM needs to manage EM services, complete EM registration and forward the resource registration to RM.
+5. AM needs to be connected to the Label module, including the addition and deletion of EM/Engine, the label manager needs to be notified to update the label.
+6. AM also needs to dock the label module for label analysis, and need to obtain a list of serverInstances with a series of scores through a series of labels (How to distinguish between EM and Engine? the labels are completely different).
+7. Need to provide external basic interface: including the addition, deletion and modification of engine and engine manager, metric query, etc.
+## Architecture diagram
+![AppManager03](/Images/Architecture/AppManager-03.png)
+ As shown in the figure above: AM belongs to the AppManager module in LinkisMaster and provides services.
+ New engine application flow chart:
+![AppManager02](/Images/Architecture/AppManager-02.png)
+ From the above engine life cycle flow chart, it can be seen that Entrance is no longer doing the management of the Engine, and the startup and management of the engine are controlled by AM.
+## Architecture description
+ AppManager mainly includes engine service and EM service:
+Engine service includes all operations related to EngineConn, such as engine creation, engine reuse, engine switching, engine recycling, engine stopping, engine destruction, etc.
+The EM service is responsible for the information management of all EngineConnManagers and can manage ECMs online, including label modification, suspending an ECM service, obtaining ECM instance information, obtaining the engines running on an ECM, and killing an ECM. It can also query all EngineNodes according to EM node information, supports searching by user, and saves EM node load information, node health information, resource usage information, and so on.
+The new EngineConnManager and EngineConn both support tag management, and the types of engines have also added offline, streaming, and interactive support.
+
+ Engine creation: specifically responsible for the new-engine function of the LinkisManager service. The engine startup module is fully responsible for the creation of a new engine, including obtaining ECM label collections, requesting resources, obtaining the engine startup command, notifying the ECM to create the new engine, updating the engine list, and so on.
+CreateEngineRequest->RPC/Rest -> MasterEventHandler ->CreateEngineService ->
+->LabelContext/EnginePlugin/RMResourcevice->(RecycleEngineService)EngineNodeManager->EMNodeManager->sender.ask(EngineLaunchRequest)->EngineManager service->EngineNodeManager->EngineLocker->Engine->EngineNodeManager->EngineFactory=>EngineService=> ServerInstance
+In the part of engine creation that interacts with RM, the EnginePlugin should return the specific resource types through labels, and then AM sends the resource request to RM.
+
+ Engine reuse: In order to reduce the time and resources consumed by engine startup, the principle of reuse dictates that existing engines are used first. Reuse generally refers to reusing engines that the user has already created. The engine reuse module is responsible for providing a collection of reusable engines, electing and locking one of them and starting to use it, or returning that there is no engine that can be reused.
+ReuseEngineRequest->RPC/Rest -> MasterEventHandler ->ReuseEngineService ->
+->LabelContext->EngineNodeManager->EngineSelector->EngineLocker->Engine->EngineNodeManager->EngineReuser->EngineService=>ServerInstance
+
+ Engine switching: It mainly refers to the label switching of existing engines. For example, when the engine is created, it was created by Creator1. Now it can be changed to Creator2 by engine switching. At this time, you can allow the current engine to receive tasks with the tag Creator2.
+SwitchEngineRequest->RPC/Rest -> MasterEventHandler ->SwitchEngineService ->LabelContext/EnginePlugin/RMResourcevice->EngineNodeManager->EngineLocker->Engine->EngineNodeManager->EngineReuser->EngineService=>ServerInstance.
+Engine manager: Engine manager is responsible for managing the basic information and metadata information of all engines.
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/engine-conn-history.md b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/engine-conn-history.md
new file mode 100644
index 00000000000..46e2d888d92
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/engine-conn-history.md
@@ -0,0 +1,89 @@
+---
+title: EngineConn History Features
+sidebar_position: 5
+tags: [Feature]
+---
+
+## 1. Functional requirements
+### 1.1 Requirement Background
+Before version 1.1.3, LinkisManager only recorded the information and resource usage of running EngineConns, and this information was lost after a task was completed. If you need statistics or views of historical ECs, or need to view the logs of ECs that have already ended, this is too cumbersome, so it is important to record historical ECs.
+
+### 1.2 Goals
+- Persist EC information and resource information to the DB
+- Support viewing and searching historical EC information through the RESTful interface
+- Support viewing the logs of ECs that have ended
+
+## 2. Overall Design
+
+The main changes in this feature are the RM and AM modules under LinkisManager, and an information record table has been added.
+
+### 2.1 Technical Architecture
+This implementation needs to record EC information and resource information. Resource information covers three concepts, namely requested resources, actually used resources, and released resources, all of which need to be recorded. Therefore, the general plan is to implement the recording based on the EC life cycle in the ResourceManager: when the EC completes each of the above three stages, the EC information is updated. The overall picture is shown below:
+
+![engineconn-history-01.png](/Images-zh/Architecture/EngineConn/engineconn-history-01.png)
+
+
+
+### 2.2 Business Architecture
+
+This feature mainly records the information of historical ECs and supports viewing the logs of historical ECs. The modules involved in the function points are as follows:
+
+| First-level module | Second-level module | Function point |
+|---|---|---|
+| LinkisManager | ResourceManager| Complete the EC information record when the EC requests resources, reports the use of resources, and releases resources|
+| LinkisManager | AppManager| Provides an interface to list and search all historical EC information|
+
+## 3. Module Design
+### Core execution flow
+
+- \[Input] The input mainly consists of the requested resources when the engine is created, the actually used resources reported after the engine is started, and the information provided when the resources are released as the engine exits, mainly including the requested labels, the resources, the EC's unique ticket id, the resource type, etc.
+- \[Processing] The information recording service processes the input data, parses the corresponding engine information, user, creator, and log path through the labels, determines whether it is a resource request, use, or release by the resource type, and then stores the information in the DB.
+
+The call sequence diagram is as follows:
+![engineconn-history-02.png](/Images-zh/Architecture/EngineConn/engineconn-history-02.png)
+
+
+
+## 4. Data structure:
+```sql
+# EC information resource record table
+DROP TABLE IF EXISTS `linkis_cg_ec_resource_info_record`;
+CREATE TABLE `linkis_cg_ec_resource_info_record` (
+ `id` INT(20) NOT NULL AUTO_INCREMENT,
+ `label_value` VARCHAR(255) NOT NULL COMMENT 'ec labels stringValue',
+ `create_user` VARCHAR(128) NOT NULL COMMENT 'ec create user',
+ `service_instance` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'ec instance info',
+ `ecm_instance` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'ecm instance info ',
+ `ticket_id` VARCHAR(100) NOT NULL COMMENT 'ec ticket id',
+ `log_dir_suffix` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'log path',
+ `request_times` INT(8) COMMENT 'resource request times',
+ `request_resource` VARCHAR(255) COMMENT 'request resource',
+ `used_times` INT(8) COMMENT 'resource used times',
+ `used_resource` VARCHAR(255) COMMENT 'used resource',
+ `release_times` INT(8) COMMENT 'resource released times',
+ `released_resource` VARCHAR(255) COMMENT 'released resource',
+ `release_time` datetime DEFAULT NULL COMMENT 'released time',
+ `used_time` datetime DEFAULT NULL COMMENT 'used time',
+ `create_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
+ PRIMARY KEY (`id`),
+ KEY (`ticket_id`),
+ UNIQUE KEY `label_value_ticket_id` (`ticket_id`, `label_value`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+```
+
+## 5. Interface Design
+For the API of the engine history management page, refer to the document "Add a history engine page to the management console".
+
+## 6. Non-functional design
+
+### 6.1 Security
+No security issues are involved; the RESTful interface requires login authentication
+
+### 6.2 Performance
+Little impact on engine life cycle performance
+
+### 6.3 Capacity
+Requires regular cleaning
+
+### 6.4 High Availability
+Not involved
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/label-manager.md b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/label-manager.md
new file mode 100644
index 00000000000..2d8eeafbe32
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/label-manager.md
@@ -0,0 +1,43 @@
+---
+title: Label Manager
+sidebar_position: 3
+---
+
+## LabelManager architecture design
+
+#### Brief description
+LabelManager is a functional module in Linkis that provides label services to upper-level applications. It uses label technology to manage cluster resource allocation, service node election, user permission matching, and gateway routing and forwarding. It includes generalized parsing and processing tools that support various custom labels, and a universal label matching scorer.
+### Overall architecture schematic
+
+![label_manager_global](/Images/Architecture/LabelManager/label_manager_global.png)
+
+#### Architecture description
+- LabelBuilder: Responsible for the work of label analysis. It can parse the input label type, keyword or character value to obtain a specific label entity. There is a default generalization implementation class or custom extensions.
+- LabelEntities: Refers to a collection of label entities, including cluster labels, configuration labels, engine labels, node labels, routing labels, search labels, etc.
+- NodeLabelService: The associated service interface class of instance/node and label, which defines the interface method of adding, deleting, modifying and checking the relationship between the two and matching the instance/node according to the label.
+- UserLabelService: Declare the associated operation between the user and the label.
+- ResourceLabelService: Declare the associated operations of cluster resources and labels, involving resource management of combined labels, cleaning or setting the resource value associated with the label.
+- NodeLabelScorer: The node label scorer, corresponding to implementations of different label matching algorithms; scores indicate the degree of node label matching.
+
+### 1. LabelBuilder parsing process
+Take the generic label analysis class GenericLabelBuilder as an example to clarify the overall process:
+The process of label parsing/construction includes several steps:
+1. According to the input, select the appropriate label class to be parsed.
+2. According to the definition information of the tag class, recursively analyze the generic structure to obtain the specific tag value type.
+3. Convert the input value object to the tag value type, using implicit conversion or positive and negative analysis framework.
+4. According to the return of 1-3, instantiate the label, and perform some post operations according to different label classes.
+
+### 2. NodeLabelScorer scoring process
+To select a suitable engine node based on the label list attached to a Linkis user execution request, the matching engine list must be ranked; this is quantified as the label matching degree of each engine node, i.e. its score.
+In the label definition, each label has a feature value, namely CORE, SUITABLE, PRIORITIZED, or OPTIONAL, and each feature value has a boost value, which acts as a weight or incentive value.
+Some features, such as CORE and SUITABLE, must be unique features; that is, strong filtering is required during matching, and a node can only be associated with one CORE/SUITABLE label.
+According to the relationship between existing tags, nodes, and request attached tags, the following schematic diagram can be drawn:
+![label_manager_scorer](/Images/Architecture/LabelManager/label_manager_scorer.png)
+
+The built-in default scoring logic generally includes the following steps (a simplified sketch follows this list):
+1. The input consists of two relationship lists, `Label -> Node` and `Node -> Label`, where every Node in the `Node -> Label` relationship must carry all of the CORE and SUITABLE feature labels; these nodes are called candidate nodes.
+2. The first step traverses the `Node -> Label` relationship list and scores each label associated with each node. If the label is not one attached to the request, its score is 0.
+Otherwise, its score is: (base score / number of request labels with the same feature) * boost value of that feature, where the base score defaults to 1. The initial score of a node is the sum of its associated label scores. Since a CORE/SUITABLE label must be unique, its occurrence count is always 1.
+3. After obtaining each node's initial score, the second step traverses the `Label -> Node` relationship. The first step ignores labels that are not attached to the request, yet the proportion of such irrelevant labels should still affect the score. Labels of this kind are uniformly given the UNKNOWN feature, which also has its own boost value;
+we stipulate that the higher the proportion of candidate nodes among all nodes associated with an irrelevant label, the more significant its impact on the score, which is accumulated onto the node's initial score from the first step.
+4. Normalize the candidate node scores by their standard deviation and sort them.
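+
+The following is a simplified, self-contained sketch of this scoring idea. The boost values, the `Label` record and the penalty applied to UNKNOWN labels are illustrative assumptions rather than the real Linkis NodeLabelScorer implementation; the sketch only shows how feature boosts, request-label counts and standard-deviation normalization combine into a score.
+
+```java
+import java.util.*;
+
+public class NodeLabelScorerSketch {
+    // Illustrative boost values; the real feature boosts are defined inside Linkis.
+    enum Feature {
+        CORE(5.0), SUITABLE(3.0), PRIORITIZED(2.0), OPTIONAL(1.0), UNKNOWN(-1.0);
+        final double boost;
+        Feature(double boost) { this.boost = boost; }
+    }
+
+    record Label(String key, Feature feature) {}
+
+    // nodeToLabels: candidate node -> labels it carries; requestLabels: labels attached to the request.
+    static Map<String, Double> score(Map<String, List<Label>> nodeToLabels, Set<Label> requestLabels) {
+        Map<Feature, Long> featureCount = new EnumMap<>(Feature.class);
+        requestLabels.forEach(l -> featureCount.merge(l.feature(), 1L, Long::sum));
+
+        double baseScore = 1.0;
+        Map<String, Double> scores = new HashMap<>();
+        for (var e : nodeToLabels.entrySet()) {
+            double s = 0.0;
+            for (Label label : e.getValue()) {
+                if (requestLabels.contains(label)) {
+                    // (base score / request labels with the same feature) * feature boost
+                    s += baseScore / featureCount.get(label.feature()) * label.feature().boost;
+                } else {
+                    // irrelevant label: penalize with the UNKNOWN boost, weighted by the share of
+                    // candidate nodes that also carry it (simplified view of the Label -> Node pass)
+                    long sharing = nodeToLabels.values().stream().filter(ls -> ls.contains(label)).count();
+                    s += Feature.UNKNOWN.boost * sharing / nodeToLabels.size();
+                }
+            }
+            scores.put(e.getKey(), s);
+        }
+        // normalize by the standard deviation of the candidate scores
+        double mean = scores.values().stream().mapToDouble(Double::doubleValue).average().orElse(0);
+        double std = Math.sqrt(scores.values().stream().mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0));
+        if (std > 0) scores.replaceAll((k, v) -> (v - mean) / std);
+        return scores;
+    }
+
+    public static void main(String[] args) {
+        Label core = new Label("userCreator=hadoop-IDE", Feature.CORE);
+        Label engine = new Label("engineType=spark-3.2.1", Feature.OPTIONAL);
+        Label other = new Label("emInstance=ecm-1", Feature.OPTIONAL);
+        Map<String, List<Label>> nodes = Map.of("node-1", List.of(core, engine), "node-2", List.of(core, other));
+        System.out.println(score(nodes, Set.of(core, engine)));
+    }
+}
+```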
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/overview.md b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/overview.md
new file mode 100644
index 00000000000..81bc1e6f3f8
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/overview.md
@@ -0,0 +1,48 @@
+---
+title: Overview
+sidebar_position: 0
+---
+
+LinkisManager Architecture Design
+====================
+ As an independent microservice of Linkis, LinkisManager provides AppManager (application management), ResourceManager (resource management), and LabelManager (label management) capabilities. It can support multi-active deployment and has the characteristics of high availability and easy expansion.
+## 1. Architecture Diagram
+![Architecture Diagram](/Images/Architecture/LinkisManager/LinkisManager-01.png)
+### 1.1 Noun explanation
+- EngineConnManager (ECM): Engine Manager, used to start and manage engines.
+- EngineConn (EC): Engine connector, used to connect the underlying computing engine.
+- ResourceManager (RM): Resource Manager, used to manage node resources.
+## 2. Introduction to the second-level module
+### 2.1 Application management module linkis-application-manager
+ AppManager is used for unified scheduling and management of engines:
+
+| Core Interface/Class | Main Function |
+|------------|--------|
+|EMInfoService | Defines EngineConnManager information query and modification functions |
+|EMRegisterService| Defines EngineConnManager registration function |
+|EMEngineService | Defines EngineConnManager's creation, query, and closing functions of EngineConn |
+|EngineAskEngineService | Defines the function of querying EngineConn |
+|EngineConnStatusCallbackService | Defines the function of processing EngineConn status callbacks |
+|EngineCreateService | Defines the function of creating EngineConn |
+|EngineInfoService | Defines EngineConn query function |
+|EngineKillService | Defines the stop function of EngineConn |
+|EngineRecycleService | Defines the recycling function of EngineConn |
+|EngineReuseService | Defines the reuse function of EngineConn |
+|EngineStopService | Defines the self-destruct function of EngineConn |
+|EngineSwitchService | Defines the engine switching function |
+|AMHeartbeatService | Provides EngineConnManager and EngineConn node heartbeat processing functions |
+
+ The process of applying for an engine through AppManager is as follows:
+![AppManager](/Images/Architecture/LinkisManager/AppManager-01.png)
+### 2.2 Label management module linkis-label-manager
+ LabelManager provides label management and analysis capabilities.
+
+| Core Interface/Class | Main Function |
+|------------|--------|
+|LabelService | Provides the function of adding, deleting, modifying and checking labels |
+|ResourceLabelService | Provides resource label management functions |
+|UserLabelService | Provides user label management functions |
+The LabelManager architecture diagram is as follows:
+![ResourceManager](/Images/Architecture/LinkisManager/ResourceManager-01.png)
+### 2.3 Monitoring module linkis-manager-monitor
+ Monitor provides the function of node status monitoring.
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/resource-manager.md b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/resource-manager.md
new file mode 100644
index 00000000000..df3ea73a8dd
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/linkis-manager/resource-manager.md
@@ -0,0 +1,138 @@
+---
+title: Resource Manager
+sidebar_position: 2
+---
+
+## 1. Background
+ ResourceManager (RM for short) is the computing resource management module of Linkis. All EngineConn (EC for short), EngineConnManager (ECM for short), and even external resources including Yarn are managed by RM. RM can manage resources based on users, ECM, or other granularities defined by complex tags.
+## 2. The role of RM in Linkis
+![01](/Images/Architecture/rm-01.png)
+![02](/Images/Architecture/rm-02.png)
+ As a part of Linkis Manager, RM's main functions are to maintain the available resource information reported by ECM, process the resource applications submitted by ECM, record in real time the actual resource usage reported by EC during its life cycle after a successful application, and provide interfaces for querying the current resource usage.
+In Linkis, other services that interact with RM mainly include:
+1. EngineConnManager (ECM for short): the microservice that processes requests to start engine connectors. As a resource provider, ECM is responsible for registering and unregistering resources with RM. At the same time, as the manager of engines, ECM applies for resources from RM on behalf of each new engine connector that is about to start. For each ECM instance there is a corresponding resource record in RM, containing information such as the total resources and protected resources it provides, with the used resources updated dynamically.
+![03](/Images/Architecture/rm-03.png)
+2. The engine connector, referred to as EC, is the actual execution unit of user operations. At the same time, as the actual user of the resource, the EC is responsible for reporting the actual use of the resource to the RM. Each EC has a corresponding resource record in the RM: during the startup process, it is reflected as a locked resource; during the running process, it is reflected as a used resource; after being terminated, the resource record is subsequently deleted.
+![04](/Images/Architecture/rm-04.png)
+## 3. Resource type and format
+![05](/Images/Architecture/rm-05.png)
+ As shown in the figure above, all resource classes implement a top-level Resource interface, which defines the calculation and comparison methods that every resource class must support and overloads the corresponding mathematical operators, so that resources can be calculated and compared directly like numbers (a usage sketch follows the operator table below).
+
+| Operator | Correspondence Method | Operator | Correspondence Method |
+|--------|-------------|--------|-------------|
+| \+ | add | \> | moreThan |
+| \- | minus | \< | lessThan |
+| \* | multiply | = | equals |
+| / | divide | \>= | notLessThan |
+| \<= | notMoreThan | | |
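+
+The following is a minimal, self-contained sketch of this contract, reduced to a memory-only resource. The class is illustrative; the real Linkis resource classes carry more fields (CPU, instances, Yarn queue information) plus JSON (de)serialization, and their exact constructors and signatures may differ.
+
+```java
+public class MemoryResourceSketch {
+    final long memory; // bytes
+
+    MemoryResourceSketch(long memory) { this.memory = memory; }
+
+    // '+' / '-' in the operator table
+    MemoryResourceSketch add(MemoryResourceSketch o)   { return new MemoryResourceSketch(memory + o.memory); }
+    MemoryResourceSketch minus(MemoryResourceSketch o) { return new MemoryResourceSketch(memory - o.memory); }
+    // '>=' / '<' in the operator table
+    boolean notLessThan(MemoryResourceSketch o) { return memory >= o.memory; }
+    boolean lessThan(MemoryResourceSketch o)    { return memory <  o.memory; }
+
+    public static void main(String[] args) {
+        MemoryResourceSketch remaining = new MemoryResourceSketch(8L << 30); // 8 GiB
+        MemoryResourceSketch requested = new MemoryResourceSketch(2L << 30); // 2 GiB
+        if (remaining.notLessThan(requested)) {
+            System.out.println("granted, left: " + remaining.minus(requested).memory + " bytes");
+        }
+    }
+}
+```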
+
+ The currently supported resource types are shown in the following table. All resources have corresponding json serialization and deserialization methods, which can be stored in json format and transmitted across the network:
+
+| Resource Type | Description |
+|-----------------------|--------------------------------------------------------|
+| MemoryResource | Memory Resource |
+| CPUResource | CPU Resource |
+| LoadResource | Both memory and CPU resources |
+| YarnResource | Yarn queue resources (queue, queue memory, queue CPU, number of queue instances) |
+| LoadInstanceResource | Server resources (memory, CPU, number of instances) |
+| DriverAndYarnResource | Driver and executor resources (with server resources and Yarn queue resources at the same time) |
+| SpecialResource | Other custom resources |
+
+## 4. Available resource management
+ The available resources in the RM mainly come from two sources: the available resources reported by the ECM, and the resource limits configured according to tags in the Configuration module.
+**ECM resource report**:
+1. When the ECM is started, it will broadcast the ECM registration message. After receiving the message, the RM will register the resource according to the content contained in the message. The resource-related content includes:
+
+ 1. Total resources: the total number of resources that the ECM can provide.
+
+ 2. Protect resources: When the remaining resources are less than this resource, no further resources are allowed to be allocated.
+
+ 3. Resource type: such as LoadResource, DriverAndYarnResource and other type names.
+
+ 4. Instance information: machine name plus port name.
+
+2. After RM receives the resource registration request, it adds a record to the resource table whose content is consistent with the interface parameters, finds the label representing this ECM through the instance information, and adds a record to the resource-label association table.
+
+3. When the ECM is shut down, it broadcasts an ECM shutdown message. After receiving the message, RM takes the ECM offline according to the instance information in the message, i.e. deletes the resource record and the association records corresponding to that ECM instance label.
+
+**Configuration module tag resource configuration**
+ In the Configuration module, users can configure the number of resources based on different tag combinations, such as limiting the maximum available resources of the User/Creator/EngineType combination.
+
+ The RM queries the Configuration module for resource information through the RPC message, using the combined tag as the query condition, and converts it into a Resource object to participate in subsequent comparison and recording.
+
+## 5. Resource Usage Management
+**Receive user's resource application:**
+1. When LinkisManager receives a request to start EngineConn, it will call RM's resource application interface to apply for resources. The resource application interface accepts an optional time parameter. When the waiting time for applying for a resource exceeds the limit of the time parameter, the resource application will be automatically processed as a failure.
+**Judging whether there are enough resources:**
+That is, determine whether the remaining available resources are greater than or equal to the requested resources: if so, the resources are sufficient; otherwise, they are insufficient (a simplified sketch of the per-label check follows the numbered list below).
+
+1. RM preprocesses the label information attached to the resource application, filtering, combining and converting the original labels according to the rules (such as combining the User/Creator label and the EngineType label), which makes the granularity of the subsequent resource judgment more flexible.
+
+2. Lock each converted label one by one, so that their corresponding resource records remain unchanged during the processing of resource applications.
+
+3. According to each label:
+
+ 1. Query the corresponding resource record from the database through the Persistence module. If the record contains the remaining available resources, it is directly used for comparison.
+
+ 2. If there is no direct remaining available resource record, it will be calculated by the formula of [Remaining Available Resource=Maximum Available Resource-Used Resource-Locked Resource-Protected Resource].
+
+ 3. If there is no maximum available resource record, request the Configuration module to see if there is configured resource information, if so, use the formula for calculation, if not, skip the resource judgment for this tag.
+
+ 4. If there is no resource record, skip the resource judgment for this tag.
+
+4. As long as one tag is judged to be insufficient in resources, the resource application will fail, and each tag will be unlocked one by one.
+
+5. Only when all labels are judged to have sufficient resources can the resource application pass and proceed to the next step.
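+
+A simplified, self-contained sketch of the per-label check is shown below. The record and field names are illustrative, not the real Linkis persistence entities; it only demonstrates the remaining-available-resource formula quoted above.
+
+```java
+// Illustrative per-label resource record (single resource dimension, for brevity).
+record LabelResourceRecord(long maxAvailable, long used, long locked, long protectedRes) {
+    // Remaining Available = Maximum Available - Used - Locked - Protected
+    long remaining() { return maxAvailable - used - locked - protectedRes; }
+}
+
+public class ResourceCheckSketch {
+    static boolean enough(LabelResourceRecord record, long requested) {
+        return record.remaining() >= requested;
+    }
+
+    public static void main(String[] args) {
+        LabelResourceRecord userCreatorLabel = new LabelResourceRecord(10, 4, 2, 1);
+        System.out.println(enough(userCreatorLabel, 3)); // remaining = 3 >= 3 -> true
+    }
+}
+```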
+
+**Lock the requested resources:**
+
+1. Record the amount of requested resources by generating a new locked-resource record in the resource table and associating it with each label.
+
+2. If a label has a remaining-available-resource record, deduct the corresponding amount from it.
+
+3. Generate a timed task that checks after a certain period whether the locked resources have actually been used; if they are still unused when the timeout expires, they are forcibly recycled (a sketch follows this list).
+
+4. Unlock each label.
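+
+A hedged sketch of step 3 is shown below: a scheduled check force-releases a lock if the engine has not reported actual usage within the timeout. The class, lock identifiers and bookkeeping here are illustrative assumptions, not the real RM implementation.
+
+```java
+import java.util.concurrent.*;
+
+public class LockTimeoutSketch {
+    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
+    // lockId -> whether the locked resources have been reported as actually used
+    private final ConcurrentMap<String, Boolean> usedLocks = new ConcurrentHashMap<>();
+
+    void lock(String lockId, long timeoutSeconds) {
+        usedLocks.put(lockId, Boolean.FALSE);
+        scheduler.schedule(() -> {
+            if (Boolean.FALSE.equals(usedLocks.get(lockId))) {
+                System.out.println("lock " + lockId + " timed out, force recycling locked resources");
+                usedLocks.remove(lockId); // the real RM would also restore the remaining available resources here
+            }
+        }, timeoutSeconds, TimeUnit.SECONDS);
+    }
+
+    // called when the EngineConn reports its actual resource usage
+    void markUsed(String lockId) { usedLocks.replace(lockId, Boolean.TRUE); }
+}
+```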
+
+**Report the actual resource usage:**
+
+1. After the EngineConn starts, it broadcasts a resource-usage message. On receiving the message, RM checks whether a locked-resource record exists for the labels of this EngineConn; if not, it reports an error.
+
+2. If a locked-resource record exists, lock all labels associated with this EngineConn.
+
+3. For each label, convert the corresponding locked-resource record into a used-resource record.
+
+4. Unlock all labels.
+
+**Release the actually used resources:**
+
+1. After the EngineConn life cycle ends, it broadcasts a resource-recycling message. On receiving the message, RM checks whether used-resource records exist for the labels of this EngineConn.
+
+2. If so, lock all labels associated with this EngineConn.
+
+3. Subtract the used amount from the resource record corresponding to each label.
+
+4. If a label has a remaining-available-resource record, increase it by the corresponding amount.
+
+5. Unlock each label.
+
+## 6. External resource management
+ In RM, in order to classify resources more flexibly and extensibly, support multi-cluster resource management and control, and make it easier to integrate new external resources, the following design considerations were made:
+
+1. Unified management of resources through tags. After the resource is registered, it is associated with the tag, so that the attributes of the resource can be expanded infinitely. At the same time, resource applications are also tagged to achieve flexible matching.
+
+2. Abstract the cluster into one or more tags, and maintain the environmental information corresponding to each cluster tag in the external resource management module to achieve dynamic docking.
+
+3. Abstract a general external resource management module. If you need to access new external resource types, you can convert different types of resource information into Resource entities in the RM as long as you implement a fixed interface to achieve unified management.
+![06](/Images/Architecture/rm-06.png)
+
+ Other modules of RM obtain external resource information through the interface provided by ExternalResourceService.
+
+ The ExternalResourceService obtains information about external resources through resource types and tags:
+
+1. The type, label, configuration and other attributes of all external resources (such as cluster name, Yarn web
+ url, Hadoop version and other information) are maintained in the linkis\_external\_resource\_provider table.
+
+2. For each resource type, there is an implementation of the ExternalResourceProviderParser interface, which parses the attributes of external resources, converts the information that can be matched to labels into the corresponding Label, and converts the information that can be used as request parameters into params. Finally, an ExternalResourceProvider instance that can serve as the basis for querying external resource information is constructed.
+
+3. According to the resource type and label information in the parameters of the ExternalResourceService method, find the matching ExternalResourceProvider, generate an ExternalResourceRequest based on the information in it, and formally call the API provided by the external resource to initiate a resource information request.
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/overview.md b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/overview.md
new file mode 100644
index 00000000000..813381c0597
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/computation-governance-services/overview.md
@@ -0,0 +1,38 @@
+---
+title: Overview
+sidebar_position: 1
+---
+
+## **Overview**
+
+Computation Governance Services (CGS) is the core module of Linkis that completes the main steps of computing tasks and requests: submission, preparation, execution, and returning results.
+
+## Architecture Diagram
+![linkis Computation Gov](/Images/Linkis_1.0_architecture.png)
+**Execution process optimization:** Linkis 1.0 optimizes the overall Job execution process across the three stages of submission —\> preparation —\> execution, fully upgrading Linkis's Job execution architecture, as shown in the following figure:
+![](/Images/Architecture/linkis-computation-gov-02.png)
+## Architecture Description
+### 1. Entrance
+ Entrance, as the submission portal for computing tasks, provides task reception, scheduling and job information forwarding capabilities. It is a native capability split from Linkis0.X's Entrance.
+[Entrance Architecture Design](entrance.md)
+### 2. Orchestrator
+ Orchestrator, as the entrance to the preparation phase, inherits from the Entrance of Linkis 0.X the capabilities of parsing Jobs, applying for Engines, and submitting for execution; at the same time, Orchestrator provides powerful orchestration and computing-strategy capabilities to support application scenarios such as multi-active, active-standby, transactions, replay, rate limiting, and heterogeneous or mixed computing.
+
+
+
+### 3. LinkisManager
+ As the management brain of Linkis, LinkisManager is mainly composed of AppManager, ResourceManager, LabelManager and EngineConnPlugin.
+1. ResourceManager not only has Linkis0.X's resource management capabilities for Yarn and Linkis EngineManager, but also provides tag-based multi-level resource allocation and recycling capabilities, allowing ResourceManager to have full resource management capabilities across clusters and across computing resource types;
+2. AppManager coordinates and manages all EngineConnManagers and EngineConns; the life cycle of EngineConn application, reuse, creation, switching, and destruction is handed over to AppManager for management. LabelManager, based on multi-level combined labels, provides routing and management of EngineConn and EngineConnManager across IDCs and clusters;
+3. EngineConnPlugin is mainly used to reduce the access cost of new computing storage, so that users can access a new computing storage engine only by implementing one class.
+ [Enter LinkisManager Architecture Design](linkis-manager/overview.md)
+### 4. Engine Manager
+ EngineConnManager (ECM) is a simplified and upgraded version of the Linkis 0.X EngineManager. The ECM in Linkis 1.0 removes the engine application capability, and the whole microservice is completely stateless; it focuses on supporting the startup and destruction of all kinds of EngineConn.
+[Enter EngineConnManager Architecture Design](engine-conn-manager.md)
+### 5. EngineConn
+ EngineConn is an optimized and upgraded version of the Linkis 0.X Engine. It provides two modules, EngineConn and Executor. EngineConn connects to the underlying computing/storage engine and provides a session linking to it; based on this session, Executor provides full-stack computing support for interactive computing, streaming computing, offline computing, and data storage.
+ [Enter EngineConn Architecture Design](engine/engine-conn.md)
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/_category_.json b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/_category_.json
new file mode 100644
index 00000000000..54c60ad8ffe
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Public Enhancement Services",
+ "position": 3
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/bml/_category_.json b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/bml/_category_.json
new file mode 100644
index 00000000000..66c486ba0f2
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/bml/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "BML",
+ "position": 4
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/bml/engine-bml-dissect.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/bml/engine-bml-dissect.md
new file mode 100644
index 00000000000..95a2d5355ed
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/bml/engine-bml-dissect.md
@@ -0,0 +1,294 @@
+---
+title: Analysis of Engine BML
+sidebar_position: 1
+---
+
+> Introduction: This article takes the engine-related material management process as the entry point and, combining the underlying data model and source code, analyzes the implementation details of the engine material management function, hoping to help you better understand the architecture of the BML (material library) service.
+
+## 1. BML material library service
+
+The BML material library is a functional module under the PublicEnhancementService (PS) in Linkis, the public enhancement service framework.
+
+![PS-BML](/Images/Architecture/Public_Enhancement_Service/engine_bml/PS-BML.png)
+
+In the Linkis architecture system, the concept of `material` refers to various file data that are stored and hosted in a unified manner, including script code, resource files, third-party jars, related class libraries and configuration files required when the engine starts, as well as keytab files for security authentication, etc.
+
+In short, any data that exists in the file state can be centrally hosted in the material library, and then downloaded and used in the respective required scenarios.
+
+The material service is stateless and can be deployed with multiple instances to achieve high availability. Each instance serves requests independently without interfering with the others. All material metadata and version information are shared in the database, while the underlying material data can be stored in HDFS or a local (shared) file system; the file-storage interfaces can also be implemented to extend support to other file storage systems.
+
+The material service provides precise permission control. For the material of the engine resource type, it can be shared and accessed by all users; for some material data containing sensitive information, only limited users can read it.
+
+Material files are written by appending, which combines multiple versions of a resource file into one large file and avoids generating too many small HDFS files; too many small files would reduce the overall performance of HDFS.
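+
+The sketch below illustrates this append-based layout under simplifying assumptions (a local file stands in for HDFS, and the version/offset bookkeeping is reduced to a small record). It is not the BML implementation; it only shows how each version's byte range can be recorded, mirroring the start_byte/end_byte idea used by the material version table described later.
+
+```java
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.StandardOpenOption;
+
+public class AppendVersionSketch {
+    // 1-based, inclusive byte range of one version inside the shared material file
+    record VersionRange(String version, long startByte, long endByte) {}
+
+    static VersionRange appendVersion(Path materialFile, String version, byte[] content) throws IOException {
+        long start = Files.exists(materialFile) ? Files.size(materialFile) : 0;
+        Files.write(materialFile, content, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
+        return new VersionRange(version, start + 1, start + content.length);
+    }
+
+    public static void main(String[] args) throws IOException {
+        Path f = Files.createTempFile("bml_resource", ".data");
+        System.out.println(appendVersion(f, "v000001", "first version".getBytes()));
+        System.out.println(appendVersion(f, "v000002", "second version".getBytes()));
+    }
+}
+```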
+
+The material service provides lifecycle management of operation tasks such as file upload, update, and download. At the same time, there are two forms of using the material service, the rest interface and the SDK. Users can choose according to their own needs.
+
+The BML architecture diagram is as follows:
+
+![BML Architecture](/Images/Architecture/Public_Enhancement_Service/engine_bml/bml-jiagou.png)
+
+For the above overview of the BML architecture, please refer to the official website document: https://linkis.apache.org/zh-CN/docs/latest/architecture/public-enhancement-services/bml
+
+## 2. BML material library service underlying table model
+
+Before deeply understanding the process details of BML material management, it is necessary to sort out the database table model that the underlying BML material management service relies on.
+
+![BML-Model](/Images/Architecture/Public_Enhancement_Service/engine_bml/BML-Model.png)
+
+Combined with Linkis' linkis_ddl.sql file and the engine material upload and update process described below, you can understand the meaning of fields in bml resources related tables and the field relationship between tables.
+
+## 3. Usage scenarios of BML material library service
+
+Currently in Linkis, the usage scenarios of the BML material library service include:
+
+- Engine material files, including files in conf and lib required for engine startup
+- Stored scripts, such as the scripts in the Scripts linked by the workflow task node are stored in the BML material library
+- Workflow content version management in DSS
+- Management of resource files required when tasks are running
+
+## 4. Analysis of engine material management process
+
+`Engine material` is a subset of the Linkis material concept, and its role is to provide the latest version of jar package resources and configuration files for the engine to start. This section mainly starts from the engine material management function, and analyzes the flow details of engine material data in BML.
+
+### 4.1 Engine Material Description
+
+After the Linkis installation package is deployed normally, you can see all the engine material directories under the `LINKIS_INSTALL_HOME/lib/linkis-engineconn-plugins` directory. Taking the jdbc engine as an example, the structure of the engine material directory is as follows:
+
+```shell
+jdbc
+├── dist
+│ └── v4
+│ ├── conf
+│ ├── conf.zip
+│ ├── lib
+│ └── lib.zip
+└── plugin
+ └── 4
+ └── linkis-engineplugin-jdbc-1.1.2.jar
+```
+
+Material catalog composition:
+
+```shell
+jdbc/dist/version/conf.zip
+jdbc/dist/version/lib.zip
+
+jdbc/plugin/version number (remove v and leave the number)/linkis-engineplugin-engine name-1.1.x.jar
+```
+
+conf.zip and lib.zip will be hosted in the material management service as engine materials. After each local modification to the material conf or lib, a new version number will be generated for the corresponding material, and the material file data will be re-uploaded. When the engine starts, the material data of the latest version number will be obtained, lib and conf will be loaded, and the java process of the engine will be started.
+
+### 4.2 Engine material upload and update process
+
+When Linkis is deployed and started for the first time, the upload of the engine materials (lib.zip and conf.zip) to the material library is triggered for the first time; when a jar package under the engine lib or an engine configuration file under conf is modified, the engine material refresh mechanism needs to be triggered to ensure that the latest material files are loaded when the engine starts.
+
+Taking the current version Linkis 1.1.x as an example, there are two ways to trigger an engine material refresh:
+
+- Restart the engineplugin service with the command `sh sbin/linkis-daemon.sh restart cg-engineplugin`
+
+- Request the engine material refresh interface
+
+```shell
+# refresh all engine materials
+curl --cookie "linkis_user_session_ticket_id_v1=kN4HCk555Aw04udC1Npi4ttKa3duaCOv2HLiVea4FcQ=" http://127.0.0.1:9001/api/rest_j/v1/engineplugin/refreshAll
+# Specify the engine type and version to refresh the item
+curl --cookie "linkis_user_session_ticket_id_v1=kN4HCk555Aw04udC1Npi4ttKa3duaCOv2HLiVea4FcQ=" http://127.0.0.1:9001/api/rest_j/v1/engineplugin/refresh?ecType=jdbc&version=4
+```
+
+The underlying implementation mechanism of the two types of engine material refresh methods is the same, both call the refreshAll() or refresh() method in the `EngineConnResourceService` class.
+
+In the init() method of `DefaultEngineConnResourceService`, the default implementation class of the abstract class `EngineConnResourceService`, the parameter wds.linkis.engineconn.dist.load.enable (default true) controls whether refreshAll(false) is executed each time the engineplugin service starts, to check whether all engine materials are up to date (where false means the execution result is fetched asynchronously).
+
+> The init() method is annotated with @PostConstruct: it is executed once, after DefaultEngineConnResourceService is constructed and before the object is used.
+
+Manually call the interface of engineplugin/refresh, that is, manually execute the refreshAll or refresh method in the `EngineConnResourceService` class.
+
+So the logic of engine material detection and update is in the refreshAll and refresh methods in `DefaultEngineConnResourceService`.
+
+The core logic of refreshAll() is:
+
+1) Obtain the installation directory of the engine through the parameter wds.linkis.engineconn.home, the default is:
+
+```scala
+getEngineConnsHome = Configuration.getLinkisHome() + "/lib/linkis-engineconn-plugins";
+```
+
+2) Traverse the engine directory
+
+```scala
+getEngineConnTypeListFromDisk: Array[String] = new File(getEngineConnsHome).listFiles().map(_.getName)
+```
+
+3) The `EngineConnBmlResourceGenerator` interface provides the validity detection of the underlying files or directories of each engine (version). The corresponding implementation exists in the abstract class `AbstractEngineConnBmlResourceGenerator`.
+
+4) The `DefaultEngineConnBmlResourceGenerator` class is mainly used to generate `EngineConnLocalizeResource`. EngineConnLocalizeResource is the encapsulation of the material resource file metadata and InputStream. In the subsequent logic, EngineConnLocalizeResource will be used as a material parameter to participate in the material upload process.
+
+The code details of the three files EngineConnBmlResourceGenerator, AbstractEngineConnBmlResourceGenerator, and DefaultEngineConnBmlResourceGenerator will not be described in detail. You can use the following UML class diagram to get a general understanding of its inheritance mechanism, and combine the specific implementation in the method to understand the function of this part.
+
+![BML](/Images/Architecture/Public_Enhancement_Service/engine_bml/bml_uml.png)
+
+Go back to the refreshAll method in the `DefaultEngineConnResourceService` class, and continue to look at the core process of the refreshTask thread:
+
+```scala
+engineConnBmlResourceGenerator.getEngineConnTypeListFromDisk foreach { engineConnType =>
+ Utils.tryCatch {
+ engineConnBmlResourceGenerator.generate(engineConnType).foreach {
+ case (version, localize) =>
+ logger.info(s" Try to initialize ${engineConnType}EngineConn-$version.")
+ refresh(localize, engineConnType, version)
+ }
+ }
+ ......
+}
+```
+
+Scanning the engine installation directory yields the list of engine material directories. After each engine material directory structure passes the validity check, the corresponding `EngineConnLocalizeResource` is obtained, and then refresh(localize: Array[EngineConnLocalizeResource], engineConnType: String, version: String) is called to complete the subsequent material upload.
+
+Inside the refresh() method, the main processes are as follows:
+
+Obtain the material list data corresponding to engineConnType and version from the table `linkis_cg_engine_conn_plugin_bml_resources`, and assign it to the variable engineConnBmlResources.
+
+```scala
+val engineConnBmlResources = asScalaBuffer(engineConnBmlResourceDao.getAllEngineConnBmlResource(engineConnType, version))
+```
+
+![ec data](/Images/Architecture/Public_Enhancement_Service/engine_bml/ec-data.png)
+
+
+
+#### 4.2.1 Engine material upload process
+
+**Engine material upload process sequence diagram**
+
+![Engine material upload process sequence diagram](/Images/Architecture/Public_Enhancement_Service/engine_bml/bml-shixu.png)
+
+If there is no matching data in the table `linkis_cg_engine_conn_plugin_bml_resources`, the data in EngineConnLocalizeResource is used to construct an EngineConnBmlResource object and save it to the `linkis_cg_engine_conn_plugin_bml_resources` table. Before saving this data, the material file needs to be uploaded first, i.e. the `uploadToBml(localizeResource)` method is executed.
+
+Inside the uploadToBml(localizeResource) method, a bmlClient is constructed and used to call the material upload interface, i.e.:
+
+```scala
+private val bmlClient = BmlClientFactory.createBmlClient()
+bmlClient.uploadResource(Utils.getJvmUser, localizeResource.fileName, localizeResource.getFileInputStream)
+```
+
+In BML Server, the location of the material upload interface is in the uploadResource interface method in the BmlRestfulApi class. The main process is:
+
+```scala
+ResourceTask resourceTask = taskService.createUploadTask(files, user, properties);
+```
+
+Every time a material is uploaded, a ResourceTask will be constructed to complete the file upload process, and the execution record of the file upload task will be recorded. Inside the createUploadTask method, the main operations are as follows:
+
+1) Generate a globally unique resource_id for the uploaded resource file, String resourceId = UUID.randomUUID().toString();
+
+2) Build a ResourceTask record and store it in the table `linkis_ps_bml_resources_task`, as well as a series of subsequent Task state modifications.
+
+```scala
+ResourceTask resourceTask = ResourceTask.createUploadTask(resourceId, user, properties);
+taskDao.insert(resourceTask);
+
+taskDao.updateState(resourceTask.getId(), TaskState.RUNNING.getValue(), new Date());
+```
+
+3) The actual writing of material files into the material library is done by the upload method of the ResourceServiceImpl class. Inside the upload method, the byte streams corresponding to the uploaded `files` list are persisted to the material library's file storage system, and the properties of the material file are stored in the resource record table (linkis_ps_bml_resources) and the resource version record table (linkis_ps_bml_resources_version).
+
+```scala
+MultipartFile p = files[0]
+String resourceId = (String) properties.get("resourceId");
+String fileName =new String(p.getOriginalFilename().getBytes(Constant.ISO_ENCODE),
+ Constant.UTF8_ENCODE);
+fileName = resourceId;
+String path = resourceHelper.generatePath(user, fileName, properties);
+// generatePath currently supports Local and HDFS paths, and the composition rules of paths are determined by LocalResourceHelper or HdfsResourceHelper
+// implementation of the generatePath method in
+StringBuilder sb = new StringBuilder();
+long size = resourceHelper.upload(path, user, inputStream, sb, true);
+// The file size calculation and the file byte stream writing to the file are implemented by the upload method in LocalResourceHelper or HdfsResourceHelper
+Resource resource = Resource.createNewResource(resourceId, user, fileName, properties);
+// Insert a record into the resource table linkis_ps_bml_resources
+long id = resourceDao.uploadResource(resource);
+// Add a new record to the resource version table linkis_ps_bml_resources_version, the version number at this time is instant.FIRST_VERSION
+// In addition to recording the metadata information of this version, the most important thing is to record the storage location of the file of this version, including the file path, starting location, and ending location.
+String clientIp = (String) properties.get("clientIp");
+ResourceVersion resourceVersion = ResourceVersion.createNewResourceVersion(
+ resourceId, path, md5String, clientIp, size, Constant.FIRST_VERSION, 1);
+versionDao.insertNewVersion(resourceVersion);
+```
+
+After the above process completes successfully, the material data is truly persisted; the UploadResult is then returned to the client, and the status of this ResourceTask is marked as completed. If an exception occurs during the upload, the task status is marked as failed and the exception information is recorded.
+
+![resource-task](/Images/Architecture/Public_Enhancement_Service/engine_bml/resource-task.png)
+
+
+
+#### 4.2.2 Engine material update process
+
+**Engine material update process sequence diagram**
+
+![Engine material update process sequence diagram](/Images/Architecture/Public_Enhancement_Service/engine_bml/engine-bml-update-shixu.png)
+
+If the table `linkis_cg_engine_conn_plugin_bml_resources` matches the local material data, you need to use the data in EngineConnLocalizeResource to construct an EngineConnBmlResource object, and update the metadata information such as the version number, file size, modification time, etc. of the original material file in the `linkis_cg_engine_conn_plugin_bml_resources` table. Before updating, you need to complete the update and upload operation of the material file, that is, execute the `uploadToBml(localizeResource, engineConnBmlResource.getBmlResourceId)` method.
+
+Inside the uploadToBml(localizeResource, resourceId) method, a bmlClient is constructed and used to call the material resource update interface, i.e.:
+
+```scala
+private val bmlClient = BmlClientFactory.createBmlClient()
+bmlClient.updateResource(Utils.getJvmUser, resourceId, localizeResource.fileName, localizeResource.getFileInputStream)
+```
+
+In BML Server, the interface for material update is located in the updateVersion interface method in the BmlRestfulApi class. The main process is as follows:
+
+Complete the validity detection of resourceId, that is, check whether the incoming resourceId exists in the linkis_ps_bml_resources table. If the resourceId does not exist, an exception will be thrown to the client, and the material update operation at the interface level will fail.
+
+Therefore, the corresponding relationship of the resource data in the tables `linkis_cg_engine_conn_plugin_bml_resources` and `linkis_ps_bml_resources` needs to be complete, otherwise an error will occur that the material file cannot be updated.
+
+```scala
+resourceService.checkResourceId(resourceId)
+```
+
+If resourceId exists in the linkis_ps_bml_resources table, it will continue to execute:
+
+```scala
+StringUtils.isEmpty(versionService.getNewestVersion(resourceId))
+```
+
+The getNewestVersion method is to obtain the maximum version number of the resourceId in the table `linkis_ps_bml_resources_version`. If the maximum version corresponding to the resourceId is empty, the material will also fail to update, so the integrity of the corresponding relationship of the data here also needs to be strictly guaranteed.
+
+After the above two checks are passed, a ResourceUpdateTask will be created to complete the final file writing and record update saving.
+
+```scala
+ResourceTask resourceTask = null;
+synchronized (resourceId.intern()) {
+ resourceTask = taskService.createUpdateTask(resourceId, user, file, properties);
+}
+```
+
+Inside the createUpdateTask method, the main functions implemented are:
+
+```scala
+// Generate a new version for the material resource
+String lastVersion = getResourceLastVersion(resourceId);
+String newVersion = generateNewVersion(lastVersion);
+// Then the construction of ResourceTask, and state maintenance
+ResourceTask resourceTask = ResourceTask.createUpdateTask(resourceId, newVersion, user, system, properties);
+// The logic of material update upload is completed by the versionService.updateVersion method
+versionService.updateVersion(resourceTask.getResourceId(), user, file, properties);
+```
+
+Inside the versionService.updateVersion method, the main functions implemented are:
+
+```scala
+ResourceHelper resourceHelper = ResourceHelperFactory.getResourceHelper();
+InputStream inputStream = file.getInputStream();
+// Get the path of the resource
+String newVersion = params.get("newVersion").toString();
+String path = versionDao.getResourcePath(resourceId) + "_" + newVersion;
+// getResourcePath fetches one existing path for the resource (limit 1) and then appends newVersion with an underscore
+// select resource from linkis_ps_bml_resources_version WHERE resource_id = #{resourceId} limit 1
+// upload resources to hdfs or local
+StringBuilder stringBuilder = new StringBuilder();
+long size = resourceHelper.upload(path, user, inputStream, stringBuilder, OVER_WRITE);
+// Finally insert a new resource version record in the linkis_ps_bml_resources_version table
+ResourceVersion resourceVersion = ResourceVersion.createNewResourceVersion(resourceId, path, md5String, clientIp, size, newVersion, 1);
+versionDao.insertNewVersion(resourceVersion);
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/bml/overview.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/bml/overview.md
new file mode 100644
index 00000000000..429239e031b
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/bml/overview.md
@@ -0,0 +1,99 @@
+---
+title: Overview
+sidebar_position: 0
+---
+
+
+## Background
+
+BML (Material Library Service) is a material management system of linkis, which is mainly used to store various file data of users, including user scripts, resource files, third-party Jar packages, etc., and can also store class libraries that need to be used when the engine is running.
+
+It has the following functions:
+
+1) Support for various file types, both text and binary. Big-data users can store their script files and material archives in the system.
+
+2) The service is stateless and supports multi-instance deployment for high availability. When the system is deployed, multiple instances can provide services independently to the outside world without interfering with each other; all information is stored in the database and shared.
+
+3) Multiple ways of use. A REST interface and an SDK are provided, and users can choose according to their needs.
+
+4) The file is appended to avoid too many small HDFS files. Many small HDFS files will lead to a decrease in the overall performance of HDFS. We have adopted a file append method to combine multiple versions of resource files into one large file, effectively reducing the number of files in HDFS.
+
+5) Precise permission control and safe storage of user resource file content. Resource files often contain important content that users may want only themselves to be able to read.
+
+6) Provide life cycle management of file upload, update, download and other operational tasks.
+
+## Architecture diagram
+
+![BML Architecture Diagram](/Images/Architecture/bml-02.png)
+
+## Schema description
+
+1. The Service layer includes resource management, uploading resources, downloading resources, sharing resources, and project resource management.
+
+Resource management is responsible for basic operations such as adding, deleting, modifying and querying resources, controlling access permissions, and checking whether files have expired.
+
+2. File version control
+ Each BML resource file has version information. Every update of the same resource generates a new version; historical version query and download are of course also supported. BML uses the version information table to record the offset and size of each resource file version in HDFS storage, so multiple versions of data can be stored in one HDFS file.
+
+3. Resource file storage
+ HDFS files are mainly used as actual data storage. HDFS files can effectively ensure that the material library files are not lost. The files are appended to avoid too many small HDFS files.
+
+### Core Process
+
+**upload files:**
+
+1. Determine the operation type of the uploaded file: first upload or update upload. For a first upload, a new resource record is added; the system generates a globally unique resource_id and a resource_location for the resource. The first version A1 of resource A is stored at resource_location in the HDFS file system, and after storage the first version is marked as V00001. For an update upload, the latest existing version must be found first.
+
+2. Upload the file stream to the specified HDFS file. If it is an update, it will be added to the end of the last content by file appending.
+
+3. Add a new version record, each upload will generate a new version record. In addition to recording the metadata information of this version, the most important thing is to record the storage location of the version of the file, including the file path, start location, and end location.
+
+**download file:**
+
+1. When users download resources, they need to specify two parameters: one is resource_id and the other is version. If version is not specified, the latest version will be downloaded by default.
+
+2. After the user passes the two parameters resource_id and version to the system, the system queries the resource_version table, finds the corresponding resource_location, start_byte and end\_byte, skips the first (start_byte - 1) bytes of the stream with the skipByte method, and then reads up to end_byte bytes (see the sketch after this list). Once the read succeeds, the stream is returned to the user.
+
+3. Insert a successful download record in resource_download_history
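+
+The following is a simplified sketch of step 2 under the assumption that a local file stands in for HDFS and the byte range is already known; it is not the BML download implementation, only an illustration of the skip-then-read slice logic.
+
+```java
+import java.io.*;
+import java.util.Arrays;
+
+public class DownloadVersionSketch {
+    // startByte/endByte are 1-based and inclusive, as recorded in the resource_version table
+    static byte[] readVersion(File resourceFile, long startByte, long endByte) throws IOException {
+        try (InputStream in = new FileInputStream(resourceFile)) {
+            in.skipNBytes(startByte - 1);                  // skip everything before this version
+            byte[] buf = new byte[(int) (endByte - startByte + 1)];
+            int read = in.readNBytes(buf, 0, buf.length);  // read until end_byte
+            return read == buf.length ? buf : Arrays.copyOf(buf, read);
+        }
+    }
+
+    public static void main(String[] args) throws IOException {
+        File f = File.createTempFile("bml", ".data");
+        try (FileOutputStream out = new FileOutputStream(f)) { out.write("v1datav2data".getBytes()); }
+        // version 2 occupies bytes 7..12 (1-based, inclusive)
+        System.out.println(new String(readVersion(f, 7, 12))); // prints "v2data"
+    }
+}
+```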
+
+## Database Design
+
+1. Resource information table (resource)
+
+| Field name | Function | Remarks |
+|-------------------|------------------------------|----------------------------------|
+| resource_id | A string that uniquely identifies a resource globally | UUID can be used for identification |
+| resource_location | The location where resources are stored | For example, hdfs:///tmp/bdp/\${USERNAME}/ |
+| owner | The owner of the resource | e.g. zhangsan |
+| create_time | Record creation time | |
+| is_share | Whether to share | 0 means not to share, 1 means to share |
+| update\_time | Last update time of the resource | |
+| is\_expire | Whether the record resource expires | |
+| expire_time | Record resource expiration time | |
+
+2. Resource version information table (resource_version)
+
+| Field name | Function | Remarks |
+|-------------------|--------------------|----------|
+| resource_id | Uniquely identifies the resource | Joint primary key |
+| version | The version of the resource file | |
+| start_byte | Start byte count of resource file | |
+| end\_byte | End bytes of resource file | |
+| size | Resource file size | |
+| resource_location | Resource file placement location | |
+| start_time | Record upload start time | |
+| end\_time | End time of record upload | |
+| updater | Record update user | |
+
+3. Resource download history table (resource_download_history)
+
+| Field | Function | Remarks |
+|-------------|---------------------------|--------------------------------|
+| resource_id | Record the resource_id of the downloaded resource | |
+| version | Record the version of the downloaded resource | |
+| downloader | Record downloaded users | |
+| start\_time | Record download time | |
+| end\_time | Record end time | |
+| status | Whether the record is successful | 0 means success, 1 means failure |
+| err\_msg | Log failure reason | null means success, otherwise log failure reason |
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/_category_.json b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/_category_.json
new file mode 100644
index 00000000000..795162be5ef
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Context Service",
+ "position": 3
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/content-service-cleanup.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/content-service-cleanup.md
new file mode 100644
index 00000000000..8a7fb39aae0
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/content-service-cleanup.md
@@ -0,0 +1,238 @@
+---
+title: CS Cleanup Interface Features
+sidebar_position: 9
+tags: [Feature]
+---
+
+## 1. Functional requirements
+### 1.1 Background
+Before version 1.1.3, the ContextService unified context service lacked a cleaning mechanism, and lacked the creation time, update time fields, and batch cleaning interfaces.
+With long-term accumulation, millions of records may build up, affecting query efficiency.
+
+### 1.2 Goals
+- Modify the `ContextService` underlying tables to add creation time, modification time and last access time fields, and persist the update times of `ContextID` and `ContextMap` related data into the database
+- Add `restful` cleanup interfaces, supporting batch and individual cleanup by time range and by id list
+- Add the corresponding `java sdk` interface of `cs-client`
+
+## 2. Overall Design
+This requirement involves `cs-client`, `cs-persistence` and `cs-server` modules under `ContextService`.
+Add 3 fields of the existing table in the `cs-persistence` module; add 3 `restful` interfaces in the `cs-server` module, and add 3 `sdk api` in the `cs-client` module.
+
+### 2.1 Technical Architecture
+
+For the overall architecture of ContextService, please refer to the existing document: [ContextService Architecture Document](overview.md)
+
+The access relationships among the ContextService modules are shown in the following figure
+![linkis-contextservice-clean-01.png](/Images-zh/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-clean-01.png)
+
+
+Table changes are made in the `cs-persistence` module. The change involves 5 tables, `context_id, context_map, context_id_listener, context_key_listener, context_history`, all of which get 3 new fields: `create_time, update_time, access_time`. The new fields are enabled for the `context_id` and `context_map` tables and not enabled for the other three. `create_time` is set just before the persistence module performs the insert operation. `update_time` and `access_time` are set by calls from the upstream interface; in the update interface they are mutually exclusive, i.e. when `access_time` is present (not null), `update_time` is not updated, otherwise `update_time` is updated (a sketch of this rule follows below).
+
+The `update_time` field is updated in the cs-cache module: when a new `context_id` is loaded from the db, an ADD message is detected, and the `access_time` is synchronized to the db at that point.
+Only the `create_time`, `update_time` and `access_time` of the `context_id` table are recorded; subsequent search and cleanup are also performed against the context_id table.
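+
+A self-contained sketch of the mutually exclusive time-stamp rule described above is shown below; the class and method names are illustrative, not the real cs-persistence entities.
+
+```java
+import java.util.Date;
+
+public class ContextIdTimeUpdateSketch {
+    Date updateTime;
+    Date accessTime;
+
+    // accessTimeFromDb != null means the record was just loaded from the db into the cache:
+    // only access_time is synchronized this round and update_time is left untouched;
+    // otherwise it is a normal in-cache update and update_time is refreshed.
+    void touch(Date accessTimeFromDb) {
+        if (accessTimeFromDb != null) {
+            this.accessTime = accessTimeFromDb;
+        } else {
+            this.updateTime = new Date();
+        }
+    }
+}
+```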
+
+Add 3 cleanup related interfaces: `searchContextIDByTime, clearAllContextByID, clearAllContextByTime`
+- `searchContextIDByTime` searches according to 3 time ranges and returns a list of contextIDs
+- `clearAllContextByID` clears the content of the context_map table and context_id table corresponding to the ID in the input contextID list
+- `clearAllContextByTime` searches according to 3 time ranges, and clears all the contents of the context_map table and context_id table corresponding to the searched contextID
+
+### 2.2 Business Architecture
+This feature is to add related interfaces for batch query and cleanup to the ContextService service, and to add fields such as the update time of the underlying data table, so as to clean up expired data according to the access situation. The function points involve the modules as shown in the table below.
+
+| First-level module | Second-level module | Function point |
+| :------------ | :------------ | :------------ |
+| linkis-ps-cs | cs-client | Added batch cleaning interface related java sdk api interface |
+| linkis-ps-cs | cs-server | Added restful interfaces related to batch cleanup |
+| linkis-ps-cs | cs-persistence | Add 3 time-related fields of the underlying table |
+
+
+## 3. Module Design
+### Main execution process
+- Create ContextID. When the user creates the ContextID, the create_time will be recorded, and this field will not be updated later
+- Update ContextID. When the user updates the ContextID, the update_time field is updated. Note that if the update is from the cache at this time, the access_time field will not be updated; if it is loaded from the db to the cache and then updated with the new contextID, the access_time will be updated first, and then the new update_time will be updated separately.
+- Query ContextID according to time. When the user queries the ContextID of the corresponding time range, only a list of haid strings will be returned. This interface has paging, the default is limited to 5000 pieces of data
+- Bulk cleanup of ContextIDs. All contextMap data and contextID data corresponding to the incoming idList will be cleaned up in batches. The maximum number of incoming arrays is 5000
+- Query and clear ContextID, first query and then batch clear
+
+The corresponding timing diagrams above are as follows:
+![linkis-contextservice-clean-02.png](/Images-zh/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-clean-02.png)
+
+Two of them require additional attention:
+① The restful api in the cs-server service encapsulates the request as a Job, submits it to the queue, and blocks waiting for the result. A new operation type CLEAR is defined to make it easy to match the cleanup-related interfaces.
+② The name of the Service that processes the Job in ① must be defined so that it does not include the ContextID, in order to avoid the dynamic proxy conversion of the HighAvailable module. That conversion only applies to interfaces with exactly one ContextID in the request; for the batch cleanup and batch query interfaces it is meaningless and hurts performance.
+
+## 4. Data structure
+````
+# The main context_id table structure involved is as follows, adding the create_time, update_time and access_time fields
+CREATE TABLE `linkis_ps_cs_context_id` (
+ `id` int(11) NOT NULL AUTO_INCREMENT,
+ `user` varchar(32) DEFAULT NULL,
+ `application` varchar(32) DEFAULT NULL,
+ `source` varchar(255) DEFAULT NULL,
+ `expire_type` varchar(32) DEFAULT NULL,
+ `expire_time` datetime DEFAULT NULL,
+ `instance` varchar(128) DEFAULT NULL,
+ `backup_instance` varchar(255) DEFAULT NULL,
+ `update_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'update unix timestamp',
+ `create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
+ `access_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'last access time',
+ PRIMARY KEY (`id`),
+ KEY `instance` (`instance`(128)),
+ KEY `backup_instance` (`backup_instance`(191)),
+ KEY `instance_2` (`instance`(128), `backup_instance`(128))
+) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
+````
+
+## 5. Interface Design
+### 5.1 Restful interface
+
+1. Query ID interface `searchContextIDByTime`
+
+①Interface name
+GET `/api/rest_j/v1/contextservice/searchContextIDByTime`
+
+②Input parameters
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|accessTimeEnd|Access end time|query|false|string|
+|accessTimeStart|Access start time|query|false|string|
+|createTimeEnd|Create end time|query|false|string|
+|createTimeStart|Create start time|query|false|string|
+|pageNow|page number|query|false|string|
+|pageSize|page size|query|false|string|
+|updateTimeEnd|Update end time|query|false|string|
+|updateTimeStart|Update start time|query|false|string|
+
+
+③Example of output parameters
+````
+{
+ "method": "/api/contextservice/searchContextIDByTime",
+ "status": 0,
+ "message": "OK",
+ "data": {
+ "contextIDs": [
+ "8-8--cs_1_devcs_2_dev10493",
+ "8-8--cs_1_devcs_2_dev10494",
+ "8-8--cs_1_devcs_2_dev10495",
+ "8-8--cs_1_devcs_2_dev10496",
+ "8-8--cs_1_devcs_2_dev10497",
+ "8-8--cs_2_devcs_2_dev10498"
+ ]
+ }
+}
+````
+
+
+2. Clear the specified ID interface clearAllContextByID
+
+①Interface name `POST /api/rest_j/v1/contextservice/clearAllContextByID`
+② Example of input parameters
+````
+{
+"idList" : [
+"8-8--cs_1_devcs_1_dev2236"
+]
+}
+````
+
+③Example of output parameters
+````
+{
+ "method": "/api/contextservice/clearAllContextByID",
+ "status": 0,
+ "message": "OK",
+ "data": {
+ "num": "1"
+ }
+}
+````
+
+3. Clean-up-by-time interface `clearAllContextByTime`
+ ①Interface name
+ POST /api/rest_j/v1/contextservice/clearAllContextByTime
+ ② Example of input parameters
+ {
+ "createTimeStart": "2022-06-01 00:00:00",
+ "createTimeEnd": "2022-06-30 00:00:00"
+ }
+ ③Example of output parameters
+````
+{
+ "method": "/api/contextservice/clearAllContextByTime",
+ "status": 0,
+ "message": "OK",
+ "data": {
+ "num": "1"
+ }
+}
+````
+
+### 5.2 JAVA SDK API
+````
+# import pom
+
+<dependency>
+  <groupId>org.apache.linkis</groupId>
+  <artifactId>linkis-cs-client</artifactId>
+  <version>1.1.3</version>
+</dependency>
+
+
+# Code reference is as follows
+
+String createTimeStart = "2022-05-26 22:04:00";
+ String createTimeEnd = "2022-06-01 24:00:00";
+
+ ContextClient contextClient = ContextClientFactory.getOrCreateContextClient();
+
+# Interface 1 searchHAIDByTime
+ List idList =
+ contextClient.searchHAIDByTime(
+ createTimeStart, createTimeEnd, null, null, null, null, 0, 0);
+
+ for (String id : idList) {
+ System.out.println(id);
+ }
+
+ System.out.println("Got " + idList.size() + "ids.");
+
+ if (idList.size() > 0) {
+ String id1 = idList.get(0);
+ System.out.println("will clear context of id : " + id1);
+ }
+
+# Interface 2 batchClearContextByHAID
+ List tmpList = new ArrayList<>();
+ tmpList.add(id1);
+ int num = contextClient.batchClearContextByHAID(tmpList);
+ System.out.println("Succeed to clear " + num + " ids.");
+
+# Interface 3 batchClearContextByTime
+ int num1 =
+ contextClient.batchClearContextByTime(
+ createTimeStart, createTimeEnd, null, null, null, null);
+ System.out.println("Succeed to clear " + num1 + " ids by time.");
+
+````
+
+
+## 6. Non-functional design
+### 6.1 Security
+The restful interface requires login authentication and must be operated by an administrator. The administrator user is configured in the properties file.
+
+### 6.2 Performance
+- The query ID interface searchContextIDByTime supports paging, so there is no performance impact
+- The clear-specified-ID interface clearAllContextByID limits the amount of data operated on, so there is no performance impact
+- For the clear-by-time interface clearAllContextByTime, if the query time range is too large the query may time out, but the task will not fail; the cleanup is executed as a single operation and does not affect other queries
+
+### 6.3 Capacity
+This requirement provides a time range query and batch cleaning interface, which requires the upper-layer application that uses ContextService to actively clean up data.
+
+### 6.4 High Availability
+The interface reuses the high availability of the ContextService microservice itself.
+
+
+
+
+
+
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-cache.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-cache.md
new file mode 100644
index 00000000000..2bca0c8d5e5
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-cache.md
@@ -0,0 +1,101 @@
+---
+title: CS Cache Architecture
+sidebar_position: 8
+---
+
+
+## **CSCache Architecture**
+### **Issues that need to be resolved**
+
+### 1.1 Memory structure issues to be solved:
+
+1. Support splitting by ContextType: speed up storage and query performance
+
+2. Support splitting by different ContextID: metadata isolation needs to be completed per ContextID
+
+3. Support LRU: Recycle according to specific algorithm
+
+4. Support searching by keywords: Support indexing by keywords
+
+5. Support indexing: support indexing directly through ContextKey
+
+6. Support traversal: need to support traversal according to ContextID and ContextType
+
+### 1.2 Loading and parsing issues to be solved:
+
+1. Support parsing ContextValue into memory data structure: It is necessary to complete the parsing of ContextKey and value to find the corresponding keywords.
+
+2. Need to interface with the Persistence module to complete the loading and analysis of the ContextID content
+
+### 1.3 Metric and cleaning mechanism issues to be solved:
+
+1. When JVM memory is not enough, it can be cleaned based on memory usage and frequency of use
+
+2. Support statistics on the memory usage of each ContextID
+
+3. Support statistics on the frequency of use of each ContextID
+
+## **ContextCache Architecture**
+
+The architecture of ContextCache is shown in the following figure:
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png)
+
+1. ContextService: complete the provision of external interfaces, including additions, deletions, and changes;
+
+2. Cache: complete the storage of context information, map storage through ContextKey and ContextValue
+
+3. Index: The established keyword index, which stores the mapping between the keywords of the context information and the ContextKey;
+
+4. Parser: complete the keyword analysis of the context information;
+
+5. LoadModule: completes the loading of information from the persistence layer when the ContextCache does not have the corresponding ContextID information;
+
+6. AutoClear: When the Jvm memory is insufficient, complete the on-demand cleaning of ContextCache;
+
+7. Listener: collects metric information of the ContextCache, such as memory usage and access counts.
+
+## **ContextCache storage structure design**
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png)
+
+The storage structure of ContextCache is divided into three layers:
+
+**ContextCache:** stores the mapping relationship between ContextID and ContextIDValue, and can complete the recovery of ContextID according to the LRU algorithm;
+
+**ContextIDValue:** holds the CSKeyValueContext that stores all the context information and indexes of a ContextID, and counts the memory usage and usage records of the ContextID.
+
+**CSKeyValueContext:** Contains the CSInvertedIndexSet index set that stores and supports keywords according to type, and also contains the storage set CSKeyValueMapSet that stores ContextKey and ContextValue.
+
+CSInvertedIndexSet: categorize and store keyword indexes through CSType
+
+CSKeyValueMapSet: categorize and store context information through CSType
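+
+As a rough illustration of this three-layer structure, a simplified cache could be organized as nested maps with LRU eviction at the ContextID level (purely a sketch; the real implementation uses the dedicated ContextIDValue and CSKeyValueContext classes described above):
+
+```java
+import java.util.LinkedHashMap;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+public class SimpleContextCache {
+    private static final int MAX_CONTEXT_IDS = 1000;
+
+    // Layer 1: ContextID -> per-context storage, evicted LRU-style at the ContextID level.
+    // (Not thread-safe as written; the real cache also records per-ContextID metrics.)
+    private final Map<String, Map<String, Map<String, Object>>> cache =
+        new LinkedHashMap<String, Map<String, Map<String, Object>>>(16, 0.75f, true) {
+            @Override
+            protected boolean removeEldestEntry(Map.Entry<String, Map<String, Map<String, Object>>> eldest) {
+                return size() > MAX_CONTEXT_IDS;
+            }
+        };
+
+    // Layers 2 and 3: ContextType -> (ContextKey -> ContextValue), so lookups can be split by type.
+    public void put(String contextId, String contextType, String key, Object value) {
+        cache.computeIfAbsent(contextId, id -> new ConcurrentHashMap<>())
+             .computeIfAbsent(contextType, t -> new ConcurrentHashMap<>())
+             .put(key, value);
+    }
+
+    public Object get(String contextId, String contextType, String key) {
+        Map<String, Map<String, Object>> byType = cache.get(contextId);
+        if (byType == null) return null;
+        Map<String, Object> byKey = byType.get(contextType);
+        return byKey == null ? null : byKey.get(key);
+    }
+}
+```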
+
+## **ContextCache UML Class Diagram Design**
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png)
+
+## **ContextCache Timing Diagram**
+
+The following figure draws the overall process of using ContextID, KeyWord, and ContextType to check the corresponding ContextKeyValue in ContextCache.
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png)
+
+Note: The ContextIDValueGenerator will go to the persistence layer to pull the Array[ContextKeyValue] of the ContextID, and parse the ContextKeyValue key storage index and content through ContextKeyValueParser.
+
+The other interface processes provided by ContextCacheService are similar, so I won't repeat them here.
+
+## **KeyWord parsing logic**
+
+The specific entity bean of ContextValue needs to use the annotation \@keywordMethod on the corresponding get method that can be used as the keyword. For example, the getTableName method of Table must be annotated with \@keywordMethod.
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png)
+
+When ContextKeyValueParser parses a ContextKeyValue, it scans all \@keywordMethod-annotated get methods of the object passed in, calls each get method, and takes the toString of the returned object. The result is parsed through user-selectable rules (separators or regular expressions) and stored in the keyword collection.
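+
+A hedged sketch of this annotation-and-reflection scheme (the annotation name, retention settings and parser below are illustrative assumptions, not the actual cs core classes):
+
+```java
+import java.lang.annotation.*;
+import java.lang.reflect.Method;
+import java.util.HashSet;
+import java.util.Set;
+
+@Retention(RetentionPolicy.RUNTIME)
+@Target(ElementType.METHOD)
+@interface KeywordMethod {}
+
+class Table {
+    private final String tableName;
+    Table(String tableName) { this.tableName = tableName; }
+
+    // The annotated no-argument getter marks the keyword source; its toString result becomes the keyword.
+    @KeywordMethod
+    public String getTableName() { return tableName; }
+}
+
+public class KeywordParserDemo {
+    // Scan all annotated getters of the value object and collect their toString results as keywords.
+    static Set<String> parseKeywords(Object value) throws Exception {
+        Set<String> keywords = new HashSet<>();
+        for (Method m : value.getClass().getMethods()) {
+            if (m.isAnnotationPresent(KeywordMethod.class) && m.getParameterCount() == 0) {
+                Object result = m.invoke(value);
+                if (result != null) {
+                    // Keywords could additionally be split here by separator or regex rules.
+                    keywords.add(result.toString());
+                }
+            }
+        }
+        return keywords;
+    }
+
+    public static void main(String[] args) throws Exception {
+        System.out.println(parseKeywords(new Table("dwd_user_info"))); // [dwd_user_info]
+    }
+}
+```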
+
+Precautions:
+
+1. The annotation will be defined in the cs core module
+
+2. The annotated get method cannot take parameters
+
+3. The toString method of the object returned by the get method must return the keyword
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-client.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-client.md
new file mode 100644
index 00000000000..68fced67dae
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-client.md
@@ -0,0 +1,66 @@
+---
+title: CS Client Design
+sidebar_position: 2
+---
+
+## **CSClient design ideas and implementation**
+
+
+CSClient is a client that interacts with each microservice and CSServer group. CSClient needs to meet the following functions.
+
+1. The ability of microservices to apply for a context object from cs-server
+
+2. The ability of microservices to register context information with cs-server
+
+3. The ability of microservices to update context information to cs-server
+
+4. The ability of microservices to obtain context information from cs-server
+
+5. Certain special microservices can sniff operations that have modified context information in cs-server
+
+6. CSClient can give clear instructions when the csserver cluster fails
+
+7. CSClient needs to provide a copy of all the context information of csid1 as a new csid2 for scheduling execution
+
+> The overall approach is to send http requests through the linkis-httpclient that comes with linkis, and send requests and receive responses by implementing various Action and Result entity classes.
+
+### 1. The ability to apply for context objects
+
+To apply for a context object: for example, when a user creates a new workflow on the front end, dss-server needs to apply for a context object from cs-server. When applying, the identification information of the workflow (project name, workflow name) is sent to the CSServer through CSClient (at this point the gateway forwards the request to a random instance, because no csid is carried yet). Once the application returns the correct result, a csid is returned and bound to the workflow.
+
+### 2. Ability to register contextual information
+
+> The ability to register context: for example, a user uploads a resource file on the front-end page and the file content is uploaded to dss-server, which stores it in bml. The resourceid and version obtained from bml then need to be registered to cs-server. In this case the registration ability of csclient is used: pass in the csid and cskey, and register them with the csvalue (resourceid and version).
+
+### 3. Ability to update registered context
+
+> The ability to update registered context information: for example, a user uploads a resource file test.jar and csserver already holds its registered information. If the user updates the resource file while editing the workflow, cs-server needs to update this content, which requires calling the update interface of csclient.
+
+### 4. The ability to get context
+
+The context information registered to csserver needs to be read during variable replacement, resource file download, and when downstream nodes call upstream nodes to obtain generated information. For example, when the engine side executes code and needs to download bml resources, it interacts with csserver through csclient to get the resourceid and version of the file in bml and then downloads it.
+
+### 5. Certain special microservices can sniff operations that have modified context information in cs-server
+
+This operation is based on the following example: a widget node has a strong linkage with its upstream sql node. The user writes a sql in the sql node whose result-set metadata is the fields a, b and c; the widget node behind is bound to this sql, and these three fields can be edited on the page. If the user then changes the sql statement and the metadata becomes four fields a, b, c, d, the user currently has to refresh manually. We hope that when the script changes, the widget node can update its metadata automatically. This is generally done with the listener pattern; for simplicity, a heartbeat mechanism can also be used for polling.
+
+### 6. CSClient needs to provide a copy of all context information of csid1 as a new csid2 for scheduling execution
+
+Once a user publishes a project, he hopes to tag all the information of the project, similar to git. The resource files and custom variables here will no longer change, but some dynamic information, such as generated result sets, will still be updated in the csid content. Therefore csclient needs to provide an interface that copies all context information of csid1 for microservices to call.
+
+## **Implementation of ClientListener Module**
+
+For a client, it sometimes wants to know as soon as possible that a certain csid and cskey have changed in the cs-server. For example, the csclient of visualis needs to know that the upstream sql node has changed so that it can be notified. The server has a listener module, and the client also needs one. If a client wants to monitor changes of a certain cskey of a certain csid, it registers the cskey with the callbackEngine of the corresponding csserver instance. Later, when another client changes the content of that cskey, the callbackengine notifies, at the first client's next heartbeat, all the cskeys that the client has listened to; in this way the first client learns that the cskey content has changed. When the heartbeat returns data, all listeners registered to ContextClientListenerBus should be notified through their on method.
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png)
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png)
+
+## **Implementation of GatewayRouter**
+
+
+The Gateway plug-in implements Context forwarding. The forwarding logic of the Gateway plug-in is carried out through the GatewayRouter and is divided into two cases. The first is applying for a context object: at this time the information carried by the CSClient does not contain a csid, so the routing decision is made through the registration information in eureka, and the first request is sent randomly to one microservice instance.
+The second case is that a ContextID is carried. We need to parse the csid: the instance information is obtained by splitting the string, and eureka is then used to determine whether that microservice instance still exists. If it exists, the request is sent to that microservice instance.
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png)
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-highavailable.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-highavailable.md
new file mode 100644
index 00000000000..1caca467b4d
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-highavailable.md
@@ -0,0 +1,91 @@
+---
+title: CS HA Design
+sidebar_position: 3
+---
+
+## **CS HA Architecture Design**
+
+### 1. CS HA architecture summary
+
+#### (1) CS HA architecture diagram
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png)
+
+#### (2) Problems to be solved
+
+- HA of Context instance objects
+
+- The client generates a CSID request when creating a workflow
+
+- List of aliases of CS Server
+
+- Unified CSID generation and parsing rules
+
+#### (3) Main design ideas
+
+① Load balancing
+
+When the client creates a new workflow, it randomly requests the HA module of one of the servers with equal probability to generate a new HAID. The HAID information includes the main server information (hereinafter referred to as the main instance), a candidate instance (the instance with the lowest load among the remaining servers), and a corresponding ContextID. The generated HAID is bound to the workflow and persisted to the database, and all subsequent change requests of the workflow are sent to the main instance, achieving even load distribution.
+
+②High availability
+
+In subsequent operations, when the client or gateway determines that the main instance is unavailable, the operation request is forwarded to the standby instance for processing, thereby achieving high service availability. The HA module of the standby instance will first verify the validity of the request based on the HAID information.
+
+③Alias mechanism
+
+An alias mechanism is adopted for the machines: the Instance information contained in the HAID uses a custom alias, and the alias mapping queue is maintained in the backend. The client uses the HAID when interacting with the backend, while the backend uses the ContextID when interacting with other components; when specific operations are implemented, a dynamic proxy mechanism converts the HAID into a ContextID for processing.
+
+### 2. Module design
+
+#### (1) Module diagram
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png)
+
+#### (2) Specific modules
+
+①ContextHAManager module
+
+Provide interface for CS Server to call to generate CSID and HAID, and provide alias conversion interface based on dynamic proxy;
+
+Call the persistence module interface to persist CSID information;
+
+②AbstractContextHAManager module
+
+The abstraction of ContextHAManager can support the realization of multiple ContextHAManager;
+
+③InstanceAliasManager module
+
+The RPC module provides Instance and alias conversion interfaces, maintains the alias mapping queue, provides alias and CS Server instance queries, and provides an interface to verify whether a host is valid;
+
+④HAContextIDGenerator module
+
+Generate a new HAID and encapsulate it into the client's agreed format and return it to the client. The HAID structure is as follows:
+
+\${length of first instance}\${length of second instance}{instance alias 1}{instance alias 2}{actual ID}, where the actual ID is set as the ContextID key;
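+
+A hedged sketch of the encode/parse logic implied by this format (it assumes single-character length fields for brevity; the real HAID layout may differ):
+
+```java
+// Illustrative only: assumes single-character alias-length fields for brevity.
+public class HaidCodecDemo {
+    static String encode(String mainAlias, String backupAlias, String contextKey) {
+        return "" + mainAlias.length() + backupAlias.length() + mainAlias + backupAlias + contextKey;
+    }
+
+    static String[] decode(String haid) {
+        int len1 = Character.getNumericValue(haid.charAt(0));
+        int len2 = Character.getNumericValue(haid.charAt(1));
+        String mainAlias = haid.substring(2, 2 + len1);
+        String backupAlias = haid.substring(2 + len1, 2 + len1 + len2);
+        String contextKey = haid.substring(2 + len1 + len2);
+        return new String[] {mainAlias, backupAlias, contextKey};
+    }
+
+    public static void main(String[] args) {
+        String haid = encode("A1", "B2", "12345");   // "22A1B212345"
+        String[] parts = decode(haid);               // ["A1", "B2", "12345"]
+        System.out.println(haid + " -> main=" + parts[0] + ", backup=" + parts[1] + ", key=" + parts[2]);
+    }
+}
+```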
+
+⑤ContextHAChecker module
+
+Provides the HAID verification interface. Each request received is checked for whether the ID format is valid and whether the current host is the primary or secondary instance: if it is the primary instance, the verification passes; if it is the secondary instance, the verification passes only if the primary instance is invalid.
+
+⑥BackupInstanceGenerator module
+
+Generate a backup instance and attach it to the CSID information;
+
+⑦MultiTenantBackupInstanceGenerator interface
+
+(Reserved interface, not implemented yet)
+
+### 3. UML Class Diagram
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png)
+
+### 4. HA module operation sequence diagram
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png)
+
+CSID generated for the first time:
+The client sends a request, and the Gateway forwards it to any server. The HA module generates the HAID, including the main instance, the backup instance and the CSID, and completes the binding of the workflow and the HAID.
+
+When the client sends a change request and the Gateway determines that the main Instance is invalid, it forwards the request to the standby Instance for processing. After the HA module on the standby Instance verifies that the HAID is valid, it loads the instance and processes the request.
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-listener.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-listener.md
new file mode 100644
index 00000000000..471732c1c84
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-listener.md
@@ -0,0 +1,37 @@
+---
+title: CS Listener Architecture
+sidebar_position: 4
+---
+## **Listener Architecture**
+
+In DSS, when a node changes its metadata information, the context information of the entire workflow changes. We expect all nodes to perceive the change and automatically update their metadata. We use the listener pattern to achieve this, combined with a heartbeat mechanism that polls to keep the metadata of the context information consistent.
+
+### **Client registration itself, CSKey registration and CSKey update process**
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png)
+
+The main process is as follows:
+
+1. Registration operation: the clients client1, client2, client3 and client4 register themselves and the CSKeys they want to monitor with the csserver through HTTP requests. The Service obtains the callback engine instance through the external interface and registers the clients and their corresponding CSKeys.
+
+2. Update operation: If the ClientX node updates the CSKey content, the Service service updates the CSKey cached by the ContextCache, and the ContextCache delivers the update operation to the ListenerBus. The ListenerBus notifies the specific listener to consume (that is, the ContextKeyCallbackEngine updates the CSKeys corresponding to the Client). The consumed event will be automatically removed.
+
+3. Heartbeat mechanism:
+
+All clients use heartbeat information to detect whether the value of CSKeys in ContextKeyCallbackEngine has changed.
+
+ContextKeyCallbackEngine returns the updated CSKey values to all registered clients through the heartbeat mechanism. If a client's heartbeat times out, the client is removed.
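+
+A minimal sketch of this register / update / heartbeat flow (class and method names are assumptions, not the real CallbackEngine API):
+
+```java
+import java.util.*;
+import java.util.concurrent.ConcurrentHashMap;
+
+public class CallbackEngineDemo {
+    // client -> CSKeys it listens to
+    private final Map<String, Set<String>> registrations = new ConcurrentHashMap<>();
+    // client -> pending (CSKey -> new value) updates, consumed by the next heartbeat
+    private final Map<String, Map<String, String>> pendingUpdates = new ConcurrentHashMap<>();
+
+    public void register(String client, String csKey) {
+        registrations.computeIfAbsent(client, c -> ConcurrentHashMap.newKeySet()).add(csKey);
+    }
+
+    // Called when the Service updates a CSKey; the event is queued for every client listening to it.
+    public void onKeyUpdate(String csKey, String newValue) {
+        registrations.forEach((client, keys) -> {
+            if (keys.contains(csKey)) {
+                pendingUpdates.computeIfAbsent(client, c -> new ConcurrentHashMap<>()).put(csKey, newValue);
+            }
+        });
+    }
+
+    // Heartbeat: return and clear the client's pending updates (consumed events are removed).
+    public Map<String, String> heartbeat(String client) {
+        Map<String, String> updates = pendingUpdates.remove(client);
+        return updates == null ? Collections.emptyMap() : updates;
+    }
+
+    public static void main(String[] args) {
+        CallbackEngineDemo engine = new CallbackEngineDemo();
+        engine.register("client1", "csKey.sql.metadata");
+        engine.onKeyUpdate("csKey.sql.metadata", "a,b,c,d");
+        System.out.println(engine.heartbeat("client1")); // {csKey.sql.metadata=a,b,c,d}
+        System.out.println(engine.heartbeat("client1")); // {} (already consumed)
+    }
+}
+```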
+
+### **Listener UML class diagram**
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png)
+
+Interface: ListenerManager
+
+External: provides a ListenerBus for event delivery.
+
+Internal: provides a callback engine for specific event registration, access, update, and heartbeat processing logic.
+
+## **Listener callbackengine timing diagram**
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png)
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-persistence.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-persistence.md
new file mode 100644
index 00000000000..233ce0fe450
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-persistence.md
@@ -0,0 +1,13 @@
+---
+title: CS Persistence Architecture
+sidebar_position: 5
+---
+
+## **CSPersistence Architecture**
+
+### Persistence UML diagram
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png)
+
+
+The Persistence module mainly defines ContextService persistence related operations. The entities mainly include CSID, ContextKeyValue, CSResource, and CSTable.
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-search.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-search.md
new file mode 100644
index 00000000000..9d65ea6012b
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service-search.md
@@ -0,0 +1,132 @@
+---
+title: CS Search Architecture
+sidebar_position: 6
+---
+
+## **CSSearch Architecture**
+### **Overall architecture**
+
+As shown below:
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png)
+
+1. ContextSearch: The query entry, accepts the query conditions defined in the Map form, and returns the corresponding results according to the conditions.
+
+2. Building module: Each condition type corresponds to a Parser, which is responsible for converting the condition in the form of Map into a Condition object, which is implemented by calling the logic of ConditionBuilder. Conditions with complex logical relationships will use ConditionOptimizer to optimize query plans based on cost-based algorithms.
+
+3. Execution module: Filters out the results that match the conditions from the Cache. According to different query targets, there are three execution modes: Ruler, Fetcher and Matcher. The specific logic is described later.
+
+4. Evaluation module: Responsible for calculation of conditional execution cost and statistics of historical execution status.
+
+### **Query Condition Definition (ContextSearchCondition)**
+
+A query condition specifies how to filter out the part of a ContextKeyValue collection that meets the condition. Query conditions can be combined into more complex query conditions through logical operations.
+
+1. Support ContextType, ContextScope, KeyWord matching
+
+ 1. Corresponding to a Condition type
+
+ 2. In Cache, these should have corresponding indexes
+
+2. Support contains/regex matching mode for key
+
+ 1. ContainsContextSearchCondition: contains a string
+
+ 2. RegexContextSearchCondition: match a regular expression
+
+3. Support logical operations of or, and and not
+
+ 1. Unary operation UnaryContextSearchCondition:
+
+> Support logical operations of a single parameter, such as NotContextSearchCondition
+
+2. Binary operation BinaryContextSearchCondition:
+
+> Support the logical operation of two parameters, defined as LeftCondition and RightCondition, such as OrContextSearchCondition and AndContextSearchCondition
+
+3. Each logical operation corresponds to an implementation class of the above subclasses
+
+4. The UML class diagram of this part is as follows:
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png)
+
+### **Construction of query conditions**
+
+1. Support construction through ContextSearchConditionBuilder: When constructing, if multiple ContextType, ContextScope, KeyWord, contains/regex matches are declared at the same time, they will be automatically connected by And logical operation
+
+2. Support logical operations between Conditions that return new Conditions: And, Or and Not (considering the condition1.or(condition2) form, the top-level Condition interface needs to define the logical operation methods); see the sketch after this list
+
+3. Support to build from Map through ContextSearchParser corresponding to each underlying implementation class
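+
+For example, the condition1.or(condition2) composition style mentioned above could be sketched with a composite pattern like the following (hypothetical interfaces, not the actual ContextSearchCondition classes):
+
+```java
+// Hypothetical composite-style Condition interface; not the actual ContextSearchCondition API.
+interface Condition {
+    boolean matches(String key);
+
+    default Condition and(Condition other) { return k -> this.matches(k) && other.matches(k); }
+    default Condition or(Condition other)  { return k -> this.matches(k) || other.matches(k); }
+    default Condition not()                { return k -> !this.matches(k); }
+
+    static Condition contains(String s) { return k -> k.contains(s); }
+    static Condition regex(String p)    { return k -> k.matches(p); }
+}
+
+public class ConditionDemo {
+    public static void main(String[] args) {
+        // contains("hive") AND ( regex("db_\\d+\\..*") OR contains("tmp") )
+        Condition c = Condition.contains("hive")
+            .and(Condition.regex("db_\\d+\\..*").or(Condition.contains("tmp")));
+        System.out.println(c.matches("db_01.hive_table")); // true
+        System.out.println(c.matches("spark_table"));      // false
+    }
+}
+```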
+
+### **Execution of query conditions**
+
+1. Three function modes of query conditions:
+
+ 1. Ruler: Filter out eligible ContextKeyValue sub-Arrays from an Array
+
+ 2. Matcher: Determine whether a single ContextKeyValue meets the conditions
+
+ 3. Fetcher: Filter out an Array of eligible ContextKeyValue from ContextCache
+
+2. Each bottom-level Condition has a corresponding Execution, responsible for maintaining the corresponding Ruler, Matcher, and Fetcher.
+
+### **Query entry ContextSearch**
+
+Provide a search interface, receive Map as a parameter, and filter out the corresponding data from the Cache.
+
+1. Use Parser to convert the condition in the form of Map into a Condition object
+
+2. Obtain cost information through Optimizer, and determine the order of query according to the cost information
+
+3. After executing the corresponding Ruler/Fetcher/Matcher logic through the corresponding Execution, the search result is obtained
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png)
+
+### **Query Optimization**
+
+1. OptimizedContextSearchCondition maintains the Cost and Statistics information of the condition:
+
+ 1. Cost information: CostCalculator is responsible for judging whether a certain Condition can calculate Cost, and if it can be calculated, it returns the corresponding Cost object
+
+ 2. Statistics information: start/end/execution time, number of input lines, number of output lines
+
+2. Implement a CostContextSearchOptimizer, whose optimize method is based on the cost of the Condition to optimize the Condition and convert it into an OptimizedContextSearchCondition object. The specific logic is described as follows:
+
+ 1. Disassemble a complex Condition into a tree structure based on the combination of logical operations. Each leaf node is a basic simple Condition; each non-leaf node is a logical operation.
+
+> Tree A as shown in the figure below is a complex condition composed of five simple conditions of ABCDE through various logical operations.
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png)
+
+(Tree A)
+
+2. The execution of these Conditions is actually depth first, traversing the tree from left to right. Moreover, according to the exchange rules of logical operations, the left and right order of the child nodes of a node in the Condition tree can be exchanged, so all possible trees in all possible execution orders can be enumerated.
+
+> Tree B as shown in the figure below is another possible sequence of tree A above, which is exactly the same as the execution result of tree A, except that the execution order of each part has been adjusted.
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png)
+
+(Tree B)
+
+3. For each tree, the cost is calculated from the leaf nodes and collected up to the root node, which is the final cost of the tree; finally the tree with the smallest cost is obtained as the optimal execution order.
+
+> The rules for calculating node cost are as follows:
+
+1. For leaf nodes, each node has two attributes: Cost and Weight. Cost is the cost calculated by CostCalculator. Weight is assigned according to the execution order of the nodes; the current default is 1 for the left and 0.5 for the right, and how to adjust this will be considered later. (The reason for assigning weights is that in some cases the condition on the left can already determine whether the entire combined logic matches, so the condition on the right does not have to be executed in all cases, and its actual cost needs to be reduced by a certain percentage.)
+
+2. For non-leaf nodes, Cost = the sum of Cost×Weight of all child nodes; the weight assignment logic is consistent with that of leaf nodes.
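+
+> For example, with the default weights, a non-leaf node whose left child has Cost 10 and right child has Cost 100 gets Cost = 10×1 + 100×0.5 = 60.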
+
+> Taking tree A and tree B as examples, the costs of the two trees are calculated as shown in the figure below, where the number in each node is Cost\|Weight. Assuming that the costs of the five simple conditions A, B, C, D and E are 10, 100, 50, 10 and 100 respectively, it can be concluded that the cost of tree B is less than that of tree A, so tree B is the better solution.
+
+
+
+
+
+
+3. Use CostCalculator to measure the cost of simple conditions:
+
+ 1. The condition acting on the index: the cost is determined according to the distribution of the index value. For example, when the length of the Array obtained by condition A from the Cache is 100 and condition B is 200, then the cost of condition A is less than B.
+
+ 2. Conditions that need to be traversed:
+
+ 1. According to the matching mode of the condition itself, an initial Cost is given: for example, Regex is 100, Contains is 10, etc. (the specific values will be adjusted as the implementation matures)
+
+ 2. Based on the efficiency of historical queries (throughput per unit time), the real-time Cost is obtained by continuously adjusting the initial Cost.
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service.md
new file mode 100644
index 00000000000..af84dbe5b68
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/context-service.md
@@ -0,0 +1,58 @@
+---
+title: CS Architecture
+sidebar_position: 1
+---
+
+## **ContextService Architecture**
+
+### **Horizontal Division**
+
+Horizontally divided into three modules: Restful, Scheduler, Service
+
+#### Restful Responsibilities:
+
+ Encapsulate the request as httpjob and submit it to the Scheduler
+
+#### Scheduler Responsibilities:
+
+ Find the corresponding service through the ServiceName of the httpjob protocol to execute the job
+
+#### Service Responsibilities:
+
+ The module that actually executes the request logic, encapsulates the ResponseProtocol, and wakes up the waiting thread in Restful
+
+### **Vertical Division**
+Vertically divided into 4 modules: Listener, History, ContextId, Context:
+
+#### Listener responsibilities:
+
+1. Responsible for the registration and binding of the client side (write to the database and register in the CallbackEngine)
+
+2. Heartbeat interface, return Array[ListenerCallback] through CallbackEngine
+
+#### History Responsibilities:
+Create and remove history, operate Persistence for DB persistence
+
+#### ContextId Responsibilities:
+Mainly docking with Persistence for ContextId creation, update and removal, etc.
+
+#### Context responsibility:
+
+1. For removal, reset and other methods, first operate Persistence for DB persistence, and update ContextCache
+
+2. Encapsulate the query condition and go to the ContextSearch module to obtain the corresponding ContextKeyValue data
+
+The steps for requesting access are as follows:
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png)
+
+## **UML Class Diagram**
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png)
+
+## **Scheduler thread model**
+
+It is necessary to ensure that Restful's thread pool does not become exhausted
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png)
+
+The sequence diagram is as follows:
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png)
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/overview.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/overview.md
new file mode 100644
index 00000000000..d44f182f297
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/context-service/overview.md
@@ -0,0 +1,128 @@
+---
+title: Overview
+sidebar_position: 0
+---
+
+## **Background**
+
+### **What is Context**
+
+Context is all the information necessary to keep an operation going. For example, when reading three books at the same time, the page number you have reached in each book is the context for continuing to read them.
+
+### **Why do you need CS (Context Service)?**
+
+CS is used to solve the problem of data and information sharing across multiple systems in a data application development process.
+
+For example, system B needs to use a piece of data generated by system A. The usual practice is as follows:
+
+1. B system calls the data access interface developed by A system;
+
+2. System B reads the data written by system A into a shared storage.
+
+With CS, systems A and B only need to interact with the CS: they write the data and information that need to be shared into the CS and read the data and information they need from the CS, without developing and adapting to each other's external systems. This greatly reduces the call complexity and coupling of information sharing between systems and makes the boundaries of each system clearer.
+
+## **Product Range**
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png)
+
+
+### Metadata context
+
+The metadata context defines the metadata specification.
+
+Metadata context relies on data middleware, and its main functions are as follows:
+
+1. Open up the relationship with the data middleware, and get all user metadata information (including Hive table metadata, online database table metadata, and other NOSQL metadata such as HBase, Kafka, etc.)
+
+2. When all nodes need to access metadata, including existing metadata and metadata in the application template, they must go through the metadata context. The metadata context records all metadata information used by the application template.
+
+3. The new metadata generated by each node must be registered with the metadata context.
+
+4. When the application template is extracted, the metadata context is abstracted for the application template (mainly, the library tables used are rewritten as \${db}.table to avoid data permission problems) and all dependent metadata information is packaged.
+
+Metadata context is the basis of interactive workflows and the basis of application templates. Imagine: When Widget is defined, how to know the dimensions of each indicator defined by DataWrangler? How does Qualitis verify the graph report generated by Widget?
+
+### Data context
+
+The data context defines the data specification.
+
+The data context depends on data middleware and Linkis computing middleware. The main functions are as follows:
+
+1. Get through the data middleware and get all user data information.
+
+2. Get through the computing middleware and get the data storage information of all nodes.
+
+3. When all nodes need to write temporary results, they must pass through the data context and be uniformly allocated by the data context.
+
+4. When all nodes need to access data, they must pass the data context.
+
+5. The data context distinguishes between dependent data and generated data. When the application template is extracted, all dependent data is abstracted and packaged for the application template.
+
+### Resource context
+
+The resource context defines the resource specification.
+
+The resource context mainly interacts with Linkis computing middleware. The main functions are as follows:
+
+1. User resource files (such as Jar, Zip files, properties files, etc.)
+
+2. User UDF
+
+3. User algorithm package
+
+4. User script
+
+### Environmental context
+
+The environmental context defines the environmental specification.
+
+The main functions are as follows:
+
+1. Operating System
+
+2. Software, such as Hadoop, Spark, etc.
+
+3. Package dependencies, such as Mysql-JDBC.
+
+### Object context
+
+The runtime context is all the context information retained when the application template (workflow) is defined and executed.
+
+It is used to assist in defining the workflow/application template, prompting and perfecting all necessary information when the workflow/application template is executed.
+
+The runtime workflow is mainly used by Linkis.
+
+
+## **CS Architecture Diagram**
+
+![](/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png)
+
+## **Architecture Description:**
+
+### 1. Client
+The entrance of external access to CS, Client module provides HA function;
+[Enter Client Architecture Design](context-service-client.md)
+
+### 2. Service Module
+Provide a Restful interface to encapsulate and process CS requests submitted by the client;
+[Enter Service Architecture Design](context-service.md)
+
+### 3. ContextSearch
+The context query module provides rich and powerful query capabilities for the client to find the key-value key-value pairs of the context;
+[Enter ContextSearch architecture design](context-service-search.md)
+
+### 4. Listener
+The CS listener module provides synchronous and asynchronous event consumption capabilities, and has the ability to notify the Client in real time once the Zookeeper-like Key-Value is updated;
+[Enter Listener architecture design](context-service-listener.md)
+
+### 5. ContextCache
+The context memory cache module provides the ability to quickly retrieve the context and the ability to monitor and clean up JVM memory usage;
+[Enter ContextCache architecture design](context-service-cache.md)
+
+### 6. HighAvailable
+Provide CS high availability capability;
+[Enter HighAvailable architecture design](context-service-highavailable.md)
+
+### 7. Persistence
+The persistence function of CS;
+[Enter Persistence architecture design](context-service-persistence.md)
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/datasource-manager.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/datasource-manager.md
new file mode 100644
index 00000000000..0e61a115774
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/datasource-manager.md
@@ -0,0 +1,140 @@
+---
+title: Data Source Management Service Architecture
+sidebar_position: 5
+---
+## Background
+
+Exchangis0.x and Linkis0.x in earlier versions both have integrated data source modules. In order to reuse the data source management capability, Linkis reconstructs the data source module based on linkis-datasource (refer to the related documents) and splits data source management into a data source management service and a metadata management service.
+
+This article mainly involves the DataSource Manager Server data source management service, which provides the following functions:
+
+1)、Linkis unified management service startup and deployment, does not increase operation and maintenance costs, reuse Linkis service capabilities;
+
+2)、Provide management services of graphical interface through Linkis Web. The interface provides management services such as new data source, data source query, data source update, connectivity test and so on;
+
+3)、The service is stateless and supports multi-instance deployment so that it is highly available. When the system is deployed, multiple instances can be deployed; each instance provides services independently to the outside world without interfering with the others, and all information is stored in the database for sharing.
+
+4)、Provide full life cycle management of data sources, including new, query, update, test, and expiration management.
+
+5)、Multi-version data source management, historical data sources will be saved in the database, and data source expiration management is provided.
+
+6)、The Restful interface provides functions, a detailed list: data source type query, data source detailed information query, data source information query based on version, data source version query, get data source parameter list, multi-dimensional data source search, get data source environment query and Update, add data source, data source parameter configuration, data source expiration setting, data source connectivity test.
+
+## Architecture Diagram
+
+![datasource Architecture diagram](/Images/Architecture/datasource/linkis-datasource-server.png)
+
+## Architecture Description
+
+1、The service is registered in the Linkis-Eureka-Service service and managed in a unified manner with other Linkis microservices. The client can access the data source management service by connecting to the Linkis-GateWay-Service service, using the service name data-source-manager.
+
+2、The interface layer provides other applications through the Restful interface, providing additions, deletions, and changes to data sources and data source environments, data source link and dual link tests, data source version management and expiration operations;
+
+3、The Service layer is mainly for the service management of the database and the material library, and permanently retains the relevant information of the data source;
+
+4、The link test of the data source is done through the linkis metastore server service, which now provides the mysql\es\kafka\hive service
+
+### Core Process
+
+1、To create a new data source, the user of the new data source is first obtained from the request to determine whether the user is valid. The next step is to verify the relevant field information of the data source: the data source name and data source type cannot be empty. The data source name is used to confirm whether the data source already exists; if it does not exist, it is inserted into the database and the data source ID is returned.
+
+2、To update a data source, the user is first obtained from the request to determine whether the user is valid. The next step is to verify the relevant field information of the data source: the data source name and data source type cannot be empty. The data source ID is used to confirm whether the data source exists; if it does not exist, an exception is returned. If it exists, it is further judged whether the user has update permission for the data source: only the administrator or the owner of the data source may update it. If the user has permission, the data source is updated and the data source ID is returned.
+
+3、To update the data source parameters, the user is first obtained from the request to determine whether the user is valid. The detailed data source information is obtained according to the data source ID passed in, and it is then determined whether the user is the owner of the data source or an administrator. If so, the modified parameters are further verified, and after passing verification the parameters are updated and the versionId is returned.
+
+## Entity Object
+
+| Class Name | Describe |
+| ---------------------------- | ------------------------------------------------------------ |
+| DataSourceType | Indicates the type of data source |
+| DataSourceParamKeyDefinition | Declare data source property configuration definitions |
+| DataSource | Data source object entity class, including permission tags and attribute configuration definitions |
+| DataSourceEnv | Data source environment object entity class, which also contains attribute configuration definitions |
+| DataSourceParameter | Data source specific parameter configuration |
+| DatasourceVersion | Data source version details |
+
+## **Database Design**
+
+##### Database Diagram:
+
+![](/Images-zh/Architecture/datasource/dn-db.png)
+
+##### Data Table Definition:
+
+Table:linkis_ps_dm_datasource <-->Object:DataSource
+
+| Serial Number | Column | Describe |
+| ------------- | -------------------- | -------------------------------------- |
+| 1 | id | Data source ID |
+| 2 | datasource_name | Data source name |
+| 3 | datasource_desc | Data source detailed description |
+| 4 | datasource_type_id | Data source type ID |
+| 5 | create_identify | create identify |
+| 6 | create_system | System for creating data sources |
+| 7 | parameter | Data source parameters |
+| 8 | create_time | Data source creation time |
+| 9 | modify_time | Data source modification time |
+| 10 | create_user | Data source create user |
+| 11 | modify_user | Data source modify user |
+| 12 | labels | Data source label |
+| 13 | version_id | Data source version ID |
+| 14 | expire | Whether the data source is out of date |
+| 15 | published_version_id | Data source release version number |
+
+Table Name:linkis_ps_dm_datasource_type <-->Object:DataSourceType
+
+| Serial Number | Column | Describe |
+| ------------- | ----------- | ------------------------------ |
+| 1 | id | Data source type ID |
+| 2 | name | Data source type name |
+| 3 | description | Data source type description |
+| 4 | option | Type of data source |
+| 5 | classifier | Data source type classifier |
+| 6 | icon | Data source image display path |
+| 7 | layers | Data source type hierarchy |
+
+Table:linkis_ps_dm_datasource_env <-->Object:DataSourceEnv
+
+| Serial Number | Column | Describe |
+| ------------- | ------------------ | ------------------------------------- |
+| 1 | id | Data source environment ID |
+| 2 | env_name | Data source environment name |
+| 3 | env_desc | Data source environment description |
+| 4 | datasource_type_id | Data source type ID |
+| 5 | parameter | Data source environment parameters |
+| 6 | create_time | Data source environment creation time |
+| 7 | create_user | Data source environment create user |
+| 8 | modify_time | Data source modification time |
+| 9 | modify_user | Data source modify user |
+
+Table:linkis_ps_dm_datasource_type_key <-->Object:DataSourceParamKeyDefinition
+
+| Serial Number | Column | Describe |
+| ------------- | ------------------- | -------------------------------------- |
+| 1 | id | Key-value type ID |
+| 2 | data_source_type_id | Data source type ID |
+| 3 | key | Data source parameter key value |
+| 4 | name | Data source parameter name |
+| 5 | default_value | Data source parameter default value |
+| 6 | value_type | Data source parameter type |
+| 7 | scope | Data source parameter range |
+| 8 | require | Is the data source parameter required? |
+| 9 | description | Data source parameter description |
+| 10 | value_regex | Regular data source parameters |
+| 11 | ref_id | Data source parameter association ID |
+| 12 | ref_value | Data source parameter associated value |
+| 13 | data_source | Data source |
+| 14 | update_time | update time |
+| 15 | create_time | Create Time |
+
+Table:linkis_ps_dm_datasource_version <-->Object:DatasourceVersion
+
+| Serial Number | Column | Describe |
+| ------------- | ------------- | ---------------------------------------- |
+| 1 | version_id | Data source version ID |
+| 2 | datasource_id | Data source ID |
+| 3 | parameter | The version parameter of the data source |
+| 4 | comment | comment |
+| 5 | create_time | Create Time |
+| 6 | create_user | Create User |
+
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/metadata-manager.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/metadata-manager.md
new file mode 100644
index 00000000000..a4b136af970
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/metadata-manager.md
@@ -0,0 +1,39 @@
+---
+title: Metadata Management Service Architecture
+sidebar_position: 6
+---
+## Background
+
+Exchangis0.x and Linkis0.x in earlier versions both have integrated data source modules. In order to reuse the data source management capability, Linkis reconstructs the data source module based on linkis-datasource (refer to the related documents) and splits data source management into a data source management service and a metadata management service.
+
+This article mainly covers the MetaData Manager Server metadata management service, which provides the following functions:
+
+1)、Linkis unified management service startup and deployment, does not increase operation and maintenance costs, and reuses Linkis service capabilities;
+
+2)、The service is stateless and deployed in multiple instances to achieve high service availability. When the system is deployed, multiple instances can be deployed. Each instance provides services independently to the outside world without interfering with each other. All information is stored in the database for sharing.
+
+3)、Provides full life cycle management of data sources, including new, query, update, test, and expiration management.
+
+4)、Multi-version data source management, historical data sources will be saved in the database, and data source expiration management is provided.
+
+5)、The Restful interface provides functions, a detailed list: database information query, database table information query, database table parameter information query, and data partition information query.
+
+## Architecture Diagram
+
+![Data Source Architecture Diagram](/Images-zh/Architecture/datasource/meta-server.png)
+
+## Architecture Description
+
+1、The service is registered in the Linkis-Eureka-Service service and managed in a unified manner with other Linkis microservices. The client can access the metadata management service by connecting to the Linkis-GateWay-Service service, using the service name metamanager.
+
+2、The interface layer provides database\table\partition information query to other applications through the Restful interface;
+
+3、In the Service layer, the data source type is obtained in the data source management service through the data source ID number, and the specific supported services are obtained through the type. The first supported service is mysql\es\kafka\hive;
+
+### Core Process
+
+1、The client enters a specified data source ID and obtains information through the restful interface. For example, to query the database list for data source ID 1, the url is `http:///metadatamanager/dbs/1`;
+
+2、According to the data source ID, the data source service is accessed through RPC to obtain the data source type;
+
+3、According to the data source type, the corresponding Service [hive\es\kafka\mysql] is loaded, the corresponding operation is performed, and the result is returned.
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/overview.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/overview.md
new file mode 100644
index 00000000000..47215a64d72
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/overview.md
@@ -0,0 +1,96 @@
+---
+title: Overview
+sidebar_position: 1
+---
+
+PublicEnhancementService (PS) architecture design
+=====================================
+
+PublicEnhancementService (PS): Public enhancement service, a module that provides functions such as unified configuration management, context service, physical library, data source management, microservice management, and historical task query for other microservice modules.
+
+![](/Images/Architecture/PublicEnhencementArchitecture.png)
+
+Introduction to the second-level module:
+==============
+
+BML material library
+---------
+
+It is the linkis material management system, which is mainly used to store various file data of users, including user scripts, resource files, third-party Jar packages, etc., and can also store class libraries that need to be used when the engine runs.
+
+| Core Class | Core Function |
+|-----------------|------------------------------------|
+| UploadService | Provide resource upload service |
+| DownloadService | Provide resource download service |
+| ResourceManager | Provides a unified management entry for uploading and downloading resources |
+| VersionManager | Provides resource version marking and version management functions |
+| ProjectManager | Provides project-level resource management and control capabilities |
+
+Unified configuration management
+-------------------------
+
+Configuration provides a "user-engine-application" three-level configuration management solution, which provides users with the function of configuring custom engine parameters under various access applications.
+
+| Core Class | Core Function |
+|----------------------|--------------------------------|
+| CategoryService | Provides management services for application and engine catalogs |
+| ConfigurationService | Provides a unified management service for user configuration |
+
+ContextService context service
+------------------------
+
+ContextService is used to solve the problem of data and information sharing across multiple systems in a data application development process.
+
+| Core Class | Core Function |
+|---------------------|------------------------------------------|
+| ContextCacheService | Provides a cache service for context information |
+| ContextClient | Provides the ability for other microservices to interact with the CSServer group |
+| ContextHAManager | Provide high-availability capabilities for ContextService |
+| ListenerManager | The ability to provide a message bus |
+| ContextSearch | Provides query entry |
+| ContextService | Implements the overall execution logic of the context service |
+
+Datasource data source management
+--------------------
+
+Datasource provides the ability to connect to different data sources for other microservices.
+
+| Core Class | Core Function |
+|-------------------|--------------------------|
+| datasource-server | Provide the ability to connect to different data sources |
+
+InstanceLabel microservice management
+-----------------------
+
+InstanceLabel provides registration and labeling functions for other microservices connected to linkis.
+
+| Core Class | Core Function |
+|-----------------|--------------------------------|
+| InsLabelService | Provides microservice registration and label management functions |
+
+Jobhistory historical task management
+----------------------
+
+Jobhistory provides users with linkis historical task query, progress, log display related functions, and provides a unified historical task view for administrators.
+
+| Core Class | Core Function |
+|------------------------|----------------------|
+| JobHistoryQueryService | Provide historical task query service |
+
+Variable user-defined variable management
+--------------------------
+
+Variable provides users with functions related to the storage and use of custom variables.
+
+| Core Class | Core Function |
+|-----------------|-------------------------------------|
+| VariableService | Provides functions related to the storage and use of custom variables |
+
+UDF user-defined function management
+---------------------
+
+UDF provides users with the function of custom functions, which can be introduced by users when writing code.
+
+| Core Class | Core Function |
+|------------|------------------------|
+| UDFService | Provide user-defined function service |
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/public-service.md b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/public-service.md
new file mode 100644
index 00000000000..fc635e01fa5
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/feature/public-enhancement-services/public-service.md
@@ -0,0 +1,23 @@
+---
+title: Public Service
+sidebar_position: 2
+---
+## **Background**
+Why do we need public enhancement capabilities after using Linkis as a unified gateway or JobServer? After actually developing multiple upper-layer application tools, we found that if a UDF or a variable was defined and debugged in the IDE tool, it had to be redefined after publishing to the scheduling tool, and when dependent jar packages, configuration files, etc. changed, both places had to be modified as well.
+To address these issues of sharing common context across upper-layer application tools, once Linkis had become the unified entry point for tasks, we asked whether Linkis could also provide such public enhancement capabilities: common, reusable features that multiple application tools can share. Therefore, a public enhancement service layer, PES, is designed at the Linkis layer.
+
+
+## **Architecture diagram**
+
+![Diagram](/Images/Architecture/linkis-publicService-01.png)
+
+## **Architecture Introduction**
+
+The capabilities currently provided are:
+
+- Unified data source capability: data sources are defined and managed centrally at the Linkis layer; application tools only need to reference a data source by name and no longer need to maintain the corresponding connection information. A data source also means the same thing across different tools, and the metadata of the corresponding data source can be queried.
+- Public UDF capability: the definition specification and semantics of UDFs and small functions are unified, so that a function defined in one place can be used by multiple tools.
+- Unified context capability: supports passing information between tasks, including the transfer of variables, result sets, and resource files across multiple tasks.
+- Unified material capability: provides unified materials that can be shared and accessed by multiple tools; materials support storing various file types and support version control.
+- Unified configuration and variable capability: provides unified configuration of parameter templates for different engines, custom variables, and built-in commonly used system variables and time-format variables.
+- Public error code capability: provides unified error codes, classifies and codifies the errors of commonly used compute/storage engines and knowledge bases, and provides a convenient SDK to call.
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/overview.md b/versioned_docs/version-1.4.0/architecture/overview.md
new file mode 100644
index 00000000000..8bc8eefde50
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/overview.md
@@ -0,0 +1,22 @@
+---
+title: Overview
+sidebar_position: 0
+---
+
+Linkis 1.0 divides all microservices into three categories: public enhancement services, computing governance services, and microservice governance services. The following figure shows the architecture of Linkis 1.0.
+
+![Linkis1.0 Architecture Figure](/Images/Architecture/Linkis1.0-architecture.png)
+
+The specific responsibilities of each category are as follows:
+
+1. Public enhancement services are the material library service, context service, data source service, and public services already provided by Linkis 0.X.
+2. The microservice governance services are Spring Cloud Gateway, Eureka, and Open Feign, already provided by Linkis 0.X; Linkis 1.0 will also provide support for Nacos.
+3. Computing governance services are the core focus of Linkis 1.0. They comprehensively upgrade Linkis's ability to manage and control user tasks across the three stages of submission, preparation, and execution.
+
+The following is a directory listing of the Linkis 1.0 architecture documents:
+
+1. For documents related to Linkis 1.0 public enhancement services, please read [Public Enhancement Services](feature/public-enhancement-services/overview.md).
+
+2. For documents related to Linkis 1.0 microservice governance, please read [Microservice Governance](service-architecture/overview.md).
+
+3. For documentation on the computing governance services provided by Linkis 1.0, please read [Computation Governance Services](feature/computation-governance-services/overview.md).
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/service-architecture/_category_.json b/versioned_docs/version-1.4.0/architecture/service-architecture/_category_.json
new file mode 100644
index 00000000000..2f24e1bb970
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/service-architecture/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Service Architecture",
+ "position": 5.0
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/service-architecture/gateway.md b/versioned_docs/version-1.4.0/architecture/service-architecture/gateway.md
new file mode 100644
index 00000000000..278162be277
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/service-architecture/gateway.md
@@ -0,0 +1,39 @@
+---
+title: Gateway Design
+sidebar_position: 2
+---
+
+## Gateway Architecture Design
+
+#### Brief
+The Gateway is the primary entry point for Linkis to accept client and external requests, such as receiving job execution requests, and then forwarding the execution requests to specific eligible Entrance services.
+The bottom layer of the architecture is implemented based on Spring Cloud Gateway; on top of it are modules for HTTP request parsing, session permissions, label routing, and WebSocket multiplex forwarding. The overall architecture is shown below.
+### Architecture Diagram
+
+![Gateway diagram of overall architecture](/Images/Architecture/Gateway/gateway_server_global.png)
+
+#### Architecture Introduction
+- gateway-core: Gateway's core interface definition module. It mainly defines the "GatewayParser" and "GatewayRouter" interfaces, corresponding to parsing a request and routing according to the request; it also provides the permission verification tool class named "SecurityFilter".
+- spring-cloud-gateway: This module integrates all dependencies related to Spring Cloud Gateway and processes and forwards requests of the HTTP and WebSocket protocol types respectively.
+- gateway-server-support: The driver module of the Gateway. It relies on the spring-cloud-gateway module and implements "GatewayParser" and "GatewayRouter" respectively, among which "DefaultLabelGatewayRouter" provides the label routing function.
+- gateway-httpclient-support: Provides a generic client class for accessing the Gateway services over HTTP, on which more client implementations can be based.
+- instance-label: External instance label module, providing a service interface named "InsLabelService" that is used to create routing labels and associate them with application instances.
+
+The detailed design involved is as follows:
+
+#### 1. Request Routing And Forwarding (With Label Information)
+First, after passing through the dispatcher of Spring Cloud Gateway, the request enters the gateway's filter list and then the two main pieces of logic, "GatewayAuthorizationFilter" and "SpringCloudGatewayWebsocketFilter".
+The filters integrate the "DefaultGatewayParser" and "DefaultGatewayRouter" classes and execute the corresponding parse and route methods, from Parser to Router.
+"DefaultGatewayParser" and "DefaultGatewayRouter" also contain custom Parsers and Routers, which are executed in order of priority.
+Finally, the service instance selected by the "DefaultGatewayRouter" is handed over to the upper layer for forwarding.
+Taking the forwarding of a job execution request that carries label information as an example, the flow is drawn in the following flowchart:
+![Gateway Request Routing](/Images/Architecture/Gateway/gateway_server_dispatcher.png)
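+
+To make the Parser → Router chain above more concrete, here is a minimal, simplified Java sketch of label-based routing. The "GatewayParser"/"GatewayRouter" interface names come from the modules above, but the method signatures, record types, and the dispatcher are illustrative assumptions rather than the actual Linkis code.
+
+```java
+import java.util.Comparator;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+
+// Simplified stand-ins for the incoming request and a registered service instance.
+record GatewayRequest(String uri, Map<String, String> labels) {}
+record ServiceInstance(String applicationName, String host, int port, Map<String, String> labels) {}
+
+interface GatewayParser {
+    /** Extract label information (e.g. a "route" label) from the request. */
+    Map<String, String> parse(GatewayRequest request);
+}
+
+interface GatewayRouter {
+    /** Pick a backend instance for the parsed labels; empty means "let a lower-priority router decide". */
+    Optional<ServiceInstance> route(Map<String, String> labels, List<ServiceInstance> candidates);
+    int priority();
+}
+
+// A label router: only instances registered with the same "route" label are eligible.
+class LabelGatewayRouter implements GatewayRouter {
+    @Override
+    public Optional<ServiceInstance> route(Map<String, String> labels, List<ServiceInstance> candidates) {
+        String routeLabel = labels.get("route");
+        if (routeLabel == null) {
+            return Optional.empty(); // no route label: a lower-priority default router handles it
+        }
+        return candidates.stream()
+                .filter(i -> routeLabel.equals(i.labels().get("route")))
+                .findAny();
+    }
+
+    @Override
+    public int priority() { return 100; }
+}
+
+// Routers are consulted in priority order; the first instance found is handed to the upper layer for forwarding.
+class GatewayDispatcher {
+    ServiceInstance dispatch(GatewayParser parser, List<GatewayRouter> routers,
+                             GatewayRequest request, List<ServiceInstance> candidates) {
+        Map<String, String> labels = parser.parse(request);
+        return routers.stream()
+                .sorted(Comparator.comparingInt(GatewayRouter::priority).reversed())
+                .map(r -> r.route(labels, candidates))
+                .flatMap(Optional::stream)
+                .findFirst()
+                .orElseThrow(() -> new IllegalStateException("Cannot route to the corresponding service"));
+    }
+}
+```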
+
+
+#### 2. WebSocket Connection Forwarding Management
+By default, Spring Cloud Gateway only routes and forwards a WebSocket request once and cannot switch it dynamically.
+Under Linkis's gateway architecture, however, each information exchange carries a corresponding URI address that guides routing to different backend services.
+In addition to the "WebSocketService", which is responsible for connecting with the front end and the client,
+and the "WebSocketClient", which is responsible for connecting with the backend service, a series of "GatewayWebSocketSessionConnection" lists are cached in the middle.
+A "GatewayWebSocketSessionConnection" represents the connection between one session and multiple backend service instances.
+![Gateway WebSocket Forwarding](/Images/Architecture/Gateway/gatway_websocket.png)
diff --git a/versioned_docs/version-1.4.0/architecture/service-architecture/overview.md b/versioned_docs/version-1.4.0/architecture/service-architecture/overview.md
new file mode 100644
index 00000000000..70511115a6a
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/service-architecture/overview.md
@@ -0,0 +1,37 @@
+---
+title: Overview
+sidebar_position: 0
+---
+
+## **Background**
+
+Microservice governance includes three main microservices: Gateway, Eureka and Open Feign.
+It is used to solve Linkis's service discovery and registration, unified gateway, request forwarding, inter-service communication, load balancing and other issues.
+At the same time, Linkis 1.0 will also provide support for Nacos. Linkis as a whole is a complete microservice architecture, and each business flow requires multiple microservices to complete.
+
+## **Architecture diagram**
+
+![](/Images/Architecture/linkis-microservice-gov-01.png)
+
+## **Architecture Introduction**
+
+1. Linkis Gateway
+As the gateway entrance of Linkis, Linkis Gateway is mainly responsible for request forwarding, user access authentication and WebSocket communication.
+The Gateway of Linkis 1.0 also added Label-based routing and forwarding capabilities.
+A WebSocket routing and forwarder is implemented by Spring Cloud Gateway in Linkis, it is used to establish a WebSocket connection with the client.
+After the connection is established, it will automatically analyze the client's WebSocket request and determine, through the rules, which backend microservice the request should be forwarded to,
+then the request is forwarded to the corresponding backend microservice instance.
+ [Linkis Gateway](gateway.md)
+
+2. Linkis Eureka
+Mainly responsible for service registration and discovery. Eureka consists of multiple instances (service instances), which can be divided into two types: Eureka Server and Eureka Client.
+For ease of understanding, we divide Eureka Clients into Service Providers and Service Consumers. The Eureka Server provides service registration and discovery.
+A Service Provider registers its own service with Eureka so that service consumers can find it.
+A Service Consumer obtains a list of registered services from Eureka so that it can consume those services.
+
+3. Linkis has implemented its own underlying RPC communication scheme based on Feign. As the underlying communication solution, Linkis RPC integrates an SDK into the microservices that need it.
+A microservice can be both a request caller and a request receiver.
+As a request caller, it requests the Receiver of the target microservice through a Sender.
+As a request receiver, it provides a Receiver to process the requests sent by Senders and complete a synchronous or asynchronous response. A minimal illustrative sketch of this pattern follows the figure below.
+
+![](/Images/Architecture/linkis-microservice-gov-03.png)
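+
+The following is a minimal, illustrative Java sketch of the Sender/Receiver pattern described above. The names are simplified for explanation and are not the actual Linkis RPC classes; the in-process `LocalSender` merely stands in for the real Feign-based transport.
+
+```java
+import java.util.concurrent.CompletableFuture;
+
+// The receiving side: a microservice exposes a Receiver to handle incoming requests.
+interface Receiver {
+    Object receiveAndReply(Object message);
+}
+
+// The calling side: a microservice uses a Sender to reach the target microservice's Receiver.
+interface Sender {
+    Object ask(Object message);                      // synchronous request/response
+    CompletableFuture<Object> send(Object message);  // asynchronous response
+}
+
+// Trivial in-process transport standing in for the real RPC layer.
+class LocalSender implements Sender {
+    private final Receiver target;
+
+    LocalSender(Receiver target) {
+        this.target = target;
+    }
+
+    @Override
+    public Object ask(Object message) {
+        return target.receiveAndReply(message); // blocks until the Receiver replies
+    }
+
+    @Override
+    public CompletableFuture<Object> send(Object message) {
+        return CompletableFuture.supplyAsync(() -> target.receiveAndReply(message));
+    }
+}
+
+class RpcPatternDemo {
+    public static void main(String[] args) {
+        // The same microservice can play both roles: it exposes a Receiver ...
+        Receiver statusReceiver = message -> "status".equals(message) ? "Idle" : "Unknown: " + message;
+        // ... and uses a Sender to call other microservices.
+        Sender sender = new LocalSender(statusReceiver);
+        System.out.println(sender.ask("status")); // prints "Idle"
+    }
+}
+```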
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/service-architecture/service_isolation.md b/versioned_docs/version-1.4.0/architecture/service-architecture/service_isolation.md
new file mode 100644
index 00000000000..a31321bf6ac
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/service-architecture/service_isolation.md
@@ -0,0 +1,197 @@
+---
+title: Service Isolation Design
+sidebar_position: 9
+---
+
+## 1. General
+### 1.1 Requirements Background
+ Linkis currently performs load balancing based on Ribbon when it forwards services in the Gateway. In some cases, however, important business tasks require service-level isolation, which pure Ribbon-based load balancing cannot guarantee. For example, tenant A may want its tasks to be routed to a specific Linkis-CG-Entrance service, so that when other Entrance instances become abnormal, tenant A's Entrance is not affected.
+In addition, supporting tenant and service isolation also makes it possible to quickly isolate an abnormal service instance and to support scenarios such as gray-scale upgrades.
+
+### 1.2 Target
+1. Support parsing the labels of a request and forwarding it to the service that matches the routing label
+2. Support registering and modifying the labels of services
+
+## 2. Design
+ This feature involves two modules, linkis-mg-gateway and instance-label, which are the main modification points: forwarding logic is added to the Gateway, and instance-label supports the registration of services and labels.
+
+### 2.1 Technical Architecture
+ The main modification point of the overall technical architecture is that the RESTful request needs to carry label parameters such as the route label; when forwarding, the Gateway parses the corresponding label to complete the route-based forwarding of the interface. The whole flow is shown in the figure below.
+![arc](/Images/Architecture/Gateway/service_isolation_arc.png)
+
+A few notes:
+1. If multiple services are marked with the same routeLabel, the request is forwarded to one of them randomly
+2. If no service corresponds to the routeLabel, the request fails directly
+3. If the request does not carry a routeLabel, the original forwarding logic is used and the request will not be routed to services marked with a specific label
+
+### 2.2 Business Architecture
+ This feature mainly implements the RESTful tenant isolation and forwarding function. The modules involved in the function points are as follows:
+
+| Component name | First-level module | Second-level module | Function point |
+|---|---|---|---|
+| Linkis | MG | Gateway| Parse the route label in the restful request parameters, and complete the forwarding function of the interface according to the route label|
+| Linkis | PS | InstanceLabel| InstanceLabel service, completes the association between services and labels|
+
+## 3. Module Design
+### 3.1 Core execution flow
+[Input] The input is a RESTful request sent to the Gateway; only requests that carry a route label in their parameters are processed.
+[Processing] The Gateway determines whether the request has a corresponding RouteLabel and, if it exists, forwards the request based on that RouteLabel.
+The call sequence diagram is as follows:
+
+![Time](/Images/Architecture/Gateway/service_isolation_time.png)
+
+
+
+## 4. DDL:
+```sql
+DROP TABLE IF EXISTS `linkis_ps_instance_label`;
+CREATE TABLE `linkis_ps_instance_label` (
+ `id` int(20) NOT NULL AUTO_INCREMENT,
+ `label_key` varchar(32) COLLATE utf8_bin NOT NULL COMMENT 'string key',
+ `label_value` varchar(255) COLLATE utf8_bin NOT NULL COMMENT 'string value',
+ `label_feature` varchar(16) COLLATE utf8_bin NOT NULL COMMENT 'store the feature of label, but it may be redundant',
+ `label_value_size` int(20) NOT NULL COMMENT 'size of key -> value map',
+ `update_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'update unix timestamp',
+ `create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'update unix timestamp',
+ PRIMARY KEY (`id`),
+ UNIQUE KEY `label_key_value` (`label_key`,`label_value`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+
+DROP TABLE IF EXISTS `linkis_ps_instance_info`;
+CREATE TABLE `linkis_ps_instance_info` (
+ `id` int(11) NOT NULL AUTO_INCREMENT,
+ `instance` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'structure like ${host|machine}:${port}',
+ `name` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'equal application name in registry',
+ `update_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'update unix timestamp',
+ `create_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'create unix timestamp',
+ PRIMARY KEY (`id`),
+ UNIQUE KEY `instance` (`instance`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+
+DROP TABLE IF EXISTS `linkis_ps_instance_label_relation`;
+CREATE TABLE `linkis_ps_instance_label_relation` (
+ `id` int(20) NOT NULL AUTO_INCREMENT,
+ `label_id` int(20) DEFAULT NULL COMMENT 'id reference linkis_ps_instance_label -> id',
+ `service_instance` varchar(128) NOT NULL COLLATE utf8_bin COMMENT 'structure like ${host|machine}:${port}',
+ `update_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'update unix timestamp',
+ `create_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'create unix timestamp',
+ PRIMARY KEY (`id`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+
+```
+## 5. How to use:
+
+### 5.1 add route label for entrance
+
+````
+echo "spring.eureka.instance.metadata-map.route=et1" >> $LINKIS_CONF_DIR/linkis-cg-entrance.properties
+sh $LINKIS_HOME/sbin/linkis-daemon.sh restart cg-entrance
+````
+
+
+### 5.2 Use route label
+submit task:
+````
+url:/api/v1/entrance/submit
+{
+ "executionContent": {"code": "echo 1", "runType": "shell"},
+ "params": {"variable": {}, "configuration": {}},
+ "source": {"scriptPath": "ip"},
+ "labels": {
+ "engineType": "shell-1",
+ "userCreator": "peacewong-IDE",
+ "route": "et1"
+ }
+}
+````
+will be routed to a fixed service:
+````
+{
+ "method": "/api/entrance/submit",
+ "status": 0,
+ "message": "OK",
+ "data": {
+ "taskID": 45158,
+ "execID": "exec_id018030linkis-cg-entrancelocalhost:9205IDE_peacewong_shell_0"
+ }
+}
+````
+
+or linkis-cli:
+
+````
+sh bin/linkis-cli -submitUser hadoop -engineType shell-1 -codeType shell -code "whoami" -labelMap route=et1 --gatewayUrl http://127.0.0.1:9101
+````
+
+### 5.3 Use non-existing label
+submit task:
+````
+url:/api/v1/entrance/submit
+{
+ "executionContent": {"code": "echo 1", "runType": "shell"},
+ "params": {"variable": {}, "configuration": {}},
+ "source": {"scriptPath": "ip"},
+ "labels": {
+ "engineType": "shell-1",
+ "userCreator": "peacewong-IDE",
+ "route": "et1"
+ }
+}
+````
+
+will get the error
+````
+{
+ "method": "/api/rest_j/v1/entrance/submit",
+ "status": 1,
+ "message": "GatewayErrorException: errCode: 11011 ,desc: Cannot route to the corresponding service, URL: /api/rest_j/v1/entrance/submit RouteLabel: [{\"stringValue\":\"et2\",\" labelKey\":\"route\",\"feature\":null,\"modifiable\":true,\"featureKey\":\"feature\",\"empty\":false}] ,ip: localhost ,port: 9101 ,serviceKind: linkis-mg-gateway",
+ "data": {
+ "data": "{\r\n \"executionContent\": {\"code\": \"echo 1\", \"runType\": \"shell\"},\r\n \"params \": {\"variable\": {}, \"configuration\": {}},\r\n \"source\": {\"scriptPath\": \"ip\"},\r\ n \"labels\": {\r\n \"engineType\": \"shell-1\",\r\n \"userCreator\": \"peacewong-IDE\",\r\n \" route\": \"et2\"\r\n }\r\n}"
+ }
+}
+````
+
+### 5.4 without label
+submit task:
+````
+url:/api/v1/entrance/submit
+{
+ "executionContent": {"code": "echo 1", "runType": "shell"},
+ "params": {"variable": {}, "configuration": {}},
+ "source": {"scriptPath": "ip"},
+ "labels": {
+ "engineType": "shell-1",
+ "userCreator": "peacewong-IDE"
+ }
+}
+````
+
+will be routed to entrance services without the label:
+````
+{
+ "method": "/api/entrance/submit",
+ "status": 0,
+ "message": "OK",
+ "data": {
+ "taskID": 45159,
+ "execID": "exec_id018018linkis-cg-entrancelocalhost2:9205IDE_peacewong_shell_0"
+ }
+}
+````
+
+## 6. Non-functional design:
+
+### 6.1 Security
+No security issues are involved; the RESTful interfaces require login authentication.
+
+### 6.2 Performance
+It has little impact on Gateway forwarding performance, since the corresponding label and instance data are cached.
+
+### 6.3 Capacity
+Not involved
+
+### 6.4 High Availability
+Not involved
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/architecture/task-flow.md b/versioned_docs/version-1.4.0/architecture/task-flow.md
new file mode 100644
index 00000000000..115bb5c4f18
--- /dev/null
+++ b/versioned_docs/version-1.4.0/architecture/task-flow.md
@@ -0,0 +1,188 @@
+---
+title: Task Flow Description
+sidebar_position: 0.1
+---
+
+
+> Task execution is the core function of Linkis. It invokes Linkis's three service tiers: computing governance services, public enhancement services, and microservice governance services. It currently supports executing tasks of OLAP, OLTP, streaming, and other engine types. This article introduces the submission, preparation, execution, and result-return process of tasks on OLAP-type engines.
+
+## Keywords:
+LinkisMaster: The management service in Linkis's computing governance service layer. It mainly includes several management and control services such as AppManager, ResourceManager, and LabelManager. Formerly known as the LinkisManager service.
+
+Entrance: The entry service in the computing governance service layer, which completes the functions of task scheduling, status control, task information push, etc.
+
+Orchestrator: Linkis's orchestration service, which provides powerful orchestration and computing strategy capabilities to meet the needs of application scenarios such as multi-active, active-standby, transaction, replay, rate limiting, and heterogeneous and mixed computing. At this stage, Orchestrator is relied on by the Entrance service.
+
+EngineConn (EC): Engine connector, responsible for accepting tasks and submitting them to underlying engines such as Spark, hive, Flink, Presto, trino, etc. for execution.
+
+EngineConnManager (ECM): Linkis' EC process management service, responsible for controlling the life cycle of EngineConn (start, stop).
+
+LinkisEnginePluginServer: This service is responsible for managing the startup materials and configuration of each engine, and also provides the startup command acquisition of each EngineConn, as well as the resources required by each EngineConn.
+
+PublicEnhancementService (PES): A public enhancement service, a module that provides functions such as unified configuration management, context service, material library, data source management, microservice management, and historical task query for other microservice modules.
+
+## 1. Linkis interactive task execution architecture
+### 1.1 Task execution considerations
+ The current Linkis 1.0 task execution architecture went through many evolutions: from the early days, when heavy FullGC crashed the service under a large number of users, to the question of how scripts developed by users can support multiple platforms, multiple tenants, strong control, and highly concurrent execution. Along the way we encountered the following problems:
+1. How to support tens of thousands of concurrent tenants and isolate them from each other?
+2. How to support a unified context, so that user-defined UDFs, custom variables, etc. can be used across multiple systems?
+3. How to support high availability so that the tasks submitted by users run normally?
+4. How to push the underlying engine's logs, progress, and status of a task to the front end in real time?
+5. How to support submitting multiple types of tasks: SQL, Python, Shell, Scala, Java, etc.?
+
+### 1.2 Linkis task execution design
+ Based on the above five questions, Linkis divides the OLAP task into four stages:
+1. Submission stage: an APP submits the task to Linkis's CG-Entrance service; the task is persisted (PS-JobHistory) and passed through the task interceptors (dangerous-syntax checks, variable substitution, parameter checks), and concurrency is controlled in a producer-consumer pattern;
+2. Preparation stage: the task is scheduled by the Scheduler in Entrance to the Orchestrator module for orchestration, and an EngineConn is requested from LinkisMaster; during this process the tenant's resources are managed and controlled;
+3. Execution stage: the task is submitted from the Orchestrator to the EngineConn, the EngineConn submits it to the underlying engine for execution and pushes the task information to the caller in real time;
+4. Result return stage: results are returned to the caller; result sets can be returned as JSON or as IO streams
+ The overall task execution architecture of Linkis is shown in the following figure:
+ ![arc](/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_arc.png)
+
+## 2. Introduction to the task execution process
+ First of all, let's give a brief introduction to the processing flow of OLAP tasks. An overall execution flow of the task is shown in the following figure:
+![flow](/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_flow.png)
+
+ The whole task involves all services of computing governance. After a task is forwarded through the Gateway to Linkis's entry service Entrance, it goes through multi-level scheduling (producer-consumer pattern) based on the task's labels, and task scheduling and execution are completed in FIFO mode. Entrance then submits the task to the Orchestrator for orchestration and submission. The Orchestrator requests an EC from LinkisMaster; during this process, resource management and engine version selection are performed through the task's labels. The Orchestrator then submits the orchestrated task to the EC for execution. The EC pushes the job log, progress, resource usage, and other information to the Entrance service, which pushes them on to the caller. Based on the figure above and the four stages of a task (submit, prepare, execute, and return), we now give a brief introduction to the execution process.
+
+
+### 2.1 Job submission stage
+ In the job submission stage, Linkis supports multiple types of tasks (SQL, Python, Shell, Scala, Java, etc.) and multiple submission interfaces (RESTful/JDBC/Python/Shell, etc.). A submitted task mainly includes the task code, labels, parameters, and other information. The following is a RESTful example:
+Initiating a Spark SQL task through the RESTful interface
+````JSON
+"method": "/api/rest_j/v1/entrance/submit",
+"data": {
+ "executionContent": {
+ "code": "select * from table01",
+ "runType": "sql"
+ },
+ "params": {
+ "variable": {// task variable
+ "testvar": "hello"
+ },
+ "configuration": {
+ "runtime": {// task runtime params
+ "jdbc.url": "XX"
+ },
+ "startup": { // ec start up params
+ "spark.executor.cores": "4"
+ }
+ }
+ },
+ "source": { //task source information
+ "scriptPath": "file:///tmp/hadoop/test.sql"
+ },
+ "labels": {
+ "engineType": "spark-2.4.3",
+ "userCreator": "hadoop-IDE"
+ }
+}
+````
+1. The task will first be submitted to Linkis's gateway linkis-mg-gateway service. Gateway will forward it to the corresponding Entrance service according to whether the task has a routeLabel. If there is no RouteLabel, it will be forwarded to an Entrance service randomly.
+2. After Entrance receives the job, it calls the JobHistory module of PES via RPC to persist the job information, parses the parameters and code to replace custom variables, and submits the job to the scheduler (FIFO scheduling by default). The scheduler groups tasks by label, and tasks with different labels do not affect each other's scheduling.
+3. After the task is consumed by the FIFO scheduler, it is submitted to the Orchestrator for orchestration and execution, which completes the submission stage of the task.
+ A brief description of the main classes involved:
+````
+EntranceRestfulApi: Controller class of entry service, operations such as task submission, status, log, result, job information, task kill, etc.
+EntranceServer: task submission entry, complete task persistence, task interception analysis (EntranceInterceptors), and submit to the scheduler
+EntranceContext: Entrance's context holding class, including methods for obtaining scheduler, task parsing interceptor, logManager, persistence, listenBus, etc.
+FIFOScheduler: FIFO scheduler for scheduling tasks
+EntranceExecutor: The scheduled executor, after the task is scheduled, it will be submitted to the EntranceExecutor for execution
+EntranceJob: The job task scheduled by the scheduler, and the JobRequest submitted by the user is parsed through the EntranceParser to generate a one-to-one correspondence with the JobRequest
+````
+The task status is now queued
+
+### 2.2 Job preparation stage
+ Entrance's scheduler generates different consumers to consume tasks according to the labels in the job. When a task is consumed and its status is changed to Running, it enters the preparation state and the preparation stage of the task begins. This stage mainly involves the following services: Entrance, LinkisMaster, EnginePluginServer, EngineConnManager, and EngineConn, which are introduced separately below.
+#### 2.2.1 Entrance steps:
+1. The consumer (FIFOUserConsumer) consumes tasks according to the concurrency configured for the corresponding label, and schedules the consumed tasks to the Orchestrator for execution
+2. The Orchestrator first orchestrates the submitted task. For ordinary single-engine Hive and Spark tasks this is mainly task parsing, label checking, and verification; for mixed computing across multiple data sources, the task is split into sub-tasks that are submitted to the different data sources for execution, etc.
+3. In the preparation stage, another important job of the Orchestrator is to request an EngineConn from LinkisMaster for executing the task. If LinkisMaster has a matching EngineConn that can be reused, it is returned directly; if not, an EngineConn is created.
+4. The Orchestrator obtains the EngineConn and submits the task to it for execution; the preparation stage ends and the job execution stage begins.
+ A brief description of the main classes involved:
+
+````
+## Entrance
+FIFOUserConsumer: The consumer of the scheduler. Different consumers are generated according to the labels, such as IDE-hadoop and spark-2.4.3, to consume the submitted tasks and control the number of tasks running at the same time; the concurrency is configured per label through wds.linkis.rm.instance
+DefaultEntranceExecutor: The entry point for task execution, which initiates a call to the orchestrator: callExecute
+JobReq: The task object accepted by the scheduler, converted from EntranceJob, mainly including code, label information, parameters, etc.
+OrchestratorSession: Similar to SparkSession, it is the entry point of the orchestrator. Normal singleton.
+Orchestration: The return object of the JobReq orchestrated by the OrchestratorSession, which supports execution and printing of execution plans, etc.
+OrchestrationFuture: Orchestration selects the return of asynchronous execution, including common methods such as cancel, waitForCompleted, and getResponse
+Operation: An interface used to extend operation tasks. Now LogOperation for obtaining logs and ProgressOperation for obtaining progress have been implemented.
+
+## Orchestrator
+CodeLogicalUnitExecTask: The execution entry of code type tasks. After the task is finally scheduled and run, the execute method of this class will be called. First, it will request EngineConn from LinkisMaster and then submit for execution.
+DefaultCodeExecTaskExecutorManager: EngineConn responsible for managing code types, including requesting and releasing EngineConn
+ComputationEngineConnManager: Responsible for connecting to LinkisMaster, and for requesting and releasing EngineConn
+````
+
+#### 2.2.2 LinkisMaster steps:
+
+1. LinkisMaster receives the EngineConn request from the Entrance service and processes it
+2. Determine whether there is an EngineConn matching the corresponding labels that can be reused, and return it directly if there is
+3. If not, enter the process of creating an EngineConn:
+- First select the appropriate EngineConnManager service through the labels.
+- Then obtain the resource type and resource usage of the requested EngineConn by calling EnginePluginServer.
+- According to the resource type and amount, determine whether the corresponding label still has resources; if so, proceed with creation, otherwise throw a retryable exception.
+- Request the EngineConnManager selected in the first step to start the EngineConn.
+- Wait for the EngineConn to become idle and return the created EngineConn; otherwise determine whether the exception can be retried.
+
+4. Lock the created EngineConn and return it to Entrance. Note that the EC request is asynchronous: Entrance receives a request ID after sending it, and when LinkisMaster finishes processing, it actively pushes the result back to the corresponding Entrance service through that ID.
+
+A brief description of the main classes involved:
+````
+## LinkisMaster
+EngineAskEngineService: The class in LinkisMaster responsible for handling engine requests. Its main logic is to judge, by calling EngineReuseService, whether there is an EngineConn that can be reused, and otherwise to call EngineCreateService to create an EngineConn
+EngineCreateService: Responsible for creating an EngineConn
+
+
+##LinkisEnginePluginServer
+EngineConnLaunchService: Provides ECM to obtain the startup information of the corresponding engine type EngineConn
+EngineConnResourceFactoryService: Provided to LinkisMaster to obtain the resources needed to start EngineConn corresponding to this task
+EngineConnResourceService: Responsible for managing engine materials, including refreshing a single engine material and refreshing all of them
+
+## EngineConnManager
+AbstractEngineConnLaunchService: Responsible for accepting the start request from LinkisMaster and completing the startup of the EngineConn engine
+ECMHook: Used to process the pre- and post-operations before and after the EngineConn is started, for example adding a Hive UDF jar to the classpath that the EngineConn is started with.
+````
+
+
+It should be noted here that if the user already has an available idle engine, the four steps 1, 2, 3, and 4 above are skipped.
+
+### 2.3 Job execution phase
+ When the orchestrator in the Entrance service gets the EngineConn, it enters the execution phase. CodeLogicalUnitExecTask will submit the task to the EngineConn for execution, and the EngineConn will create different executors through the corresponding CodeLanguageLabel for execution. The main steps are as follows:
+1. CodeLogicalUnitExecTask submits tasks to EngineConn via RPC
+2. EngineConn determines whether there is a corresponding CodeLanguageLabel executor, if not, create it
+3. The task is submitted to the Executor for execution, which connects to the specific underlying engine; for example, Spark submits SQL, PySpark, and Scala tasks through its SparkSession
+4. The task status flow is pushed to the Entrance service in real time
+5. By implementing log4jAppender, SendAppender pushes logs to Entrance service via RPC
+6. Push task progress and resource information to Entrance in real time through timed tasks
+
+A brief description of the main classes involved:
+````
+ComputationTaskExecutionReceiver: The service class used by the orchestrator on the Entrance side to receive all RPC requests from EngineConn; it receives the pushed progress, logs, status, and result sets and forwards them to the final caller through the ListenerBus mode
+TaskExecutionServiceImpl: The service class for EngineConn to receive all RPC requests from Entrance, including task execution, status query, task Kill, etc.
+ComputationExecutor: specific task execution parent class, such as Spark is divided into SQL/Python/Scala Executor
+ComputationExecutorHook: Hook before and after Executor creation, such as initializing UDF, executing default UseDB, etc.
+EngineConnSyncListener: ResultSetListener/TaskProgressListener/TaskStatusListener, used to monitor the status, result set, and progress of the Executor during task execution.
+SendAppender: Responsible for pushing logs from EngineConn to Entrance
+````
+### 2.4 Job result push stage
+ This stage is relatively simple and is mainly used to return the result set generated by the task in EngineConn to the Client. The main steps are as follows:
+1. First, when EngineConn executes the task, the result set will be written, and the corresponding path will be obtained by writing to the file system. Of course, memory cache is also supported, and files are written by default.
+2. EngineConn returns the corresponding result set path and the number of result sets to Entrance
+3. Entrance calls JobHistory to update the result set path information to the task table
+4. Client obtains the result set path through task information and reads the result set
+ A brief description of the main classes involved:
+````
+EngineExecutionContext: responsible for creating the result set and pushing the result set to the Entrance service
+ResultSetWriter: Responsible for writing result sets to the file systems supported by linkis-storage; currently local and HDFS are supported. Supported result set types include table, text, HTML, image, etc.
+JobHistory: Stores all the information of the task, including status, result path, indicator information, etc. corresponding to the entity class in the DB
+ResultSetReader: The key class for reading the result set
+````
+
+## 3. Summary
+ Above we mainly introduced the entire execution process of an OLAP task in the Linkis computing governance service group (CGS). Following the processing flow of a task request, the task is divided into four stages: submission, preparation, execution, and result return. CGS is designed and implemented around these four stages and provides powerful and flexible capabilities for each of them. In the submission stage, it mainly provides common interfaces, receives tasks submitted by upper-layer application tools, and provides basic parsing and interception capabilities. In the preparation stage, it mainly completes task parsing, scheduling, and resource control through the orchestrator Orchestrator and LinkisMaster, and creates the EngineConn. In the execution stage, the connection with the underlying engine is made through the engine connector EngineConn; usually each user needs to start a corresponding underlying engine connector EC for each different underlying engine. Computing tasks are submitted to the underlying engine for actual execution through the EC, and information such as status, logs, and results is obtained. In the result-return stage, the results of the task execution are returned, supporting multiple return modes such as file streams, JSON, and JDBC. The overall timing diagram is as follows:
+
+![time](/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_time.png)
diff --git a/versioned_docs/version-1.4.0/auth/_category_.json b/versioned_docs/version-1.4.0/auth/_category_.json
new file mode 100644
index 00000000000..fc663db2a7a
--- /dev/null
+++ b/versioned_docs/version-1.4.0/auth/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Security Authentication",
+ "position": 6.0
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/auth/kerberos.md b/versioned_docs/version-1.4.0/auth/kerberos.md
new file mode 100644
index 00000000000..5a1ecea8eb8
--- /dev/null
+++ b/versioned_docs/version-1.4.0/auth/kerberos.md
@@ -0,0 +1,97 @@
+---
+title: Kerberos
+sidebar_position: 5
+---
+
+## Kerberos authentication
+
+## Scenario 1 HDFS storage
+If a Hadoop cluster is used, for example to store result set files,
+```shell script
+# Path for result sets, logs, and other files, used to store the Job's result set files; corresponds to wds.linkis.filesystem.hdfs.root.path in linkis.properties
+HDFS_USER_ROOT_PATH=hdfs:///tmp/linkis
+```
+and Kerberos authentication is enabled on the cluster, the corresponding Kerberos configuration is required.
+
+Modify the corresponding configuration of `linkis.properties` as follows
+```properties
+#Whether the kerberos authentication mode is enabled
+wds.linkis.keytab.enable=true
+#Directory where keytab files are placed, storing the username.keytab files of multiple users
+wds.linkis.keytab.file=/appcom/keytab/
+#Whether principal authentication carries the client host; the default value is false
+wds.linkis.keytab.host.enabled=false
+#The client IP that principal authentication needs to carry
+wds.linkis.keytab.host=127.0.0.1
+```
+Restart the service after modification
+
+
+## Scenario 2 HDFS storage kerberos proxy authentication
+
+Hadoop has supported the ProxyUser mechanism since version 2.0. It means using the user authentication information of user A to access the Hadoop cluster in the name of user B.
+For the server, user B is considered to be accessing the cluster, and the corresponding authorization of access requests (including HDFS file system permissions and the permissions of the YARN queues that tasks are submitted to) is performed against user B.
+User A is considered a superuser.
+
+The main difference from Scenario 1 is that this avoids having to generate a keytab file for every user: when Kerberos proxy authentication is configured, the proxy (super) user's keytab file can be used for authentication.
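+
+Conceptually, the ProxyUser mechanism works as in the Hadoop-API sketch below. The principal, realm, keytab path, and proxied username are placeholders, and the cluster must additionally allow the superuser to impersonate others (`hadoop.proxyuser.<superuser>.hosts/groups` in core-site.xml); Linkis wraps this logic internally, so the sketch is only meant to illustrate the mechanism.
+
+```java
+import java.security.PrivilegedExceptionAction;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.security.UserGroupInformation;
+
+public class ProxyUserSketch {
+    public static void main(String[] args) throws Exception {
+        Configuration conf = new Configuration();
+        conf.set("hadoop.security.authentication", "kerberos");
+        UserGroupInformation.setConfiguration(conf);
+
+        // User A (the superuser) authenticates once with its own keytab.
+        UserGroupInformation superUser = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
+                "hadoop@EXAMPLE.COM", "/appcom/keytab/hadoop.keytab");
+
+        // Access HDFS in the name of user B ("alice") using user A's credentials.
+        UserGroupInformation proxyUgi = UserGroupInformation.createProxyUser("alice", superUser);
+        proxyUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
+            FileSystem fs = FileSystem.get(conf); // requests are authorized as "alice"
+            fs.mkdirs(new Path("/tmp/linkis/alice"));
+            return null;
+        });
+    }
+}
+```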
+Modify the corresponding configuration of `linkis.properties` as follows
+
+```properties
+#Whether the kerberos authentication mode is enabled
+wds.linkis.keytab.enable=true
+#Directory where keytab files are placed, storing the username.keytab files of multiple users
+wds.linkis.keytab.file=/appcom/keytab/
+#Whether principal authentication carries the client host; the default value is false
+wds.linkis.keytab.host.enabled=false
+#The client IP that principal authentication needs to carry
+wds.linkis.keytab.host=127.0.0.1
+
+#Enable kerberos proxy authentication
+wds.linkis.keytab.proxyuser.enable=true
+
+#Use superuser to verify user authentication information
+wds.linkis.keytab.proxyuser.superuser=hadoop
+
+
+
+```
+Restart the service after modification
+
+## Scenario 3 Queue manager checks yarn resource information
+![yarn-normal](/Images-zh/auth/yarn-normal.png)
+Linkis accesses the REST API provided by the Yarn ResourceManager.
+If the Yarn ResourceManager has Kerberos authentication enabled, you need to configure the Kerberos-related authentication information.
+
+Insert the Yarn information into the database table `linkis_cg_rm_external_resource_provider`:
+```sql
+INSERT INTO `linkis_cg_rm_external_resource_provider`
+(`resource_type`, `name`, `labels`, `config`) VALUES
+('Yarn', 'sit', NULL,
+'
+ {
+ "rmWebAddress": "http://xx.xx.xx.xx:8088",
+ "hadoopVersion": "2.7.2",
+ "authorEnable": false,
+ "user":"hadoop","pwd":"123456",
+ "kerberosEnable":@YARN_KERBEROS_ENABLE,
+ "principalName": "@YARN_PRINCIPAL_NAME",
+ "keytabPath": "@YARN_KEYTAB_PATH"
+ "krb5Path": "@YARN_KRB5_PATH"
+ }
+'
+);
+
+```
+After the update, because a cache is used in the program, you need to restart the `linkis-cg-linkismanager` service if you want the change to take effect immediately
+
+```shell script
+sh sbin/linkis-daemon.sh restart cg-linkismanager
+```
+
+
+
+## Scenario 4 The hive data source in the data source function
+
+If the Hive data source to be connected has Kerberos authentication enabled in its cluster environment, you need to upload the Kerberos keytab and related authentication files when configuring the cluster environment.
+![image](/Images-zh/auth/dsm-kerberos.png)
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/auth/ldap.md b/versioned_docs/version-1.4.0/auth/ldap.md
new file mode 100644
index 00000000000..844a8095e24
--- /dev/null
+++ b/versioned_docs/version-1.4.0/auth/ldap.md
@@ -0,0 +1,50 @@
+---
+title: LDAP
+sidebar_position: 1
+---
+> After the default installation and deployment, only the configured static username and password login is supported (and only one can be configured). If you need to support multi-user login, you can use LDAP (Lightweight Directory Access Protocol).
+
+## 1. Implementation logic introduction
+
+The default way to configure `linkis-mg-gateway.properties`
+
+```properties
+#default username
+wds.linkis.admin.user=hadoop
+#default password
+wds.linkis.admin.password=123456
+```
+
+During login request processing in `org.apache.linkis.gateway.security.UserPwdAbstractUserRestful#tryLogin`,
+if the login username/password does not match the configured default values, LDAP mode is used.
+The core LDAP processing, `org.apache.linkis.gateway.security.LDAPUserRestful#login`, authenticates by calling the JDK's general LDAP utility class
+`javax.naming.ldap.InitialLdapContext#InitialLdapContext(java.util.Hashtable<?, ?>, javax.naming.ldap.Control[])`.
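+
+As a reference, the JNDI-based check that such a login boils down to looks roughly like the sketch below. The URL and DN format are illustrative; the actual principal used by Linkis is presumably derived from the configured `baseDN`/`userNameFormat`.
+
+```java
+import java.util.Hashtable;
+import javax.naming.Context;
+import javax.naming.NamingException;
+import javax.naming.ldap.InitialLdapContext;
+
+public class LdapLoginCheck {
+    /** Returns true if the user DN and password can bind to the LDAP server (simple authentication). */
+    public static boolean canLogin(String ldapUrl, String userDn, String password) {
+        Hashtable<String, String> env = new Hashtable<>();
+        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
+        env.put(Context.PROVIDER_URL, ldapUrl);             // e.g. ldap://localhost:1389/
+        env.put(Context.SECURITY_AUTHENTICATION, "simple"); // the same mode noted in section 3
+        env.put(Context.SECURITY_PRINCIPAL, userDn);        // e.g. cn=hadoop,dc=linkis,dc=org
+        env.put(Context.SECURITY_CREDENTIALS, password);
+        try {
+            new InitialLdapContext(env, null).close();      // a successful bind means the credentials are valid
+            return true;
+        } catch (NamingException e) {
+            return false;
+        }
+    }
+}
+```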
+
+
+## 2. How to use
+
+> The premise is that there is an available LDAP service
+
+### 2.1 Step1 Enable the LDAP login password verification method
+
+Modify `linkis-mg-gateway.properties` configuration
+
+Fill in LDAP related parameters
+```properties
+##LDAP
+#ldap service address
+wds.linkis.ldap.proxy.url=ldap://localhost:1389/
+#Base Directory Name (DN) composition of the LDAP directory
+wds.linkis.ldap.proxy.baseDN=dc=linkis,dc=org
+#Username format; generally no configuration is required
+wds.linkis.ldap.proxy.userNameFormat=
+```
+### 2.2 Step2 Restart the service of linkis-mg-gateway
+
+After modifying the configuration, you need to restart the `linkis-mg-gateway` service with `sh sbin/linkis-daemon.sh restart mg-gateway` for it to take effect
+
+## 3 Notes
+
+- The authentication type uses the simple mode of `java.naming.security.authentication` (security type; the three possible values are none, simple, or strong)
+
+- For an introduction to LDAP, please refer to [LDAP directory server introduction](https://juejin.cn/post/6844903857311449102)
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/auth/proxy.md b/versioned_docs/version-1.4.0/auth/proxy.md
new file mode 100644
index 00000000000..65187a0e89f
--- /dev/null
+++ b/versioned_docs/version-1.4.0/auth/proxy.md
@@ -0,0 +1,57 @@
+---
+title: proxy authentication
+sidebar_position: 4
+---
+
+
+> This method allows the login user to be different from the user who actually executes tasks. Its main purpose is to require a real-name user at login time while using a non-real-name user when actually working on the big data platform, which makes permission verification and control easier.
+> For example: when Linkis executes a task submitted by a user, the Linkis main process service switches to the corresponding user via sudo -u ${submit user} and then executes the corresponding engine start command.
+> This requires creating a corresponding system user for each ${submit user} in advance and configuring the related environment variables; for new users, a series of environment initialization preparations is required.
+> Frequent user changes increase operation and maintenance costs, and with too many users it is impossible to configure resources for each one or to control resources well. If a specified proxy user can execute on the login user's behalf, the execution entry can be converged uniformly and the environment-initialization problem solved.
+
+## 1. Implementation logic introduction
+
+
+- Login user: the user who logs in to the system directly with a username and password
+- Proxy user: the user who actually performs operations on behalf of the login user; the login user's operations are carried out as this proxy user
+
+When processing the login cookies, the login user and the proxy user are parsed out.
+
+```html
+The key of the proxy user's cookie is: linkis_user_session_proxy_ticket_id_v1
+Login user cookie: linkis_user_session_ticket_id_v1
+
+```
+The relevant Linkis interfaces identify the proxy user based on the username information and use the proxy user to perform the various operations, while recording audit logs that include the user's task execution and download operations.
+When a task is submitted for execution, the Entrance service changes the executing user to the proxy user.
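+
+A minimal sketch of this login-user to proxy-user mapping, assuming the `proxy.properties` format shown in section 2.1 below; the class and method names are hypothetical and not the actual Linkis implementation.
+
+```java
+import java.io.FileReader;
+import java.io.IOException;
+import java.util.Properties;
+
+public class ProxyUserResolver {
+    private final Properties mapping = new Properties();
+
+    public ProxyUserResolver(String proxyPropertiesPath) throws IOException {
+        try (FileReader reader = new FileReader(proxyPropertiesPath)) {
+            mapping.load(reader); // lines of the form loginUser=proxyUser, e.g. enjoyyin=hadoop
+        }
+    }
+
+    /** Returns the user that tasks should actually be executed as. */
+    public String resolveExecuteUser(String loginUser) {
+        return mapping.getProperty(loginUser, loginUser); // fall back to the login user itself
+    }
+}
+```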
+
+## 2. How to use
+
+### 2.1 Step1 Turn on proxy mode
+Specify the following parameters in `linkis.properties`:
+```shell script
+# Turn on proxy mode
+ wds.linkis.gateway.conf.enable.proxy.user=true
+ # Specify the proxy configuration file
+ wds.linkis.gateway.conf.proxy.user.config=proxy.properties
+```
+
+
+In the conf directory, create a `proxy.properties` file with the following content:
+```shell script
+# The format is as follows:
+ ${LOGIN_USER}=${PROXY_USER}
+ # For example:
+ enjoyyin=hadoop
+```
+If the existing proxy mode cannot meet your needs, you can also further modify: `org.apache.linkis.gateway.security.ProxyUserUtils`.
+
+### 2.2 Step2 Restart the service of linkis-mg-gateway
+
+After modifying the configuration, you need to restart the `linkis-mg-gateway` service with `sh sbin/linkis-daemon.sh restart mg-gateway` for it to take effect
+
+## 3 Notes
+
+- Users are divided into proxy users and non-proxy users; a proxy user cannot be further proxied to another user for execution
+- It is necessary to control the list of login users and system users that can be proxied, to prohibit arbitrary proxying and avoid uncontrollable permissions. It is best to support configuration via a database table so that changes can take effect without restarting the service
+- A separate log file records the operations of proxy users, such as proxy execution and function updates. Proxy user operations in PublicService are all recorded in the log, which is convenient for auditing
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/auth/test.md b/versioned_docs/version-1.4.0/auth/test.md
new file mode 100644
index 00000000000..1ac134498d5
--- /dev/null
+++ b/versioned_docs/version-1.4.0/auth/test.md
@@ -0,0 +1,76 @@
+---
+title: Password-Free
+sidebar_position: 3
+---
+> In some scenarios, to facilitate development and debugging and to conveniently access pages and interfaces, you can enable test mode for password-free authentication
+
+## 1. Implementation logic introduction
+
+Control through unified authentication processing filter: `org.apache.linkis.server.security.SecurityFilter`
+
+configuration item
+```properties
+# Whether to enable test mode
+wds.linkis.test.mode=true
+# Simulated user name for test mode
+wds.linkis.test.user=hadoop
+```
+Implemented pseudocode
+```scala
+val BDP_TEST_USER = CommonVars("wds.linkis.test.user", "")
+val IS_TEST_MODE = CommonVars("wds.linkis.test.mode", false)
+
+if (IS_TEST_MODE.getValue) {
+ logger.info("test mode! login for uri: " + request.getRequestURI)
+ // Set the login user information to the user specified in the configuration
+ SecurityFilter.setLoginUser(response, BDP_TEST_USER)
+ true
+}
+```
+
+## 2. How to use
+
+### 2.1 Step1 Open the test mode
+Directly modify the configuration file `linkis.properties` (effective for all linkis services), modify the corresponding configuration as follows
+```shell script
+# Whether to enable test mode
+wds.linkis.test.mode=true
+# Simulated user name for test mode
+wds.linkis.test.user=hadoop
+```
+
+If you only need to enable the test mode of a certain service, you can modify the corresponding service configuration item.
+For example: only enable the test mode of `entrance` service
+Directly modify the configuration file `linkis-cg-entrance.properties` (effective for the entry service of linkis), modify the corresponding configuration as follows
+```shell script
+# Whether to enable test mode
+wds.linkis.test.mode=true
+# Simulated user name for test mode
+wds.linkis.test.user=hadoop
+```
+
+### 2.2 Step2 Restart the corresponding service
+
+After modifying the configuration, you need to restart the service to take effect
+
+
+### 2.3 Step3 request verification
+
+After successfully restarting the service, you can directly request the http interface that originally required authentication, and you can request normally without additional authentication.
+The management console can also access the content page without login authentication
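+
+For example, with the JDK's built-in `java.net.http.HttpClient`, an interface that normally requires a login cookie can be called directly; the gateway address is an example value, and the interface may still require its own query parameters.
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+public class TestModeCheck {
+    public static void main(String[] args) throws Exception {
+        // With wds.linkis.test.mode=true, no cookie or token header is needed;
+        // the request is treated as coming from wds.linkis.test.user.
+        HttpRequest request = HttpRequest.newBuilder()
+                .uri(URI.create("http://127.0.0.1:9001/api/rest_j/v1/linkisManager/ecinfo/ecrHistoryList"))
+                .GET()
+                .build();
+        HttpResponse<String> response = HttpClient.newHttpClient()
+                .send(request, HttpResponse.BodyHandlers.ofString());
+        System.out.println(response.statusCode() + " " + response.body());
+    }
+}
+```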
+
+
+## 3 Notes
+
+### 3.1 Value setting of wds.linkis.test.user
+Because some interfaces perform role-based permission verification of the user, such as the [Search historical EC information] interface `/api/rest_j/v1/linkisManager/ecinfo/ecrHistoryList`,
+the roles are:
+
+|role name | permission description | configuration item | default value |
+| -------- | -------- | ----- |----- |
+|Administrator role|The highest authority, has all authority operations|`wds.linkis.governance.station.admin`|`hadoop`|
+|Historical task role|Compared with ordinary users, you can also view all task list information of other users|`wds.linkis.jobhistory.admin`|`hadoop`|
+|Normal role|Default role|||
+
+For tests in different scenarios, the set value of `wds.linkis.test.user` will be different and needs to be set according to the actual scenario.
+If you need to access all interfaces, you need to configure it to the same value as `wds.linkis.governance.station.admin`, usually `hadoop`
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/auth/token.md b/versioned_docs/version-1.4.0/auth/token.md
new file mode 100644
index 00000000000..aaac0e3f0a0
--- /dev/null
+++ b/versioned_docs/version-1.4.0/auth/token.md
@@ -0,0 +1,113 @@
+---
+title: Token
+sidebar_position: 2
+---
+
+> When a third-party system calls the Linkis services, it usually authenticates through a token
+
+## 1. Implementation logic introduction
+
+Control through unified authentication processing filter: `org.apache.linkis.server.security.SecurityFilter`
+
+Implemented pseudocode
+```scala
+
+val TOKEN_KEY = "Token-Code"
+val TOKEN_USER_KEY = "Token-User"
+
+/* TokenAuthentication.isTokenRequest by judging the request request:
+ 1. Whether the request header contains TOKEN_KEY and TOKEN_USER_KEY: getHeaders.containsKey(TOKEN_KEY) && getHeaders.containsKey(TOKEN_USER_KEY)
+ 2. Or request whether TOKEN_KEY and TOKEN_USER_KEY are included in the cookies: getCookies.containsKey(TOKEN_KEY) &&getCookies.containsKey(TOKEN_USER_KEY)
+*/
+
+if (TokenAuthentication.isTokenRequest(gatewayContext)) {
+ /* Perform token authentication
+ 1. Confirm whether to enable the token authentication configuration item `wds.linkis.gateway.conf.enable.token.auth`
+ 2. Extract the token, tokenUser, and host information for authentication and verify their validity
+ */
+ TokenAuthentication.tokenAuth(gatewayContext)
+ } else {
+ //Common username and password authentication
+}
+```
+Available tokens and the corresponding IP-related information are stored in the table `linkis_mg_gateway_auth_token`;
+see the [table analysis description](../development/table/all#16-linkis_mg_gateway_auth_token) for details. They are not updated in real time,
+but are periodically refreshed into the service memory every `wds.linkis.token.cache.expire.hour` (default interval 12 hours)
+
+
+## 2. How to use
+
+### 2.1 New Token
+
+Management console `Basic Data Management > Token Management` to add
+
+```text
+Name: the token name, corresponding to Token-Code, e.g. TEST-AUTH
+User: the username bound to the token, i.e. the requesting user as perceived by Linkis; used for log auditing. Configure as * if there is no restriction
+Host: the hosts allowed to use the token; the requester's IP is verified and filtered against it. Configure as * if there is no restriction
+Valid days: configure as -1 for a permanently valid token
+```
+
+### 2.2 Native way
+When constructing the HTTP request yourself, add the `Token-Code` and `Token-User` parameters to the request header.
+
+#### Example
+
+Request address:
+`http://127.0.0.1:9001/api/rest_j/v1/entrance/submit`
+
+body parameter:
+```json
+{
+ "executionContent": {"code": "sleep 5s;echo pwd", "runType": "shell"},
+ "params": {"variable": {}, "configuration": {}},
+ "source": {"scriptPath": "file:///mnt/bdp/hadoop/1.hql"},
+ "labels": {
+ "engineType": "shell-1",
+ "userCreator": "hadoop-IDE",
+ "executeOnce": "false"
+ }
+}
+```
+
+Request headers:
+```text
+Content-Type: application/json
+Token-Code: BML-AUTH
+Token-User: hadoop
+```
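+
+Putting the above together, a minimal curl sketch (assuming the gateway at `127.0.0.1:9001` and that a token named `BML-AUTH` allowing user `hadoop` exists):
+
+```shell script
+curl -X POST "http://127.0.0.1:9001/api/rest_j/v1/entrance/submit" \
+  -H "Content-Type: application/json" \
+  -H "Token-Code: BML-AUTH" \
+  -H "Token-User: hadoop" \
+  -d '{
+        "executionContent": {"code": "sleep 5s;echo pwd", "runType": "shell"},
+        "params": {"variable": {}, "configuration": {}},
+        "source": {"scriptPath": "file:///mnt/bdp/hadoop/1.hql"},
+        "labels": {"engineType": "shell-1", "userCreator": "hadoop-IDE", "executeOnce": "false"}
+      }'
+```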
+
+### 2.3 The client uses token authentication
+
+The client authentication mechanisms provided by linkis all support the token strategy `new TokenAuthenticationStrategy()`
+
+For details, please refer to [SDK method](../user-guide/sdk-manual)
+
+#### Example
+```java
+// 1. build config: linkis gateway url
+DWSClientConfig clientConfig = ((DWSClientConfigBuilder) (DWSClientConfigBuilder.newBuilder()
+        .addServerUrl("http://127.0.0.1:9001/")                       // set linkis-mg-gateway url: http://{ip}:{port}
+        .connectionTimeout(30000)                                     // connection timeout in ms
+        .discoveryEnabled(false)                                      // disable discovery
+        .discoveryFrequency(1, TimeUnit.MINUTES)                      // discovery frequency
+        .loadbalancerEnabled(true)                                    // enable load balancing
+        .maxConnectionSize(5)                                         // max connections
+        .retryEnabled(false)                                          // disable retry
+        .readTimeout(30000)                                           // read timeout in ms
+        .setAuthenticationStrategy(new TokenAuthenticationStrategy()) // token authentication strategy
+        .setAuthTokenKey("Token-Code")                                // set token key
+        .setAuthTokenValue("DSM-AUTH")                                // set token value
+        .setDWSVersion("v1")))                                        // linkis rest version v1
+        .build();
+```
+
+## 3 Notes
+
+### 3.1 token configuration
+The supported tokens and their allowed users/requester IPs are controlled by the table `linkis_mg_gateway_auth_token`.
+The table is not loaded in real time; a caching mechanism is used
+
+### 3.2 Administrator permission token
+High-risk operations are restricted and require a token with the administrator role;
+the format of an administrator token is `admin-xxx`
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/deployment/_category_.json b/versioned_docs/version-1.4.0/deployment/_category_.json
new file mode 100644
index 00000000000..4e2eb893599
--- /dev/null
+++ b/versioned_docs/version-1.4.0/deployment/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Deployment",
+ "position": 3.0
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/deployment/deploy-cluster.md b/versioned_docs/version-1.4.0/deployment/deploy-cluster.md
new file mode 100644
index 00000000000..1e6ab05948b
--- /dev/null
+++ b/versioned_docs/version-1.4.0/deployment/deploy-cluster.md
@@ -0,0 +1,173 @@
+---
+title: Cluster Deployment
+sidebar_position: 1.1
+---
+
+The stand-alone deployment of Linkis is simple, but running too many processes on the same server puts too much pressure on it. To ensure high service availability in a production environment, a split (distributed) deployment is recommended.
+The choice of deployment plan depends on the company's user scale, usage habits, and the number of simultaneous users of the cluster. In general, the number of simultaneous Linkis users and their preferred execution engines determine the deployment method.
+
+## 1. Computational model reference for multi-node deployment
+
+Each Linkis microservice supports a multi-active deployment. Different microservices play different roles in the system: some are called frequently and put their resources under high load.
+**On machines where EngineConnManager is installed, users' engine processes are started, so the memory load will be relatively high, while the load of other types of microservices on the machine is relatively low.**
+For this type of microservice we recommend deploying multiple instances across machines. The total resources dynamically used by Linkis can be calculated as follows.
+
+Total resources used by **EngineConnManager**
+= total memory + total cores
+= **number of simultaneous online users \* (memory occupied by all engine types) \* maximum concurrency per user + number of simultaneous online users \* (cores occupied by all engine types) \* maximum concurrency per user**
+
+For example:
+```html
+
+Suppose only the spark, hive and python engines are used, the maximum concurrency per user is 1, and there are 50 simultaneous users.
+The spark driver memory is 1G, the hive client memory is 1G, and the python client memory is 1G; each engine uses 1 core.
+
+Total resources used by EngineConnManager (ECM)
+= 50 * (1+1+1) G * 1 + 50 * (1+1+1) cores * 1
+= 150G memory + 150 CPU cores
+```
+
+During distributed deployment, the memory occupied by each microservice itself can be estimated at 2G. With a large number of users, it is recommended to increase the memory of ps-publicservice to 6G and to reserve 10G of memory as a buffer.
+
+The following reference configurations assume that **each user starts two engines at the same time** and that **each machine has 64G of memory**:
+
+### 1.1 Simultaneous online users: 10-50
+Total resources used by **EngineConnManager** = total memory + total cores =
+**number of simultaneous online users \* (memory occupied by all engine types) \* maximum concurrency per user + number of simultaneous online users \* (cores occupied by all engine types) \* maximum concurrency per user**
+
+Total memory: 50 simultaneous online users * 1G memory per engine * 2 engines per user = 100G memory
+
+> **Recommended server configuration**: 4 servers, named S1, S2, S3, S4
+
+| Service | Host name | Remark |
+|----------------------|-----------|------------------|
+| cg-engineconnmanager | S1, S2 (128G in total) | Deployed separately on each machine |
+| Other services | S3, S4 | Eureka high-availability deployment |
+
+### 1.2 Simultaneous online users: 50-100
+
+Total memory: 100 simultaneous online users * 1G memory per engine * 2 engines per user = 200G memory
+
+> **Recommended server configuration**: 6 servers, named S1, S2, S3, S4, S5, S6
+
+| Service | Host name | Remark |
+|----------------------|-----------|------------------|
+| cg-engineconnmanager | S1-S4 (256G in total) | Deployed separately on each machine |
+| Other services | S5, S6 | Eureka high-availability deployment |
+
+### 1.3 Simultaneous online users: 100-300
+
+Total memory: 300 simultaneous online users * 1G memory per engine * 2 engines per user = 600G memory
+
+> **Recommended server configuration**: 12 servers, named S1, S2, ..., S12
+
+| Service | Host name | Remark |
+|----------------------|-----------|------------------|
+| cg-engineconnmanager | S1-S10 (640G in total) | Deployed separately on each machine |
+| Other services | S11, S12 | Eureka high-availability deployment |
+
+### 1.4 Simultaneous online users: 300-500
+
+> **Recommended server configuration**: 20 servers, named S1, S2, ..., S20
+
+| Service | Host name | Remark |
+|----------------------|-----------|------------------|
+| cg-engineconnmanager | S1-S18 | Deployed separately on each machine |
+| Other services | S19, S20 | Eureka high-availability deployment; if the request volume reaches tens of thousands, consider scaling out some microservices. The current dual-instance deployment can support thousands of online users |
+
+### 1.5 More than 500 simultaneous online users
+> Estimated based on 800 simultaneous online users
+> **Recommended server configuration**: 34 servers, named S1, S2, ..., S34
+
+| Service | Host name | Remark |
+|----------------------|-----------|------------------|
+| cg-engineconnmanager | S1-S32 | Deployed separately on each machine |
+| Other services | S33, S34 | Eureka high-availability deployment; if the request volume reaches tens of thousands, consider scaling out some microservices. The current dual-instance deployment can support thousands of online users |
+
+## 2. Process of distributed deployment
+
+>All services of Linkis support distributed and multi-cluster deployment. It is recommended to complete stand-alone deployment on one machine before distributed deployment, and ensure the normal use of Linkis functions.
+
+At present, the one-click installation script does not support distributed deployment well, so manual adjustment and deployment are required. The following steps describe a distributed deployment, assuming the stand-alone deployment has already been completed on machine A.
+
+
+### 2.1 Environment preparation for distributed deployment
+Like server A, server B needs the same basic environment preparation; please refer to [Linkis environment preparation](deploy-quick#3-linkis%E7%8E%AF%E5%A2%83%E5%87%86%E5%A4%87)
+
+**Network Check**
+
+Check that the machines used for distributed deployment can reach each other; the ping command can be used for this check
+```
+ping IP
+```
+
+**Permission check**
+
+Check whether a hadoop user exists on each machine and whether the hadoop user has sudo permission.
+
+**Required Environmental Checks**
+
+Each linkis service depends on some basic environment components at startup or when tasks are executed. Please check the basic environment of each machine according to the table below. For specific inspection methods, refer to [Linkis environment preparation](deploy-quick#3-linkis%E7%8E%AF%E5%A2%83%E5%87%86%E5%A4%87)
+
+|Service Name|Dependency Environment|
+|-|-|
+|mg-eureka|Java|
+|mg-gateway|Java|
+|ps-publicservice|Java, Hadoop|
+|cg-linkismanager|Java|
+|cg-entrance|Java|
+|cg-engineconnmanager|Java, Hive, Spark, Python, Shell|
+
+
+Note: if you need to use other non-default engines, also check that the corresponding engine environment is available on the machines where the cg-engineconnmanager service is deployed. Refer to each engine's page in the [engine usage](https://linkis.apache.org/zh-CN/docs/latest/engine-usage/overview) documentation for the prerequisite checks.
+
+### 2.2 Eureka multi-active configuration adjustment
+
+Modify the Eureka configuration on machine A to add the Eureka addresses of all machines, so that the Eureka services register with each other.
+Taking two Eureka instances as an example, make the following configuration changes on server A.
+
+```
+Modify the $LINKIS_HOME/conf/application-eureka.yml and $LINKIS_HOME/conf/application-linkis.yml configuration
+
+eureka:
+  client:
+    serviceUrl:
+      defaultZone: http://eurekaIp1:port1/eureka/,http://eurekaIp2:port2/eureka/
+
+
+Modify the $LINKIS_HOME/conf/linkis.properties configuration
+
+wds.linkis.eureka.defaultZone=http://eurekaIp1:port1/eureka/,http://eurekaIp2:port2/eureka/
+```
+
+### 2.3 Synchronization of installation materials
+Create the same `$LINKIS_HOME` directory on all other machines as on machine A. On server A, package the successfully installed linkis directory `$LINKIS_HOME`, then copy it to the same directory on the other machines and decompress it.
+At this point, if the `sbin/linkis-start-all.sh` script is executed on server A and on the other machines, every service will have n instances, where n is the number of machines. You can check this on the eureka display page `http://eurekaIp1:port1` or `http://eurekaIp2:port2`.
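+
+A packaging-and-copy sketch, assuming `$LINKIS_HOME=/appcom/Install/LinkisInstall` on both machines and SSH access as the hadoop user (the host name `serverB` is a placeholder for your own machine):
+
+```
+# On server A: package the installed directory
+cd /appcom/Install && tar -czf linkis-install.tar.gz LinkisInstall
+# Copy to server B and unpack into the same parent directory
+scp linkis-install.tar.gz hadoop@serverB:/appcom/Install/
+ssh hadoop@serverB "cd /appcom/Install && tar -xzf linkis-install.tar.gz"
+```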
+
+### 2.4 Adjust startup script
+Determine which Linkis services need to be deployed on each machine according to the actual situation.
+For example, if the microservice `linkis-cg-engineconnmanager` will not be deployed on server A,
+modify server A's one-click start/stop scripts `sbin/linkis-start-all.sh` and `sbin/linkis-stop-all.sh`, and comment out the start/stop commands related to the `cg-engineconnmanager` service:
+```html
+sbin/linkis-start-all.sh
+#linkis-cg-engineconnmanager(ecm)
+#SERVER_NAME="cg-engineconnmanager"
+#SERVER_IP=$ENGINECONNMANAGER_INSTALL_IP
+#startApp
+
+sbin/linkis-stop-all.sh
+#linkis-cg-engineconnmanager(ecm)
+#SERVER_NAME="cg-engineconnmanager"
+#SERVER_IP=$ENGINECONNMANAGER_INSTALL_IP
+#stopApp
+
+```
+
+## 3. Notes
+- When deploying across machines, it is recommended to keep the linkis installation directory consistent to facilitate unified management, and preferably keep the relevant configuration files consistent as well
+- If a port on a server is already occupied by another application and cannot be used, you need to adjust the service port
+- The multi-active deployment of mg-gateway currently does not support distributed login sessions, so a given user's requests need to be routed to the same gateway instance; this can be achieved with nginx's ip_hash load balancing
+- The one-click start/stop scripts should be adjusted according to the actual situation: for microservices that are not deployed on a given server, comment out the corresponding start/stop commands in that server's one-click scripts.
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/deployment/deploy-console.md b/versioned_docs/version-1.4.0/deployment/deploy-console.md
new file mode 100644
index 00000000000..32d27162cf5
--- /dev/null
+++ b/versioned_docs/version-1.4.0/deployment/deploy-console.md
@@ -0,0 +1,126 @@
+---
+title: Console Deployment
+sidebar_position: 1.2
+---
+```
+The linkis web service uses nginx as the static resource server. The access request flow is as follows:
+Linkis management console request -> nginx ip:port -> linkis-gateway ip:port -> other services
+```
+Linkis 1.0 provides a management console, which offers functions such as displaying Linkis' global history, modifying user parameters, and managing ECMs and microservices. Before deploying the front-end console, you need to deploy the Linkis back-end; see the [Linkis Deployment Manual](deploy-quick.md)
+
+## 1. Preparation
+
+1. Download the web installation package, apache-linkis-x.x.x-incubating-web-bin.tar.gz, from the Linkis release page ([click here to go to the download page](https://linkis.apache.org/download/main/))
+Manually decompress it: tar -xvf apache-linkis-x.x.x-incubating-web-bin.tar.gz
+
+The decompressed directory is as follows:
+```
+├── config.sh
+├── dist
+├── install.sh
+├── LICENSE
+├── licenses
+└── NOTICE
+```
+
+## 2. Deployment
+There are two deployment methods: automated deployment and manual deployment.
+
+### 2.1 Automated deployment
+#### 2.1.1 Modify the config.sh file (using vim or nano)
+
+```shell script
+#Configure the front-end port
+linkis_port="8088"
+
+#URL of the backend linkis gateway
+linkis_url="http://localhost:9001"
+
+#linkis ip address, replace `127.0.0.1` with the real ip address if necessary
+linkis_ipaddr=127.0.0.1
+```
+
+#### 2.1.2 Execute the deployment script
+
+ ```shell
+ #sudo permission is required to install nginx
+ sudo sh install.sh
+ ```
+
+After execution, you can access it directly in a browser at ```http://linkis_ipaddr:linkis_port```, where linkis_port is the port configured in config.sh and linkis_ipaddr is the IP of the installation machine.
+
+If the access fails, check the installation log to see which step went wrong.
+
+### 2.2 Manual deployment
+1. Install Nginx: ```sudo yum install nginx -y```
+
+2. Modify the configuration file: sudo vi /etc/nginx/conf.d/linkis.conf
+Add the following content:
+```
+server {
+ listen 8080;# access port
+ server_name localhost;
+ #charset koi8-r;
+ #access_log /var/log/nginx/host.access.log main;
+ location / {
+ root /appcom/Install/linkis/dist; # The directory where the front-end package is decompressed
+            index index.html index.htm;
+ }
+
+ location /api {
+ proxy_pass http://192.168.xxx.xxx:9001; # ip port of linkis-gateway service
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header x_real_ipP $remote_addr;
+ proxy_set_header remote_addr $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_http_version 1.1;
+ proxy_connect_timeout 4s;
+ proxy_read_timeout 600s;
+ proxy_send_timeout 12s;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection upgrade;
+ }
+ #error_page 404 /404.html;
+ # redirect server error pages to the static page /50x.html
+ #
+ error_page 500 502 503 504 /50x.html;
+ location = /50x.html {
+ root /usr/share/nginx/html;
+ }
+ }
+
+```
+
+3. Copy the front-end package to the corresponding directory: ```/appcom/Install/linkis/dist``` (the directory where the front-end package is decompressed)
+
+4. Start the service ```sudo systemctl restart nginx```
+
+5. After that, you can access it directly in a browser: ```http://nginx_ip:nginx_port```
+
+## 3. Common problems
+
+(1) Upload file size limit
+
+```
+sudo vi /etc/nginx/nginx.conf
+```
+
+Change upload size
+
+```
+client_max_body_size 200m;
+```
+
+(2) Interface timeout
+
+```
+sudo vi /etc/nginx/conf.d/linkis.conf
+```
+
+
+Change interface timeout
+
+```
+proxy_read_timeout 600s;
+```
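+
+After changing either setting, nginx must reload its configuration for the change to take effect; a minimal sketch:
+
+```
+sudo nginx -s reload
+```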
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/deployment/deploy-quick.md b/versioned_docs/version-1.4.0/deployment/deploy-quick.md
new file mode 100644
index 00000000000..71ebace6f7c
--- /dev/null
+++ b/versioned_docs/version-1.4.0/deployment/deploy-quick.md
@@ -0,0 +1,786 @@
+---
+title: Stand-alone deployment
+sidebar_position: 1
+---
+
+## 1. First-time installation preparations
+
+### 1.1 Linux server
+
+**Hardware Requirements**
+Install nearly 6 linkis microservices, at least 3G memory. The default jvm -Xmx memory size of each microservice is 512M (if the memory is not enough, you can try to reduce it to 256/128M, and you can also increase it if the memory is enough).
+
+
+### 1.2 Add deployment user
+
+>Deployment user: The starting user of the linkis core process, and this user will be the administrator by default. The corresponding administrator login password will be generated during the deployment process, located in `conf/linkis-mg-gateway .properties`file
+Linkis supports specifying users for submission and execution. The linkis main process service will switch to the corresponding user through `sudo -u ${linkis-user}`, and then execute the corresponding engine start command, so the user of the engine `linkis-engine` process is the executor of the task (so the deployment The user needs to have sudo authority, and it is password-free).
+
+Take hadoop users as an example (Many configuration users in linkis use hadoop users by default. It is recommended that first-time installers use hadoop users, otherwise many unexpected errors may be encountered during the installation process):
+
+First check whether there is already a hadoop user in the system, if it already exists, just authorize it directly, if not, create a user first, and then authorize.
+
+Check if hadoop user already exists
+```shell script
+$ id hadoop
+uid=2001(hadoop) gid=2001(hadoop) groups=2001(hadoop)
+```
+
+If it does not exist, you need to create a hadoop user and join the hadoop user group
+```shell script
+$ sudo useradd hadoop -g hadoop
+$ vi /etc/sudoers
+#Secret-free configuration
+hadoop ALL=(ALL) NOPASSWD: NOPASSWD: ALL
+```
+
+The following operations are performed under the hadoop user
+
+
+
+## 2. Configuration modification
+
+### 2.1 Installation package preparation
+
+- Method 1: Download the corresponding installation packages (the project installation package and the management console installation package) from the official website [download page](https://linkis.apache.org/zh-CN/download/main).
+- Method 2: Compile the project installation package and console installation package according to [Linkis Compilation and Packaging](../development/build) and [Front-end Console Compilation](../development/build-console).
+
+After uploading the installation package `apache-linkis-xxx-bin.tar.gz`, decompress the installation package
+
+```shell script
+$ tar -xvf apache-linkis-xxx-bin.tar.gz
+```
+
+The directory structure after decompression is as follows
+```shell script
+-rw-r--r-- 1 hadoop hadoop 518192043 Jun 20 09:50 apache-linkis-xxx-bin.tar.gz
+drwxrwxr-x 2 hadoop hadoop 4096 Jun 20 09:56 bin //execute environment check and install script
+drwxrwxr-x 2 hadoop hadoop 4096 Jun 20 09:56 deploy-config // Deployment dependent DB and other environment configuration information
+drwxrwxr-x 4 hadoop hadoop 4096 Jun 20 09:56 docker
+drwxrwxr-x 4 hadoop hadoop 4096 Jun 20 09:56 helm
+-rwxrwxr-x 1 hadoop hadoop 84732 Jan 22 2020 LICENSE
+drwxr-xr-x 2 hadoop hadoop 20480 Jun 20 09:56 licenses
+drwxrwxr-x 7 hadoop hadoop 4096 Jun 20 09:56 linkis-package // actual software package, including lib/service startup script tool/db initialization script/microservice configuration file, etc.
+-rwxrwxr-x 1 hadoop hadoop 119503 Jan 22 2020 NOTICE
+-rw-r--r-- 1 hadoop hadoop 11959 Jan 22 2020 README_CN.md
+-rw-r--r-- 1 hadoop hadoop 12587 Jan 22 2020 README.md
+
+```
+
+### 2.2 Configure database information
+
+`vim deploy-config/linkis-env.sh`
+
+```shell script
+# Select linkis business database type, default mysql
+# If using postgresql, please change to postgresql
+# Note: The current configuration only applies to linkis>=1.4.0
+dbType=mysql
+```
+
+`vim deploy-config/db.sh`
+
+```shell script
+# Linkis's own business database information - mysql
+MYSQL_HOST=xx.xx.xx.xx
+MYSQL_PORT=3306
+MYSQL_DB=linkis_test
+MYSQL_USER=test
+MYSQL_PASSWORD=xxxxx
+
+# Linkis's own business database information - postgresql
+# Note: The following configuration is only applicable to linkis>=1.4.0
+PG_HOST=xx.xx.xx.xx
+PG_PORT=5432
+PG_DB=linkis_test
+PG_SCHEMA=linkis_test
+PG_USER=test
+PG_PASSWORD=123456
+
+# Provide the DB information of the Hive metadata database. If the hive engine is not involved (or just a simple trial), it is not necessary to configure
+#Mainly used together with scriptis, if not configured, it will try to get it through the configuration file in $HIVE_CONF_DIR by default
+HIVE_META_URL="jdbc:mysql://10.10.10.10:3306/hive_meta_demo?useUnicode=true&characterEncoding=UTF-8"
+HIVE_META_USER=demo # User of the HiveMeta metabase
+HIVE_META_PASSWORD=demo123 # Password of the HiveMeta metabase
+```
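+
+Optionally, before running the installation you can verify that the database is reachable with these credentials; a sketch using the mysql client:
+
+```shell script
+# source the values you just filled in, then test the connection
+source deploy-config/db.sh
+mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" -e "SELECT 1;"
+```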
+
+
+### 2.3 Configure basic variables
+
+The file is located at `deploy-config/linkis-env.sh`.
+
+#### Deploy User
+```shell script
+deployUser=hadoop #The user who executes the deployment is the user created in step 1.2
+```
+
+#### Basic directory configuration (optional)
+:::caution Caution
+Determine whether it needs to be adjusted according to the actual situation, and you can choose to use the default value
+:::
+
+
+```shell script
+
+# User workspace directory, generally used to store the user's script files, log files, etc.; the corresponding configuration item is wds.linkis.filesystem.root.path (linkis.properties)
+WORKSPACE_USER_ROOT_PATH=file:///tmp/linkis
+
+# Path for result sets, logs and other files, used to store Job result set files: wds.linkis.resultSet.store.path (linkis-cg-entrance.properties). Used if HDFS_USER_ROOT_PATH is not configured
+RESULT_SET_ROOT_PATH=file:///tmp/linkis
+
+# Path for result sets, logs and other files on HDFS, used to store Job result set files: wds.linkis.filesystem.hdfs.root.path (linkis.properties)
+HDFS_USER_ROOT_PATH=hdfs:///tmp/linkis
+
+# Working directory of the execution engines; requires a local directory where the deployment user has write permission: wds.linkis.engineconn.root.dir (linkis-cg-engineconnmanager.properties)
+ENGINECONN_ROOT_PATH=/appcom/tmp
+```
+
+#### Yarn's ResourceManager address
+
+:::caution Caution
+If you need to use the Spark engine, you need to configure
+:::
+
+```shell script
+
+#You can check whether it can be accessed normally by visiting http://xx.xx.xx.xx:8088/ws/v1/cluster/scheduler interface
+YARN_RESTFUL_URL=http://xx.xx.xx.xx:8088
+```
+Spark tasks need yarn's ResourceManager. Linkis assumes by default that the ResourceManager does not require authentication.
+If the ResourceManager has password authentication enabled, then after installation and deployment modify the yarn information in the database table `linkis_cg_rm_external_resource_provider`; for details, please refer to [Check whether the yarn address is configured correctly](#811-check-whether-the-yarn-address-is-configured-correctly)
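+
+Before installing, you can confirm that the ResourceManager address is reachable with a quick check; a sketch:
+
+```shell script
+# Should return the scheduler information in JSON if the address is correct
+curl http://xx.xx.xx.xx:8088/ws/v1/cluster/scheduler
+```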
+
+#### Basic component environment information
+
+:::caution Caution
+These values can also be provided through the deployment user's system environment variables. If they are already set as system environment variables, they can be commented out in the deploy-config/linkis-env.sh configuration file without further configuration.
+:::
+
+```shell script
+##If you do not use Hive, Spark and other engines and do not rely on Hadoop, you do not need to configure the following environment variables
+
+#HADOOP
+HADOOP_HOME=/appcom/Install/hadoop
+HADOOP_CONF_DIR=/appcom/config/hadoop-config
+
+#Hive
+HIVE_HOME=/appcom/Install/hive
+HIVE_CONF_DIR=/appcom/config/hive-config
+
+#Spark
+SPARK_HOME=/appcom/Install/spark
+SPARK_CONF_DIR=/appcom/config/spark-config
+```
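+
+After setting these, you can quickly confirm that the components are resolvable from the deployment user's shell; a check sketch using the configured home directories:
+
+```shell script
+# Quick environment sanity check (a sketch)
+$HADOOP_HOME/bin/hadoop version
+$HIVE_HOME/bin/hive --version
+$SPARK_HOME/bin/spark-submit --version
+```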
+
+
+#### LDAP login configuration (optional)
+
+:::caution Caution
+By default a static user and password are used. The static user is the deployment user, and the static password is a randomly generated string created during deployment and stored in `${LINKIS_HOME}/conf/linkis-mg-gateway.properties` (version >= 1.0.3).
+:::
+
+
+```shell script
+#LDAP configuration, by default Linkis only supports deployment user login, if you need to support multi-user login, you can use LDAP, you need to configure the following parameters:
+#LDAP_URL=ldap://localhost:1389/
+#LDAP_BASEDN=dc=webank,dc=com
+```
+
+
+#### JVM memory configuration (optional)
+>JVM memory configuration for microservice startup, which can be adjusted according to the machine's actual resources. If memory is limited, you can try to reduce it to 256/128M
+```shell script
+## java application default jvm memory
+export SERVER_HEAP_SIZE="512M"
+```
+
+#### Installation directory configuration (optional)
+> Linkis will eventually be installed in this directory, if not configured, it will be in the same directory as the current installation package by default
+
+```shell script
+##The installation directory must be different from the decompression directory
+LINKIS_HOME=/appcom/Install/LinkisInstall
+```
+
+#### Deployment without HDFS (optional, supported in versions > 1.1.2)
+
+> Linkis can be deployed in an environment without HDFS to make learning, trial use and debugging lighter. In this mode, engine tasks such as hive/spark/flink are not supported
+
+Modify the `linkis-env.sh` file as follows
+```bash
+#Use [file://] path pattern instead of [hdfs://] pattern
+WORKSPACE_USER_ROOT_PATH=file:///tmp/linkis/
+HDFS_USER_ROOT_PATH=file:///tmp/linkis
+RESULT_SET_ROOT_PATH=file:///tmp/linkis
+
+export ENABLE_HDFS=false
+export ENABLE_HIVE=false
+export ENABLE_SPARK=false
+```
+
+#### kerberos authentication (optional)
+
+> Linkis does not enable kerberos authentication by default. If the hive cluster used enables kerberos authentication, the following parameters need to be configured.
+
+Modify the `linkis-env.sh` file, the modified content is as follows
+```bash
+#HADOOP
+HADOOP_KERBEROS_ENABLE=true
+HADOOP_KEYTAB_PATH=/appcom/keytab/
+```
+
+### 2.4 Configure Token
+The file is located in `bin/install.sh`
+
+Since version 1.3.2, Linkis generates Token values as 32-character random strings to ensure system security. For details, please refer to the [Token Change Description](https://linkis.apache.org/zh-CN/docs/1.3.2/feature/update-token/).
+
+With randomly generated Tokens, you will run into many Token verification failures the first time you connect with [other WDS components](https://github.com/WeDataSphere/DataSphereStudio/blob/master/README-ZH.md). For a first installation it is therefore recommended not to use randomly generated Tokens; set the following configuration to true.
+
+```
+DEBUG_MODE=true
+```
+
+### 2.5 Precautions
+
+**Full installation**
+
+For the full installation of the new version of Linkis, the install.sh script will automatically process the configuration file and keep the database Token consistent. Therefore, the Token of the Linkis service itself does not need to be modified. Each application can query and use the new token through the management console.
+
+**version upgrade**
+
+When the version is upgraded, the database Token is not modified, so there is no need to modify the configuration file and application Token.
+
+**Token expiration issue**
+
+If a Token is invalid or has expired, check whether it is configured correctly. Tokens can be queried through the management console ==> Basic Data Management ==> Token Management.
+
+**Python version issue**
+After Linkis is upgraded to 1.4.0, the default Spark version is upgraded to 3.x, which is not compatible with python2. Therefore, if you need to use the pyspark function, you need to make the following modifications.
+1. Map python2 commands to python3
+```
+sudo ln -snf /usr/bin/python3 /usr/bin/python2
+```
+2. In the Spark engine connector configuration `$LINKIS_HOME/lib/linkis-engineconn-plugins/spark/dist/3.2.1/conf/linkis-engineconn.properties`, add the following configuration to specify the python installation path
+```
+pyspark.python3.path=/usr/bin/python3
+```
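+
+To confirm that the mapping from step 1 took effect, a quick check sketch:
+
+```
+ls -l /usr/bin/python2   # should point to /usr/bin/python3
+python2 --version        # should now report a Python 3.x version
+```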
+
+## 3. Install and start
+
+### 3.1 Execute the installation script:
+
+```bash
+ sh bin/install.sh
+```
+
+The install.sh script will ask you if you want to initialize the database and import metadata. If you choose to initialize, the table data in the database will be cleared and reinitialized.
+
+**You must choose to clear the database for the first installation**
+
+:::tip note
+- If an error occurs, and it is not clear what command to execute to report the error, you can add the -x parameter `sh -x bin/install.sh` to print out the log of the shell script execution process, which is convenient for locating the problem.
+- Permission problem: `mkdir: cannot create directory 'xxxx': Permission denied`, please confirm whether the deployment user has read and write permissions for this path.
+:::
+
+The prompt for successful execution is as follows:
+```shell script
+`Congratulations! You have installed Linkis xxx successfully, please use sh /data/Install/linkis/sbin/linkis-start-all.sh to start it!
+Your default account password is [hadoop/5e8e312b4]`
+```
+
+### 3.2 Add mysql driver package
+
+:::caution Caution
+Because the mysql-connector-java driver is licensed under GPL 2.0, which does not comply with the license policy of the Apache open-source license, from version 1.0.3 onward the official Apache release package does not include the mysql-connector-java-x.x.x.jar dependency by default (**if you install via the integrated family-bucket material package, you do not need to add it manually**). When installing and deploying, you need to add the dependency to the corresponding lib directories yourself. You can check whether it exists in the corresponding directory; if not, add it.
+
+:::
+
+Download the mysql driver, taking version 8.0.28 as an example: [download link](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar)
+
+Copy the mysql driver package to the lib directories
+```
+cp mysql-connector-java-8.0.28.jar ${LINKIS_HOME}/lib/linkis-spring-cloud-services/linkis-mg-gateway/
+cp mysql-connector-java-8.0.28.jar ${LINKIS_HOME}/lib/linkis-commons/public-module/
+```
+### 3.3 Add postgresql driver package (optional)
+If you choose postgresql as the business database, you need to add the postgresql driver manually.
+Download the postgresql driver, taking version 42.5.4 as an example: [download link](https://repo1.maven.org/maven2/org/postgresql/postgresql/42.5.4/postgresql-42.5.4.jar)
+Copy the postgresql driver package to the lib directories
+```
+cp postgresql-42.5.4.jar ${LINKIS_HOME}/lib/linkis-spring-cloud-services/linkis-mg-gateway/
+cp postgresql-42.5.4.jar ${LINKIS_HOME}/lib/linkis-commons/public-module/
+```
+### 3.4 Configuration adjustment (optional)
+> The following operations are related to the dependent environment. According to the actual situation, determine whether the operation is required
+
+#### 3.4.1 Yarn authentication
+
+When executing spark tasks, yarn's ResourceManager is required; it is configured via the item `YARN_RESTFUL_URL=http://xx.xx.xx.xx:8088`.
+During installation and deployment, the `YARN_RESTFUL_URL=http://xx.xx.xx.xx:8088` information is written into the database table `linkis_cg_rm_external_resource_provider`. By default, access to yarn resources does not require authentication.
+If yarn's ResourceManager has password authentication enabled, modify the yarn information generated in the database table `linkis_cg_rm_external_resource_provider` after installation and deployment;
+for details, please refer to [Check whether the yarn address is configured correctly](#811-check-whether-the-yarn-address-is-configured-correctly).
+
+#### 3.4.2 session
+If you are upgrading Linkis and also deploy DSS or other projects, but the linkis version bundled in the other software is < 1.1.1 (mainly, the linkis-module-x.x.x.jar it depends on in its lib directory is < 1.1.1), you need to modify the `${LINKIS_HOME}/conf/linkis.properties` file.
+```shell
+echo "wds.linkis.session.ticket.key=bdp-user-ticket-id" >> linkis.properties
+```
+
+#### 3.4.3 S3 mode
+> Storing engine execution logs and results in the S3 file system is currently supported
+>
+> Note: linkis does not adapt permissions for S3, so it cannot perform authorization operations on S3 paths
+
+`vim $LINKIS_HOME/conf/linkis.properties`
+```shell script
+# s3 file system
+linkis.storage.s3.access.key=xxx
+linkis.storage.s3.secret.key=xxx
+linkis.storage.s3.endpoint=http://xxx.xxx.xxx.xxx:xxx
+linkis.storage.s3.region=xxx
+linkis.storage.s3.bucket=xxx
+```
+
+`vim $LINKIS_HOME/conf/linkis-cg-entrance.properties`
+```shell script
+wds.linkis.entrance.config.log.path=s3:///linkis/logs
+wds.linkis.resultSet.store.path=s3:///linkis/results
+```
+
+### 3.5 Start the service
+```shell script
+sh sbin/linkis-start-all.sh
+```
+
+### 3.6 Modification of configuration after installation
+After the installation is complete, if you need to change the configuration (because of port conflicts or other configuration problems), you can either re-execute the installation, or modify the `${LINKIS_HOME}/conf/*properties` file of the corresponding service and restart that service, e.g.: `sh sbin/linkis-daemon.sh restart ps-publicservice`.
+
+
+### 3.7 Check whether the service starts normally
+Visit the eureka service page (http://eurekaip:20303).
+By default, 6 Linkis microservices are started; the additional linkis-cg-engineconn service shown in the figure below is started only when tasks are running.
+![Linkis1.0_Eureka](./images/eureka.png)
+
+```shell script
+LINKIS-CG-ENGINECONNMANAGER Engine Management Service
+LINKIS-CG-ENTRANCE computing governance entry service
+LINKIS-CG-LINKISMANAGER Computing Governance Management Service
+LINKIS-MG-EUREKA Microservice Registry Service
+LINKIS-MG-GATEWAY Gateway Service
+LINKIS-PS-PUBLICSERVICE Public Service
+```
+
+Note: In Linkis 1.3.1, the LINKIS-PS-CS, LINKIS-PS-DATA-SOURCE-MANAGER and LINKIS-PS-METADATAMANAGER services were merged into LINKIS-PS-PUBLICSERVICE, and the LINKIS-CG-ENGINEPLUGIN service was merged into LINKIS-CG-LINKISMANAGER.
+
+If any service is not started, you can check the detailed exception log in the corresponding log/${service name}.log file.
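+
+If the front end is not installed yet, the registered services can also be listed from the command line; a sketch assuming the default Eureka port 20303:
+
+```shell script
+# Lists the names of the services currently registered with Eureka
+curl -s http://eurekaip:20303/eureka/apps | grep "<name>"
+```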
+
+### 3.8 Configure Token
+
+Linkis's original default Token is fixed and the length is too short, posing security risks. Therefore, Linkis 1.3.2 changes the original fixed Token to random generation, and increases the length of the Token.
+
+New Token format: application abbreviation - 32-character random string, such as BML-928a721518014ba4a28735ec2a0da799.
+
+Tokens may be used by the Linkis service itself (for example when executing tasks through the shell or uploading to BML), or by other applications such as DSS and Qualitis to access Linkis.
+
+#### View Token
+**View via SQL statement**
+```sql
+select * from linkis_mg_gateway_auth_token;
+```
+**View via Admin Console**
+
+Log in to the management console -> basic data management -> token management
+![](/Images/deployment/token-list.png)
+
+#### Check Token configuration
+
+When the Linkis service itself uses a Token, the Token in the configuration file must be consistent with the Token in the database. Tokens are matched by the application-abbreviation prefix.
+
+Token configuration in the $LINKIS_HOME/conf/linkis.properties file
+
+```
+linkis.configuration.linkisclient.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.bml.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.context.client.auth.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.errorcode.auth.token=BML-928a721518014ba4a28735ec2a0da799
+
+wds.linkis.client.test.common.tokenValue=LINKIS_CLI-215af9e265ae437ca1f070b17d6a540d
+
+wds.linkis.filesystem.token.value=WS-52bce72ed51741c7a2a9544812b45725
+wds.linkis.gateway.access.token=WS-52bce72ed51741c7a2a9544812b45725
+
+wds.linkis.server.dsm.auth.token.value=DSM-65169e8e1b564c0d8a04ee861ca7df6e
+```
+
+Token configuration in the $LINKIS_HOME/conf/linkis-cli/linkis-cli.properties file
+```
+wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
+```
+
+When other applications use Token, they need to modify their Token configuration to be consistent with the Token in the database.
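+
+To compare the service-side configuration against the database, a quick check sketch listing the token values currently configured:
+
+```shell script
+grep -i "token" ${LINKIS_HOME}/conf/linkis.properties ${LINKIS_HOME}/conf/linkis-cli/linkis-cli.properties
+```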
+
+## 4. Install the web front end
+The web side uses nginx as the static resource server, and the access request process is:
+`Linkis management console request->nginx ip:port->linkis-gateway ip:port->other services`
+
+### 4.1 Download the front-end installation package and decompress it
+```shell script
+tar -xvf apache-linkis-xxx-web-bin.tar.gz
+```
+
+### 4.2 Modify configuration config.sh
+```shell script
+#Access the port of the management console
+linkis_port="8188"
+
+#linkis-mg-gateway service address
+linkis_url="http://localhost:9020"
+```
+
+### 4.3 Execute the deployment script
+
+```shell script
+# nginx needs sudo permission to install
+sudo sh install.sh
+```
+After installation, the nginx configuration file of linkis is `/etc/nginx/conf.d/linkis.conf` by default.
+The nginx log files are `/var/log/nginx/access.log` and `/var/log/nginx/error.log`.
+An example of the generated nginx configuration file for the linkis management console is as follows:
+```nginx
+
+ server {
+ listen 8188;# If the access port is occupied, it needs to be modified
+ server_name localhost;
+ #charset koi8-r;
+ #access_log /var/log/nginx/host.access.log main;
+ location / {
+ root /appcom/Install/linkis-web/dist; # static file directory
+            index index.html index.htm;
+ }
+ location /ws {
+ proxy_pass http://localhost:9020;#The address of the backend Linkis
+ proxy_http_version 1.1;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection upgrade;
+ }
+
+ location /api {
+ proxy_pass http://localhost:9020; #The address of the backend Linkis
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header x_real_ipP $remote_addr;
+ proxy_set_header remote_addr $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_http_version 1.1;
+ proxy_connect_timeout 4s;
+ proxy_read_timeout 600s;
+ proxy_send_timeout 12s;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection upgrade;
+ }
+
+ #error_page 404 /404.html;
+ # redirect server error pages to the static page /50x.html
+ #
+ error_page 500 502 503 504 /50x.html;
+ location = /50x.html {
+ root /usr/share/nginx/html;
+ }
+ }
+```
+
+If you need to modify the port or static resource directory, etc., please modify the `/etc/nginx/conf.d/linkis.conf` file and execute the `sudo nginx -s reload` command
+:::caution Caution
+- At present, the visualis function is not integrated. During the installation process, if you are prompted to fail to install linkis/visualis, you can ignore it.
+- Check whether nginx starts normally: check whether the nginx process exists `ps -ef |grep nginx`.
+- Check whether the configuration of nginx is correct `sudo nginx -T`.
+- If the port is occupied, modify the `listen` port value in the nginx configuration `/etc/nginx/conf.d/linkis.conf`, save it, and restart nginx.
+- If an interface returns 502 when accessing the management console, or the error `Unexpected token < in JSON at position 0` appears, first confirm that linkis-mg-gateway has started normally. If it has, check whether the linkis-mg-gateway service address configured in the nginx configuration file is correct.
+:::
+
+### 4.4 Log in to the management console
+
+Log in from a browser at `http://xx.xx.xx.xx:8188/#/login`.
+The username/password can be found in `${LINKIS_HOME}/conf/linkis-mg-gateway.properties`.
+```shell script
+wds.linkis.admin.user= #user
+wds.linkis.admin.password= #password
+
+```
+
+## 5. Verify basic functions
+> Verify the corresponding engine tasks according to actual needs
+
+```
+#The engine version appended to engineType must match the actually installed version. The following examples use the default version numbers
+#shell engine tasks
+sh bin/linkis-cli -submitUser hadoop -engineType shell-1 -codeType shell -code "whoami"
+
+#hive engine tasks
+sh bin/linkis-cli -submitUser hadoop -engineType hive-3.1.3 -codeType hql -code "show tables"
+
+#spark engine tasks
+sh bin/linkis-cli -submitUser hadoop -engineType spark-3.2.1 -codeType sql -code "show tables"
+
+#python engine tasks
+sh bin/linkis-cli -submitUser hadoop -engineType python-python2 -codeType python -code 'print("hello, world!")'
+```
+If the verification fails, please refer to section 8 below for troubleshooting.
+
+## 6. Installation of development tool IDE (Scriptis) (optional)
+After installing the Scriptis tool, you can write SQL, PySpark, HiveQL and other scripts online on the web page. For detailed instructions, see [Tool Scriptis Installation and Deployment](integrated/install-scriptis).
+
+## 7. Supported engines
+
+### 7.1 Engine adaptation list
+
+Please note: the separate installation package of Linkis only includes Python, Shell, Hive, and Spark by default. If there are other engine usage scenarios (such as jdbc/flink/sqoop, etc.), you can install them manually. For details, please refer to [EngineConnPlugin Engine Plugin installation documentation](install-engineconn).
+
+The list of supported engines adapted to this version is as follows:
+
+| Engine type | Adaptation status | Included in the official installation package |
+|---------------|-------------------|------|
+| Python | >=1.0.0 adapted | Included |
+| Shell | >=1.0.0 adapted | Included |
+| Hive | >=1.0.0 adapted | Included |
+| Spark | >=1.0.0 adapted | Included |
+| Pipeline | >=1.0.0 adapted | **Not included** |
+| JDBC | >=1.0.0 adapted | **Not included** |
+| Flink | >=1.0.0 adapted | **Not included** |
+| openLooKeng | >=1.1.1 adapted | **Not included** |
+| Sqoop | >=1.1.2 adapted | **Not included** |
+| Trino | >=1.3.2 adapted | **Not included** |
+| Presto | >=1.3.2 adapted | **Not included** |
+| Elasticsearch | >=1.3.2 adapted | **Not included** |
+| Seatunnel | >=1.3.2 adapted | **Not included** |
+| Impala | >=1.4.0 adapted | **Not included** |
+
+
+
+### 7.2 View deployed engines
+
+#### Method 1: View the engine lib package directory
+
+```
+$ tree linkis-package/lib/linkis-engineconn-plugins/ -L 3
+linkis-package/lib/linkis-engineconn-plugins/
+├──hive
+│ ├── dist
+│ │ └── 3.1.3 #version is 3.1.3 engineType is hive-3.1.3
+│ └── plugin
+│ └── 3.1.3
+├── python
+│ ├── dist
+│ │ └── python2
+│ └── plugin
+│ └── python2 #version is python2 engineType is python-python2
+├── shell
+│ ├── dist
+│ │ └── 1
+│ └── plugin
+│ └── 1
+└── spark
+ ├── dist
+ │ └── 3.2.1
+ └── plugin
+ └── 3.2.1
+```
+
+#### Method 2: View the database table of linkis
+```shell script
+select * from linkis_cg_engine_conn_plugin_bml_resources
+```
+
+
+## 8. Troubleshooting guidelines for common abnormal problems
+### 8.1. Yarn queue check
+
+>If you need to use the spark/hive/flink engine
+
+After logging in, check whether the yarn queue resources are displayed normally (click the button in the lower-right corner of the page; the front end needs to be installed first).
+
+Normal as shown in the figure below:
+![yarn-normal](images/yarn-normal.png)
+
+If it cannot be displayed: You can adjust it according to the following guidelines
+
+#### 8.1.1 Check whether the yarn address is configured correctly
+Insert the yarn information into the database table `linkis_cg_rm_external_resource_provider`:
+```sql
+INSERT INTO `linkis_cg_rm_external_resource_provider`
+(`resource_type`, `name`, `labels`, `config`) VALUES
+('Yarn', 'sit', NULL,
+'{\r\n"rmWebAddress": "http://xx.xx.xx.xx:8088",\r\n"hadoopVersion": "3.3.4",\r\n"authorEnable":false, \r\n"user":"hadoop",\r\n"pwd":"123456"\r\n}'
+);
+
+-- config field attributes:
+
+-- "rmWebAddress": "http://xx.xx.xx.xx:8088",  -- must include http and the port
+-- "hadoopVersion": "3.3.4",
+-- "authorEnable": true,  -- whether authentication is required; the username and password can be verified by visiting http://xx.xx.xx.xx:8088 in a browser
+-- "user": "user",        -- username
+-- "pwd": "pwd"           -- password
+
+```
+After updating the table, because the program caches this information, restart the linkis-cg-linkismanager service if you want the change to take effect immediately.
+```shell script
+sh sbin/linkis-daemon.sh restart cg-linkismanager
+```
+
+#### 8.1.2 Check whether the yarn queue exists
+The exception message `desc: queue ide is not exists in YARN.` indicates that the configured yarn queue does not exist and needs to be adjusted.
+
+Modification method: in the linkis management console, go to `Parameter Configuration > Global Settings > yarn queue name [wds.linkis.rm.yarnqueue]` and set a yarn queue that can actually be used.
+
+View the available yarn queues
+- at the configured rmWebAddress: http://xx.xx.xx.xx:8088/cluster/scheduler
+
+### 8.2 Check whether the engine material resources are uploaded successfully
+
+```sql
+#Log in to the linkis database
+select * from linkis_cg_engine_conn_plugin_bml_resources
+```
+
+Normally as follows:
+![bml](images/bml.png)
+
+Check whether the material record of the engine exists (and, if there has been an update, whether the update time is correct)
+
+- If it does not exist or has not been updated, first try to refresh the material resources manually (see [Engine Material Resource Refresh](install-engineconn#23-engine-refresh) for details).
+- Check `log/linkis-cg-linkismanager.log` for the specific reason for the material upload failure; in many cases it is caused by missing permissions on the hdfs directory.
+- Check whether the gateway address is configured correctly (the configuration item `wds.linkis.gateway.url` in `conf/linkis.properties`).
+
+By default, the engine material resources are uploaded to the hdfs directory `/apps-data/${deployUser}/bml`.
+
+```shell script
+hdfs dfs -ls /apps-data/hadoop/bml
+#If there is no such directory, please manually create the directory and grant ${deployUser} read and write permissions
+hdfs dfs -mkdir /apps-data
+hdfs dfs -chown hadoop:hadoop /apps-data
+```
+
+### 8.3 Login password problem
+
+Linkis uses static users and passwords by default. Static users are deployment users. Static passwords will randomly generate a password string during deployment and store it in
+
+`${LINKIS_HOME}/conf/linkis-mg-gateway.properties` (>=version 1.0.3).
+
+### 8.4 version compatibility issues
+
+The engines supported by linkis by default, and their compatibility with dss, can be viewed in [this document](https://github.com/apache/linkis/blob/master/README.md).
+
+
+### 8.5 How to locate server-side exception logs
+
+Linkis has many microservices. If you are not familiar with the system, sometimes you cannot locate the specific module that has an exception. You can search through the global log.
+```shell script
+tail -f log/* |grep -5n exception (or tail -f log/* |grep -5n ERROR)
+less log/* |grep -5n exception (or less log/* |grep -5n ERROR)
+```
+
+
+### 8.6 Execution engine task exception troubleshooting
+
+**Step 1: Find the startup deployment directory of the engine**
+
+- Method 1: If it is displayed in the execution log, you can view it on the management console as shown below:
+![engine-log](images/engine-log.png)
+- Method 2: If not found in method 1, you can find the `wds.linkis.engineconn.root.dir` parameter configured in `conf/linkis-cg-engineconnmanager.properties`, and this value is the directory where the engine starts and deploys. Subdirectories are segregated by user of the execution engine
+
+```shell script
+# If you don't know the taskId, you can sort by time and choose the latest: ll -rt /appcom/tmp/${execution user}/${date}/${engine}/
+cd /appcom/tmp/${execution user}/${date}/${engine}/${taskId}
+```
+The directory is roughly as follows
+```shell script
+conf -> /appcom/tmp/engineConnPublicDir/6a09d5fb-81dd-41af-a58b-9cb5d5d81b5a/v000002/conf #engine configuration file
+engineConnExec.sh #generated engine startup script
+lib -> /appcom/tmp/engineConnPublicDir/45bf0e6b-0fa5-47da-9532-c2a9f3ec764d/v000003/lib #engine-dependent packages
+logs #Related logs of engine startup execution
+```
+
+**Step 2: Check the log of the engine**
+```shell script
+less logs/stdout
+```
+
+**Step 3: Try to execute the script manually (if needed)**
+You can debug by trying to execute the script manually
+```
+sh -x engineConnExec.sh
+```
+
+### 8.7 How to modify the port of the registration center eureka
+Sometimes when the eureka port is occupied by other services and the default eureka port cannot be used, it is necessary to modify the eureka port. Here, the modification of the eureka port is divided into two cases: before the installation and after the installation.
+
+1. Modify the eureka port of the registration center before performing the installation
+```
+1. Enter the decompression directory of apache-linkis-xxx-bin.tar.gz
+2. Execute vi deploy-config/linkis-env.sh
+3. Change EUREKA_PORT=20303 to EUREKA_PORT=<new port number>
+```
+2. Modify the registry eureka port after installation
+```
+1. Enter the ${LINKIS_HOME}/conf directory
+
+2. Execute grep -r 20303 ./* , the query results are as follows:
+ ./application-eureka.yml: port: 20303
+ ./application-eureka.yml: defaultZone: http://ip:20303/eureka/
+ ./application-linkis.yml: defaultZone: http://ip:20303/eureka/
+ ./linkis-env.sh:EUREKA_PORT=20303
+ ./linkis.properties:wds.linkis.eureka.defaultZone=http://ip:20303/eureka/
+
+3. Change the port at the corresponding locations to the new port, then restart all services: sh sbin/linkis-start-all.sh
+```
+
+
+### 8.8 Notes for CDH adaptation version
+
+CDH does not ship the official standard hive/spark packages. When adapting, it is best to modify the hive/spark version dependencies in the linkis source code and then recompile and deploy.
+For details, please refer to the CDH adaptation blog posts:
+[[Linkis1.0——Installation and stepping in the CDH5 environment]](https://mp.weixin.qq.com/s/__QxC1NoLQFwme1yljy-Nw)
+[[DSS1.0.0+Linkis1.0.2——Trial record in CDH5 environment]](https://mp.weixin.qq.com/s/9Pl9P0hizDWbbTBf1yzGJA)
+[[DSS1.0.0 and Linkis1.0.2 - Summary of JDBC engine-related issues]](https://mp.weixin.qq.com/s/vcFge4BNiEuW-7OC3P-yaw)
+[[DSS1.0.0 and Linkis1.0.2——Summary of issues related to Flink engine]](https://mp.weixin.qq.com/s/VxZ16IPMd1CvcrvHFuU4RQ)
+
+### 8.9 Debugging of Http interface
+
+- Method 1: enable the login-free mode; see the [Guide to Login-free Mode](/docs/latest/api/login-api/#2-login-free-configuration)
+- Method 2: in Postman, add the cookie value of a successful login to the request header.
+  The cookie value can be obtained from the browser after logging in successfully
+ ![bml](images/bml-cookie.png)
+
+```shell script
+Cookie: bdp-user-ticket-id=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+```
+- Method 3: add a static Token to the http request header.
+  The Token is configured in conf/linkis.properties,
+  e.g.: TEST-AUTH=hadoop,root,user01
+```shell script
+Token-Code: TEST-AUTH
+Token-User: hadoop
+```
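+
+Putting it together, a curl sketch assuming the gateway listens on 127.0.0.1:9001 and the static token TEST-AUTH above (replace the API path with the interface you want to debug):
+
+```shell script
+curl -H "Token-Code: TEST-AUTH" \
+     -H "Token-User: hadoop" \
+     "http://127.0.0.1:9001/api/rest_j/v1/xxx"   # replace /xxx with the interface under test
+```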
+
+### 8.10 Troubleshooting process for abnormal problems
+
+First, check whether the service/environment is started normally according to the above steps, and then check the basic problems according to some scenarios listed above.
+
+Check the [QA document](https://docs.qq.com/doc/DSGZhdnpMV3lTUUxq) to see whether a solution already exists.
+See whether a solution can be found by searching the contents of the issues.
+![issues](images/issues.png)
+You can also search keywords in the official website documentation, for example "deployment". (If a 404 appears, refresh the browser.)
+![search](images/search.png)
+
+
+## 9. How to obtain relevant information
+Linkis official website documents are constantly being improved, and you can view related documents on this official website.
+
+Related blog posts are linked below.
+- Linkis' technical blog collection https://github.com/apache/linkis/issues/1233
+- Public account technical blog post https://mp.weixin.qq.com/mp/homepage?__biz=MzI4MDkxNzUxMg==&hid=1&sn=088cbf2bbed1c80d003c5865bc92ace8&scene=18
+- Official website documentation https://linkis.apache.org/zh-CN/docs/latest/about/introduction
+- bili technology sharing video https://space.bilibili.com/598542776?spm_id_from=333.788.b_765f7570696e666f.2
+
diff --git a/versioned_docs/version-1.4.0/deployment/images/bml-cookie.png b/versioned_docs/version-1.4.0/deployment/images/bml-cookie.png
new file mode 100644
index 00000000000..67dc0be98f8
Binary files /dev/null and b/versioned_docs/version-1.4.0/deployment/images/bml-cookie.png differ
diff --git a/versioned_docs/version-1.4.0/deployment/images/bml.png b/versioned_docs/version-1.4.0/deployment/images/bml.png
new file mode 100644
index 00000000000..ca9290f945d
Binary files /dev/null and b/versioned_docs/version-1.4.0/deployment/images/bml.png differ
diff --git a/versioned_docs/version-1.4.0/deployment/images/engine-log.png b/versioned_docs/version-1.4.0/deployment/images/engine-log.png
new file mode 100644
index 00000000000..3a4c5ee0fb3
Binary files /dev/null and b/versioned_docs/version-1.4.0/deployment/images/engine-log.png differ
diff --git a/versioned_docs/version-1.4.0/deployment/images/eureka.png b/versioned_docs/version-1.4.0/deployment/images/eureka.png
new file mode 100644
index 00000000000..3b3f24a4b0e
Binary files /dev/null and b/versioned_docs/version-1.4.0/deployment/images/eureka.png differ
diff --git a/versioned_docs/version-1.4.0/deployment/images/issues.png b/versioned_docs/version-1.4.0/deployment/images/issues.png
new file mode 100644
index 00000000000..b84a711452c
Binary files /dev/null and b/versioned_docs/version-1.4.0/deployment/images/issues.png differ
diff --git a/versioned_docs/version-1.4.0/deployment/images/search.png b/versioned_docs/version-1.4.0/deployment/images/search.png
new file mode 100644
index 00000000000..632763a293b
Binary files /dev/null and b/versioned_docs/version-1.4.0/deployment/images/search.png differ
diff --git a/versioned_docs/version-1.4.0/deployment/images/yarn-normal.png b/versioned_docs/version-1.4.0/deployment/images/yarn-normal.png
new file mode 100644
index 00000000000..fefc3110524
Binary files /dev/null and b/versioned_docs/version-1.4.0/deployment/images/yarn-normal.png differ
diff --git a/versioned_docs/version-1.4.0/deployment/install-engineconn.md b/versioned_docs/version-1.4.0/deployment/install-engineconn.md
new file mode 100644
index 00000000000..236757f125b
--- /dev/null
+++ b/versioned_docs/version-1.4.0/deployment/install-engineconn.md
@@ -0,0 +1,84 @@
+---
+title: Installation EngineConn Plugin
+sidebar_position: 3
+---
+
+> This article mainly introduces the compilation and installation of Linkis EngineConnPlugins.
+
+## 1. Compilation and packaging of EngineConnPlugins
+
+Since Linkis 1.0, engines are managed by EngineConnManager, and EngineConnPlugins (ECP) can take effect in real time.
+To make it easy for EngineConnManager to load the corresponding EngineConnPlugin by label, the plugin needs to be packaged according to the following directory structure (take hive as an example):
+```
+hive: engine home directory, must be the name of the engine
+│ ├── dist # Dependencies and configuration required for engine startup; each engine version needs its own version directory under this directory
+│ │ └── 2.3.3 # Engine version
+│ │ └── conf # Configuration file directory required by the engine
+│ │ └── lib # Dependency package required by EngineConnPlugin
+│ ├── plugin # EngineConnPlugin directory, used by the engine management service to encapsulate the engine startup command and resource application
+│ └── 2.3.3 # Engine version
+│ └── linkis-engineplugin-hive-1.0.0.jar #Engine module package (only need to place a separate engine package)
+```
+If you are adding a new engine, you can refer to hive's assembly configuration, source code directory: linkis-engineconn-plugins/hive/src/main/assembly/distribution.xml
+## 2. Engine Installation
+### 2.1 Plugin package installation
+1. First, confirm the dist directory of the engine: wds.linkis.engineconn.home (get the value of this parameter from ${LINKIS_HOME}/conf/linkis.properties). This parameter is used by EngineConnPluginServer to read the configuration files and third-party jar packages that the engine depends on. If the parameter wds.linkis.engineconn.dist.load.enable=true is set, the engines in this directory will be automatically read and loaded into the Linkis BML (material library).
+
+2. Second, confirm the engine jar package directory:
+wds.linkis.engineconn.plugin.loader.store.path, which is used by EngineConnPluginServer to read the actual implementation jars of the engine.
+
+It is highly recommended to specify **wds.linkis.engineconn.home and wds.linkis.engineconn.plugin.loader.store.path as the same directory**, so that you can directly unzip the engine ZIP package exported by maven into this directory, for example into ${LINKIS_HOME}/lib/linkis-engineconn-plugins.
+
+```
+${LINKIS_HOME}/lib/linkis-engineconn-plugins:
+└── hive
+ └── dist
+ └── plugin
+└── spark
+ └── dist
+ └── plugin
+```
+
+If the two parameters do not point to the same directory, you need to place the dist and plugin directories separately, as shown in the following example:
+
+```
+## dist directory
+${LINKIS_HOME}/lib/linkis-engineconn-plugins/dist:
+└── hive
+ └── dist
+└── spark
+ └── dist
+## plugin directory
+${LINKIS_HOME}/lib/linkis-engineconn-plugins/plugin:
+└── hive
+ └── plugin
+└── spark
+ └── plugin
+```
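+
+To quickly confirm which directories your deployment actually uses, you can check both parameters in the configuration file, for example:
+```
+# Show the material (dist) directory and the plugin jar directory used by EngineConnPluginServer
+grep -E "wds.linkis.engineconn.home|wds.linkis.engineconn.plugin.loader.store.path" \
+  ${LINKIS_HOME}/conf/linkis.properties
+```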
+### 2.2 Configuration modification of management console (optional)
+
+The configuration of the Linkis 1.0 management console is managed by engine label. If the new engine has configuration parameters, you need to insert the corresponding configuration parameters into the following tables:
+
+```
+linkis_configuration_config_key: insert the keys and default values of the engine's configuration parameters
+linkis_manager_label: insert the engine label, such as hive-1.2.1
+linkis_configuration_category: insert the catalog relationship of the engine
+linkis_configuration_config_value: insert the configuration that the engine needs to display
+```
+
+If it is an existing engine and a new version is added, you can modify the version of the corresponding engine in the linkis_configuration_dml.sql file and execute it.
+
+### 2.3 Engine refresh
+
+1. The engine supports real-time refresh. After the engine is placed in the corresponding directory, Linkis 1.0 can load it without shutting down the server: simply send a request to the linkis-engineconn-plugin-server service through its restful interface (the actual IP + port of the deployed service). The request interface is http://ip:port/api/rest_j/v1/rpc/receiveAndReply, the request method is POST, and the request body is {"method":"/enginePlugin/engineConn/refreshAll"} (see the example request at the end of this section).
+
+2. Restart refresh: the engine catalog can be forced to refresh by restarting
+
+```
+### cd to the sbin directory, restart linkis-engineconn-plugin-server
+cd /Linkis1.0.0/sbin
+## Execute linkis-daemon script
+sh linkis-daemon.sh restart linkis-engine-plugin-server
+```
+
+3. Check whether the engine refresh is successful: if you encounter problems during the refresh and need to confirm whether it succeeded, check whether the last_update_time of the linkis_engine_conn_plugin_bml_resources table in the database is the time when the refresh was triggered.
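+
+The following is a minimal sketch of the hot-refresh request described in step 1 (replace ip:port with the actual address of the linkis-engineconn-plugin-server instance):
+```
+curl -X POST -H "Content-Type: application/json" \
+  -d '{"method":"/enginePlugin/engineConn/refreshAll"}' \
+  http://ip:port/api/rest_j/v1/rpc/receiveAndReply
+```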
diff --git a/versioned_docs/version-1.4.0/deployment/integrated/_category_.json b/versioned_docs/version-1.4.0/deployment/integrated/_category_.json
new file mode 100644
index 00000000000..1b016691ad9
--- /dev/null
+++ b/versioned_docs/version-1.4.0/deployment/integrated/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Integrated",
+ "position": 9
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/deployment/integrated/install-scriptis.md b/versioned_docs/version-1.4.0/deployment/integrated/install-scriptis.md
new file mode 100644
index 00000000000..80f92b3be3a
--- /dev/null
+++ b/versioned_docs/version-1.4.0/deployment/integrated/install-scriptis.md
@@ -0,0 +1,193 @@
+---
+title: Installation Scriptis Tool
+sidebar_position: 4.1
+---
+
+## 1 Introduction
+
+> Since Apache Linkis >= 1.1.1 and DSS >= 1.1.0, scriptis can be deployed separately and used together with Linkis. With the interactive analysis features of scriptis, you can write SQL, Pyspark, HiveQL and other scripts online on web pages and submit them to Linkis for execution, with support for UDFs, functions, resource management, custom variables and other features. This article introduces how to deploy the web component scriptis separately and use Apache Linkis through the scriptis web page.
+
+
+Prerequisite: the Linkis services (backend and management console) have been installed successfully and can be used normally. The deployment process of Linkis can be found in [Quick Deployment of Apache Linkis](../deploy-quick.md)
+
+Example description:
+
+- The address of the linkis-gateway service is 10.10.10.10 and the port is 9001
+- Linkis console nginx is deployed on 10.10.10.10 port 8080
+
+## 2. Environment preparation
+
+> Requires installation on first use
+
+### 2.1 Install node.js
+Download Node.js and install it. Download address: http://nodejs.cn/download/ (node v16 is recommended). This step only needs to be executed the first time.
+### 2.2 Install lerna
+```shell script
+# Wait for the installation to complete; installing lerna only needs to be done the first time
+npm install lerna -g
+````
+
+## 3 Compile and deploy
+### 3.1 Get scriptis code
+> Scriptis is a pure front-end project that is integrated as a component into the DSS web project, so we only need to compile the DSS web project with the scriptis module alone.
+
+```shell script
+# Download DSS >= 1.1.0 via git to compile the scriptis component
+git clone -b branch-1.1.0 https://github.com/WeBankFinTech/DataSphereStudio
+# Or directly download the zip package and unzip it
+https://github.com/WeBankFinTech/DataSphereStudio/archive/refs/heads/branch-1.1.0.zip
+
+# enter the web directory
+cd DataSphereStudio/web
+
+#This step is only required for the first use
+lerna init
+
+# Install dependencies. Note: this uses lerna bootstrap rather than npm install, so lerna must be installed first. This step only needs to be executed the first time
+lerna bootstrap
+````
+
+### 3.2 Running the project locally (optional)
+
+> If you don't need to run the debug view locally, you can skip this step
+
+#### 3.2.1 Configure linkis-gateway service address configuration
+
+If you start the service locally, you need to configure the backend linkis-gateway service address in the code, in the `.env` file in the `web/packages/dss/` directory.
+No configuration is required when packaging for deployment.
+```shell script
+// Backend linkis-gateway service address
+VUE_APP_HOST=http://10.10.10.10:9001
+VUE_APP_MN_CONFIG_PREFIX=http://10.10.10.10:9001/api/rest_j/v1
+````
+#### 3.2.2 Running the scriptis module
+
+```shell script
+cd DataSphereStudio/web
+# run scriptis component
+npm run serve --module=scriptis --micro_module=scriptis
+````
+
+Open a browser and access the application via the link `http://localhost:8080` (the default local port is 8080). Because the page requests the remote linkis-gateway service interface, cross-domain problems will occur; to resolve them in the Chrome browser, refer to [Solving Chrome Cross-Domain Problems](https://www.jianshu.com/p/56b1e01e6b6a)
+
+
+## 4 Packaging & deploying scriptis
+
+### 4.1 Packaging
+```shell script
+#Specify scriptis module
+cd DataSphereStudio/web
+
+#After the command is executed successfully, a folder named `dist` will appear in the web directory, which is the component resource code of the packaged scriptis. We need to deploy the front-end resource to the nginx server where linkis-web is located
+npm run build --module=scriptis --micro_module=scriptis
+````
+
+### 4.2 Deployment
+
+Upload the static resources built in the packaging step (4.1) to the server where the Linkis console is located and store them in `/data/Install/scriptis-web/dist/`.
+Then add the scriptis static resource access rule to the nginx configuration of the Linkis console. The nginx configuration deployed by the Linkis console is usually located at `/etc/nginx/conf.d/linkis.conf`
+
+```shell script
+ location /scriptis {
+ alias /data/Install/scriptis-web/dist/ ;
+ index index.html ;
+}
+````
+
+Edit the configuration with `sudo vim /etc/nginx/conf.d/linkis.conf`; a complete example looks like this:
+
+```shell script
+server {
+ listen 8080;# access port
+ server_name localhost;
+ #charset koi8-r;
+ #access_log /var/log/nginx/host.access.log main;
+
+ location / {
+ root /appcom/Install/linkis-web/dist/; # static file directory
+ index index.html;
+ }
+
+ location /scriptis { #scriptis resources are prefixed with scriptis to distinguish them from the linkis console
+ alias /data/Install/scriptis-web/dist/ ; #nginx scriptis static file storage path (customizable)
+ index index.html ;
+ }
+
+ ......
+
+location /api {
+ proxy_pass http://10.10.10.10:9001; # address of the gateway
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header x_real_ipP $remote_addr;
+ proxy_set_header remote_addr $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_http_version 1.1;
+ proxy_connect_timeout 4s;
+ proxy_read_timeout 600s;
+ proxy_send_timeout 12s;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection upgrade;
+ }
+
+ #error_page 404 /404.html;
+ # redirect server error pages to the static page /50x.html
+ #
+ error_page 500 502 503 504 /50x.html;
+ location = /50x.html {
+ root /usr/share/nginx/html;
+ }
+ }
+
+````
+After modifying the configuration, reload the nginx configuration
+
+```shell script
+sudo nginx -s reload
+````
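+
+If the reload reports an error, you can first check the configuration syntax with:
+```shell script
+# Check nginx configuration syntax; reload only if this reports "syntax is ok"
+sudo nginx -t
+```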
+
+Note the difference between root and alias in an nginx location block:
+- root: the final path is the root path plus the location path.
+- alias: the location path is replaced by the alias path.
+- alias defines a directory alias, while root defines the top-level directory.
+
+## 5 scriptis usage steps
+
+### 5.1 Log in to Linkis console normally
+```shell script
+#http://10.10.10.10:8080/#/
+http://nginxIp:port/#/
+````
+Because access to scriptis requires login verification, you need to log in first to obtain and cache the cookie.
+
+### 5.2 Access the scriptis page after successful login
+
+```shell script
+#http://10.10.10.10:8080/scriptis/#/home
+http://nginxIp:port/scriptis/#/home
+````
+`nginxIp`: the IP of the nginx server where the Linkis console is deployed; `port`: the listening port in the nginx configuration; `scriptis`: the location prefix configured in nginx for the scriptis static files (can be customized).
+
+### 5.3 Using scriptis
+
+Take creating a new sql query task as an example.
+
+
+Step 1: Create a new script and select sql as the script type
+
+![Rendering](/Images-zh/deployment/scriptis/new_script.png)
+
+Step 2: Enter the statement to be queried
+
+![Rendering](/Images-zh/deployment/scriptis/test_statement.png)
+
+Step 3: Run the script
+
+![Rendering](/Images-zh/deployment/scriptis/running_results.png)
+
+
+Step 4: View the results
+
+![Rendering](/Images-zh/deployment/scriptis/design_sketch.png)
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/deployment/integrated/involve-knife4j.md b/versioned_docs/version-1.4.0/deployment/integrated/involve-knife4j.md
new file mode 100644
index 00000000000..dc2fd9316da
--- /dev/null
+++ b/versioned_docs/version-1.4.0/deployment/integrated/involve-knife4j.md
@@ -0,0 +1,64 @@
+---
+title: Involve Knife4j
+sidebar_position: 5.2
+---
+
+## 1. Introduction to Knife4j
+knife4j is an enhanced solution for generating API documentation for Java MVC frameworks integrated with Swagger. It was formerly known as swagger-bootstrap-ui and was named knife4j in the hope that it would be as small, lightweight, and powerful as a dagger. Its underlying layer is an encapsulation of Springfox and it is used in the same way as Springfox, but with an optimized interface document UI.
+
+**Core functionality:**
+
+- Document Description: According to the specification of Swagger, the description of the interface document is listed in detail, including the interface address, type, request example, request parameter, response example, response parameter, response code and other information, and the use of the interface is clear at a glance.
+- Online debugging: Provides the powerful function of online interface joint debugging, automatically parses the current interface parameters, and includes form verification, and the call parameters can return the interface response content, headers, response time, response status codes and other information to help developers debug online.
+## 2. Linkis integrates knife4j
+### 2.1 Start knife4j in test mode
+Modify the application-linkis.yml file and set knife4j.production=false
+```shell
+knife4j:
+ enable: true
+ production: false
+```
+Modify the linkis.properties file to open test mode
+```shell
+wds.linkis.test.mode=true
+wds.linkis.test.user=hadoop
+```
+After restarting all services, you can access the knife4j page via http://ip:port/api/rest_j/v1/doc.html
+```shell
+http://ip:port/api/rest_j/v1/doc.html
+```
+### 2.2 Start knife4j in normal mode
+Modify the application-linkis.yml file and set knife4j.production=false
+```shell
+knife4j:
+ enable: true
+ production: false
+```
+Modify the linkis.properties file to add wds.linkis.server.user.restful.uri.pass.auth
+```shell
+wds.linkis.server.user.restful.uri.pass.auth=/api/rest_j/v1/doc.html,/api/rest_j/v1/swagger-resources,/api/rest_j/v1/webjars,/api/rest_j/v1/v2/api-docs
+```
+After restarting all services, you can access the knife4j page via http://ip:port/api/rest_j/v1/doc.html
+```shell
+http://ip:port/api/rest_j/v1/doc.html
+```
+Since identity authentication is required when knife4j debugs each interface, the following cookie information needs to be manually added to the browser.
+```shell
+#User login ticket-id
+bdp-user-ticket-id=
+#Workspace ID
+workspaceId=
+#Internal request switch
+dataworkcloud_inner_request=true
+```
+Take the Chrome browser as an example
+![](/Images/deployment/knife4j/Knife4j_addcookie.png)
+## 3.Go to the Knife4j page
+Access knife4j page via http://ip:port/api/rest_j/v1/doc.html
+![](/Images/deployment/knife4j/Knife4j_home.png)
+Click the interface name to display detailed interface documentation
+![](/Images/deployment/knife4j/Knife4j_interface.png)
+Click "Debug" and enter parameters to debug the interface
+![](/Images/deployment/knife4j/Knife4j_debug.png)
+
+For detailed usage guidelines, please visit the knife4j official website to view:[https://doc.xiaominfo.com/knife4j/](https://doc.xiaominfo.com/knife4j/)
diff --git a/versioned_docs/version-1.4.0/deployment/integrated/involve-prometheus.md b/versioned_docs/version-1.4.0/deployment/integrated/involve-prometheus.md
new file mode 100644
index 00000000000..9bb5e06bff9
--- /dev/null
+++ b/versioned_docs/version-1.4.0/deployment/integrated/involve-prometheus.md
@@ -0,0 +1,325 @@
+---
+title: Involve Prometheus
+sidebar_position: 5.1
+---
+This article describes how to enable Prometheus to monitor all running Linkis services.
+
+## 1. Introduction to Prometheus
+
+### 1.1 What is Prometheus
+
+Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when specified conditions are observed.
+
+In the context of microservices, it provides a service discovery feature, enabling Prometheus to find targets dynamically from a service registry such as Eureka or Consul, and to pull metrics from their API endpoints over the HTTP protocol.
+
+### 1.2 Prometheus Architecture
+
+This diagram illustrates the architecture of Prometheus and some of its ecosystem components:
+
+![](https://prometheus.io/assets/architecture.png)
+
+Prometheus scrapes metrics from instrumented jobs, either directly or via an intermediary push gateway for short-lived jobs. It stores all scraped samples locally and runs rules over this data to either aggregate and record new time series from existing data or generate alerts. Grafana or other API consumers can be used to visualize the collected data.
+
+![](/Images/deployment/monitoring/prometheus_architecture.jpg)
+
+In the context of Linkis, we will use the Eureka service discovery (SD) mechanism in Prometheus to retrieve scrape targets via the Eureka REST API. Prometheus will periodically check the REST endpoint and create a target for every app instance.
+
+## 2. How to Enable Prometheus
+
+### 2.1 Enable Prometheus when installing Linkis
+
+Modify the configuration item `PROMETHEUS_ENABLE` in linkis-env.sh of Linkis.
+
+```bash
+export PROMETHEUS_ENABLE=true
+````
+After running the `install.sh`, it's expected to see the configuration related to `prometheus` is appended inside the following files:
+
+```yaml
+## application-linkis.yml ##
+
+eureka:
+ instance:
+ metadata-map:
+ prometheus.path: ${prometheus.path:${prometheus.endpoint}}
+...
+management:
+ endpoints:
+ web:
+ exposure:
+ include: refresh,info,health,metrics,prometheus
+````
+
+```yaml
+## application-eureka.yml ##
+
+eureka:
+ instance:
+ metadata-map:
+ prometheus.path: ${prometheus.path:/actuator/prometheus}
+...
+management:
+ endpoints:
+ web:
+ exposure:
+ include: refresh,info,health,metrics,prometheus
+````
+
+```yaml
+## linkis.properties ##
+...
+wds.linkis.prometheus.enable=true
+wds.linkis.server.user.restful.uri.pass.auth=/api/rest_j/v1/actuator/prometheus,
+...
+````
+Then, inside each computation engine (such as spark, flink or hive), the same configuration needs to be added **manually**.
+```yaml
+## linkis-engineconn.properties ##
+...
+wds.linkis.prometheus.enable=true
+wds.linkis.server.user.restful.uri.pass.auth=/api/rest_j/v1/actuator/prometheus,
+...
+````
+### 2.2 Enable Prometheus after installation
+Modify`${LINKIS_HOME}/conf/application-linkis.yml`, add `prometheus` as exposed endpoints.
+```yaml
+## application-linkis.yml ##
+management:
+ endpoints:
+ web:
+ exposure:
+ #Add prometheus
+ include: refresh,info,health,metrics,prometheus
+```
+Modify`${LINKIS_HOME}/conf/application-eureka.yml`, add `prometheus` as exposed endpoints.
+```yaml
+## application-eureka.yml ##
+management:
+ endpoints:
+ web:
+ exposure:
+ #Add prometheus
+ include: refresh,info,health,metrics,prometheus
+````
+Modify`${LINKIS_HOME}/conf/linkis.properties`, remove the comment `#` before `prometheus.enable`
+```yaml
+## linkis.properties ##
+...
+wds.linkis.prometheus.enable=true
+...
+```
+
+### 2.3 Start Linkis
+
+```bash
+$ bash linkis-start-all.sh
+````
+
+After starting the services, you should be able to access the Prometheus endpoint of each Linkis microservice, for example, http://linkishost:9103/api/rest_j/v1/actuator/prometheus.
+
+:::caution Note
+The prometheus endpoint of gateway/eureka does not include the prefix `api/rest_j/v1`; the complete endpoint is http://linkishost:9001/actuator/prometheus
+:::
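+
+As a quick check (the host names and ports below are the placeholders used in the examples above), you can curl the metrics endpoints directly:
+```bash
+# A Linkis microservice exposes metrics under the api/rest_j/v1 prefix
+curl http://linkishost:9103/api/rest_j/v1/actuator/prometheus | head -n 20
+# gateway/eureka expose metrics without that prefix
+curl http://linkishost:9001/actuator/prometheus | head -n 20
+```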
+
+## 3. Demo for Deploying the Prometheus, Alertmanager and Grafana
+Usually the monitoring setup for a cloud-native application is deployed on Kubernetes with service discovery and high availability (e.g. using a Kubernetes operator like Prometheus Operator). To quickly prototype dashboards and experiment with different metric type options (e.g. histogram vs gauge) you may need a similar setup locally. This section explains how to set up a Prometheus/Alertmanager and Grafana monitoring stack locally with Docker Compose.
+
+First, let's define the general components of the stack as follows:
+
+- An Alertmanager container that exposes its UI at 9093 and reads its configuration from alertmanager.yml
+- A Prometheus container that exposes its UI at 9090 and reads its configuration from prometheus.yml and its list of alert rules from alertrule.yml
+- A Grafana container that exposes its UI at 3000, with the list of metric sources defined in grafana_datasources.yml and configuration in grafana_config.ini
+
+The following docker-compose.yml file summarizes the configuration of all those components:
+
+````yaml
+## docker-compose.yml ##
+version: "3"
+networks:
+ default:
+ external: true
+ name: my-network
+services:
+ prometheus:
+ image: prom/prometheus:latest
+ container_name: prometheus
+ volumes:
+ - ./config/prometheus.yml:/etc/prometheus/prometheus.yml
+ - ./config/alertrule.yml:/etc/prometheus/alertrule.yml
+ - ./prometheus/prometheus_data:/prometheus
+ command:
+ - '--config.file=/etc/prometheus/prometheus.yml'
+ ports:
+ - "9090:9090"
+
+ alertmanager:
+ image: prom/alertmanager:latest
+ container_name: alertmanager
+ volumes:
+ - ./config/alertmanager.yml:/etc/alertmanager/alertmanager.yml
+ ports:
+ - "9093:9093"
+
+ grafana:
+ image: grafana/grafana:latest
+ container_name: grafana
+ environment:
+ - GF_SECURITY_ADMIN_PASSWORD=123456
+ - GF_USERS_ALLOW_SIGN_UP=false
+ volumes:
+ - ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
+ - ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
+ - ./grafana/grafana_data:/var/lib/grafana
+ ports:
+ - "3000:3000"
+````
+Second, to define some alerts based on metrics in Prometheus, you can group them into an alertrule.yml file, so that you can validate those alerts are properly triggered in your local setup before configuring them in the production instance.
+As an example, the following configuration covers the usual metrics used to monitor Linkis services:
+- a. Down instance
+- b. High Cpu for each JVM instance (>80%)
+- c. High Heap memory for each JVM instance (>80%)
+- d. High NonHeap memory for each JVM instance (>80%)
+- e. High waiting thread count for each JVM instance (>=100)
+
+```yaml
+## alertrule.yml ##
+groups:
+ - name: LinkisAlert
+ rules:
+ - alert: LinkisNodeDown
+ expr: last_over_time(up{job="linkis", application=~"LINKISI.*", application!="LINKIS-CG-ENGINECONN"}[1m])== 0
+ for: 15s
+ labels:
+ severity: critical
+ service: Linkis
+ instance: "{{ $labels.instance }}"
+ annotations:
+ summary: "instance: {{ $labels.instance }} down"
+ description: "Linkis instance(s) is/are down in last 1m"
+ value: "{{ $value }}"
+
+ - alert: LinkisNodeCpuHigh
+ expr: system_cpu_usage{job="linkis", application=~"LINKIS.*"} >= 0.8
+ for: 1m
+ labels:
+ severity: warning
+ service: Linkis
+ instance: "{{ $labels.instance }}"
+ annotations:
+ summary: "instance: {{ $labels.instance }} cpu overload"
+ description: "CPU usage is over 80% for over 1min"
+ value: "{{ $value }}"
+
+ - alert: LinkisNodeHeapMemoryHigh
+ expr: sum(jvm_memory_used_bytes{job="linkis", application=~"LINKIS.*", area="heap"}) by(instance) *100/sum(jvm_memory_max_bytes{job="linkis", application=~"LINKIS.*", area="heap"}) by(instance) >= 50
+ for: 1m
+ labels:
+ severity: warning
+ service: Linkis
+ instance: "{{ $labels.instance }}"
+ annotations:
+ summary: "instance: {{ $labels.instance }} memory(heap) overload"
+ description: "Memory usage(heap) is over 80% for over 1min"
+ value: "{{ $value }}"
+
+ - alert: LinkisNodeNonHeapMemoryHigh
+ expr: sum(jvm_memory_used_bytes{job="linkis", application=~"LINKIS.*", area="nonheap"}) by(instance) *100/sum(jvm_memory_max_bytes{job="linkis", application=~"LINKIS.*", area="nonheap"}) by(instance) >= 60
+ for: 1m
+ labels:
+ severity: warning
+ service: Linkis
+ instance: "{{ $labels.instance }}"
+ annotations:
+ summary: "instance: {{ $labels.instance }} memory(nonheap) overload"
+ description: "Memory usage(nonheap) is over 80% for over 1min"
+ value: "{{ $value }}"
+
+ - alert: LinkisWaitingThreadHigh
+ expr: jvm_threads_states_threads{job="linkis", application=~"LINKIS.*", state="waiting"} >= 100
+ for: 1m
+ labels:
+ severity: warning
+ service: Linkis
+ instance: "{{ $labels.instance }}"
+ annotations:
+ summary: "instance: {{ $labels.instance }} waiting threads is high"
+ description: "waiting threads is over 100 for over 1min"
+ value: "{{ $value }}"
+```
+**Note**: Once a service instance is shut down, it is no longer a target of the Prometheus Eureka SD, so the `up` metric stops returning data after a short time. Therefore we check whether `up=0` occurred within the last minute to determine whether the service is alive.
+
+Third, and most importantly, define the Prometheus configuration in the prometheus.yml file. This defines:
+
+- the global settings, such as the scraping interval and the rule evaluation interval
+- the connection information to reach Alertmanager and the rules to be evaluated
+- the connection information for the application metrics endpoints.
+This is an example configuration file for Linkis:
+````yaml
+## prometheus.yml ##
+# my global config
+global:
+ scrape_interval: 30s # By default, scrape targets every 15 seconds.
+ evaluation_interval: 30s # By default, scrape targets every 15 seconds.
+alerting:
+ alertmanagers:
+ - static_configs:
+ - targets: ['alertmanager:9093']
+# Load and evaluate rules in this file every 'evaluation_interval' seconds.
+rule_files:
+ - "alertrule.yml"
+
+# A scrape configuration containing exactly one endpoint to scrape:
+# Here it's Prometheus itself.
+scrape_configs:
+ - job_name: 'prometheus'
+ static_configs:
+ - targets: ['localhost:9090']
+ - job_name: linkis
+ eureka_sd_configs:
+ # the endpoint of your eureka instance
+ - server: {{linkis-host}}:20303/eureka
+ relabel_configs:
+ - source_labels: [__meta_eureka_app_name]
+ target_label: application
+ - source_labels: [__meta_eureka_app_instance_metadata_prometheus_path]
+ action: replace
+ target_label: __metrics_path__
+ regex: (.+)
+````
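+
+Before starting the stack, you can optionally validate this configuration and the referenced rule file with `promtool`, which ships with Prometheus:
+```bash
+# Check prometheus.yml and the rule files it references
+promtool check config prometheus.yml
+```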
+Fourth, the following configuration defines how alerts will be sent to an external webhook.
+```yaml
+## alertmanager.yml ##
+global:
+ resolve_timeout: 5m
+
+route:
+ receiver: 'webhook'
+ group_by: ['alertname']
+
+ # How long to wait to buffer alerts of the same group before sending a notification initially.
+ group_wait: 1m
+ # How long to wait before sending an alert that has been added to a group for which there has already been a notification.
+ group_interval: 5m
+ # How long to wait before re-sending a given alert that has already been sent in a notification.
+ repeat_interval: 12h
+
+receivers:
+- name: 'webhook'
+ webhook_configs:
+ - send_resolved: true
+ url: {{your-webhook-url}}
+
+````
+
+Finally, after defining all the configuration files as well as the docker-compose file, we can start the monitoring stack with `docker-compose up`.
+
+## 4. Result display
+On the Prometheus page, you should see all the Linkis service instances, as shown below:
+![](/Images/deployment/monitoring/prometheus_screenshot.jpg)
+
+Once Grafana is accessible, you need to add Prometheus as a data source in Grafana and import the dashboard template with id 11378, which is commonly used for Spring Boot services (2.1+).
+Then you can view a live dashboard of Linkis there.
+
+![](/Images/deployment/monitoring/grafana_screenshot.jpg)
+
+You can also integrate the Prometheus Alertmanager with your own webhook, where you can check whether the alert messages are fired.
diff --git a/versioned_docs/version-1.4.0/deployment/integrated/involve-skywalking.md b/versioned_docs/version-1.4.0/deployment/integrated/involve-skywalking.md
new file mode 100644
index 00000000000..d476ab6b3a9
--- /dev/null
+++ b/versioned_docs/version-1.4.0/deployment/integrated/involve-skywalking.md
@@ -0,0 +1,150 @@
+---
+title: Involve SkyWalking
+sidebar_position: 5.0
+---
+This article describes how to enable SkyWalking when starting the Linkis service to facilitate subsequent distributed trace and troubleshooting.
+
+## 1. Introduction to SkyWalking
+
+### 1.1 What is SkyWalking
+
+SkyWalking is an open source observability platform used to collect, analyze, aggregate and visualize data from services and cloud native infrastructures. SkyWalking provides an easy way to maintain a clear view of your distributed systems, even across Clouds. It is a modern APM, specially designed for cloud native, container based distributed systems.
+
+### 1.2 SkyWalking Architecture
+
+The following figure is the overall architecture of SkyWalking.
+
+![](/Images/deployment/skywalking/SkyWalking_Architecture.png)
+
+SkyWalking is logically split into four parts: Probes, Platform backend, Storage and UI.
+- **Probe**s collect data and reformat them for SkyWalking requirements (different probes support different sources).
+- **Platform backend** supports data aggregation, analysis and streaming process covers traces, metrics, and logs.
+- **Storage** houses SkyWalking data through an open/plugable interface. You can choose an existing implementation, such as ElasticSearch, H2, MySQL, TiDB, InfluxDB, or implement your own. Patches for new storage implementors welcome!
+- **UI** is a highly customizable web based interface allowing SkyWalking end users to visualize and manage SkyWalking data.
+
+Using SkyWalking in Linkis requires that the user already has the Backend service and the corresponding Storage. The Probe can be integrated when the Linkis service is started. There are three main ways of Probe integration:
+
+- **Language based native agent**. These agents run in target service user spaces, such as a part of user codes. For example, the SkyWalking Java agent uses the `-javaagent` command line argument to manipulate codes in runtime, where `manipulate` means to change and inject user’s codes. Another kind of agents uses certain hook or intercept mechanism provided by target libraries. As you can see, these agents are based on languages and libraries.
+- **Service Mesh probes**. Service Mesh probes collect data from sidecar, control plane in service mesh or proxy. In the old days, proxy is only used as an ingress of the whole cluster, but with the Service Mesh and sidecar, we can now perform observability functions.
+- **3rd-party instrument library**. SkyWalking accepts many widely used instrument libraries data formats. It analyzes the data, transfers it to SkyWalking’s formats of trace, metrics or both. This feature starts with accepting Zipkin span data. See [Receiver for Zipkin traces](https://skywalking.apache.org/docs/main/latest/en/setup/backend/zipkin-trace) for more information.
+
+We used **Language based native agent** when Linkis integrated SkyWalking, that is, the method of java agent. Below we will show you how to enable SkyWalking in Linkis service.
+
+
+## 2. Deploy the SkyWalking backend
+The SkyWalking backend is a prerequisite for enabling SkyWalking. The following is a brief demonstration of how to install it.
+
+First download SkyWalking APM from SkyWalking's [Downloads](https://skywalking.apache.org/downloads/) page.
+
+![](/Images/deployment/skywalking/SkyWalking_APM_Download.png)
+
+After downloading, unzip it directly, and we get the following directory structure.
+```bash
+$ ls
+bin config config-examples LICENSE licenses logs NOTICE oap-libs README.txt tools webapp
+````
+
+The backend uses the H2 in-memory database as the backend storage by default, and does not need to modify the configuration. Start as follows.
+
+Start Backend
+```bash
+$ bash bin/startup.sh
+````
+
+Start WebApp
+```bash
+$ bash bin/webappService.sh
+````
+
+The UI starts on port 8080 by default. You can also modify the listening port by modifying the webapp.yml file in the webapp directory.
+````yaml
+server:
+ port: 8080
+
+spring:
+ cloud:
+ gateway:
+ routes:
+ - id: oap-route
+ uri: lb://oap-service
+ predicates:
+ - Path=/graphql/**
+ discovery:
+ client:
+ simple:
+ instances:
+ oap-service:
+ - uri: http://127.0.0.1:12800
+ # - uri: http://:
+ # - uri: http://:
+
+ mvc:
+ throw-exception-if-no-handler-found: true
+
+ web:
+ resources:
+ add-mappings: true
+
+management:
+ server:
+ base-path: /manage
+````
+
+## 3. Linkis service start and enable SkyWalking
+
+It is assumed here that you are already familiar with deploying the Linkis services; if not, please refer to the deployment documentation first.
+
+To start SkyWalking in Linkis, you first need to download the Java agent of SkyWalking, we can download it from the [Downloads](https://skywalking.apache.org/downloads/) page.
+
+![](/Images/deployment/skywalking/SkyWalking_Agent_Download.png)
+
+After downloading, unzip it directly, the internal file structure is as follows:
+```bash
+tree skywalking-agent
+$ skywalking-agent
+├── LICENSE
+├── NOTICE
+├── activations
+│ ├── apm-toolkit-kafka-activation-8.8.0.jar
+│ ├── ...
+├── bootstrap-plugins
+│ ├── apm-jdk-http-plugin-8.8.0.jar
+│ └── apm-jdk-threading-plugin-8.8.0.jar
+├── config
+│ └── agent.config
+├── licenses
+│ └── LICENSE-asm.txt
+├── logs
+├── optional-plugins
+│ ├── apm-customize-enhance-plugin-8.8.0.jar
+│ ├── ...
+├── optional-reporter-plugins
+│ ├── kafka-reporter-plugin-8.8.0.jar
+│ ├── ...
+├── plugins
+│ ├── apm-activemq-5.x-plugin-8.8.0.jar
+│ ├── ...
+└── skywalking-agent.jar
+
+````
+
+Modify the configuration item `SKYWALKING_AGENT_PATH` in linkis-env.sh of Linkis. Set it to the path to `skywalking-agent.jar`.
+```bash
+SKYWALKING_AGENT_PATH=/path/to/skywalking-agent.jar
+````
+
+Then start Linkis.
+
+```bash
+$ bash linkis-start-all.sh
+````
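+
+Optionally, you can verify that the agent was actually attached to the Linkis JVM processes, for example:
+```bash
+# The skywalking-agent.jar should appear in the -javaagent argument of the Linkis java processes
+ps -ef | grep "skywalking-agent.jar" | grep -v grep
+```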
+
+## 4. Result display
+
+The SkyWalking UI listens on port 8080 by default. After enabling SkyWalking for Linkis, open the SkyWalking UI; if you can see pages like the following, the integration is successful.
+
+![](/Images/deployment/skywalking/SkyWalking_UI_Dashboard.png)
+
+![](/Images/deployment/skywalking/SkyWalking_UI_Dashboard2.png)
+
+![](/Images/deployment/skywalking/SkyWalking_Topology.png)
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/deployment/integrated/sso-with-redis.md b/versioned_docs/version-1.4.0/deployment/integrated/sso-with-redis.md
new file mode 100644
index 00000000000..cca146c9bbc
--- /dev/null
+++ b/versioned_docs/version-1.4.0/deployment/integrated/sso-with-redis.md
@@ -0,0 +1,46 @@
+---
+title: Session Supports Redis Shared Storage
+sidebar_position: 8
+---
+## 1.Background
+Because the original login session does not support distributed storage, nginx has to forward all requests from the same user to the same gateway instance for them to be handled correctly.
+The common solution is to configure ip hash load balancing on the ingress nginx.
+However, with ip hash, if servers are added or removed, the hash values of all client IPs have to be recalculated, which causes session loss.
+It also easily leads to data skew because of uneven node distribution.
+To address the problems of the ip hash approach, shared storage is implemented for the login session.
+
+## 2.Implementation plan
+Because session information is mainly identified by ticketId and all external entrances go through the gateway, only the gateway module needs to be modified.
+For the underlying shared storage, the mainstream in-memory database redis is used, and whether to enable redis session storage is controlled by the configuration file.
+The key code change is `userTicketIdToLastAccessTime` in `org.apache.linkis.server.security.SSOUtils`.
+
+Request process:
+
+`User request -> nginx -> linkis-gateway -> linkis backend service`
+
+
+## 3.How to use
+
+An available redis environment is required; both stand-alone redis and redis sentinel modes are supported.
+
+After installing and deploying Linkis, modify the configuration file `${LINKIS_HOME}/conf/linkis.properties`
+```shell script
+# Enable the redis cache configuration
+linkis.session.redis.cache.enabled=true
+
+
+# stand-alone mode
+linkis.session.redis.host=127.0.0.1
+linkis.session.redis.port=6379
+linkis.session.redis.password=test123
+
+# sentinel mode
+linkis.session.redis.sentinel.master=sentinel-master-name
+linkis.session.redis.sentinel.nodes=127.0.1.1:6381,127.0.2.1:6381,127.0.3.1:6381
+linkis.session.redis.password=test123
+
+```
+
+Then just start the gateway normally. With redis enabled, multiple gateway instances can simply use the default nginx load balancing mode when configured on the nginx side, as in the sketch below.
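+
+A minimal sketch of such an nginx upstream, using the default round-robin load balancing across two gateway instances (host names and ports are placeholders):
+```shell script
+upstream linkis_gateway {
+    server gateway-host-1:9001;
+    server gateway-host-2:9001;
+}
+
+server {
+    location /api {
+        proxy_pass http://linkis_gateway;
+    }
+}
+```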
+
+
diff --git a/versioned_docs/version-1.4.0/deployment/version-adaptation.md b/versioned_docs/version-1.4.0/deployment/version-adaptation.md
new file mode 100644
index 00000000000..0ab2dcf6e9a
--- /dev/null
+++ b/versioned_docs/version-1.4.0/deployment/version-adaptation.md
@@ -0,0 +1,513 @@
+---
+title: Version Adaptation
+sidebar_position: 8
+---
+
+# Version Adaptation
+
+## 1. Function description
+
+This document explains where manual modification is required when adapting Linkis to Apache, CDH, HDP and other versions.
+
+## 2. Compilation instruction
+
+Enter the root directory of the project and execute the following commands in sequence
+
+```text
+mvn -N install
+mvn clean install -Dmaven.test.skip=true
+```
+
+## 3. SQL Script Switch
+
+linkis-dist -> package -> linkis-dml.sql(db folder)
+
+Switch the corresponding engine version to the version you need. If the version you use is consistent with the official version, this step can be skipped.
+
+for example:
+
+1. If Spark is 3.0.0, use SET @SPARK_LABEL="spark-3.0.0";
+2. If Hive is 2.1.1-cdh6.3.2, first adjust the version to 2.1.1_cdh6.3.2 (during compilation), then use SET @HIVE_LABEL="hive-2.1.1_cdh6.3.2";
+
+```sql
+-- variable:
+SET @SPARK_LABEL="spark-2.4.3";
+SET @HIVE_LABEL="hive-2.3.3";
+SET @PYTHON_LABEL="python-python2";
+SET @PIPELINE_LABEL="pipeline-1";
+SET @JDBC_LABEL="jdbc-4";
+SET @PRESTO_LABEL="presto-0.234";
+SET @IO_FILE_LABEL="io_file-1.0";
+SET @OPENLOOKENG_LABEL="openlookeng-1.5.0";
+```
+
+## 4. Linkis official version
+
+| engine | version |
+| ------ | ------- |
+| hadoop | 2.7.2 |
+| hive | 2.3.3 |
+| spark | 2.4.3 |
+| flink | 1.12.2 |
+
+## 5. Apache version adaptation
+
+### 5.1 Apache3.1.x version
+
+| engine | version |
+| ------ | ------- |
+| hadoop | 3.1.1 |
+| hive | 3.1.2 |
+| spark | 3.0.1 |
+| flink | 1.13.2 |
+
+#### 5.1.1 The pom file of linkis
+
+For Linkis version < 1.3.2
+```xml
+<hadoop.version>3.1.1</hadoop.version>
+<scala.version>2.12.10</scala.version>
+<scala.binary.version>2.12</scala.binary.version>
+
+<dependency>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-hdfs-client</artifactId>
+    <version>${hadoop.version}</version>
+</dependency>
+```
+For Linkis version >= 1.3.2, we only need to set `scala.version` and `scala.binary.version` if necessary
+```xml
+<scala.version>2.12.10</scala.version>
+<scala.binary.version>2.12</scala.binary.version>
+```
+Because we can compile directly with the hadoop-3.3 or hadoop-2.7 profile.
+The `hadoop-3.3` profile can be used for any hadoop 3.x (the default hadoop 3.x version is 3.3.1),
+the `hadoop-2.7` profile can be used for any hadoop 2.x (the default hadoop 2.x version is 2.7.2),
+and other hadoop versions can be specified with -Dhadoop.version=xxx
+```text
+mvn -N install
+mvn clean install -Phadoop-3.3 -Dmaven.test.skip=true
+mvn clean install -Phadoop-3.3 -Dhadoop.version=3.1.1 -Dmaven.test.skip=true
+```
+#### 5.1.2 The pom file of linkis-hadoop-common
+
+For Linkis version < 1.3.2
+```xml
+<dependency>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-hdfs-client</artifactId>
+    <version>${hadoop.version}</version>
+</dependency>
+```
+
+For Linkis version >= 1.3.2,`linkis-hadoop-common` module no need to change
+
+#### 5.1.3 The pom file of linkis-engineplugin-hive
+
+```xml
+<hive.version>3.1.2</hive.version>
+```
+
+#### 5.1.4 The pom file of linkis-engineplugin-spark
+
+For Linkis version < 1.3.2
+```xml
+<spark.version>3.0.1</spark.version>
+```
+
+For Linkis version >= 1.3.2
+```text
+We can compile directly with the spark-3.2 or spark-2.4-hadoop-3.3 profile; if we need to use it with hadoop3, the hadoop-3.3 profile is also needed.
+The default spark 3.x version is spark 3.2.1. If we compile with spark-3.2, the scala version is 2.12.15 by default,
+so we do not need to set the scala version in the Linkis project pom file (mentioned in 5.1.1).
+If spark 2.x is used with hadoop3, for compatibility reasons the `spark-2.4-hadoop-3.3` profile needs to be activated.
+```
+```text
+mvn -N install
+mvn clean install -Pspark-3.2 -Phadoop-3.3 -Dmaven.test.skip=true
+mvn clean install -Pspark-2.4-hadoop-3.3 -Phadoop-3.3 -Dmaven.test.skip=true
+```
+
+#### 5.1.5 The pom file of flink-engineconn-flink
+
+```xml
+<flink.version>1.13.2</flink.version>
+```
+
+Since some classes were adjusted between Flink 1.12.2 and 1.13.2, Flink needs to be recompiled with adjustments. Select Scala version 2.12 when compiling Flink.
+
+:::caution temporary plan
+
+Note that the following operations are all performed in the flink project.
+
+Because some classes were adjusted between flink 1.12.2 and 1.13.2, flink needs to be compiled with adjustments, and the scala version selected for compiling flink is 2.12 (use the scala version that matches your actual environment).
+
+Flink compilation reference command: mvn clean install -DskipTests -P scala-2.12 -Dfast -T 4 -Dmaven.compile.fork=true
+
+:::
+
+```text
+-- Note that the following classes are copied from version 1.12.2 to version 1.13.2
+org.apache.flink.table.client.config.entries.DeploymentEntry
+org.apache.flink.table.client.config.entries.ExecutionEntry
+org.apache.flink.table.client.gateway.local.CollectBatchTableSink
+org.apache.flink.table.client.gateway.local.CollectStreamTableSink
+```
+
+#### 5.1.6 linkis-label-common adjustment
+
+org.apache.linkis.manager.label.conf.LabelCommonConfig file adjustment
+
+```java
+ public static final CommonVars SPARK_ENGINE_VERSION =
+ CommonVars.apply("wds.linkis.spark.engine.version", "3.0.1");
+
+ public static final CommonVars HIVE_ENGINE_VERSION =
+ CommonVars.apply("wds.linkis.hive.engine.version", "3.1.2");
+```
+
+#### 5.1.7 linkis-computation-governance-common adjustment
+
+org.apache.linkis.governance.common.conf.GovernanceCommonConf file adjustment
+
+```java
+ val SPARK_ENGINE_VERSION = CommonVars("wds.linkis.spark.engine.version", "3.0.1")
+
+ val HIVE_ENGINE_VERSION = CommonVars("wds.linkis.hive.engine.version", "3.1.2")
+```
+
+## 6. HDP version adaptation
+
+### 6.1 HDP3.0.1 version
+
+| engine | version |
+| -------------- | ------- |
+| hadoop | 3.1.1 |
+| hive | 3.1.0 |
+| spark | 2.3.2 |
+| json4s.version | 3.2.11 |
+
+#### 6.1.1 The pom file of linkis
+
+For Linkis version < 1.3.2
+```xml
+<hadoop.version>3.1.1</hadoop.version>
+<json4s.version>3.2.11</json4s.version>
+
+<dependency>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-hdfs-client</artifactId>
+    <version>${hadoop.version}</version>
+</dependency>
+```
+
+For Linkis version >= 1.3.2, we only need to set `json4s.version` if necessary
+```xml
+<json4s.version>3.2.11</json4s.version>
+```
+Because we can compile directly with the hadoop-3.3 or hadoop-2.7 profile.
+The `hadoop-3.3` profile can be used for any hadoop 3.x (the default hadoop 3.x version is 3.3.1),
+the `hadoop-2.7` profile can be used for any hadoop 2.x (the default hadoop 2.x version is 2.7.2),
+and other hadoop versions can be specified with -Dhadoop.version=xxx
+```text
+mvn -N install
+mvn clean install -Phadoop-3.3 -Dmaven.test.skip=true
+mvn clean install -Phadoop-3.3 -Dhadoop.version=3.1.1 -Dmaven.test.skip=true
+```
+
+#### 6.1.2 The pom file of linkis-engineplugin-hive
+
+```xml
+<hive.version>3.1.0</hive.version>
+```
+
+#### 6.1.3 The pom file of linkis-engineplugin-spark
+
+For Linkis version < 1.3.2
+```xml
+<spark.version>2.3.2</spark.version>
+```
+
+For Linkis version >= 1.3.2
+```text
+We can compile directly with the spark-3.2 profile; if we need to use it with hadoop3, the hadoop-3.3 profile is also needed.
+The default spark 3.x version is spark 3.2.1. If we compile with spark-3.2, the scala version is 2.12.15 by default,
+so we do not need to set the scala version in the Linkis project pom file (mentioned in 5.1.1).
+If spark 2.x is used with hadoop3, for compatibility reasons the `spark-2.4-hadoop-3.3` profile needs to be activated.
+```
+```text
+mvn -N install
+mvn clean install -Pspark-3.2 -Phadoop-3.3 -Dmaven.test.skip=true
+mvn clean install -Pspark-2.4-hadoop-3.3 -Phadoop-3.3 -Dmaven.test.skip=true
+```
+
+#### 6.1.4 linkis-label-common adjustment
+
+org.apache.linkis.manager.label.conf.LabelCommonConfig file adjustment
+
+```java
+ public static final CommonVars SPARK_ENGINE_VERSION =
+ CommonVars.apply("wds.linkis.spark.engine.version", "2.3.2");
+
+ public static final CommonVars HIVE_ENGINE_VERSION =
+ CommonVars.apply("wds.linkis.hive.engine.version", "3.1.0");
+```
+
+#### 6.1.5 linkis-computation-governance-common adjustment
+
+org.apache.linkis.governance.common.conf.GovernanceCommonConf file adjustment
+
+```java
+ val SPARK_ENGINE_VERSION = CommonVars("wds.linkis.spark.engine.version", "2.3.2")
+
+ val HIVE_ENGINE_VERSION = CommonVars("wds.linkis.hive.engine.version", "3.1.0")
+```
+
+## 7 CDH Version adaptation
+
+### 7.1 maven Configure address
+
+#### 7.1.1 setting file
+
+```xml
+<mirrors>
+    <mirror>
+        <id>nexus-aliyun</id>
+        <mirrorOf>*,!cloudera</mirrorOf>
+        <name>Nexus aliyun</name>
+        <url>http://maven.aliyun.com/nexus/content/groups/public</url>
+    </mirror>
+    <mirror>
+        <id>aliyunmaven</id>
+        <mirrorOf>*,!cloudera</mirrorOf>
+        <name>Alibaba Cloud Public Warehouse</name>
+        <url>https://maven.aliyun.com/repository/public</url>
+    </mirror>
+    <mirror>
+        <id>aliyunmaven</id>
+        <mirrorOf>*,!cloudera</mirrorOf>
+        <name>spring-plugin</name>
+        <url>https://maven.aliyun.com/repository/spring-plugin</url>
+    </mirror>
+    <mirror>
+        <id>maven-default-http-blocker</id>
+        <mirrorOf>external:http:*</mirrorOf>
+        <name>Pseudo repository to mirror external repositories initially using HTTP.</name>
+        <url>http://0.0.0.0/</url>
+        <blocked>true</blocked>
+    </mirror>
+</mirrors>
+```
+
+#### 7.1.2 The pom file of linkis
+
+```xml
+<repositories>
+    <repository>
+        <id>cloudera</id>
+        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
+        <releases>
+            <enabled>true</enabled>
+        </releases>
+    </repository>
+    <repository>
+        <id>aliyun</id>
+        <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
+        <releases>
+            <enabled>true</enabled>
+        </releases>
+    </repository>
+</repositories>
+```
+
+### 7.2 CDH5.12.1 version
+
+| engine | version |
+| --------- | --------------- |
+| hadoop | 2.6.0-cdh5.12.1 |
+| zookeeper | 3.4.5-cdh5.12.1 |
+| hive | 1.1.0-cdh5.12.1 |
+| spark | 2.3.4 |
+| flink | 1.12.4 |
+| python | python3 |
+
+#### 7.2.1 The pom file of linkis
+
+```xml
+<hadoop.version>2.6.0-cdh5.12.1</hadoop.version>
+<zookeeper.version>3.4.5-cdh5.12.1</zookeeper.version>
+<scala.version>2.11.8</scala.version>
+```
+
+#### 7.2.2 The pom file of linkis-engineplugin-hive
+
+```xml
+<!-- update -->
+<hive.version>1.1.0-cdh5.12.1</hive.version>
+<!-- add -->
+<package.hive.version>1.1.0_cdh5.12.1</package.hive.version>
+```
+
+- update assembly under distribution.xml file
+
+```xml
+<outputDirectory>/dist/v${package.hive.version}/lib</outputDirectory>
+<outputDirectory>dist/v${package.hive.version}/conf</outputDirectory>
+<outputDirectory>plugin/${package.hive.version}</outputDirectory>
+```
+
+- update CustomerDelimitedJSONSerDe file
+
+ ```
+ /* The hive version is too low, so this block needs to be commented out
+ case INTERVAL_YEAR_MONTH:
+ {
+ wc = ((HiveIntervalYearMonthObjectInspector) oi).getPrimitiveWritableObject(o);
+ binaryData = Base64.encodeBase64(String.valueOf(wc).getBytes());
+ break;
+ }
+ case INTERVAL_DAY_TIME:
+ {
+ wc = ((HiveIntervalDayTimeObjectInspector) oi).getPrimitiveWritableObject(o);
+ binaryData = Base64.encodeBase64(String.valueOf(wc).getBytes());
+ break;
+ }
+ */
+ ```
+
+#### 7.2.3 The pom file of linkis-engineplugin-flink
+
+```xml
+<flink.version>1.12.4</flink.version>
+```
+
+#### 7.2.4 The pom file of linkis-engineplugin-spark
+
+```xml
+<spark.version>2.3.4</spark.version>
+```
+
+#### 7.2.5 The pom file of linkis-engineplugin-python
+
+```xml
+<python.version>python3</python.version>
+```
+
+#### 7.2.6 linkis-label-common adjustment
+
+org.apache.linkis.manager.label.conf.LabelCommonConfig file adjustment
+
+```java
+ public static final CommonVars SPARK_ENGINE_VERSION =
+ CommonVars.apply("wds.linkis.spark.engine.version", "2.3.4");
+
+ public static final CommonVars HIVE_ENGINE_VERSION =
+ CommonVars.apply("wds.linkis.hive.engine.version", "1.1.0");
+
+ CommonVars.apply("wds.linkis.python.engine.version", "python3")
+```
+
+#### 7.2.7 linkis-computation-governance-common adjustment
+
+org.apache.linkis.governance.common.conf.GovernanceCommonConf file adjustment
+
+```java
+ val SPARK_ENGINE_VERSION = CommonVars("wds.linkis.spark.engine.version", "2.3.4")
+
+ val HIVE_ENGINE_VERSION = CommonVars("wds.linkis.hive.engine.version", "1.1.0")
+
+ val PYTHON_ENGINE_VERSION = CommonVars("wds.linkis.python.engine.version", "python3")
+```
+
+### 7.3 CDH6.3.2
+
+| engine | version |
+| ------ | -------------- |
+| hadoop | 3.0.0-cdh6.3.2 |
+| hive | 2.1.1-cdh6.3.2 |
+| spark | 3.0.0 |
+
+#### 7.3.1 The pom file of linkis
+
+```xml
+<hadoop.version>3.0.0-cdh6.3.2</hadoop.version>
+<scala.version>2.12.10</scala.version>
+```
+
+#### 7.3.2 The pom file of linkis-hadoop-common
+
+```xml
+<dependency>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-hdfs-client</artifactId>
+</dependency>
+```
+
+#### 7.3.3 The pom file of linkis-engineplugin-hive
+
+```xml
+<!-- update -->
+<hive.version>2.1.1-cdh6.3.2</hive.version>
+<!-- add -->
+<package.hive.version>2.1.1_cdh6.3.2</package.hive.version>
+```
+
+update assembly under distribution.xml file
+
+```xml
+<outputDirectory>/dist/v${package.hive.version}/lib</outputDirectory>
+<outputDirectory>dist/v${package.hive.version}/conf</outputDirectory>
+<outputDirectory>plugin/${package.hive.version}</outputDirectory>
+```
+
+#### 7.3.4 The pom file of linkis-engineplugin-spark
+
+```xml
+<spark.version>3.0.0</spark.version>
+```
+
+#### 7.3.5 linkis-label-common adjustment
+
+org.apache.linkis.manager.label.conf.LabelCommonConfig file adjustment
+
+```java
+ public static final CommonVars SPARK_ENGINE_VERSION =
+ CommonVars.apply("wds.linkis.spark.engine.version", "3.0.0");
+
+ public static final CommonVars HIVE_ENGINE_VERSION =
+ CommonVars.apply("wds.linkis.hive.engine.version", "2.1.1_cdh6.3.2");
+```
+
+#### 7.3.6 linkis-computation-governance-common adjustment
+
+org.apache.linkis.governance.common.conf.GovernanceCommonConf file adjustment
+
+```java
+ val SPARK_ENGINE_VERSION = CommonVars("wds.linkis.spark.engine.version", "3.0.0")
+
+ val HIVE_ENGINE_VERSION = CommonVars("wds.linkis.hive.engine.version", "2.1.1_cdh6.3.2")
+```
+
+## 8 Compilation Skills
+
+- If a class or a method of a class is missing, find the corresponding package dependency and try to switch to a version that contains the required package or class
+- If the engine version contains `-`, replace it with `_`, add a `package.(engine name).version` property to specify the replaced version, and use `${package.(engine name).version}` in the corresponding engine distribution file instead of the original version
+- If a 403 error occurs when downloading the guava jars from the Alibaba Cloud mirror, you can switch to the Huawei, Tencent or other mirror repositories
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/_category_.json b/versioned_docs/version-1.4.0/development/_category_.json
new file mode 100644
index 00000000000..c65e667aab3
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Development",
+ "position": 10.0
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/build-console.md b/versioned_docs/version-1.4.0/development/build-console.md
new file mode 100644
index 00000000000..a6fcd2d6dc3
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/build-console.md
@@ -0,0 +1,80 @@
+---
+title: How to Build Console
+sidebar_position: 3.0
+---
+
+## Start the process
+
+### 1. Install Node.js
+Download Node.js to your computer and install it. Download link: [http://nodejs.cn/download/](http://nodejs.cn/download/) (It is recommended to use the latest stable version)
+**This step only needs to be performed the first time you use it.**
+
+### 2. The installation project
+Execute the following commands in the terminal command line:
+
+```
+git clone git@github.com:apache/linkis.git
+cd linkis/web
+npm install
+```
+
+Introduction to the instruction:
+1. Pull the project package from the remote git repository to the local computer
+2. Enter the web root directory of the project: cd linkis/web
+3. Dependencies required to install the project: npm install
+
+**This step only needs to be performed the first time you use it.**
+
+### 3. Configuration
+:::caution
+If it is a local runtime, this step can be skipped.
+:::
+You need to make some configuration in the code, such as the back-end interface address, etc., such as the .env.development file in the root directory:
+
+```
+// back-end interface address
+VUE_APP_MN_CONFIG_PREFIX=http://yourIp:yourPort/yourPath
+```
+
+For specific explanation of the configuration, please refer to the official vue-cli document: [Environment Variables and Modes](https://cli.vuejs.org/zh/guide/mode-and-env.html#%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E5%92%8C%E6%A8%A1%E5%BC%8F)
+
+### 4. Package the project
+You can package the project by executing the following commands on the terminal command line to generate compressed code:
+
+```
+npm run build
+```
+
+After the command is successfully executed, a "dist" folder and a "*-${getVersion()}-dist.zip" compressed file will appear in the project web directory. The directory dist/dist is the packaged code. You can put this folder directly onto your static server, or refer to the installation document and deploy it with the installation script.
+
+### 5. Run the project
+If you want to run the project on a local browser and change the code to view the effect, you need to execute the following commands in the terminal command line:
+
+```
+npm run serve
+```
+
+Access the application in a browser (Chrome is recommended) via the link: [http://localhost:8080/](http://localhost:8080/).
+When you run the project this way, the effect of your code changes is dynamically reflected in the browser.
+
+**Note: Because the front end and back end of the project are developed separately, when running in a local browser, the browser needs to allow cross-domain requests in order to access the back-end interface. For the specific settings, please refer to [solving the chrome cross-domain problem](https://www.jianshu.com/p/56b1e01e6b6a).**
+
+
+
+
+### 6. Common problem
+
+#### 6.1 npm install cannot succeed
+If you encounter this situation, you can use the domestic Taobao npm mirror:
+
+```
+npm install -g cnpm --registry=https://registry.npm.taobao.org
+```
+
+Then, replace the npm install command by executing the following command
+
+```
+cnpm install
+```
+
+Note that when the project is started and packaged, you can still use the npm run build and npm run serve commands
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/build-docker.md b/versioned_docs/version-1.4.0/development/build-docker.md
new file mode 100644
index 00000000000..d4e4215355e
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/build-docker.md
@@ -0,0 +1,136 @@
+---
+title: How to Build Docker Image
+sidebar_position: 4.0
+---
+
+## Linkis Image Components
+
+Starting from version 1.3.0, Linkis introduces some Docker images, and the Dockerfile files for all the images are in the `linkis-dist/docker` directory.
+
+Images currently included as below:
+
+### linkis-base
+
+ - __Dockerfile__:
+ - File: linkis.Dockerfile
+ - Arguments, which can be overridden with the `--build-arg` option of the `docker build` command:
+ * JDK_VERSION: JDK version, default is 1.8.0-openjdk
+ * JDK_BUILD_REVISION: JDK release version, default is 1.8.0.332.b09-1.el7_9
+ - __Description__: Linkis service Base image for Linkis service, mainly used for pre-installation of external libraries, initialization of system environment and directory. This image does not need to be updated frequently, and can be used to speed up the creation of Linkis images by using docker's image caching mechanism.
+
+### linkis
+ - __Dockerfile__:
+ - File Name: linkis.Dockerfile
+ - Arguments:
+ * LINKIS_VERSION: Linkis Version, default is 0.0.0
+ * LINKIS_SYSTEM_USER: System user, default is hadoop
+ * LINKIS_SYSTEM_UID: System user UID, default is 9001
+ * LINKIS_HOME: Linkis home directory, default is /opt/linkis , the binary packages and various scripts will be deployed here
+ * LINKIS_CONF_DIR: Linkis configuration directory, default is /etc/linkis-conf
+ * LINKIS_LOG_DIR: Linkis log directory, default is /var/logs/linkis
+ - __Description__: Linkis service image, it contains binary packages of all components of Apache Linkis and various scripts.
+
+### linkis-web
+ - __Dockerfile__:
+ - File Name: linkis.Dockerfile
+ - Arguments:
+ * LINKIS_VERSION: Linkis Version, default is 0.0.0
+ * LINKIS_HOME: Linkis home directory, default is /opt/linkis ; the web-related packages will be placed under ${LINKIS_HOME}-web
+ - __Description__: Linkis Web Console image, it contains binary packages and various scripts for the Apache Linkis web console, which uses nginx as the web server.
+
+### linkis-ldh
+ - __Dockerfile__:
+ - File Name: ldh.Dockerfile
+ - Arguments:
+ * JDK_VERSION: JDK version, default is 1.8.0-openjdk
+ * JDK_BUILD_REVISION: JDK release version, default is 1.8.0.332.b09-1.el7_9
+ * LINKIS_VERSION: Linkis Version, default is 0.0.0
+ * MYSQL_JDBC_VERSION: MySQL JDBC version, default is 8.0.28
+ * HADOOP_VERSION: Apache Hadoop version, default is 2.7.2
+ * HIVE_VERSION: Apache Hive version, default is 2.3.3
+ * SPARK_VERSION: Apache Spark version, default is 2.4.3
+ * SPARK_HADOOP_VERSION: Hadoop version suffix of the pre-built Apache Spark distribution package, default is 2.7. This value cannot be set arbitrarily and must be consistent with the official Apache Spark release, otherwise the relevant components cannot be downloaded automatically.
+ * FLINK_VERSION: Apache Flink version, default is 1.12.2
+ * ZOOKEEPER_VERSION: Apache Zookeeper version, default is 3.5.9
+ - __Description__: LDH is a test-oriented image. The LDH image provides a complete, pseudo-distributed Apache Hadoop runtime environment, including HDFS, YARN, HIVE, Spark, Flink and Zookeeper, so you can easily pull up a full-featured Hadoop environment in the development environment to test Linkis functionality. The ENTRYPOINT of the LDH image is `linkis-dist/docker/scripts/entry-point-ldh.sh`; some initialization operations, such as formatting HDFS, are done in this script.
+
+### Integrate with MySQL JDBC driver
+
+Due to MySQL licensing restrictions, the official Linkis image does not integrate the MySQL JDBC driver, so users need to put the MySQL JDBC driver into the container themselves before using Linkis. To simplify this process, we provide a Dockerfile (an example build command follows the argument list below):
+
+- File Name: linkis-with-mysql-jdbc.Dockerfile
+- Arguments:
+ * LINKIS_IMAGE: Linkis image name with tag, based on which to create a custom image containing the MySQL JDBC driver, default is `linkis:dev`
+ * LINKIS_HOME: Linkis home directory, default is /opt/linkis
+ * MYSQL_JDBC_VERSION: MySQL JDBC version, default is 8.0.28
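+
+A hedged example of building a custom image from an existing `linkis:dev` image with this Dockerfile (the build context directory below is an assumption; adjust it to your checkout):
+
+``` shell
+$> docker build \
+     --build-arg LINKIS_IMAGE=linkis:dev \
+     --build-arg MYSQL_JDBC_VERSION=8.0.28 \
+     -t linkis-with-mysql-jdbc:dev \
+     -f linkis-dist/docker/linkis-with-mysql-jdbc.Dockerfile \
+     linkis-dist/docker
+```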
+
+## Build Linkis Images
+
+> Because some Bash scripts are used in the image creation process, Linkis image packaging is currently only supported on Linux/MacOS.
+
+### Building images with Maven
+
+Linkis images can be created using Maven commands.
+
+1. Build image `linkis`
+
+ ``` shell
+ # Building a Linkis image without MySQL JDBC
+ $> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true
+ # Building a Linkis image that contains the MySQL JDBC driver
+ $> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.with.jdbc=true
+ ```
+ Note:
+ * The `linkis-base` image will be built on the first build of the `linkis` image, and will not be rebuilt if the Dockerfile has not been modified;
+ * Due to the syntax of the Maven POM file, `linkis.build.with.jdbc` is a pseudo-boolean argument; in fact `-Dlinkis.build.with.jdbc=false` has the same effect as `-Dlinkis.build.with.jdbc=true`. If you want to express `-Dlinkis.build.with.jdbc=false`, simply remove this argument. The other arguments behave similarly.
+
+2. Build image `linkis-web`
+
+ ``` shell
+ $> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.web=true
+ ```
+
+3. Build image `linkis-ldh`
+
+ ``` shell
+ $> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.ldh=true
+ ```
+
+ Note:
+ * During the creation of this image, the pre-built binary distribution of each Hadoop component is downloaded from the official [Apache Archives](https://archive.apache.org/dist/) site. However, in network environments where access to the Apache site is slow, this step can take a very long time. If you have a faster mirror site, you can manually download the corresponding packages from it and move them to the directory `${HOME}/.linkis-build-cache` to work around this problem.
+
+All of the above arguments can be combined, so if you want to build all the images at once, you can use the following command:
+
+``` shell
+$> ./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.web=true -Dlinkis.build.ldh=true
+```
+
+### Building images with `docker build` command
+
+It is more convenient to build an image with Maven, but the build introduces a lot of repetitive compilation, which makes the whole process rather long. If you only adjust the internal structure of the image, such as the directory layout or initialization commands, you can use the `docker build` command to quickly rebuild the image for testing after having built it once with the Maven command.
+
+An example of building a `linkis-ldh` image using the `docker build` command is as follows:
+
+``` shell
+$> docker build -t linkis-ldh:dev --target linkis-ldh -f linkis-dist/docker/ldh.Dockerfile linkis-dist/target
+
+[+] Building 0.2s (19/19) FINISHED
+ => [internal] load build definition from ldh.Dockerfile 0.0s
+ => => transferring dockerfile: 41B 0.0s
+ => [internal] load .dockerignore 0.0s
+ => => transferring context: 2B 0.0s
+ => [internal] load metadata for docker.io/library/centos:7 0.0s
+ => [ 1/14] FROM docker.io/library/centos:7 0.0s
+ => [internal] load build context 0.0s
+ => => transferring context: 1.93kB 0.0s
+ => CACHED [ 2/14] RUN useradd -r -s ... 0.0s
+ => CACHED [ 3/14] RUN yum install -y ... 0.0s
+ ...
+ => CACHED [14/14] RUN chmod +x /usr/bin/start-all.sh 0.0s
+ => exporting to image 0.0s
+ => => exporting layers 0.0s
+ => => writing image sha256:aa3bde0a31bf704413fb75673fc2894b03a0840473d8fe15e2d7f7dd22f1f854 0.0s
+ => => naming to docker.io/library/linkis-ldh:dev
+```
+
+For other images, please refer to the relevant profile in `linkis-dist/pom.xml`.
diff --git a/versioned_docs/version-1.4.0/development/build.md b/versioned_docs/version-1.4.0/development/build.md
new file mode 100644
index 00000000000..ff9b0737599
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/build.md
@@ -0,0 +1,190 @@
+---
+title: How to Build
+sidebar_position: 2.0
+---
+
+## 1. Preparation
+**Environment requirements:** The JDK version must be **JDK8** or higher; both **Oracle/Sun** and **OpenJDK** are supported.
+
+After obtaining the project code from [github repository](https://github.com/apache/linkis) https://github.com/apache/linkis, use maven to compile the project installation package.
+
+### 1.1 Source code acquisition
+
+- Method 1: Obtain the source code of the project from [github repository](https://github.com/apache/linkis) https://github.com/apache/linkis.
+- Method 2: Download the source code package of the required version from the [linkis official download page](https://linkis.apache.org/download/main) https://linkis.apache.org/download/main.
+
+**Notice** : The official recommended versions for compiling Linkis are hadoop-2.7.2, hive-1.2.1, spark-2.4.3, and Scala-2.11.12.
+
+If you want to compile Linkis with another version of Hadoop, Hive, Spark, please refer to: [How to Modify Linkis dependency of Hadoop, Hive, Spark](#5-how-to-modify-the-hadoop-hive-and-spark-versions-that-linkis-depends-on)
+
+### 1.2 Modify dependency configuration
+:::caution Note
+Because the mysql-connector-java driver is licensed under GPL 2.0 and does not meet the license policy of the Apache open source projects, starting from version 1.0.3 the scope of the mysql-connector-java dependency is `test` by default. If you compile the project yourself, you can modify the scope of the mysql-connector-java dependency in the top-level pom.xml (just comment the scope out).
+:::
+```xml
+<dependency>
+    <groupId>mysql</groupId>
+    <artifactId>mysql-connector-java</artifactId>
+    <version>${mysql.connector.version}</version>
+    <!--<scope>test</scope>-->
+</dependency>
+```
+
+## 2. Fully compile Linkis
+
+
+
+### step1 Compile for the first time (not the first time you can skip this step)
+
+**If you are compiling and using it locally for the first time, you must first execute the following command in the root directory of the Linkis source code package**:
+```bash
+ cd linkis-x.x.x
+ mvn -N install
+```
+
+### step2 Execute compilation
+Execute the following commands in the root directory of the Linkis source code package:
+
+```bash
+ cd linkis-x.x.x
+ mvn clean install
+
+```
+
+### step3 Obtain the installation package
+The complete compiled installation package is in the `linkis-dist/target` directory of the project:
+
+```bash
+ #Detailed path is as follows
+ linkis-x.x.x/linkis-dist/target/apache-linkis-x.x.x-incubating-bin.tar.gz
+```
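+
+If you want to inspect or deploy the package manually, extract it as usual (a simple sketch; the exact file name depends on the version you built):
+
+```bash
+ mkdir -p /tmp/linkis-bin
+ tar -zxvf linkis-dist/target/apache-linkis-x.x.x-incubating-bin.tar.gz -C /tmp/linkis-bin
+```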
+
+## 3. Compile a single module
+
+### step1 Compile for the first time (skip this step for non-first time)
+**If you are compiling and using it locally for the first time, you must first execute the following command in the root directory of the Linkis source code package**:
+
+```bash
+ cd linkis-x.x.x
+ mvn -N install
+```
+### step2 Enter the corresponding module to compile
+Enter the corresponding module to compile, for example, if you want to recompile Entrance, the command is as follows:
+
+```bash
+ cd linkis-x.x.x/linkis-computation-governance/linkis-entrance
+ mvn clean install
+```
+
+### step3 Obtain the installation package
+Get the installation package; the compiled package will be in the `target` directory of the corresponding module:
+
+```
+ linkis-x.x.x/linkis-computation-governance/linkis-entrance/target/linkis-entrance.x.x.x.jar
+```
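+
+A common use of a single-module build is to replace the corresponding jar in an already deployed Linkis instance and restart that service. A hedged sketch (the target path under `LINKIS_HOME` is an assumption; check the actual layout of your deployment first):
+
+```bash
+ # LINKIS_HOME points at an existing Linkis deployment
+ cp linkis-computation-governance/linkis-entrance/target/linkis-entrance-*.jar \
+    $LINKIS_HOME/lib/linkis-computation-governance/linkis-cg-entrance/
+ sh $LINKIS_HOME/sbin/linkis-daemon.sh restart cg-entrance
+```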
+
+## 4. Compile an engine
+
+Here's an example of the Spark engine that builds Linkis:
+
+### step1 Compile for the first time (skip this step for non-first time)
+**If you are using it locally for the first time, you must first execute the following command in the root directory of the Linkis source code package**:
+
+```bash
+ cd linkis-x.x.x
+ mvn -N install
+```
+### step2 Enter the corresponding module to compile
+Enter the directory where the Spark engine is located to compile and package, the command is as follows:
+
+```bash
+ cd linkis-x.x.x/linkis-engineconn-plugins/spark
+ mvn clean install
+```
+### step3 Obtain the installation package
+Get the installation package; the compiled package will be in the `target` directory of the corresponding module:
+
+```
+ linkis-x.x.x/linkis-engineconn-plugins/spark/target/linkis-engineplugin-spark-x.x.x.jar
+```
+
+How to install Spark engine separately? Please refer to [Linkis Engine Plugin Installation Document](../deployment/install-engineconn)
+
+## 5. How to modify the Hadoop, Hive, and Spark versions that Linkis depends on
+
+Please note: Hadoop is a basic big data service and Linkis must depend on Hadoop to compile;
+if you don't want to use a particular engine, you don't need to set that engine's version or compile its engine plug-in.
+
+Specifically, the version of Hadoop can be modified in a different way than Spark, Hive, and other computing engines, as described below:
+
+### 5.1 How to modify the Hadoop version that Linkis depends on
+
+Enter the source package root directory of Linkis, and manually modify the Hadoop version information of the pom.xml file, as follows:
+
+```bash
+ cd linkis-x.x.x
+ vim pom.xml
+```
+
+```xml
+<properties>
+    <hadoop.version>2.7.2</hadoop.version> <!-- Modify the Hadoop version number here -->
+    <scala.version>2.11.12</scala.version>
+    <jdk.compile.version>1.8</jdk.compile.version>
+</properties>
+```
+
+**Please note: If your hadoop version is hadoop3, you need to modify the pom file of linkis-hadoop-common**
+Because under Hadoop 2.8 the HDFS-related classes are in the hadoop-hdfs module, while in Hadoop 3.x the corresponding classes have been moved to the hadoop-hdfs-client module, you need to modify this file:
+pom:Linkis/linkis-commons/linkis-hadoop-common/pom.xml
+Modify the dependency hadoop-hdfs to hadoop-hdfs-client:
+```xml
+<dependency>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-hdfs</artifactId>
+    <version>${hadoop.version}</version>
+</dependency>
+
+Modify hadoop-hdfs to:
+
+<dependency>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-hdfs-client</artifactId>
+    <version>${hadoop.version}</version>
+</dependency>
+```
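+
+Since `hadoop.version` is an ordinary Maven property, it can usually also be overridden on the command line instead of editing the pom (a hedged sketch; the hadoop-hdfs/hadoop-hdfs-client change above still has to be made in the pom for Hadoop 3.x):
+
+```bash
+ mvn clean install -Dmaven.test.skip=true -Dhadoop.version=3.1.1
+```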
+
+### 5.2 How to modify the Spark and Hive versions that Linkis depends on
+
+Here's an example of changing the version of Spark. Go to the directory where the Spark engine is located and manually modify the Spark version information of the pom.xml file as follows:
+
+```bash
+ cd linkis-x.x.x/linkis-engineconn-plugins/spark
+ vim pom.xml
+```
+
+```xml
+<properties>
+    <spark.version>2.4.3</spark.version> <!-- Modify the Spark version number here -->
+</properties>
+```
+
+Modifying the version of other engines is similar to modifying the Spark version. First, enter the directory where the relevant engine is located, and manually modify the engine version information in the pom.xml file.
+
+Then please refer to [4. Compile an engine](#4-compile-an-engine)
+
+## 6. How to exclude specified engines during a full compilation
+You can use the `-pl` option of the `mvn` command; see below for details:
+```
+-pl,--projects Comma-delimited list of specified
+ reactor projects to build instead
+ of all projects. A project can be
+ specified by [groupId]:artifactId
+ or by its relative path.
+```
+Implement reverse selection by using `!` to exclude the given engines, which shortens the time needed for a full compilation.
+Here we take flink, sqoop and hive as an example and exclude them during the full compilation:
+```
+mvn clean install -Dmaven.test.skip=true \
+-pl '!linkis-engineconn-plugins/flink,!linkis-engineconn-plugins/sqoop,!linkis-engineconn-plugins/hive'
+```
diff --git a/versioned_docs/version-1.4.0/development/config.md b/versioned_docs/version-1.4.0/development/config.md
new file mode 100644
index 00000000000..2e025aa039d
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/config.md
@@ -0,0 +1,131 @@
+---
+title: Configuration Parameters
+sidebar_position: 10.0
+---
+
+## 1. Parameter classification
+
+ Linkis parameters are mainly divided into the following three parts:
+1. Linkis server parameters, mainly including the parameters of Linkis itself and the parameters of Spring
+2. Parameters submitted by client calls such as Linkis SDK and Restful
+3. Linkis console parameters
+
+
+## 2. Linkis server parameters
+
+1. Parameters of Linkis itself
+ The parameters of Linkis itself can be set in the configuration files, or through environment variables and system properties; it is recommended to set them in the configuration files.
+ The Linkis configuration file format is as follows:
+```shell
+├── conf configuration directory
+│ ├── application-eureka.yml
+│ ├── application-linkis.yml
+│ ├── linkis-cg-engineconnmanager-io.properties
+│ ├── linkis-cg-engineconnmanager.properties
+│ ├── linkis-cg-engineplugin.properties
+│ ├── linkis-cg-entrance.properties
+│ ├── linkis-cg-linkismanager.properties
+│ ├── linkis.properties ── linkis global properties
+│ ├── linkis-ps-bml.properties
+│ ├── linkis-ps-cs.properties
+│ ├── linkis-ps-datasource.properties
+│ ├── linkis-ps-publicservice.properties
+│ ├── log4j2.xml
+````
+Each service loads two property files: a common main configuration file, linkis.properties, and a service-specific configuration file, linkis-serviceName.properties. Settings in the service configuration file take priority over those in the main configuration file.
+It is recommended to put general parameters in the main configuration file and service-specific parameters in the service configuration file.
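+
+For example, if the same key appears in both files, the value in the service configuration file is the one that takes effect (the key below is purely illustrative):
+
+```shell
+# linkis.properties (main configuration file)
+wds.linkis.example.key=common-value
+
+# linkis-cg-entrance.properties (service configuration file, higher priority)
+wds.linkis.example.key=entrance-specific-value
+```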
+
+2. Spring parameters
+ Linkis services are SpringBoot applications; Spring-related parameters can be set in application-linkis.yml or in the Linkis configuration file. Parameters set in the Linkis configuration file need to be prefixed with `spring.`, as follows:
+
+```shell
+# spring port default
+server.port=9102
+# in linkis conf need spring prefix
+spring.server.port=9102
+
+````
+
+## 3. Linkis client parameters
+ Linkis client parameters mainly refer to the parameters used when a task is submitted, i.e. the parameters specified in the submission interface.
+1. How to set parameters via the RESTful interface:
+
+```shell
+{
+ "executionContent": {"code": "show tables", "runType": "sql"},
+ "params": { // submit parameters
+ "variable":{ //Custom variables needed in the code
+ "k1":"v1"
+ },
+ "configuration":{
+ "special":{ //Special configuration parameters such as log path, result set path, etc.
+ "k2":"v2"
+ },
+ "runtime":{ //Runtime parameters, execution configuration parameters, such as database connection parameters of JDBC engine, data source parameters of presto engine
+ "k3":"v3"
+ },
+ "startup":{ //Startup parameters, such as memory parameters for starting EC, spark engine parameters, hive engine parameters, etc.
+ "k4":"v4" For example: spark.executor.memory:5G Set the Spark executor memory, the underlying Spark, hive and other engine parameters keyName are consistent with the native parameters
+ }
+ }
+ },
+ "labels": { //Label parameters, support setting engine version, user and application
+ "engineType": "spark-2.4.3",
+ "userCreator": "hadoop-IDE"
+ }
+}
+````
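+
+For reference, a hedged example of posting such a request body to the entrance submission interface with curl (the gateway address/port and the cookie handling are assumptions; in practice you need to log in first and carry the resulting session cookie):
+
+```shell
+curl -X POST "http://${GATEWAY_HOST}:9001/api/rest_j/v1/entrance/submit" \
+  -H "Content-Type: application/json" \
+  -H "Cookie: ${LINKIS_SESSION_COOKIE}" \
+  -d '{"executionContent":{"code":"show tables","runType":"sql"},"labels":{"engineType":"spark-2.4.3","userCreator":"hadoop-IDE"}}'
+```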
+2. How to set parameters in SDK:
+
+````java
+JobSubmitAction jobSubmitAction = JobSubmitAction.builder()
+ .addExecuteCode(code)
+ .setStartupParams(startupMap) //Startup parameters, such as memory parameters for starting EC, spark engine parameters, hive engine parameters, etc., such as: spark.executor.memory:5G Set the Spark executor memory, the underlying Spark, hive and other engine parameters keyName is the same as the original parameter
+ .setRuntimeParams(runTimeMap) //Engine, execute configuration parameters, such as database connection parameters of JDBC engine, data source parameters of presto engine
+ .setVariableMap(varMap) //Custom variables needed in the code
+ .setLabels(labels) //Label parameters, support setting engine version, user and application, etc.
+ .setUser(user) //submit user
+ .addExecuteUser(user) // execute user
+ .build();
+````
+3. How linkis-cli sets parameters
+
+```shell
+linkis-cli -runtieMap key1=value -runtieMap key2=value
+ -labelMap key1=value
+ -varMap key1=value
+ -startUpMap key1=value
+
+````
+Note: When submitting client parameters, only engine-related parameters, label parameters, and Yarn queue settings take effect. Other Linkis server-side parameters and resource-limit parameters, such as the task and engine concurrency parameter wds.linkis.rm.instances, cannot be set at the task level.
+
+4. Common label parameters:
+
+```shell
+ Map<String, Object> labels = new HashMap<String, Object>();
+ labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // Specify engine type and version
+ labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-IDE");// Specify the running user and your APPName
+ labels.put(LabelKeyConstant.CODE_TYPE_KEY, "sql"); // Specify the type of script to run: spark supports: sql, scala, py; Hive: hql; shell: sh; python: python; presto: psql
+ labels.put(LabelKeyConstant.JOB_RUNNING_TIMEOUT_KEY, "10000");//The job runs for 10s and automatically initiates Kill, the unit is s
+ labels.put(LabelKeyConstant.JOB_QUEUING_TIMEOUT_KEY, "10000");//The job is queued for more than 10s and automatically initiates Kill, the unit is s
+ labels.put(LabelKeyConstant.RETRY_TIMEOUT_KEY, "10000");//The waiting time for the job to retry due to resources and other reasons, the unit is ms. If it fails due to insufficient queue resources, it will initiate 10 retries at intervals by default
+ labels.put(LabelKeyConstant.TENANT_KEY,"hduser02");//Tenant label, if the tenant parameter is specified for the task, the task will be routed to a separate ECM machine
+ labels.put(LabelKeyConstant.EXECUTE_ONCE_KEY,"");//Execute-once label. Setting this parameter is generally not recommended; when set, the engine will not be reused and will exit after the task finishes. It is only useful when a specific task needs specialized parameters.
+````
+
+## 4. Linkis console parameters
+ Linkis management console parameters are convenient for users to specify resource limit parameters and default task parameters. The web interface provided is as follows:
+Global configuration parameters:
+![](/Images/development/linkis_global_conf.png)
+It mainly includes the global queue parameter [wds.linkis.rm.yarnqueue], the Yarn queue used by tasks by default, which can also be specified in the client StartUpMap.
+Resource limit parameters: these parameters cannot be set per task, but can be adjusted in the management console.
+```shell
+Queue CPU usage upper limit [wds.linkis.rm.yarnqueue.cores.max]: currently only supports limiting the total queue resource usage of Spark-type tasks
+Queue memory usage upper limit [wds.linkis.rm.yarnqueue.memory.max]
+Upper limit of global memory usage of each engine [wds.linkis.rm.client.memory.max]: this parameter does not limit the total memory that can be used overall, but the total memory usage of a specific engine for a specific Creator, e.g. limiting IDE-SPARK tasks to 10G of memory in total
+Maximum number of global engine cores [wds.linkis.rm.client.core.max]: this parameter does not limit the total number of CPU cores that can be used overall, but the total core usage of a specific engine for a specific Creator, e.g. limiting IDE-SPARK tasks to 10 cores in total
+Maximum global concurrency of each engine [wds.linkis.rm.instance]: this parameter has two meanings: it limits how many instances of a specific engine a Creator can start in total, and it limits the number of tasks that a specific engine of a Creator can run at the same time
+```
+Engine configuration parameters:
+![](/Images/development/linkis_creator_ec_conf.png)
+It mainly specifies the startup and runtime parameters of the engines. These parameters can also be set on the client side; it is recommended to make personalized settings on the client when submitting, and only keep default values on this page.
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/debug-with-helm-charts.md b/versioned_docs/version-1.4.0/development/debug-with-helm-charts.md
new file mode 100644
index 00000000000..70e4c9187d3
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/debug-with-helm-charts.md
@@ -0,0 +1,507 @@
+---
+title: Development & Debugging with Kubernetes
+sidebar_position: 6.0
+---
+
+## Preface
+
+This document describes how to use Kubernetes technology to simplify the development and debugging of Linkis. Before Kubernetes tooling was introduced, debugging Linkis was a very tedious and complex task, and sometimes it was necessary to set up a Hadoop cluster just for testing. To improve this, we introduce an alternative approach: using Kubernetes, we create a Hadoop cluster and pull up all Linkis services on the development machine as a distributed environment that can be created and destroyed at any time, and the developer connects to these services and performs step-by-step debugging through the JVM remote debugger. Here we use the following technologies:
+
+* Docker: A containerization technology to support the creation and use of Linux containers;
+* Kubernetes: An open source platform that automates the deployment and management of Linux containers, Kubernetes also integrates networking, storage, security, telemetry and other services to provide a comprehensive container-based infrastructure;
+* KinD: A tool that uses Docker containers as "Kubernetes nodes" to run local Kubernetes clusters;
+* Helm: An open source package management tool on Kubernetes that manages user resources on Kubernetes via the Helm command line tool and installation package (Chart);
+
+## Introduction to Dependency Tools
+
+### Version Requirements
+
+* [Docker](https://docs.docker.com/get-docker/), minimum version v20.10.8+
+* [Kubernetes](https://kubernetes.io/docs/setup/), minimum version v1.21.0+
+* [Helm](https://helm.sh/docs/intro/install/), minimum version v3.0.0+.
+* [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/), minimum version v0.11.0+.
+
+### Introduction to Helm Charts
+
+Helm is an open source package management tool on Kubernetes. Helm's original goal was to provide users with a better way to manage all the Kubernetes YAML files created on Kubernetes. When using a Chart, the user provides a variable file; Helm uses the variables defined in this file to render the Chart, produces the Kubernetes YAML files, and then invokes the Kubernetes API to create the resources. Each Chart released to Kubernetes is called a Release, and a Chart can typically be installed multiple times into the same cluster, with a new Release created each time it is installed.
+
+Helm is relatively simple to install, please refer to the official documentation for installation: [Installing Helm](https://helm.sh/docs/intro/install/)
+
+### Introduction to KinD
+
+Creating a Kubernetes test environment locally is a very common requirement, and the Kubernetes community offers a variety of solutions, such as MiniKube or MicroK8s. KinD, as its name suggests (Kubernetes in Docker), uses Docker containers as cluster nodes to create a test-oriented Kubernetes cluster.
+
+KinD Architecture
+
+![](/Images/development/kind-arc.png)
+
+Deploying KinD is also very easy, please refer to the official deployment documentation: [KinD Installation](https://kind.sigs.k8s.io/docs/user/quick-start/#installation), please install Docker before install KinD.
+
+> ⚠️ Note:
+> KinD is a tool for testing purposes and cannot be used for production deployments. For example, KinD clusters cannot be used after the development machine is rebooted and need to be recreated (because KinD performs a series of initialization tasks after the Node container is created, which cannot be automatically reverted after the machine is rebooted).
+
+## Linkis Containerized Components
+
+### Linkis Images
+
+Linkis provides several images, all of which have their Dockerfile and related scripts in the `linkis-dist/docker` directory. Linkis images include the following.
+
+* `linkis`: The Linkis service image, which contains binary packages of all components of Apache Linkis and various scripts.
+* `linkis-web`: Linkis Web console image, which contains the binary packages and various scripts of the Apache Linkis Web console, using nginx as the web server.
+* `linkis-ldh`: LDH is a test-oriented image. The LDH image provides a complete, pseudo-distributed Apache Hadoop runtime environment, including HDFS, YARN, HIVE, Spark, Flink and Zookeeper, so that a fully functional Hadoop environment can easily be pulled up in the development environment to test the functionality of Linkis.
+
+For details, please refer to: [Linkis Docker Image Package](https://linkis.apache.org/zh-CN/docs/latest/development/linkis_docker_build_instrument).
+
+### Linkis Helm Chart
+
+Linkis Helm Chart is a Helm installation package developed according to the Helm Chart specification and is located in the `linkis-dist/helm` directory. The module directory structure is as follows:
+
+``` shell
+linkis-dist/helm
+├── charts # Charts directory, currently only contains Linkis Helm Chart
+│ └── linkis # Linkis Helm Chart directory
+│ ├── Chart.yaml # - Chart metadata
+│ ├── templates # - Chart template file containing Kubernetes YAML templates for all linkis components
+│ │ ├── NOTES.txt # - Chart notes
+│ │ ├── _helpers.tpl # - Chart variable helper templates
+│ │ ├── configmap-init-sql.yaml # - Database initialization SQL script template
+│ │ ├── configmap-linkis-config.yaml # - Linkis service configuration file template
+│ │ ├── configmap-linkis-web-config.yaml # - Linkis Web Console configuration file template
+│ │ ├── jobs.yaml # - Kubernetes Job template, currently only includes a database initialization job, the database
+| | | # initialization SQL script will be executed by the job
+│ │ ├── linkis-cg-engineconnmanager.yaml # - Linkis EngineConnManager deployment template, which is a Kubernetes Deployment type workload
+│ │ ├── linkis-cg-engineplugin.yaml # - Linkis EngineConn deployment template, a Kubernetes Deployment type workload
+│ │ ├── linkis-cg-entrance.yaml # - Linkis Entrance deployment template, a Kubernetes Deployment type workload
+│ │ ├── linkis-cg-linkismanager.yaml # - Linkis Manager deployment template, a Kubernetes Deployment type workload
+│ │ ├── linkis-mg-eureka.yaml # - Linkis Eureka deployment template, a Kubernetes Statefulset type workload
+│ │ ├── linkis-mg-gateway.yaml # - Linkis Gateway deployment template, a Kubernetes Deployment type workload
+│ │ ├── linkis-ps-publicservice.yaml # - Linkis PublicService deployment template, a Kubernetes Deployment type workload
+│ │ ├── linkis-web.yaml # - Linkis Web Console deployment template, a Kubernetes Deployment type workload
+│ │ └── serviceaccount.yaml # - Linkis related Kubernetes Service Account template
+│ └── values.yaml # - Linkis Helm Chart variable file, which provides Linkis Local schema related variables by default
+├── scripts # Some tool scripts to simplify development and debugging
+│ ├── common.sh # - public scripts, defining some public methods and variables
+│ ├── create-kind-cluster.sh # - Creates KinD clusters
+│ ├── install-charts-with-ldh.sh # - Deploy Linkis service on KinD cluster, using On-LDH deployment method, calling install-linkis.sh
+│ ├── install-charts.sh # - Deploy Linkis service on KinD cluster, use Local deployment method, call install-linkis.sh
+│ ├── install-ldh.sh # - Deploy LDH deployments on KinD clusters
+│ ├── install-linkis.sh # - Deploy the Linkis service on the KinD cluster, either in Local or On-LDH mode
+│ ├── install-mysql.sh # - Deploy a MySQL instance on the KinD cluster
+│ ├── login-pod.sh # - Login to a Pod and open Bash for interaction
+│ ├── remote-debug-proxy.sh # - Turn on the JVM remote debug proxy
+│ └── resources # - some resource files
+│ ├── kind-cluster.yaml # - KinD cluster configuration, default is single Node
+│ ├── ldh # - LDH related resource files
+│ │ │ ├── configmaps # - LDH configuration files for each component
+│ │ │ │ ├── configmap-flink.yaml # - Flink configuration file
+│ │ │ │ ├── configmap-hadoop.yaml # - Hdfs & Yarn configuration file
+│ │ │ ├── configmap-hive.yaml # - Hive configuration file
+│ │ │ ├── configmap-spark.yaml # - Spark configuration file
+│ │ │ └── configmap-zookeeper.yaml # - Zookeeper configuration file
+│ │ └── ldh.yaml # - LDH Kubernetes YAML, used to deploy LDH instances on KinD
+│ └── mysql.yaml # - MySQL Kubernetes YAML, for deploying MySQL instances on KinD
+
+```
+
+This project provides a set of tool scripts for quickly creating a Linkis environment for development and testing. For production deployment, you need to modify the `values.yaml` file according to the actual cluster and then deploy it using the Helm CLI. There are two common approaches to deploying with the Helm CLI, both sketched below:
+
+1. Deploy directly using the `helm install` command. This is suitable for non-customized deployments.
+2. Use the `helm template` command to generate Kubernetes YAML files, manually modify these files to add custom configuration, and then deploy them with the `kubectl apply` command. This is suitable for advanced users who need to customize Kubernetes features that Linkis Helm Charts do not support, such as using a specific StorageClass or PVs.
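+
+A hedged sketch of both approaches, run from the `linkis-dist/helm` directory (release name, namespace and values file below are illustrative):
+
+``` shell
+# Approach 1: install the chart directly
+$> helm install linkis-demo ./charts/linkis -n linkis --create-namespace -f my-values.yaml
+
+# Approach 2: render the YAML first, customize it, then apply it
+$> helm template linkis-demo ./charts/linkis -n linkis -f my-values.yaml > linkis-rendered.yaml
+$> # ... edit linkis-rendered.yaml as needed ...
+$> kubectl apply -n linkis -f linkis-rendered.yaml
+```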
+
+### LDH
+
+LDH is a Hadoop cluster image for testing purposes, which provides a pseudo-distributed Hadoop cluster for quickly testing the On-Hadoop deployment mode.
+This image contains the following Hadoop components; the default mode of the engines in LDH is on-yarn.
+
+* Hadoop 2.7.2 , included HDFS and YARN
+* Hive 2.3.3
+* Spark 2.4.3
+* Flink 1.12.2
+* ZooKeeper 3.5.9
+
+LDH performs some initialization operations on startup, such as formatting HDFS and creating the initialization directories on HDFS. These operations are defined in the `linkis-dist/docker/scripts/entry-point-ldh.sh` file; adding, modifying or deleting initialization operations requires recreating the LDH image to take effect.
+
+In addition, the Hive component in LDH depends on an external MySQL instance; you need to deploy a MySQL instance first before you can use the Hive component in LDH.
+
+```shell
+# Create a KinD cluster and deploy Linkis and LDH instances
+$> sh ./scripts/create-kind-cluster.sh \
+ && sh ./scripts/install-mysql.sh \
+ && sh ./scripts/install-ldh.sh
+
+# Quick Experience on LDH
+$> kubectl exec -it -n ldh $(kubectl get pod -n ldh -o jsonpath='{.items[0].metadata.name}') -- bash
+
+[root@ldh-96bdc757c-dnkbs /]# hdfs dfs -ls /
+Found 4 items
+drwxrwxrwx - root supergroup 0 2022-07-31 02:48 /completed-jobs
+drwxrwxrwx - root supergroup 0 2022-07-31 02:48 /spark2-history
+drwxrwxrwx - root supergroup 0 2022-07-31 02:49 /tmp
+drwxrwxrwx - root supergroup 0 2022-07-31 02:48 /user
+
+[root@ldh-96bdc757c-dnkbs /]# beeline -u jdbc:hive2://ldh.ldh.svc.cluster.local:10000/ -n hadoop
+Connecting to jdbc:hive2://ldh.ldh.svc.cluster.local:10000/
+Connected to: Apache Hive (version 2.3.3)
+Driver: Hive JDBC (version 2.3.3)
+Transaction isolation: TRANSACTION_REPEATABLE_READ
+Beeline version 2.3.3 by Apache Hive
+0: jdbc:hive2://ldh.ldh.svc.cluster.local:100> create database demo;
+No rows affected (1.306 seconds)
+0: jdbc:hive2://ldh.ldh.svc.cluster.local:100> use demo;
+No rows affected (0.046 seconds)
+0: jdbc:hive2://ldh.ldh.svc.cluster.local:100> create table t1 (id int, data string);
+No rows affected (0.709 seconds)
+0: jdbc:hive2://ldh.ldh.svc.cluster.local:100> insert into t1 values(1, 'linikis demo');
+WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
+No rows affected (5.491 seconds)
+0: jdbc:hive2://ldh.ldh.svc.cluster.local:100> select * from t1;
++--------+---------------+
+| t1.id | t1.data |
++--------+---------------+
+| 1 | linikis demo |
++--------+---------------+
+1 row selected (0.39 seconds)
+0: jdbc:hive2://ldh.ldh.svc.cluster.local:100> !q
+
+[root@ldh-96bdc757c-dnkbs /]# spark-sql
+22/07/31 02:53:18 INFO hive.metastore: Trying to connect to metastore with URI thrift://ldh.ldh.svc.cluster.local:9083
+22/07/31 02:53:18 INFO hive.metastore: Connected to metastore.
+...
+22/07/31 02:53:19 INFO spark.SparkContext: Running Spark version 2.4.3
+22/07/31 02:53:19 INFO spark.SparkContext: Submitted application: SparkSQL::10.244.0.6
+...
+22/07/31 02:53:27 INFO yarn.Client: Submitting application application_1659235712576_0001 to ResourceManager
+22/07/31 02:53:27 INFO impl.YarnClientImpl: Submitted application application_1659235712576_0001
+22/07/31 02:53:27 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1659235712576_0001 and attemptId None
+22/07/31 02:53:28 INFO yarn.Client: Application report for application_1659235712576_0001 (state: ACCEPTED)
+...
+22/07/31 02:53:36 INFO yarn.Client: Application report for application_1659235712576_0001 (state: RUNNING)
+...
+Spark master: yarn, Application Id: application_1659235712576_0001
+22/07/31 02:53:46 INFO thriftserver.SparkSQLCLIDriver: Spark master: yarn, Application Id: application_1659235712576_0001
+spark-sql> use demo;
+Time taken: 0.074 seconds
+22/07/31 02:58:02 INFO thriftserver.SparkSQLCLIDriver: Time taken: 0.074 seconds
+spark-sql> select * from t1;
+...
+1 linikis demo
+2 linkis demo spark sql
+Time taken: 3.352 seconds, Fetched 2 row(s)
+spark-sql> quit;
+
+[root@ldh-96bdc757c-dnkbs /]# zkCli.sh
+Connecting to localhost:2181
+Welcome to ZooKeeper!
+JLine support is enabled
+WATCHER::
+
+WatchedEvent state:SyncConnected type:None path:null
+
+[zk: localhost:2181(CONNECTED) 0] get -s /zookeeper/quota
+
+cZxid = 0x0
+ctime = Thu Jan 01 00:00:00 UTC 1970
+mZxid = 0x0
+mtime = Thu Jan 01 00:00:00 UTC 1970
+pZxid = 0x0
+cversion = 0
+dataVersion = 0
+aclVersion = 0
+ephemeralOwner = 0x0
+dataLength = 0
+numChildren = 0
+[zk: localhost:2181(CONNECTED) 1] quit
+
+# Start a Flink job in per-job cluster mode
+[root@ldh-96bdc757c-dnkbs /]# HADOOP_CLASSPATH=`hadoop classpath` flink run -t yarn-per-job /opt/ldh/current/flink/examples/streaming/TopSpeedWindowing.jar
+# Start Flink jobs in session mode,
+# Flink session is started when LDH Pod starts.
+[root@ldh-96bdc757c-dnkbs /]# flink run /opt/ldh/current/flink/examples/streaming/TopSpeedWindowing.jar
+Executing TopSpeedWindowing example with default input data set.
+Use --input to specify file input.
+Printing result to stdout. Use --output to specify output path.
+...
+```
+
+### KinD Cluster
+
+The default KinD cluster description file used by the Linkis project is `linkis-dist/helm/scripts/resources/kind-cluster.yaml`, which creates a KinD cluster with one node by default. Multiple nodes can be added by removing the comments.
+
+> ⚠️Note that KinD clusters are for testing purposes only.
+
+``` yaml
+# linkis-dist/helm/scripts/resources/kind-cluster.yaml
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+nodes:
+ - role: control-plane
+ extraMounts:
+ - hostPath: ${KIND_CLUSTER_HOST_PATH} # Points to a directory on the development machine. This directory
+ # is mapped to the `/data` directory in the Kind Node container, which
+ # Linkis Helm Charts uses by default as the data directory to mount into
+ # the Pod of each Linkis component. When Linkis is deployed in Local mode,
+ # all components actually use the same directory on the development machine
+ # as if they were on the same machine, thus emulating the behavior of Local
+ # mode. When deployed in On-Hadoop mode, this directory is not used.
+ containerPath: /data
+# - role: worker # Remove comments to add 2 KinD nodes. Adding KinD nodes increases the time
+ # it takes to load Docker images to the KinD cluster, so it is not turned on
+ # by default.
+# extraMounts:
+# - hostPath: ${KIND_CLUSTER_HOST_PATH}
+# containerPath: /data
+# - role: worker
+# extraMounts:
+# - hostPath: ${KIND_CLUSTER_HOST_PATH}
+# containerPath: /data
+
+```
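+
+The `create-kind-cluster.sh` script renders this template and creates the cluster for you. A hedged sketch of roughly equivalent manual steps (the substitution approach and paths are assumptions):
+
+``` shell
+$> export KIND_CLUSTER_HOST_PATH=/tmp/kind-data && mkdir -p ${KIND_CLUSTER_HOST_PATH}
+$> envsubst < linkis-dist/helm/scripts/resources/kind-cluster.yaml > /tmp/kind-cluster.yaml
+$> kind create cluster --name test-helm --config /tmp/kind-cluster.yaml
+```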
+
+## Developing and Debugging with Linkis Containerized Components
+
+The following steps describe how to develop and debug using the Linkis containerized components (currently only supported on Linux and MacOS). Please confirm the following before proceeding:
+1. whether the Docker engine is already installed on the development machine
+2. whether Helm is installed on the development machine
+3. whether KinD has been installed on the development machine
+4. whether the Linkis image has been created as described in [Linkis Docker image packaging](https://linkis.apache.org/zh-CN/docs/latest/development/linkis_docker_build_instrument)
+
+### Create Debugging Environment
+
+This step will create a KinD cluster and deploy MySQL, Linkis and LDH instances on it.
+
+``` shell
+$> cd linkis-dist/helm
+$> sh ./scripts/create-kind-cluster.sh \
+> && sh ./scripts/install-mysql.sh \
+> && sh ./scripts/install-ldh.sh \
+> && sh ./scripts/install-charts-with-ldh.sh
+
+# Creating KinD cluster ...
+- kind cluster config: /var/folders/9d/bb6ggdm177j25q40yf5d50dm0000gn/T/kind-XXXXX.Fc2dFJbG/kind-cluster.yaml
+...
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+nodes:
+ - role: control-plane
+ extraMounts:
+ - hostPath: /var/folders/9d/bb6ggdm177j25q40yf5d50dm0000gn/T/kind-XXXXX.Fc2dFJbG/data
+ containerPath: /data
+...
+Creating cluster "test-helm" ...
+ ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
+ ✓ Preparing nodes 📦
+ ✓ Writing configuration 📜
+ ✓ Starting control-plane 🕹️
+ ✓ Installing CNI 🔌
+ ✓ Installing StorageClass 💾
+Set kubectl context to "kind-test-helm"
+You can now use your cluster with:
+
+kubectl cluster-info --context kind-test-helm
+
+Have a nice day! 👋
+# Loading MySQL image ...
+Image: "mysql:5.7" with ID "sha256:3147495b3a5ce957dee2319099a8808c1418e0b0a2c82c9b2396c5fb4b688509" not yet present on node "test-helm-control-plane", loading...
+# Deploying MySQL ...
+namespace/mysql created
+service/mysql created
+deployment.apps/mysql created
+# LDH version: dev
+# Loading LDH image ...
+Image: "linkis-ldh:dev" with ID "sha256:aa3bde0a31bf704413fb75673fc2894b03a0840473d8fe15e2d7f7dd22f1f854" not yet present on node "test-helm-control-plane", loading...
+# Deploying LDH ...
+namespace/ldh created
+configmap/flink-conf created
+configmap/hadoop-conf created
+configmap/hive-conf created
+configmap/spark-conf created
+configmap/zookeeper-conf created
+service/ldh created
+deployment.apps/ldh created
+# Loading Linkis image ...
+Image: "linkis:dev" with ID "sha256:0dfa7882c4216305a80cf57efa8cfb483d006bae5ba931288ffb8025e1db4e58" not yet present on node "test-helm-control-plane", loading...
+Image: "linkis-web:dev" with ID "sha256:1dbe0e9319761dbe0e93197665d38077cb2432b8b755dee834928694875c8a22" not yet present on node "test-helm-control-plane", loading...
+# Installing linkis, image tag=dev,local mode=false ...
+NAME: linkis-demo
+NAMESPACE: linkis
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+NOTES:
+...
+
+---
+Welcome to Apache Linkis (v1.3.0)!
+
+.___ .___ .______ .____/\ .___ .________
+| | : __|: \ : / \: __|| ___/
+| | | : || ||. ___/| : ||___ \
+| |/\ | || | || \ | || /
+| / \| ||___| || \| ||__:___/
+|______/|___| |___||___\ /|___| : v1.3.0
+ \/
+
+Linkis builds a layer of computation middleware between upper applications and underlying engines.
+Please visit https://linkis.apache.org/ for details.
+
+Enjoy!
+configmap/flink-conf created
+configmap/hadoop-conf created
+configmap/hive-conf created
+configmap/spark-conf created
+configmap/zookeeper-conf created
+
+$> kubectl get pods -n ldh -o wide
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+ldh-6648554447-ml2bn 1/1 Running 0 6m25s 10.244.0.6 test-helm-control-plane
+
+$> kubectl get pods -n linkis -o wide
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+init-db-bcp85 0/1 Completed 0 4m52s 10.244.0.14 test-helm-control-plane
+linkis-demo-cg-engineconnmanager-659bf85689-ddvhw 1/1 Running 1 4m52s 10.244.0.7 test-helm-control-plane
+linkis-demo-cg-engineplugin-98bd6945-tsgjl 1/1 Running 1 4m52s 10.244.0.10 test-helm-control-plane
+linkis-demo-cg-entrance-858f74c868-xrd82 1/1 Running 0 4m52s 10.244.0.12 test-helm-control-plane
+linkis-demo-cg-linkismanager-6f96f69b8b-ns6st 1/1 Running 0 4m52s 10.244.0.11 test-helm-control-plane
+linkis-demo-mg-eureka-0 1/1 Running 0 4m52s 10.244.0.13 test-helm-control-plane
+linkis-demo-mg-gateway-68ddb8c547-xgvhn 1/1 Running 0 4m52s 10.244.0.15 test-helm-control-plane
+linkis-demo-ps-publicservice-6bbf99fcd7-sc922 1/1 Running 0 4m52s 10.244.0.8 test-helm-control-plane
+linkis-demo-web-554bd7659f-nmdjl 1/1 Running 0 4m52s 10.244.0.9 test-helm-control-plane
+
+```
+
+### Debugging Components
+
+#### Enable Port Forwarding
+
+Each component has a JVM remote debug port of 5005 within the container, and these ports are mapped to different ports on the host as follows.
+* mg-eureka: 5001
+* mg-gateway: 5002
+* ps-publicservice: 5004
+* cg-linkismanager: 5007
+* cg-entrance: 5008
+* cg-engineconnmanager: 5009
+* cg-engineplugin: 5010
+
+In addition, the Web Console is mapped to port 8087 on the host, which can be accessed by typing `http://localhost:8087` in your browser.
+
+``` shell
+$> ./scripts/remote-debug-proxy.sh start
+- starting port-forwad for [web] with mapping [local->8087:8087->pod] ...
+- starting port-forwad for [mg-eureka] with mapping [local->5001:5005->pod] ...
+- starting port-forwad for [mg-gateway] with mapping [local->5002:5005->pod] ...
+- starting port-forwad for [ps-publicservice] with mapping [local->5004:5005->pod] ...
+- starting port-forwad for [cg-linkismanager] with mapping [local->5007:5005->pod] ...
+- starting port-forwad for [cg-entrance] with mapping [local->5008:5005->pod] ...
+- starting port-forwad for [cg-engineconnmanager] with mapping [local->5009:5005->pod] ...
+- starting port-forwad for [cg-engineplugin] with mapping [local->5010:5005->pod] ..
+
+$> ./scripts/remote-debug-proxy.sh list
+user 10972 0.0 0.1 5052548 31244 s001 S 12:57AM 0:00.10 kubectl port-forward -n linkis pod/linkis-demo-cg-engineplugin-98bd6945-tsgjl 5010:5005 --address=0.0.0.0
+user 10970 0.0 0.1 5053060 30988 s001 S 12:57AM 0:00.12 kubectl port-forward -n linkis pod/linkis-demo-cg-engineconnmanager-659bf85689-ddvhw 5009:5005 --address=0.0.0.0
+user 10968 0.0 0.1 5054084 30428 s001 S 12:57AM 0:00.10 kubectl port-forward -n linkis pod/linkis-demo-cg-entrance-858f74c868-xrd82 5008:5005 --address=0.0.0.0
+user 10966 0.0 0.1 5053316 30620 s001 S 12:57AM 0:00.11 kubectl port-forward -n linkis pod/linkis-demo-cg-linkismanager-6f96f69b8b-ns6st 5007:5005 --address=0.0.0.0
+user 10964 0.0 0.1 5064092 31152 s001 S 12:57AM 0:00.10 kubectl port-forward -n linkis pod/linkis-demo-ps-publicservice-6bbf99fcd7-sc922 5004:5005 --address=0.0.0.0
+user 10962 0.0 0.1 5051012 31244 s001 S 12:57AM 0:00.12 kubectl port-forward -n linkis pod/linkis-demo-mg-gateway-68ddb8c547-xgvhn 5002:5005 --address=0.0.0.0
+user 10960 0.0 0.1 5053060 31312 s001 S 12:57AM 0:00.13 kubectl port-forward -n linkis pod/linkis-demo-mg-eureka-0 5001:5005 --address=0.0.0.0
+
+...
+
+# After debugging is complete, you can stop port forwarding with the following command
+$> ./scripts/remote-debug-proxy.sh stop
+- stopping port-forward for [web] with mapping [local->8087:8087->pod] ...
+- stopping port-forward for [mg-eureka] with mapping [local->5001:5005->pod] ...
+- stopping port-forward for [mg-gateway] with mapping [local->5002:5005->pod] ...
+- stopping port-forward for [ps-publicservice] with mapping [local->5004:5005->pod] ...
+- stopping port-forward for [cg-linkismanager] with mapping [local->5007:5005->pod] ...
+- stopping port-forward for [cg-entrance] with mapping [local->5008:5005->pod] ...
+- stopping port-forward for [cg-engineconnmanager] with mapping [local->5009:5005->pod] ...
+- stopping port-forward for [cg-engineplugin] with mapping [local->5010:5005->pod] ...
+```
+
+#### Configure the IDE for Remote Debugging
+
+Configure the IDE as follows to enable remote debugging:
+
+![](/Images/development/kube-jvm-remote-debug.png)
+
+Turn on remote debugger
+![](/Images/development/kube-jvm-remote-debug-start.png)
+
+Set a breakpoint and submit a job for debugging
+
+``` shell
+$> ./scripts/login-pod.sh mg-gateway
+
+- login [mg-gateway]'s bash ...
+bash-4.2$ ./bin/./linkis-cli -engineType shell-1 -codeType shell -code "echo \"hello\" " -submitUser hadoop -proxyUser hadoop
+=====Java Start Command=====
+exec /etc/alternatives/jre/bin/java -server -Xms32m -Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/linkis/logs/linkis-cli -XX:ErrorFile=/opt/linkis/logs/linkis-cli/ps_err_pid%p.log -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80 -XX:+DisableExplicitGC -classpath /opt/linkis/conf/linkis-cli:/opt/linkis/lib/linkis-computation-governance/linkis-client/linkis-cli/*:/opt/linkis/lib/linkis-commons/public-module/*: -Dconf.root=/etc/linkis-conf -Dconf.file=linkis-cli.properties -Dlog.path=/opt/linkis/logs/linkis-cli -Dlog.file=linkis-client..log.20220925171540947077800 org.apache.linkis.cli.application.LinkisClientApplication '-engineType shell-1 -codeType shell -code echo "hello" -submitUser hadoop -proxyUser hadoop'
+...
+```
+![](/Images/development/kube-jvm-remote-debug-breakpoint.png)
+
+
+### Clean Up the Environment
+
+After debugging, you can use the following command to clean up the entire environment:
+
+``` shell
+$> kind delete clusters test-helm
+Deleted clusters: ["test-helm"]
+```
+
+### Other Useful Operations
+
+#### Fetch Logs
+
+``` bash
+$> kubectl logs -n linkis linkis-demo-cg-engineconnmanager-659bf85689-ddvhw -f
+
++ RUN_IN_FOREGROUND=true
++ /opt/linkis/sbin/linkis-daemon.sh start cg-engineconnmanager
+Start to check whether the cg-engineconnmanager is running
+Start server, startup script: /opt/linkis/sbin/ext/linkis-cg-engineconnmanager
+=====Java Start Command=====
+java -DserviceName=linkis-cg-engineconnmanager -Xmx512M -XX:+UseG1GC -Xloggc:/var/logs/linkis/linkis-cg-engineconnmanager-gc.log -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -cp /etc/linkis-conf:/opt/linkis/lib/linkis-commons/public-module/*:/opt/linkis/lib/linkis-computation-governance/linkis-cg-engineconnmanager/* org.apache.linkis.ecm.server.LinkisECMApplication --eureka.instance.prefer-ip-address=true --eureka.instance.instance-id=${spring.cloud.client.ip-address}:${spring.application.name}:${server.port} 2>&1
+OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
+Listening for transport dt_socket at address: 5005
+16:32:41.101 [main] INFO org.apache.linkis.common.conf.BDPConfiguration$ - ******************* Notice: The Linkis configuration file is linkis.properties ! *******************
+16:32:41.130 [main] INFO org.apache.linkis.common.conf.BDPConfiguration$ - *********************** Notice: The Linkis serverConf file is linkis-cg-engineconnmanager.properties ! ******************
+16:32:41.222 [main] INFO org.apache.linkis.LinkisBaseServerApp - Ready to start linkis-cg-engineconnmanager with args: --eureka.instance.prefer-ip-address=true
+--eureka.instance.instance-id=${spring.cloud.client.ip-address}:${spring.application.name}:${server.port}
+...
+```
+
+#### Entry into the Component Pod
+
+Use `./scripts/login-pod.sh <component name>` to access a component's Pod and open a Bash shell for interactive operation, where the component name can be:
+
+* cg-engineconnmanager
+* cg-engineplugin
+* cg-entrance
+* cg-linkismanager
+* mg-eureka
+* mg-gateway
+* ps-publicservice
+* web
+
+``` bash
+$> ./scripts/login-pod.sh cg-engineconnmanager
+- login [cg-engineconnmanager]'s bash ...
+bash-4.2$ pwd
+/opt/linkis
+bash-4.2$ env |grep LINKIS
+LINKIS_DEMO_PS_PUBLICSERVICE_SERVICE_HOST=127.0.0.1
+LINKIS_DEMO_CG_LINKISMANAGER_PORT_9101_TCP_PROTO=tcp
+LINKIS_DEMO_WEB_PORT_8087_TCP_PORT=8087
+...
+LINKIS_CLIENT_CONF_DIR=/etc/linkis-conf
+bash-4.2$ ps aux |grep linkis
+hadoop 1 0.0 0.0 11708 2664 ? Ss 16:32 0:00 /bin/bash /opt/linkis/sbin/linkis-daemon.sh start cg-engineconnmanager
+hadoop 10 0.0 0.0 11708 2624 ? S 16:32 0:00 sh /opt/linkis/sbin/ext/linkis-cg-engineconnmanager
+hadoop 11 0.0 0.0 11712 2536 ? S 16:32 0:00 sh /opt/linkis/sbin/ext/linkis-common-start
+hadoop 12 4.0 3.2 4146404 400084 ? Sl 16:32 0:35 java -DserviceName=linkis-cg-engineconnmanager -Xmx512M -XX:+UseG1GC -Xloggc:/var/logs/linkis/linkis-cg-engineconnmanager-gc.log -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -cp /etc/linkis-conf:/opt/linkis/lib/linkis-commons/public-module/*:/opt/linkis/lib/linkis-computation-governance/linkis-cg-engineconnmanager/* org.apache.linkis.ecm.server.LinkisECMApplication --eureka.instance.prefer-ip-address=true --eureka.instance.instance-id=${spring.cloud.client.ip-address}:${spring.application.name}:${server.port}
+bash-4.2$ exit
+exit
+```
+
diff --git a/versioned_docs/version-1.4.0/development/debug.md b/versioned_docs/version-1.4.0/development/debug.md
new file mode 100755
index 00000000000..bc28d394883
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/debug.md
@@ -0,0 +1,474 @@
+---
+title: Debug Guide
+sidebar_position: 5.0
+---
+
+> Introduction: This article records in detail how to configure and start various microservices of Linkis in IDEA, and implement the submission and execution of scripts such as JDBC, Python, and Shell. On Mac OS, each microservice of Linkis supports local debugging.
+> However, on Windows OS, the linkis-cg-engineconnmanager service does not support local debugging for the time being. You can refer to the remote debugging documentation in Section 4 below for debugging.
+
+
+Before version 1.0.3, Linkis had not yet entered the Apache incubator; the project still belonged to WeBank and the package name of the main classes was `com.webank.wedatasphere.linkis`, so pay attention to the distinction when debugging.
+
+## 1. Code debugging environment
+
+- jdk1.8
+- maven3.5+
+
+## 2. Prepare the code and compile
+
+```shell
+git clone git@github.com:apache/linkis.git
+cd linkis
+git checkout dev-1.2.0
+````
+
+Clone the Linkis source code to your local machine and open it with IDEA. When you open the project for the first time, the dependency jar packages required to compile the Linkis project will be downloaded from the Maven repository. After the dependency jars have been loaded, run the following compile and package commands.
+
+```shell
+##If the corresponding version has been released, you can skip this step. The released version-related dependencies have been deployed to the maven central repository
+mvn -N install
+mvn clean install -DskipTests
+````
+
+After the compilation command runs successfully, the compiled installation package can be found in the directory linkis/linkis-dist/target/: apache-linkis-version-incubating-bin.tar.gz
+
+## 3. Configure and start the service
+
+### 3.1 Add mysql-connector-java to the classpath
+
+If the mysql driver class cannot be found during the service startup process, you can add mysql-connector-java-version.jar to the classpath of the corresponding service module.
+
+At present, the services that rely on mysql and the corresponding pom.xml paths are as follows:
+
+- linkis-mg-gateway: linkis-spring-cloud-services/linkis-service-gateway/linkis-gateway-server-support/pom.xml
+- linkis-ps-publicservice: linkis-public-enhancements/pom.xml
+- linkis-cg-linkismanager: linkis-computation-governance/linkis-manager/linkis-application-manager/pom.xml
+- linkis-cg-engineplugin: linkis-computation-governance/linkis-engineconn/linkis-engineconn-plugin-server/pom.xml
+
+The way to add the dependency is as follows: modify the pom.xml file of the corresponding service and add the mysql dependency:
+```xml
+<dependency>
+    <groupId>mysql</groupId>
+    <artifactId>mysql-connector-java</artifactId>
+    <version>${mysql.connector.version}</version>
+</dependency>
+```
+At the same time, check whether the scope of the mysql-connector-java dependency is set to `test` in the `<scope>` element of the pom.xml; if so, it needs to be commented out for local debugging.
+
+### 3.2 Adjust log4j2.xml configuration
+
+Under the Linkis source code folder, in the subdirectory linkis-dist/package/conf, are some default configuration files of Linkis. First, edit the log4j2.xml file, and add the configuration of log output to the console.
+
+![log4j2.xml](/Images/development/debug/log4j.png)
+
+Only the configuration content that needs to be added is posted here.
+
+```xml
+<appenders>
+    <Console name="Console" target="SYSTEM_OUT">
+        <ThresholdFilter level="INFO" onMatch="ACCEPT" onMismatch="DENY"/>
+        <PatternLayout pattern="%d{HH:mm:ss.SSS} %-5level [%t] %logger{36} %L %M - %msg%xEx%n"/>
+    </Console>
+</appenders>
+
+<loggers>
+    <root level="INFO">
+        <appender-ref ref="Console"/>
+    </root>
+</loggers>
+```
+__Note:__ the corresponding JDBC parameters in linkis.properties need to be modified.
+
+### 3.3 Start the eureka service
+
+Linkis services rely on Eureka as the registry, so we need to start the Eureka service first. The Eureka service can be started locally or remotely. After making sure that every service can reach Eureka's IP and port, you can start the other microservices.
+
+Inside Linkis, the application name and configuration file are set through the -DserviceName parameter, so -DserviceName is a VM startup parameter that must be specified.
+
+You can use the "-Xbootclasspath/a: configuration file path" command to append the configuration file to the end of the bootstrap class path, that is, add the dependent configuration file to the classpath.
+
+By checking Include dependencies with "Provided" scope, you can introduce provided-level dependency packages during debugging.
+
+![eureka](/Images/development/debug/eureka.png)
+
+Parameter explanation:
+
+```shell
+[service name]
+linkis-mg-eureka
+
+[Use classpath of module]
+linkis-eureka
+
+[Main Class]
+org.apache.linkis.eureka.SpringCloudEurekaApplication
+
+[VM Options]
+-DserviceName=linkis-mg-eureka -Xbootclasspath/a:{YourPathPrefix}/linkis/linkis-dist/package/conf
+
+[Program arguments]
+--spring.profiles.active=eureka --eureka.instance.preferIpAddress=true
+````
+
+Note that the local path involved in the debugging configuration needs to be modified to the path set by yourself;
+The path writing rule in Windows is: D:\{YourPathPrefix}\linkis\linkis-dist\package\conf
+(The same applies to the following microservices)
+
+If you don't want the default 20303 port, you can modify the port configuration:
+
+```shell
+File path: conf/application-eureka.yml
+Modify the port:
+server:
+ port: 8080 ##Starting port
+````
+
+After the above settings are completed, run the Application directly. After successful startup, you can view the eureka service list through http://localhost:20303/.
+
+![eureka-web](/Images/development/debug/eureka-web.png)
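+
+You can also check the registrations from the command line via Eureka's standard REST interface (a hedged example; adjust the port if you changed it above):
+
+```shell
+curl -s http://localhost:20303/eureka/apps | grep '<name>'
+```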
+
+### 3.4 Start linkis-mg-gateway
+
+linkis-mg-gateway is the service gateway of Linkis, and all requests will be forwarded to the corresponding service through the gateway.
+Before starting the server, you first need to edit the conf/linkis-mg-gateway.properties configuration file and add the administrator username and password; the username must be the same as the local username you are currently logged in with (e.g. your macOS username).
+
+````properties
+wds.linkis.admin.user=leojie
+wds.linkis.admin.password=123456
+````
+
+Set the startup Application of linkis-mg-gateway
+
+![gateway-app](/Images/development/debug/gateway.png)
+
+Parameter explanation:
+
+```shell
+[Service Name]
+linkis-mg-gateway
+
+[Use classpath of module]
+linkis-gateway-server-support
+
+[VM Options]
+-DserviceName=linkis-mg-gateway -Xbootclasspath/a:{YourPathPrefix}/linkis/linkis-dist/package/conf
+
+[Main Class]
+org.apache.linkis.gateway.springcloud.LinkisGatewayApplication
+```
+
+After the above settings are completed, the Application can be run directly.
+
+### 3.5 Start linkis-ps-publicservice
+
+publicservice is a public enhancement service of Linkis, a module that provides functions such as unified configuration management, context service, material library, data source management, microservice management and historical task query for other microservice modules.
+
+Set the startup Application of linkis-ps-publicservice
+
+![publicservice-app](/Images/development/debug/publicservice.png)
+
+Parameter explanation:
+```shell
+[Service Name]
+linkis-ps-publicservice
+
+[Use classpath of module]
+linkis-public-enhancements
+
+[VM Options]
+-DserviceName=linkis-ps-publicservice -Xbootclasspath/a:{YourPathPrefix}/linkis/linkis-dist/package/conf
+
+[Main Class]
+org.apache.linkis.filesystem.LinkisPublicServiceApp
+
+[Add provided scope to classpath]
+By checking Include dependencies with "Provided" scope, you can introduce provided-level dependency packages during debugging.
+```
+
+When starting publicservice directly, you may encounter the following errors:
+
+![publicservice-debug-error](/Images/development/debug/publicservice-debug-error.png)
+
+You need to add the public dependency modules to the classpath of the linkis-public-enhancements module; modify its pom.xml to add the following dependencies:
+linkis-public-enhancements/pom.xml
+```xml
+<dependency>
+    <groupId>org.apache.linkis</groupId>
+    <artifactId>linkis-dist</artifactId>
+    <version>${project.version}</version>
+</dependency>
+
+<dependency>
+    <groupId>mysql</groupId>
+    <artifactId>mysql-connector-java</artifactId>
+    <version>${mysql.connector.version}</version>
+</dependency>
+```
+
+After completing the above configuration, restart the Application of publicservice
+
+### 3.6 Start linkis-cg-linkismanager
+
+![cg-linkismanager-APP](/Images/development/debug/cg-linkismanager-APP.png)
+
+Parameter explanation:
+
+```shell
+[Service Name]
+linkis-cg-linkismanager
+
+[Use classpath of module]
+linkis-application-manager
+
+[VM Options]
+-DserviceName=linkis-cg-linkismanager -Xbootclasspath/a:{YourPathPrefix}/linkis/linkis-dist/package/conf
+
+[Main Class]
+org.apache.linkis.manager.am.LinkisManagerApplication
+
+[Add provided scope to classpath]
+By checking Include dependencies with "Provided" scope, you can introduce provided-level dependency packages during debugging.
+```
+
+### 3.7 Start linkis-cg-entrance
+
+![cg-entrance-APP](/Images/development/debug/cg-entrance-APP.png)
+
+Parameter explanation:
+
+```shell
+[Service Name]
+linkis-cg-entrance
+
+[Use classpath of module]
+linkis-entrance
+
+[VM Options]
+-DserviceName=linkis-cg-entrance -Xbootclasspath/a:D:\yourDir\linkis\linkis-dist\package\conf
+
+[Main Class]
+org.apache.linkis.entrance.LinkisEntranceApplication
+
+[Add provided scope to classpath]
+By checking Include dependencies with "Provided" scope, you can introduce provided-level dependency packages during debugging.
+```
+
+### 3.8 Start linkis-cg-engineconnmanager
+
+![engineconnmanager-app](/Images/development/debug/engineconnmanager-app.png)
+
+Parameter explanation:
+
+```shell
+[Service Name]
+linkis-cg-engineconnmanager
+
+[Use classpath of module]
+linkis-engineconn-manager-server
+
+[VM Options]
+-DserviceName=linkis-cg-engineconnmanager -Xbootclasspath/a:{YourPathPrefix}/linkis/linkis-dist/package/conf -DJAVA_HOME=/Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/
+
+[Main Class]
+org.apache.linkis.ecm.server.LinkisECMApplication
+
+[Add provided scope to classpath]
+By checking Include dependencies with "Provided" scope, you can introduce provided-level dependency packages during debugging.
+```
+
+-DJAVA_HOME specifies the path of the java command that ECM uses to start engine processes. If the version in the default JAVA environment variable meets your needs, this option can be omitted.
+
+Debugging the linkis-cg-engineconnmanager module is only supported on macOS and Linux.
+
+### 3.9 Key Configuration Modifications
+
+The above steps only complete the startup configuration of each Linkis microservice Application. In addition, some key settings in the configuration files loaded at service startup also need to be modified, otherwise errors will be encountered while starting the services or executing scripts. The key configuration changes are summarized as follows:
+
+#### 3.9.1 conf/linkis.properties
+
+````properties
+# linkis underlying database connection parameter configuration
+wds.linkis.server.mybatis.datasource.url=jdbc:mysql://yourip:3306/linkis?characterEncoding=UTF-8
+wds.linkis.server.mybatis.datasource.username=your username
+wds.linkis.server.mybatis.datasource.password=your password
+
+# Set the bml material storage path to not hdfs
+wds.linkis.bml.is.hdfs=false
+wds.linkis.bml.local.prefix=/Users/leojie/software/linkis/data/bml
+
+wds.linkis.home=/Users/leojie/software/linkis
+
+# Set the administrator username, your local username
+wds.linkis.governance.station.admin=leojie
+
+# Set the prefer ip address
+linkis.discovery.prefer-ip-address=true
+
+# Set the debug enable
+wds.linkis.engineconn.debug.enable=true
+````
+
+Before configuring the Linkis database connection parameters, please create the linkis database and run linkis-dist/package/db/linkis_ddl.sql and linkis-dist/package/db/linkis_dml.sql to initialize all tables and data, for example as shown below.
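+
+A minimal sketch of this initialization, assuming a local MySQL instance and that the commands are run from the Linkis source root:
+
+```shell
+# Create the linkis database and initialize tables and data
+mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS linkis DEFAULT CHARACTER SET utf8mb4;"
+mysql -u root -p linkis < linkis-dist/package/db/linkis_ddl.sql
+mysql -u root -p linkis < linkis-dist/package/db/linkis_dml.sql
+```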
+
+The directory structure of wds.linkis.home={YourPathPrefix}/linkis is shown below; only the lib and conf directories need to be placed in it. When an engine process starts, the conf and lib paths under wds.linkis.home are added to the classpath. If wds.linkis.home is not specified, a directory-not-found exception may be thrown.
+
+![linkis-home](/Images/development/debug/linkis-home.png)
+
+#### 3.9.2 conf/linkis-cg-entrance.properties
+
+````properties
+# The log directory of the entrance service execution task
+wds.linkis.entrance.config.log.path=file:///{YourPathPrefix}/linkis/data/entranceConfigLog
+
+# The result set is saved in the directory, the local user needs read and write permissions
+wds.linkis.resultSet.store.path=file:///{YourPathPrefix}/linkis/data/resultSetDir
+````
+
+#### 3.9.3 conf/linkis-cg-engineconnmanager.properties
+
+````properties
+wds.linkis.engineconn.root.dir={YourPathPrefix}/linkis/data/engineconnRootDir
+````
+
+If you do not modify it, you may encounter an exception that the path does not exist.
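+
+The local directories referenced in 3.9.2 and 3.9.3 can be created in advance. A small sketch, assuming you replace {YourPathPrefix} with your actual path:
+
+```shell
+mkdir -p {YourPathPrefix}/linkis/data/entranceConfigLog \
+         {YourPathPrefix}/linkis/data/resultSetDir \
+         {YourPathPrefix}/linkis/data/engineconnRootDir
+```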
+
+#### 3.9.4 conf/linkis-cg-engineplugin.properties
+
+````properties
+wds.linkis.engineconn.home={YourPathPrefix}/linkis/linkis-engineconn-plugins/shell/target/out
+
+wds.linkis.engineconn.plugin.loader.store.path={YourPathPrefix}/linkis/linkis-engineconn-plugins/shell/target/out
+````
+
+These two configurations mainly specify the root directory where engines are stored. Pointing them at target/out means that after engine-related code or configuration changes, you only need to restart the engineplugin service for the changes to take effect.
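+
+For target/out to exist, the shell engine plugin module needs to have been built at least once. A hedged example, assuming the command is run from the Linkis source root:
+
+```shell
+# Build the shell engine plugin (and the modules it depends on) without running tests
+mvn clean package -pl linkis-engineconn-plugins/shell -am -DskipTests
+```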
+
+### 3.10 Set sudo password-free for the current user
+
+When an engine is started, sudo is used to execute the shell command that starts the engine process. On macOS the current user generally needs to enter a password when using sudo, so password-free sudo needs to be configured for the current user. The setting method is as follows:
+
+```shell
+sudo chmod u-w /etc/sudoers
+sudo visudo
+Replace #%admin ALL=(ALL) ALL with %admin ALL=(ALL) NOPASSWD: ALL
+Save the file and exit
+```
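+
+You can verify the setting afterwards; the check below simply runs a no-op command with sudo in non-interactive mode:
+
+```shell
+sudo -n true && echo "password-free sudo: OK"
+```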
+
+### 3.11 Service Test
+
+Make sure that all of the above services have started successfully, then test by submitting a shell script job in Postman (or with curl, as shown below).
+
+First visit the login interface to generate a cookie:
+
+![login](/Images/development/debug/login.png)
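+
+If you prefer the command line over Postman, the login call can also be made with curl. A sketch assuming the default gateway port 9001 and the administrator account configured in conf/linkis-mg-gateway.properties; the session cookie is saved for the following requests:
+
+```shell
+curl -c cookies.txt -X POST 'http://127.0.0.1:9001/api/rest_j/v1/user/login' \
+  -H 'Content-Type: application/json' \
+  -d '{"userName": "leojie", "password": "123456"}'
+```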
+
+Then submit the shell code for execution
+
+POST: http://127.0.0.1:9001/api/rest_j/v1/entrance/submit
+
+body parameter:
+
+````json
+{
+ "executionContent": {
+ "code": "echo 'hello'",
+ "runType": "shell"
+ },
+ "params": {
+ "variable": {
+ "testvar": "hello"
+ },
+ "configuration": {
+ "runtime": {},
+ "startup": {}
+ }
+ },
+ "source": {
+ "scriptPath": "file:///tmp/hadoop/test.sql"
+ },
+ "labels": {
+ "engineType": "shell-1",
+ "userCreator": "leojie-IDE"
+ }
+}
+````
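+
+The same submission can be made with curl, reusing the cookie saved at login (with the JSON body above saved as submit.json):
+
+```shell
+curl -b cookies.txt -X POST 'http://127.0.0.1:9001/api/rest_j/v1/entrance/submit' \
+  -H 'Content-Type: application/json' \
+  -d @submit.json
+```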
+
+Execution result:
+
+````json
+{
+ "method": "/api/entrance/submit",
+ "status": 0,
+ "message": "OK",
+ "data": {
+ "taskID": 1,
+ "execID": "exec_id018017linkis-cg-entrance127.0.0.1:9104IDE_leojie_shell_0"
+ }
+}
+````
+
+Finally, check the running status of the task and get the running result set:
+
+GET http://127.0.0.1:9001/api/rest_j/v1/entrance/exec_id018017linkis-cg-entrance127.0.0.1:9104IDE_leojie_shell_0/progress
+
+````json
+{
+ "method": "/api/entrance/exec_id018017linkis-cg-entrance127.0.0.1:9104IDE_leojie_shell_0/progress",
+ "status": 0,
+ "message": "OK",
+ "data": {
+ "progress": 1,
+ "progressInfo": [],
+ "execID": "exec_id018017linkis-cg-entrance127.0.0.1:9104IDE_leojie_shell_0"
+ }
+}
+````
+
+GET http://127.0.0.1:9001/api/rest_j/v1/jobhistory/1/get
+
+GET http://127.0.0.1:9001/api/rest_j/v1/filesystem/openFile?path=file:///Users/leojie/software/linkis/data/resultSetDir/leojie/linkis/2022-07-16/214859/IDE/1/1_0.dolphin
+
+````json
+{
+ "method": "/api/filesystem/openFile",
+ "status": 0,
+ "message": "OK",
+ "data": {
+ "metadata": "NULL",
+ "totalPage": 0,
+ "totalLine": 1,
+ "page": 1,
+ "type": "1",
+ "fileContent": [
+ [
+ "hello"
+ ]
+ ]
+ }
+}
+````
+
+## 4. Remote debugging service guide
+
+Based on the code location that needs debugging, determine the corresponding service it belongs to. Use the startup script linkis-daemon.sh and configure the remote debugging port specifically for that service during startup.
+
+### 4.1 Identify the service where the package that needs to be debugged is located
+
+Identify the service where the package that needs to be debugged is located (If you are not sure about the service name, check in ${LINKIS_HOME}/sbin/linkis-start-all.sh)
+
+### 4.2 Restart the service that needs to be debugged
+
+```shell
+sh linkis-daemon.sh restart ps-publicservice debug-5005
+```
+Observe the startup command printed to the console and check whether it contains `-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005`; if it does, the remote debugging port has been opened successfully.
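+
+Optionally, you can also confirm that the debug port is listening (assuming the default port 5005 used above):
+
+```shell
+netstat -an | grep 5005
+```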
+
+### 4.3 Configure remote debugging in the IDE
+
+Open the window as shown below and configure the remote debugging port, service, and module
+![c-debug](images/c-debug.png)
+
+### 4.4 Start debugging
+
+Click the debug button, and the following information appears, indicating that you can start debugging
+![debug](images/debug.png)
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/development-specification/_category_.json b/versioned_docs/version-1.4.0/development/development-specification/_category_.json
new file mode 100644
index 00000000000..0851e30ec0f
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Development Specification",
+ "position": 11.0
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/development-specification/api.md b/versioned_docs/version-1.4.0/development/development-specification/api.md
new file mode 100644
index 00000000000..a82ddeca259
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/api.md
@@ -0,0 +1,148 @@
+---
+title: API Specification
+sidebar_position: 4
+---
+
+ > When contributors contribute new RESTful interfaces to Linkis, they are required to follow these interface specifications during development.
+
+
+
+## 1. HTTP or WebSocket ?
+
+
+
+Linkis currently provides two interfaces: HTTP and WebSocket.
+
+
+
+WebSocket advantages over HTTP:
+
+
+
+- Less stress on the server
+
+- More timely information push
+
+- Interactivity is more friendly
+
+
+
+Correspondingly, WebSocket has the following disadvantages:
+
+
+
+- The WebSocket connection may be dropped during use
+
+- Higher technical requirements on the front end
+
+- It is generally required to have a front-end degradation handling mechanism
+
+
+
+**We strongly recommend that contributors avoid providing interfaces over WebSocket unless it is really necessary.**
+
+
+
+**If you think it is necessary to use WebSocket and are willing to contribute the developed functions to Linkis, we suggest you communicate with us before the development, thank you!**
+
+
+
+## 2. URL specification
+
+
+
+```
+
+/api/rest_j/v1/{applicationName}/.+
+
+/api/rest_s/v1/{applicationName}/.+
+
+```
+
+
+
+**Convention** :
+
+
+
+- rest_j indicates that the interface complies with the Jersey specification
+
+- rest_s indicates that the interface complies with the SpringMVC REST specification
+
+- v1 is the version number of the service. **The version number will be updated with the Linkis version.**
+
+- {applicationName} is the name of the microservice
+
+
+
+## 3. Interface request format
+
+
+
+```json
+
+{
+
+"method":"/api/rest_j/v1/entrance/execute",
+
+"data":{},
+
+"WebsocketTag":"37fcbd8b762d465a0c870684a0261c6e" // WebSocket requests require this parameter, HTTP requests can ignore it
+
+}
+
+```
+
+
+
+**Convention** :
+
+
+
+- method: The requested RESTful API URL.
+
+- data: The specific data requested.
+
+- WebSocketTag: The unique identity of a WebSocket request. This parameter is also returned by the back end for the front end to identify.
+
+
+
+## 4. Interface response format
+
+
+
+```json
+
+{"method":"/api/rest_j/v1/project/create", "status":0, "message":"creating success!", "data":{}}
+
+```
+
+
+
+**Convention** :
+
+
+
+- method: Returns the requested RESTful API URL, mainly for the WebSocket mode.
+
+- status: Returns status information, where: -1 means not logged in, 0 means success, 1 means error, 2 means failed validation, and 3 means no access to the interface.
+
+- data: Returns the specific data.
+
+- message: Returns a prompt message for the request. If status is not 0, message returns an error message, and data may contain a stack field with the specific stack trace.
+
+
+
+In addition, different status values map to different HTTP status codes. Under normal circumstances:
+
+
+
+- When status is 0, the HTTP status code is 200
+
+- When the status is -1, the HTTP status code is 401
+
+- When status is 1, the HTTP status code is 400
+
+- When status is 2, the HTTP status code is 412
+
+- When status is 3, the HTTP status code is 403
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/development-specification/commit-message.md b/versioned_docs/version-1.4.0/development/development-specification/commit-message.md
new file mode 100644
index 00000000000..e9c6d87ff47
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/commit-message.md
@@ -0,0 +1,100 @@
+---
+title: Commit Message Notice
+sidebar_position: 2
+---
+>This article is quoted from https://dolphinscheduler.apache.org/en-us/docs/dev/user_doc/contribute/join/commit-message.html
+
+### 1.Preface
+
+A good commit message can help other developers (or future developers) quickly understand the context of related changes, and can also help project managers determine whether the commit is suitable for inclusion in the release. But when we checked the commit logs of many open source projects, we found an interesting problem: some developers have very good code quality, but the commit message record is rather confusing, and when other contributors or learners view the code they cannot intuitively understand the purpose of the changes from the commit log.
+As Peter Hutterer said: "Re-establishing the context of a piece of code is wasteful. We can't avoid it completely, so our efforts should go to reducing it as much as possible. Commit messages can do exactly that and, as a result, a commit message shows whether a developer is a good collaborator." Therefore, DolphinScheduler developed this protocol in conjunction with other communities and official Apache documents.
+
+### 2.Commit Message RIP
+
+#### 2.1 Clearly modify the content
+
+A commit message should clearly state what issues (bug fixes, function enhancements, etc.) the submission solves, so that other developers can better track the issues and clarify the optimization during the version iteration process.
+
+#### 2.2 Associate the corresponding Pull Request or Issue
+
+When our changes are large, the commit message should best be associated with the relevant Issue or Pull Request on GitHub, so that our developers can quickly understand the context of the code submission through the associated information when reviewing the code. If the current commit is for an issue, then the issue can be closed in the Footer section.
+
+#### 2.3 Unified format
+
+The formatted CommitMessage can help provide more historical information for quick browsing, and it can also generate a Change Log directly from commit.
+
+Commit message should include three parts: Header, Body and Footer. Among them, Header is required, Body and Footer can be omitted.
+
+##### Header
+
+The header part has only one line, including three fields: type (required), scope (optional), and subject (required).
+
+[Linkis-ISSUE number][type] subject
+
+(1) Type is used to indicate the category of commit, and only the following 7 types are allowed.
+
+- feat:New features
+- fix:Bug fixes
+- docs:Documentation
+- style: Format (does not affect changes in code operation)
+- refactor:Refactoring (It is not a new feature or a code change to fix a bug)
+- test:Add test
+- chore:Changes in the build process or auxiliary tools
+
+If the type is feat and fix, the commit will definitely appear in the change log. Other types (docs, chore, style, refactor, test) are not recommended.
+
+(2) Scope
+
+Scope is used to indicate the scope of commit impact, such as server, remote, etc. If there is no suitable scope, you can use \*.
+
+(3) subject
+
+Subject is a short description of the purpose of the commit, no more than 50 characters.
+
+##### Body
+
+The body part is a detailed description of this commit, which can be divided into multiple lines, and the line break will wrap with 72 characters to avoid automatic line wrapping affecting the appearance.
+
+Note the following points in the Body section:
+
+- Use the verb-object structure, note the use of present tense. For example, use change instead of changed or changes
+
+- Don't capitalize the first letter
+
+- The end of the sentence does not need a ‘.’ (period)
+
+##### Footer
+
+Footer only works in two situations
+
+(1) Incompatible changes
+
+If the current code is not compatible with the previous version, the Footer part starts with BREAKING CHANGE, followed by a description of the change, the reason for the change, and the migration method.
+
+(2) Close Issue
+
+If the current commit is for a certain issue, you can close the issue in the Footer section, or close multiple issues at once.
+
+##### For Example
+
+```
+[Linkis-001][docs-en] add commit message
+
+- commit message RIP
+- build some conventions
+- help the commit messages become clean and tidy
+- help developers and release managers better track issues
+ and clarify the optimization in the version iteration
+
+This closes #001
+```
+
+### 3.Reference documents
+
+[Dolphinscheduler Commit Message](https://dolphinscheduler.apache.org/zh-cn/docs/dev/user_doc/contribute/join/commit-message.html)
+
+[Commit message format](https://cwiki.apache.org/confluence/display/GEODE/Commit+Message+Format)
+
+[On commit messages-Peter Hutterer](http://who-t.blogspot.com/2009/12/on-commit-messages.html)
+
+[RocketMQ Community Operation Conventions](https://mp.weixin.qq.com/s/LKM4IXAY-7dKhTzGu5-oug)
diff --git a/versioned_docs/version-1.4.0/development/development-specification/concurrent.md b/versioned_docs/version-1.4.0/development/development-specification/concurrent.md
new file mode 100644
index 00000000000..9eeb78304be
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/concurrent.md
@@ -0,0 +1,10 @@
+---
+title: Concurrent Specification
+sidebar_position: 5
+---
+
+1. [**Compulsory**] Make sure getting a singleton object to be thread-safe. Operating inside singletons should also be kept thread-safe.
+2. [**Compulsory**] Thread resources must be provided through the thread pool, and it is not allowed to explicitly create threads in the application.
+3. SimpleDateFormat is not thread-safe; it is recommended to use the DateUtils utility class instead.
+4. [**Compulsory**] At high concurrency, synchronous calls should consider the performance cost of locking. If you can use lockless data structures, don't use locks. If you can lock blocks, don't lock the whole method body. If you can use object locks, don't use class locks.
+5. [**Compulsory**] Use ThreadLocal as little as possible. Whenever a ThreadLocal holds an object that needs to be closed, remember to close and remove it to release the resource.
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/development-specification/exception-catch.md b/versioned_docs/version-1.4.0/development/development-specification/exception-catch.md
new file mode 100644
index 00000000000..27a96e980c4
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/exception-catch.md
@@ -0,0 +1,10 @@
+---
+title: Exception Catch Specification
+sidebar_position: 3
+---
+
+1. [**Mandatory**] For the exception of each small module, a special exception class should be defined to facilitate the subsequent generation of error codes for users. It is not allowed to throw any RuntimeException or directly throw Exception.
+2. Try not to try-catch a large section of code. This is irresponsible. Please distinguish between stable code and non-stable code when catching. Stable code refers to code that will not go wrong anyway. For the catch of unstable code, try to distinguish the exception types as much as possible, and then do the corresponding exception handling.
+3. [**Mandatory**] The purpose of catching an exception is to handle it. Don't throw it away without handling it. If you don't want to handle it, please throw the exception to its caller. Note: Do not use e.printStackTrace() under any circumstances! The outermost business users must deal with exceptions and turn them into content that users can understand.
+4. The finally block must close the resource object and the stream object, and try-catch if there is an exception.
+5. [**Mandatory**] Prevent NullPointerException. The return value of the method can be null, and it is not mandatory to return an empty collection, or an empty object, etc., but a comment must be added to fully explain under what circumstances the null value will be returned. RPC and SpringCloud Feign calls all require non-empty judgments.
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/development-specification/how-to-write-unit-test-code.md b/versioned_docs/version-1.4.0/development/development-specification/how-to-write-unit-test-code.md
new file mode 100644
index 00000000000..9f8159c43b8
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/how-to-write-unit-test-code.md
@@ -0,0 +1,393 @@
+---
+title: How to Write Unit Test Code
+sidebar_position: 10
+---
+
+## 1.Frame Selection
+
+JUnit 5 + Mockito + Jacoco + H2 local database
+
+IDEA enhancement plugins
+
+- JUnitGenerator V2.0: standard template for generating test cases
+- A generate-all-setter style plugin: create a new object and set default values for all of its fields
+- MybatisX: makes the association mapping between DAO and mapper easy to view
+
+### 1.1 Configure the Template of JUnit in Idea
+
+```properties
+
+########################################################################################
+##
+## Available variables:
+## $entryList.methodList - List of method composites
+## $entryList.privateMethodList - List of private method composites
+## $entryList.fieldList - ArrayList of class scope field names
+## $entryList.className - class name
+## $entryList.packageName - package name
+## $today - Todays date in MM/dd/yyyy format
+##
+## MethodComposite variables:
+## $method.name - Method Name
+## $method.signature - Full method signature in String form
+## $method.reflectionCode - list of strings representing commented out reflection code to access method (Private Methods)
+## $method.paramNames - List of Strings representing the method's parameters' names
+## $method.paramClasses - List of Strings representing the method's parameters' classes
+##
+## You can configure the output class name using "testClass" variable below.
+## Here are some examples:
+## Test${entry.ClassName} - will produce TestSomeClass
+## ${entry.className}Test - will produce SomeClassTest
+##
+########################################################################################
+##
+## title case
+#macro (cap $strIn)$strIn.valueOf($strIn.charAt(0)).toUpperCase()$strIn.substring(1)#end
+## Initial lowercase custom down
+#macro (down $strIn)$strIn.valueOf($strIn.charAt(0)).toLowerCase()$strIn.substring(1)#end
+## Iterate through the list and generate testcase for every entry.
+#foreach ($entry in $entryList)
+#set( $testClass="${entry.className}Test")
+##
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package $entry.packageName;
+
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.Test;
+import org.springframework.beans.factory.annotation.Autowired;
+
+/**
+ * ${entry.className} Tester
+*/
+public class $testClass {
+
+ @Autowired
+ private ${entry.className} #down(${entry.className});
+
+ @BeforeEach
+ @DisplayName("Each unit test method is executed once before execution")
+ public void before() throws Exception {
+ }
+
+ @AfterEach
+ @DisplayName("Each unit test method is executed once before execution")
+ public void after() throws Exception {
+ }
+
+#foreach($method in $entry.methodList)
+
+ @Test
+ @DisplayName("Method description: ...")
+ public void test#cap(${method.name})() throws Exception {
+ //TODO: Test goes here...
+ }
+
+#end
+
+#foreach($method in $entry.privateMethodList)
+
+ @Test
+ @DisplayName("Method description: ...")
+ public void test#cap(${method.name})() throws Exception {
+ //TODO: Test goes here...
+ #foreach($string in $method.reflectionCode)
+ $string
+ #end
+ }
+
+#end
+}
+#end
+
+```
+
+![test-0](../images/test-0.png)
+
+1. Configure test class generation path
+
+ Original configuration: ${sourcepath}/test/${package}/${filename}
+ Modified configuration: ${sourcepath}/..//test/java/${PACKAGE}/${FILENAME}
+
+ As shown in the figure:
+ ![test-1](../images/test-1.png)
+
+2. Select class -> right click -> generate -> JUnit test to generate a test class
+
+ ![test-2](../images/test-2.png)
+
+
+## 2.Unit Test Criteria
+
+### 2.1 Directory and Naming Criteria
+
+- 1. Unit test code directory
+ It must be written in the following project directory: src/test/java; it is not allowed in the business code directory.
+ Note: this directory is skipped during source code compilation, while the unit test framework scans it by default. Test configuration files must be placed under the src/test/resources directory
+
+- 2. The package name of the test class should be consistent with the package name of the tested class
+ Example:
+ Business class: src/main/java/org/apache/linkis/jobhistory/dao/JobDetailMapper.java
+ Corresponding test class: src/test/java/org/apache/linkis/jobhistory/dao/JobDetailMapperTest.java
+
+- 3. Naming and definition specification of test class: use test as the suffix of class name
+ The test class is named as follows:
+ Tested business + test, tested interface + test, tested class + test
+
+- 4. Specification for naming and defining test cases: use test as the prefix of method names
+ The naming rule of test cases is: test + method name. Avoid meaningless names such as test1 and test2. In addition, necessary comments for functions and methods are required.
+
+### 2.2 Unit Coding Specifications
+
+- 1. System.out must not be used in unit tests for manual verification, nor should if statements be used for verification (logs can be used for key output). Assertions must be used for verification.
+
+- 2. Maintain the independence of unit testing. In order to ensure that unit tests are stable, reliable and easy to maintain, unit test cases must not call each other or rely on the order of execution.
+ Counterexample: method2 needs to rely on the execution of method1 and take the execution result as the input of method2
+
+- 3. Unit tests must be repeatable and not affected by the external environment.
+ Note: unit tests are usually put into continuous integration. Unit tests will be executed every time there is code check in. If the single test depends on the external environment (network, service, middleware, etc.), it is easy to lead to the unavailability of the continuous integration mechanism.
+ Positive example: in order not to be affected by the external environment, it is required to change the relevant dependencies of the tested class into injection when designing the code, and inject a local (memory) implementation or mock implementation with a dependency injection framework such as spring during testing.
+
+- 4. Incremental code ensures that the unit test passes.
+ Note: the new code must supplement the unit test. If the new code affects the original unit test, please correct it
+
+- 5. For unit testing, it is necessary to ensure that the test granularity is small enough to help accurately locate the problem. Single test granularity is generally at the method level (very few scenarios such as tool classes or enumeration classes can be at the class level).
+ Note: only with small test granularity can we locate the error location as soon as possible. Single test is not responsible for checking cross class or cross system interaction logic, which is the field of integration testing.
+
+## 3.Use of Assertions
+
+ The result verification of all test cases must use assertions,
+ e.g. Assertions.assertEquals:
+ Assertions.assertEquals(expectedJobDetail, actualJobDetail)
+
+ The JUnit 5 Assertions are preferred; AssertJ assertions are allowed in a few scenarios,
+ such as comparing an object before/after a database update,
+ asserting with AssertJ's assertThat in usingRecursiveComparison mode:
+ Assertions.assertThat(actualObject).usingRecursiveComparison().isEqualTo(expectedObject);
+
+
+### 3.1 Junit5 General Assertion
+
+| Method | Description | Remarks |
+|--------|-------------|-------------|
+|assertEquals | judge whether two objects or two primitive types are equal| |
+|assertNotEquals | judge whether two objects or two primitive types are not equal| |
+|assertTrue | judge whether the given Boolean value is true| |
+|assertFalse | judge whether the given Boolean value is false| |
+|assertNull | judge whether the given object reference is null| |
+|assertNotNull | judge whether the given object reference is not null| |
+|assertAll | multiple judgment logics are processed together; as long as one fails, the overall test fails| |
+
+### 3.2 Junit5 Combined Assertion and Exception Assertion
+
+**Composite assertion**
+The assertAll method can process multiple judgment logics together; as long as one fails, the overall test fails:
+ ```java
+ @Test
+ @DisplayName("assert all")
+ public void all() {
+ //Multiple judgments are executed together. Only when all judgments are passed can they be considered as passed
+ assertAll("Math",
+ () -> assertEquals(2, 1 + 1),
+ () -> assertTrue(1 > 0)
+ );
+ }
+ ```
+
+**Exception assertion**
+
+The Assertions.assertThrows method is used to test whether the Executable instance throws an exception of the specified type when its execute method runs;
+if the execute method does not throw an exception during execution, or the thrown exception is inconsistent with the expected type, the test fails.
+Example:
+
+ ```java
+ @Test
+ @DisplayName("Assertion of exception")
+ void exceptionTesting() {
+ // When the execute method is executed, if an exception is thrown and the type of the exception is the first parameter of assertthrows (here is arithmeticexception. Class)
+ // The return value is an instance of an exception
+ Exception exception = assertThrows(ArithmeticException.class, () -> Math.floorDiv(1,0));
+ log.info("assertThrows pass,return instance:{}", exception.getMessage());
+ }
+ ```
+
+### 3.3 Assertion Usage Criteria
+
+**Object instance equality assertion**
+
+1. Is it the same object instance
+
+```html
+Use JUnit's Assertions.assertEquals
+Assertions.assertEquals(expectedJobDetail, actualJobDetail)
+```
+
+Not the same instance, but comparing whether the attribute values of the instances are exactly equal:
+AssertJ
+
+```html
+Comparison of objects before/after updating a database record (a common scenario)
+Assert using AssertJ's assertThat in usingRecursiveComparison mode
+Assertions.assertThat(actualObject).usingRecursiveComparison().isEqualTo(expectedObject);
+```
+
+2. Assertion of collection results such as List
+The size of the result set needs to be asserted
+(a range or a specific size)
+Each object in the result set needs an assertion; this is recommended in combination with a stream Predicate
+Example:
+
+```java
+ArrayList<JobRespProtocol> jobRespProtocolArrayList = service.batchChange(jobDetailReqBatchUpdate);
+//List is matched with a stream Predicate for assertion judgment
+Predicate<JobRespProtocol> statusPrecate = e -> e.getStatus() == 0;
+assertEquals(2, jobRespProtocolArrayList.size());
+assertTrue(jobRespProtocolArrayList.stream().anyMatch(statusPrecate));
+```
+
+## 4.Mock simulation return data
+
+Sometimes we only test certain APIs or service modules, where some methods of the service or dao return null by default; if the logic then checks or dereferences the returned null object, exceptions will be thrown.
+
+Example:
+
+```java
+ PageInfo pageInfo =
+ udfService.getManagerPages(udfName, udfTypes, userName, curPage, pageSize);
+ message = Message.ok();
+ // The pageInfo here is null, and subsequent get methods will have exceptions
+ message.data("infoList", pageInfo.getList());
+ message.data("totalPage", pageInfo.getPages());
+ message.data("total", pageInfo.getTotal());
+```
+
+Example of mock simulation data:
+
+```java
+ PageInfo pageInfo = new PageInfo<>();
+ pageInfo.setList(new ArrayList<>());
+ pageInfo.setPages(10);
+ pageInfo.setTotal(100);
+ // For the udfService.getManagerPages method, any parameters are matched and the mock returns the pageInfo object
+ // With this simulation data, the above example will not have exceptions when executing the get method
+ Mockito.when(
+ udfService.getManagerPages(
+ Mockito.anyString(),
+ Mockito.anyCollection(),
+ Mockito.anyString(),
+ Mockito.anyInt(),
+ Mockito.anyInt()))
+ .thenReturn(pageInfo);
+```
+
+## 5.Compilation of Unit Test
+
+### 5.1 Class Division
+
+Classes can be roughly classified according to their major functions:
+
+- Controller: provides HTTP services; unit tested together with MockMvc
+- Service: the service layer of business logic code
+- Dao: the Dao layer of database operations
+- Util: common utility function classes
+- Exception: custom exception classes
+- Enum: enumeration classes
+- Entity: entity classes used for DB interaction, parameter VO objects and other entities processed by methods (if they contain user-defined functions beyond normal getters/setters, unit tests are required)
+
+
+### 5.2 Unit Test of Controller class
+Using MockMvc
+
+It mainly verifies the request method, basic parameters and expected return result of the interface.
+Main scenarios: requests with and without non-required parameters, as well as abnormal cases
+
+```java
+ @Test
+ public void testList() throws Exception {
+ //Bring unnecessary parameters
+ MultiValueMap<String, String> paramsMap = new LinkedMultiValueMap<>();
+ paramsMap.add("startDate", String.valueOf(System.currentTimeMillis()));
+ MvcResult mvcResult = mockMvc.perform(get("/jobhistory/list")
+ .params(paramsMap))
+ .andExpect(status().isOk())
+ .andExpect(content().contentType(MediaType.APPLICATION_JSON))
+ .andReturn();
+
+ Message res = JsonUtils.jackson().readValue(mvcResult.getResponse().getContentAsString(), Message.class);
+ assertEquals(res.getStatus(), MessageStatus.SUCCESS());
+ logger.info(mvcResult.getResponse().getContentAsString());
+
+ //Without unnecessary parameters
+ mvcResult = mockMvc.perform(get("/jobhistory/list"))
+ .andExpect(status().isOk())
+ .andExpect(content().contentType(MediaType.APPLICATION_JSON))
+ .andReturn();
+
+ res = JsonUtils.jackson().readValue(mvcResult.getResponse().getContentAsString(), Message.class);
+ assertEquals(res.getStatus(), MessageStatus.SUCCESS());
+
+ logger.info(mvcResult.getResponse().getContentAsString());
+ }
+
+```
+
+### 5.3 Unit Test of Server class
+ //todo
+
+### 5.4 Unit Test of Dao class
+
+Use the H2 database. In the configuration file application.properties, you need to configure the basic information of the H2 database and the relevant path information of mybatis
+
+```properties
+#h2 database configuration
+spring.datasource.driver-class-name=org.h2.Driver
+# Script to connect database
+spring.datasource.url=jdbc:h2:mem:test;MODE=MySQL;DB_CLOSE_DELAY=-1;DATABASE_TO_LOWER=true
+#Script to initialize database tables
+spring.datasource.schema=classpath:create.sql
+#Script to initialize data for database tables
+spring.datasource.data=classpath:data.sql
+spring.datasource.username=sa
+spring.datasource.password=
+spring.datasource.hikari.connection-test-query=select 1
+spring.datasource.hikari.minimum-idle=5
+spring.datasource.hikari.auto-commit=true
+spring.datasource.hikari.validation-timeout=3000
+spring.datasource.hikari.pool-name=linkis-test
+spring.datasource.hikari.maximum-pool-size=50
+spring.datasource.hikari.connection-timeout=30000
+spring.datasource.hikari.idle-timeout=600000
+spring.datasource.hikari.leak-detection-threshold=0
+spring.datasource.hikari.initialization-fail-timeout=1
+
+# Configure the mapper information of mybatis-plus (mybatis-plus is used here)
+mybatis-plus.mapper-locations=classpath:org/apache/linkis/jobhistory/dao/impl/JobDetailMapper.xml,classpath:org/apache/linkis/jobhistory/dao/impl/JobHistoryMapper.xml
+mybatis-plus.type-aliases-package=org.apache.linkis.jobhistory.entity
+mybatis-plus.configuration.log-impl=org.apache.ibatis.logging.stdout.StdOutImpl
+```
+
+Writing conventions for Dao unit tests:
+
+1. Use @Transactional and @Rollback to roll back data and avoid data pollution
+2. Each DaoTest should have a public method for creating and initializing data (or import data from a CSV file) to prepare data. For queries, updates, deletions and other operations, call this public method first to prepare the data
+3. When creating test data, do not assign values to auto-increment ID attributes
+4. The test data created should be as consistent as possible with real sample data
+5. When testing data updates, if the field allows, prefix the original value with 'modify'
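+
+To run the tests from the command line, Maven's surefire plugin can execute a single test class. A minimal sketch; the module path below is only an example and must be replaced with the module that actually contains your test class:
+
+```shell
+# Run one test class of one module from the Linkis source root
+mvn test -pl linkis-public-enhancements/linkis-jobhistory -Dtest=JobDetailMapperTest
+```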
diff --git a/versioned_docs/version-1.4.0/development/development-specification/license.md b/versioned_docs/version-1.4.0/development/development-specification/license.md
new file mode 100644
index 00000000000..051b19a2096
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/license.md
@@ -0,0 +1,165 @@
+---
+title: License Notes
+sidebar_position: 0.1
+---
+
+> Note: This article applies to Apache projects only.
+>This article refers to the Dolphinscheduler project's License Instructions document https://dolphinscheduler.apache.org/zh-cn/docs/dev/user_doc/contribute/join/DS-License.html
+
+The open source projects under the ASF (Apache Foundation) have extremely strict requirements for the license. When you contribute code to Linkis, you must follow the Apache rules. In order to avoid the contributors wasting too much time on the license,
+This article will explain the ASF-License and how to avoid the license risk when participating in the Linkis project development.
+
+## 1.License file directory description
+
+License related content can be divided into 3 parts:
+- Project source code. The main scenarios to pay attention to are: resources directly included in the project source (such as video files, sample files, Java code from other projects, text, icons, audio files) and modifications made on top of them.
+- Project binary package. The main scenarios to pay attention to are: the runtime and installation dependencies (jar packages declared in the pom) that end up packaged into the released installation package.
+- Management-console (web) installation package. The scenario to pay attention to is: the additional front-end dependencies configured through linkis-web/package.json.
+
+[Linkis source code](https://github.com/apache/linkis) The directory related to the license is as follows
+```shell script
+# the outermost directory starts
+
+├── LICENSE //LICENSE of the project source code Some files without asf header or the introduction of external resources need to be marked here
+├── NOTICE //The NOTICE of the project source code generally does not change
+├── licenses //Introduction of third-party component licenses at the project source level
+│ └── LICENSE-py4j-0.10.9.5-src.txt
+├── linkis-dist
+│ └── release-docs
+│ ├── LICENSE //Summary of license information of the third-party jar packages that depend on the compiled installation package
+│ ├── licenses //Details of the license information corresponding to the third-party jar package dependent on the compiled installation package
+│ │ ├── LICENSE-log4j-api.txt
+│ │ ├── LICENSE-log4j-core.txt
+│ │ ├── LICENSE-log4j-jul.txt
+│ │ ├── LICENSE-xxxx.txt
+│ └── NOTICE //A summary of NOTICE of dependent third-party jar packages in the compiled installation package
+├── linkis-web
+ └── release-docs
+ ├── LICENSE //LICENSE information summary of the third-party npm dependencies of the front-end web compilation and installation package
+ ├── licenses //The license information corresponding to the third-party npm dependencies of the front-end web compilation and installation package is detailed
+ │ ├── LICENSE-vuedraggable.txt
+ │ ├── LICENSE-vue-i18n.txt
+ │ ├── LICENSE-vue.txt
+ │ ├── LICENSE-vuescroll.txt
+ │ └── LICENSE-xxxx.txt
+ └── NOTICE //A summary of NOTICE dependent on third-party npm for front-end web compilation and installation packages
+
+
+
+````
+
+
+## 2.How to legally use third-party open source software on Linkis
+
+When the code you submit has the following scenarios:
+
+- Scenario 1. The source code has added(removed) third-party code or static resources. For example, the source code directly uses a code file of another project, and adds text, css, js, pictures, icons, audio and video files. , and modifications made on a third-party basis.
+- Scenario 2. The runtime dependencies of the project are added(removed) (runtime dependencies:the final compilation and packaging will be packaged into the released installation package)
+
+- The imported file in Scenario 1 must be a Class A License of [ASF Third Party License Policy](https://apache.org/legal/resolved.html)
+- The dependencies introduced in Scenario 2 must be Class A/Class B licenses in [ASF Third Party License Policy](https://apache.org/legal/resolved.html), not Class C licenses
+
+We need to know the NOTICE/LICENSE of the files or jar dependencies introduced into our project (most open source projects have NOTICE files), and these must be reflected in our project. In Apache's words: "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a
+copyright notice that is included in or attached to the work.
+
+### 2.1 Example Scenario 1
+For example, the third-party file `linkis-engineconn-plugins/python/src/main/py4j/py4j-0.10.7-src.zip` is introduced into the source code
+
+Find the source branch of the version corresponding to py4j-0.10.7-src.zip, if there is no `LICENSE/NOTICE` file in the corresponding version branch, select the main branch
+- The project source code is located at: https://github.com/bartdag/py4j/tree/0.10.7/py4j-python
+- LICENSE file: https://github.com/bartdag/py4j/blob/0.10.7/py4j-python/LICENSE.txt
+- NOTICE file: none
+
+The license information of `py4j-0.10.7-src.zip` needs to be specified in the `linkis/LICENSE` file.
+The detailed license.txt file corresponding to `py4j-0.10.7-src.zip` is placed in the same-level directory `linkis-engineconn-plugins/python/src/main/py4j/LICENSE-py4j-0.10.7-src.txt`
+Since https://github.com/bartdag/py4j/tree/0.10.7/py4j-python does not have a NOTICE file, there is no need to append to the `linkis/NOTICE` file.
+
+### 2.2 Example Scene 2
+
+The compilation of the project depends on `org.apache.ant:ant:1.9.1`, and ant-1.9.1.jar will end up in the final package under `target/apache-linkis-xxx-incubating-bin/linkis-package/lib`.
+You can decompress ant-1.9.1.jar and extract the LICENSE/NOTICE files from the jar package; if they are not present, you need to find the source code of the corresponding version.
+Find the source branch of the version corresponding to ant-1.9.1.jar; if the corresponding version branch is not available, select the main branch
+- The project source code is located at: https://github.com/apache/ant/tree/rel/1.9.1
+- LICENSE file: https://github.com/apache/ant/blob/rel/1.9.1/LICENSE
+- NOTICE file: https://github.com/apache/ant/blob/rel/1.9.1/NOTICE
+
+The license information of `ant-1.9.1.jar` needs to be specified in the `linkis/LICENSE-binary` file.
+The detailed license.txt file corresponding to `ant-1.9.1.jar` is placed in `licenses-binary/LICENSE-ant.txt`
+The detailed notice.txt corresponding to `ant-1.9.1.jar` is appended to the `NOTICE-binary` file
+
+We will not introduce the specific usage terms of each open source license one by one here; if you are interested, you can look them up yourself.
+
+## 3.License detection rules
+We build a license-check script for our own project to ensure that we can avoid license problems as soon as we use it.
+
+When we need to add new Jars or other external resources, we need to follow these steps:
+
+* Add the jar name + version you need in tool/dependencies/known-dependencies.txt.
+* Add relevant license information in linkis-web/release-docs/LICENSE (depending on the actual situation).
+* Append the relevant NOTICE file to linkis-web/release-docs/NOTICE (determined according to the actual situation). This file must be consistent with the NOTICE file in the code version repository of the dependencies.
+
+:::caution Note
+If the scenario is to remove, then the corresponding reverse operation of the above steps needs to remove the corresponding LICENSE/NOTICE content in the corresponding file. In short, it is necessary to ensure that these files are consistent with the data of the actual source code/compiled package
+- known-dependencies.txt
+- LICENSE/LICENSE-binary/LICENSE-binary-ui
+- NOTICE/NOTICE-binary/NOTICE-binary-ui
+:::
+
+
+**check dependency license fail**
+
+After compiling, execute the tool/dependencies/diff-dependencies.sh script to verify
+````
+--- /dev/fd/63 2020-12-03 03:08:57.191579482 +0000
++++ /dev/fd/62 2020-12-03 03:08:57.191579482 +0000
+@@ -1,0 +2 @@
++HikariCP-java6-2.3.13.jar
+@@ -16,0 +18 @@
++c3p0-0.9.5.2.jar
+@@ -149,0 +152 @@
++mchange-commons-java-0.2.11.jar
+Error: Process completed with exit code 1.
+````
+Generally speaking, adding a jar is not finished so easily, because it often pulls in various other jars, and we also need to add the corresponding licenses for those jars.
+In this case, the check will report the error "check dependency license fail". As above, we are missing the license statements of HikariCP-java6-2.3.13, c3p0, etc.
+Follow the same steps used for adding a jar to add them.
+
+
+## 4.Appendix
+Attachment: Mail format of new jar
+````
+[VOTE][New/Remove Jar] jetcd-core(registry plugin support etcd3 )
+
+
+(state the purpose, and what the jar needs to be added)
+Hi, the registry SPI will provide the implementation of etcd3. Therefore, we need to introduce a new jar (jetcd-core, jetcd-launcher (test)), which complies with the Apache-2.0 License. I checked his related dependencies to make sure it complies with the license of the Apache project.
+
+new or remove jar :
+
+jetcd-core version -x.x.x license apache2.0
+jetcd-launcher (test) version -x.x.x license apache2.0
+
+Dependent jar (which jars it depends on, preferably the accompanying version, and the relevant license agreement):
+grpc-core version -x.x.x license XXX
+grpc-netty version -x.x.x license XXX
+grpc-protobuf version -x.x.x license XXX
+grpc-stub version -x.x.x license XXX
+grpc-grpclb version -x.x.x license XXX
+netty-all version -x.x.x license XXX
+failsafe version -x.x.x license XXX
+
+If it is a new addition, the email needs to attach the following content
+Related addresses: mainly github address, license file address, notice file address, maven central warehouse address
+
+github address: https://github.com/etcd-io/jetcd
+license: https://github.com/etcd-io/jetcd/blob/master/LICENSE
+notice: https://github.com/etcd-io/jetcd/blob/master/NOTICE
+
+Maven repository:
+https://mvnrepository.com/artifact/io.etcd/jetcd-core
+https://mvnrepository.com/artifact/io.etcd/jetcd-launcher
+````
+
+## 5.Reference articles
+* [COMMUNITY-LED DEVELOPMENT "THE APACHE WAY"](https://apache.org/dev/licensing-howto.html)
+* [ASF 3RD PARTY LICENSE POLICY](https://apache.org/legal/resolved.html)
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/development-specification/log.md b/versioned_docs/version-1.4.0/development/development-specification/log.md
new file mode 100644
index 00000000000..f30c5e656a7
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/log.md
@@ -0,0 +1,12 @@
+---
+title: Log Specification
+sidebar_position: 2
+---
+
+1. [**Convention**] Linkis chooses SLF4J and Log4J2 as the log printing framework, removing the logback in the Spring-Cloud package. Since SLF4J will randomly select a logging framework for binding, it is necessary to exclude bridging packages such as SLF4J-LOG4J after introducing new Maven packages in the future, otherwise log printing will be a problem. However, if the newly introduced Maven package depends on a package such as Log4J, do not exclude, otherwise the code may run with an error.
+2. [**Configuration**] The log4j2 configuration file is default to log4j2.xml and needs to be placed in the classpath. If springcloud combination is needed, "logging:config:classpath:log4j2-spring.xml"(the location of the configuration file) can be added to application.yml.
+3. [**Compulsory**] The APIs of the logging systems (Log4j2, Log4j, Logback) cannot be used directly in classes. Scala code must inherit the Logging trait; Java code should use LoggerFactory.getLogger(getClass()).
+4. [**Development Convention**] Since engineConn is started by engineConnManager from the command line, we specify the path of the log configuration file on the command line, and also modify the log configuration during the code execution. In particular, redirect the engineConn log to the system's standard out. So the log configuration file for the EngineConn convention is defined in the EnginePlugin and named log4j2-engineConn.xml (this is the convention name and cannot be changed).
+5. [**Compulsory**] Strictly differentiate log levels. Fatal-level problems should throw an exception and exit via System.exit(-1) when the SpringCloud application is initialized. Error-level exceptions are those that developers must care about and handle; do not use this level casually. The WARN level is for user-action exceptions and for logs used to troubleshoot bugs later. INFO is for key process logs. DEBUG is for debug-mode logs; write as few as possible.
+6. [**Compulsory**] Requirements: Every module must have INFO level log; Every key process must have INFO level log. The daemon thread must have a WARN level log to clean up resources, etc.
+7. [**Compulsory**] Exception information should include two types of information: the scene information and the exception stack. If the exception is not handled, throw it upward via the throws keyword. Example: logger.error(Parameters/Objects.toString + "_" + e.getMessage(), e);
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/development-specification/mapper-xml.md b/versioned_docs/version-1.4.0/development/development-specification/mapper-xml.md
new file mode 100644
index 00000000000..24f9d2e1fb8
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/mapper-xml.md
@@ -0,0 +1,159 @@
+---
+title: Mapper XML Specification
+sidebar_position: 10
+---
+
+> Contributor contributes new data tables to Apache Linkis. When writing Mapper XML, the following specifications must be followed for development.
+
+## 1.Basically follow the specifications
+- In mapper.xml namespace is equal to java interface address
+- The method name in the java interface is the same as the id of the statement in XML
+- The input parameter type of the method in the java interface is the same as the type specified by the parameterType of the statement in XML
+- The return value type of the method in the java interface is the same as the type specified by the resultType of the statement in XML
+- All mysql keywords in XML use lowercase uniformly
+- Abstract SQL fragments for excessive query fields
+- It is recommended to use Integer for the integer return value type, which can distinguish between unassigned and 0 cases. For example, if the return value is determined to be a number, int can be used. Other data types are similar.
+- For placeholders, use #{name} instead of ${name}. Fuzzy query can use CONCAT('%',#{sname},'%')
+- For sql statement writing, no annotation method is used, and it is uniformly written in the XML file
+
+## 2.Method name specification
+
+|Method Name|Description|Core Points|Recommendations|
+|:---- |:--- |:--- |:--- |
+|insert | New data | If it is an auto-incrementing primary key, it should return the primary key ID| |
+|deleteById | Delete data according to the primary key ID | sql adds limit 1 by default to prevent multiple deletion of data | This method is not recommended, it is recommended to logically delete |
+|updateById | Modify data according to the primary key ID | sql adds limit 1 by default to prevent multiple data modification | |
+|selectById | Query data by primary key | Query a piece of data | |
+|selectByIdForUpdate | Query data according to the primary key lock | Query a piece of data by locking, for transaction processing | |
+|queryListByParam | Query data list according to input conditions | Multi-parameter query list | |
+|queryCountByParam | Query the total count based on input conditions | Multi-parameter count query | |
+
+## 3.parameterType specification
+The java interface must annotate its parameters with @Param; the parameterType attribute can then be omitted in the XML
+### 3.1 Basic type
+````java
+// java interface
+User selectUserById(@Param("id") Integer id);
+// XML file
+<select id="selectUserById" resultType="User">
+    select * from user where id = #{id}
+</select>
+````
+### 3.2 Collection type
+````java
+// java interface
+List<User> userListByIds(@Param("ids") List<Integer> ids);
+// XML file
+<select id="userListByIds" resultType="User">
+    select * from user where id in
+    <foreach collection="ids" item="id" open="(" separator="," close=")">
+        #{id}
+    </foreach>
+</select>
+````
+### 3.3 Map type
+````java
+// java interface
+User queryByParams(@Param("map") Map<String, Object> params);
+// XML file
+<select id="queryByParams" resultType="User">
+    select * from user where id = #{map.id} and name = #{map.name}
+</select>
+````
+### 3.4 Entity Type
+````java
+// java interface
+User queryByUser(@Param("user") User user);
+// XML file
+<select id="queryByUser" resultType="User">
+    select * from user where name = #{user.name}
+</select>
+````
+### 3.5 Multiple parameter types
+````java
+// java interface
+User queryByIdAndName(@Param("id") Integer id, @Param("name") String name);
+// XML file
+<select id="queryByIdAndName" resultType="User">
+    select * from user where id = #{id} and name = #{name}
+</select>
+````
+## 4.XML file writing example
+Use spaces and indentation reasonably to enhance readability. Examples of various types of SQL statements are as follows
+```xml
+<!-- The namespace equals the fully qualified name of the corresponding java interface -->
+<mapper namespace="org.apache.linkis.demo.dao.UserMapper">
+
+    <!-- insert statement -->
+    <insert id="insert" parameterType="User">
+        insert into user (id, name)
+        values (1, 'z3')
+    </insert>
+
+    <!-- delete statement -->
+    <delete id="deleteById">
+        delete from user
+        where name = #{name}
+        and id = #{id}
+    </delete>
+
+    <!-- update statement -->
+    <update id="updateById">
+        update user
+        set name = #{name}
+        where id = #{id}
+    </update>
+
+    <!-- select statement, referencing the sql fragment defined below -->
+    <select id="selectById" resultMap="userMap">
+        select
+        <include refid="baseColumns"/>
+        from user
+        where id = #{id}
+    </select>
+
+    <!-- sql fragment -->
+    <sql id="baseColumns">
+        id,
+        name
+    </sql>
+
+    <!-- resultMap, referenced by the select statements -->
+    <resultMap id="userMap" type="User">
+        <id property="id" column="id"/>
+        <result property="name" column="name"/>
+    </resultMap>
+
+    <!-- conditional judgment -->
+    <select id="queryListByParam" resultMap="userMap">
+        select
+        <include refid="baseColumns"/>
+        from user
+        <where>
+            <if test="name != null and name != ''">
+                name = #{name}
+            </if>
+        </where>
+    </select>
+
+    <!-- sub query -->
+    <select id="queryCountByParam" resultType="java.lang.Integer">
+        select count(1)
+        from user
+        where id in (select id from user where name = #{name})
+    </select>
+
+</mapper>
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/development-specification/overview.md b/versioned_docs/version-1.4.0/development/development-specification/overview.md
new file mode 100644
index 00000000000..908252021a7
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/overview.md
@@ -0,0 +1,17 @@
+---
+title: Overview
+sidebar_position: 0
+---
+
+In order to standardize the Linkis community development environment, improve the output quality of subsequent development iterations, and standardize the entire development and design process of Linkis, contributors are strongly recommended to follow the development specifications below:
+- [License Notes](license.md)
+- [Programming Specification](programming-specification.md)
+- [Log Specification](log.md)
+- [Exception Handling Specification](exception-catch.md)
+- [API Specification](api.md)
+- [Concurrency Specification](concurrent.md)
+- [Path Specification](path-usage.md)
+- [Test Specification](unit-test.md)
+- [Version and New Feature Specification](version-feature-specifications.md)
+
+**Note**: The development specifications of the initial Linkis 1.0 version are relatively brief and will continue to be supplemented and improved as Linkis iterates. Contributors are welcome to provide their own opinions and comments.
diff --git a/versioned_docs/version-1.4.0/development/development-specification/path-usage.md b/versioned_docs/version-1.4.0/development/development-specification/path-usage.md
new file mode 100644
index 00000000000..f988a3b9a2c
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/path-usage.md
@@ -0,0 +1,20 @@
+---
+title: Path Usage Specification
+sidebar_position: 6
+---
+
+Please note: Linkis provides a unified Storage module, so you must follow the Linkis path specification when using the path or configuring the path in the configuration file.
+
+
+
+1. [**Compulsory**] When using a file path, whether it is local, HDFS, or HTTP, the scheme information must be included. Specifically:
+
+ - The Scheme header for local file is: file:///;
+
+ - The Scheme header for HDFS is: hdfs:///;
+
+ - The Scheme header for HTTP is: http:///.
+
+
+
+2. There should be no special characters in the path; try to use only combinations of English letters, underscores, and digits.
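+
+For example, the following illustrative paths (directory and file names are hypothetical) comply with the rules above:
+
+```text
+file:///appcom/tmp/hadoop/20230510/shell/demo.sh
+hdfs:///tmp/linkis/hadoop/20230510/result_1.dolphin
+```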
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/development-specification/programming-specification.md b/versioned_docs/version-1.4.0/development/development-specification/programming-specification.md
new file mode 100644
index 00000000000..b9180105edf
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/programming-specification.md
@@ -0,0 +1,98 @@
+---
+title: Programming Specification
+sidebar_position: 1
+---
+## 1. Naming Convention
+1. [**Mandatory**] Do not use Chinese pinyin and unintelligible abbreviations
+2. For basic Java naming conventions, please refer to [naming-conventions](https://alibaba.github.io/Alibaba-Java-Coding-Guidelines/#naming-conventions)
+3. [Constraints] Linkis provides a scalastyle configuration file; names that do not conform to the specification need to be renamed according to the scalastyle rules
+4. [**Mandatory**] Configuration files, startup file, process name, configuration keys,etc. also need to comply with naming conventions, which are as follows:
+
+|Classification| Style| Specifications| Examples|
+|:---- |:--- |:--- |:--- |
+|Configuration file|Separated by lowercase '-'| linkis-classification level (ps/cg/mg)-service name.properties| linkis-cg-linkismanager.properties|
+|Start-stop script|Separated by lowercase '-'| linkis-classification level-service name| linkis-cg-linkismanager|
+|Module directory|Separated by lowercase '-'| The module directory must be below the corresponding classification level, and the module name is a subdirectory| linkis-public-enhancements/linkis-bml|
+|Process naming|Camel case naming| Start with Linkis and end with the service name| LinkisBMLApplication|
+|Configuration key naming|Separated by lowercase '.'| linkis+module name+keyName| linkis.bml.hdfs.prefix|
+
+## 2. Annotation Protocol
+1. [**Mandatory**] The class, class attribute, interface method must be commented, and the comment must use the Javadoc specification, using the format of `/**content*/`
+2. [**Mandatory**] All abstract methods (including methods in interfaces) must be annotated with Javadoc. In addition to return values, parameters, and exception descriptions, the comment must also indicate what the method does and what function it implements
+
+
+
+3. [**Mandatory**] For single-line comments inside a method, put the comment on a separate line above the statement being commented and use //. For multi-line comments inside a method, use /* */ and keep them aligned with the code.
+
+
+
+Example:
+
+```java
+
+// Store the reflection relation between a parameter variable like 'T' and its actual type
+
+Map<String, Type> typeVariableReflect = new HashMap<>();
+```
+
+4. [**Mandatory**] All enumeration type fields must have a comment stating the purpose of each data item.
+
+
+
+Example:
+
+```java
+/**
+ * to monitor node status info
+ */
+public enum NodeHealthy {
+
+ /**
+ * healthy status
+ */
+ Healthy,
+
+ /**
+ * EM identifies itself as UnHealthy or
+ * The manager marks it as abnormal in the status of UnHealthy processing engine.
+ * The manager requests all engines to withdraw forcibly (engine suicide).
+ */
+ UnHealthy,
+
+ /**
+ * The engine is in the alarm state, but can accept tasks
+ */
+ WARN,
+
+ /**
+ * The stock is available and can accept tasks. When the EM status is not reported for the last n heartbeats,
+ * the Engine that has been started is still normal and can accept tasks
+ */
+ StockAvailable,
+
+ /**
+ * The stock is not available. Tasks cannot be accepted
+ */
+  StockUnavailable;
+}
+```
+
+5. [Recommendation] When code is modified, the comments should be updated at the same time, especially for parameters, return values, exceptions, and core logic.
+
+6. [Recommendation] Delete any unused fields, methods, and inner classes from a class; remove any unused parameter declarations and internal variables from a method.
+
+7. Be careful about commenting out code. State the reason in a comment above it instead of simply commenting it out; if the code is not needed, delete it. There are two possibilities for commented-out code: 1) the logic will be restored later; 2) it will never be used. For the former, without an explanatory note it is hard to know the motivation for keeping it; for the latter, it is recommended to delete it directly, since the code repository history can be consulted if needed.
+
+
+Example:
+
+```java
+  public static final CommonVars<String> TUNING_CLASS =
+ CommonVars.apply(
+ "wds.linkis.cs.ha.class", "org.apache.linkis.cs.highavailable.DefaultContextHAManager");
+ // The following comment code should be removed
+ // public static final CommonVars TUNING_CLASS =
+ // CommonVars.apply("wds.linkis.cs.ha.class","org.apache.linkis.cs.persistence.ProxyMethodA");
+```
+
+8. [Reference] Requirements for comments: first, they should accurately reflect the design ideas and code logic; second, they should describe the business meaning, so that other programmers can quickly understand the information behind the code. A large piece of code without any comments reads like gibberish to the reader. Comments are written for readers, so that even after a long time the thinking at that moment remains clear; they are also written for successors, so that they can quickly take over the work.
diff --git a/versioned_docs/version-1.4.0/development/development-specification/release-notes.md b/versioned_docs/version-1.4.0/development/development-specification/release-notes.md
new file mode 100644
index 00000000000..0d98b30d074
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/release-notes.md
@@ -0,0 +1,42 @@
+---
+title: Release-Notes Writing Specification
+sidebar_position: 9
+---
+Before each version is released, the release-notes for this version need to be organized by the release manager or developer to briefly describe the specific changes included in the new version update.
+
+In order to maintain uniformity and facilitate writing, the following specifications are formulated:
+- A summary of the version is required, a few sentences summarizing the core main changes of this version
+- According to the changed function points, it is classified into four categories: new features/enhancement points/fixed functions/others
+- Include a thank-you section: contributors to this version, not only those who submitted issues/prs, but also anyone who participated in discussions, community Q&A, or gave comments and suggestions
+- Specification for each note: `[Service name abbreviation-L1 maven module name][Linkis-pr/issues serial number] A brief description of the change, from which readers can generally understand what was changed.` `[Service name abbreviation-L1 maven module name]` is used as a label; an example is given below
+- Under the same category (new features/enhancement points/fixed functions/others), notes with the same service name are grouped together and sorted in ascending order of pr/issues serial number
+- Corresponding English documents are required
+
+````
+Service name abbreviation: the abbreviation of the main service that this pr changes at the code level
+For example, a pr that fixes bugs in the JDBC engine, which is the jdbc module under the linkis-cg-engineconn service
+EG:[EC-Jdbc][[Linkis-1851]](https://github.com/apache/linkis/issues/1851) Fix the problem that the jdbc engine cannot execute normally when one task contains multiple sql statements
+If the L1 module does not exist, or the change is an adjustment at the whole service level, the lower-level module can be omitted, such as Entrance
+````
+
+## Common notes tags
+```html
+linkis-mg-eureka Eureka
+linkis-mg-gateway Gateway
+linkis-cg-linkismanager LM
+linkis-cg-engineconnplugin ECP
+linkis-cg-engineconnmanager ECM
+linkis-cg-engineconn EC
+linkis-cg-entrance Entrance
+linkis-ps-publicservice PS
+linkis-ps-cs CS
+linkis-ps-metadatamanager MDM
+linkis-ps-data-source-query DSQ
+
+Web console Web
+Install Install
+Install-Scripts Install-Scripts
+Install-SQL Install-Sql
+Install-Web Install-Web
+Common module Common
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/development-specification/unit-test.md b/versioned_docs/version-1.4.0/development/development-specification/unit-test.md
new file mode 100644
index 00000000000..09074c0bc6a
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/unit-test.md
@@ -0,0 +1,12 @@
+---
+title: Test Specification
+sidebar_position: 7
+---
+
+1. [**Mandatory**] Tool classes and internal interfaces of services must have test cases.
+2. [**Mandatory**] Unit tests must be automated (triggered by the mvn build), independent (test cases must not call each other), and repeatable (they can be executed multiple times with the same result)
+3. [**Mandatory**] A test case should only test one method.
+4. [**Mandatory**] Test case exceptions cannot be caught and need to be thrown upwards.
+5. [**Mandatory**] The unit test code must be written in the following project directory: src/test/java or scala; it is not allowed to be written in other directories.
+6. [Recommended] Unit testing needs to consider boundary conditions, such as the end of the month and February.
+7. [Recommended] For database-related unit tests, consider data rollback.
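+
+A minimal sketch of a test case that follows rules 3-6 above, assuming JUnit 5 and a hypothetical `DateUtils.isMonthEnd` utility:
+
+```java
+// Placed under src/test/java, mirroring the package of the class under test
+package org.apache.linkis.common.utils;
+
+import org.junit.jupiter.api.Assertions;
+import org.junit.jupiter.api.Test;
+
+public class DateUtilsTest {
+
+  // One test case tests exactly one method; exceptions are thrown upwards, not caught
+  @Test
+  public void testIsMonthEnd() throws Exception {
+    // Boundary conditions: leap-year and non-leap-year February
+    Assertions.assertTrue(DateUtils.isMonthEnd("2024-02-29"));
+    Assertions.assertFalse(DateUtils.isMonthEnd("2024-02-28"));
+    Assertions.assertTrue(DateUtils.isMonthEnd("2023-02-28"));
+  }
+}
+```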
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/development-specification/version-feature-specifications.md b/versioned_docs/version-1.4.0/development/development-specification/version-feature-specifications.md
new file mode 100644
index 00000000000..0e9bec91030
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/development-specification/version-feature-specifications.md
@@ -0,0 +1,25 @@
+---
+title: Version and New Feature Specification
+sidebar_position: 8
+---
+
+## 1. New version specification
+When you need a new version, you need to follow the steps below:
+1. [Mandatory] The new version must be organized for PMC members and developers to discuss, and meeting minutes must be recorded and sent to the mailing list
+2. [Mandatory] The scope of the new version's feature list requires a vote by email; approval from 3+ PMC members is required, and the approving votes must outnumber the negative votes
+3. [Mandatory] After the version is voted on, the corresponding version needs to be established on GitHub [Project](https://github.com/apache/linkis/projects)
+4. [Mandatory] Each feature needs to send a separate mailing list to explain the design reasons and design ideas
+5. [Mandatory] Changes involving installation, the database, or configuration modification need to be explained on the mailing list
+6. [Recommended] One feature corresponds to one issue corresponds to one PR
+7. [Mandatory] Each version requires CICD to pass and test cases to pass before the version can be released
+8. [Constraints] Each version needs to have a corresponding leader, and the leader needs to manage related issues and PRs, and hold discussions, actively respond to emails, confirm plans, track progress, etc.
+
+
+## 2. New feature specification
+When you add new features, you need to follow the steps below:
+1. [Mandatory] New features need to send emails to vote, and attach design reasons and design ideas
+2. [Mandatory] New features need to be added to the version corresponding to GitHub [Project](https://github.com/apache/linkis/projects)
+3. [Mandatory] Changes involving installation, the database, or configuration modification need to be explained on the mailing list
+4. [Mandatory] New features must add new documents
+5. [Mandatory] New features need to add corresponding unit tests, [Unit Test Specification](https://linkis.apache.org/community/development-specification/unit-test)
+6. [Recommended] One feature corresponds to one issue corresponds to one PR
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/directory-structure.md b/versioned_docs/version-1.4.0/development/directory-structure.md
new file mode 100644
index 00000000000..b8863a83ff4
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/directory-structure.md
@@ -0,0 +1,291 @@
+---
+title: Directory Structure
+sidebar_position: 0
+---
+
+> Description of the Linkis source code hierarchy, package structure, and deployment directory structure. If you want to learn more about each module, please refer to the corresponding module documentation.
+
+## 1. Source code directory structure
+
+```html
+├── docs
+│ ├── configuration //linkis configuration item documents for all modules
+│ ├── errorcode // error code document of all modules of linkis
+│ ├── configuration-change-records.md
+│ ├── index.md
+│ ├── info-1.1.3.md
+│ ├── info-1.2.1.md
+│ ├── info-1.3.1.md
+│ └── trino-usage.md
+├── linkis-commons //Core abstraction, which contains all common modules
+│ ├── linkis-common //Common module, many built-in common tools
+│ ├── linkis-hadoop-common
+│ ├── linkis-httpclient //Java SDK top-level interface further encapsulates httpclient
+│ ├── linkis-module // The top-level public module of linkis service involves parameters and service initialization when the service starts, unified Restful processing, login status verification, etc.
+│ ├── linkis-mybatis //Mybatis module of SpringCloud
+│ ├── linkis-protocol //Some interfaces and entity classes of service request/response
+│ ├── linkis-rpc //RPC module, complex two-way communication based on Feign
+│ ├── linkis-scheduler //General scheduling module
+│ ├── linkis-storage //File operation tool set
+├── linkis-computation-governance //Computation governance service
+│ ├── linkis-client //Java SDK, users can directly access Linkis through Client
+│ ├── linkis-computation-governance-common
+│ ├── linkis-engineconn
+│ ├── linkis-engineconn-manager
+│ ├── linkis-entrance //General underlying entrance module
+│ ├── linkis-jdbc-driver //You can use linkis to connect in a similar way to jdbc sdk
+│ ├── linkis-manager
+├── linkis-dist //The final step of compiling and packaging, integrating all lib packages and installation and deployment script configuration, etc.
+│ ├── bin
+│ │ ├── checkEnv.sh
+│ │ ├── common.sh
+│ │ └── install.sh //Installation script
+│ ├── deploy-config
+│ │ ├── db.sh //database configuration
+│ │ └── linkis-env.sh //linkis startup related configuration
+│ ├── docker
+│ │ └── scripts
+│ ├── helm
+│ │ ├── charts
+│ │ ├── scripts
+│ │ ├── README_CN.md
+│ │ └── README.md
+│ ├── package
+│ │ ├── bin
+│ │ ├── conf
+│ │ ├── db
+│ │ └── sbin
+│ ├── release-docs
+│ │ ├── licenses
+│ │ ├── LICENSE
+│ │ └── NOTICE
+│ ├── src
+│ └── pom.xml
+├── linkis-engineconn-plugins // engine
+│ ├── elasticsearch
+│ ├── flink
+│ ├── hive
+│ ├── io_file
+│ ├── jdbc
+│ ├── openlookeng
+│ ├── pipeline
+│ ├── presto
+│ ├── python
+│ ├── seatunnel
+│ ├── shell
+│ ├── spark
+│ ├── sqoop
+├── linkis-extensions // extension function enhancement plug-in module
+│ ├── linkis-io-file-client // function extension to linkis-storage
+├── linkis-orchestrator //Service orchestration
+│ ├── linkis-code-orchestrator
+│ ├── linkis-computation-orchestrator
+│ ├── linkis-orchestrator-core
+├── linkis-public-enhancements //public enhancement services
+│ ├── linkis-baseddata-manager
+│ ├── linkis-bml // material library
+│ ├── linkis-configuration
+│ ├── linkis-context-service //unified context
+│ ├── linkis-datasource //data source service
+│ ├── linkis-error-code
+│ ├── linkis-instance-label
+│ ├── linkis-jobhistory
+│ ├── linkis-ps-common-lock
+│ ├── linkis-script-dev
+│ ├── linkis-udf
+│ ├── linkis-variable
+├── linkis-spring-cloud-services //Microservice Governance
+│ ├── linkis-service-discovery
+│ ├── linkis-service-gateway //Gateway service
+├── linkis-web //linkis management console code
+│ ├── release-docs
+│ │ ├── licenses
+│ │ └── LICENSE
+│ ├── src
+│ ├── config.sh
+│ ├── install.sh
+│ ├── package.json
+│ ├── pom.xml
+│ └── vue.config.js
+├── tool
+│ ├── dependencies
+│ │ ├── known-dependencies.txt
+│ │ └── regenerate_konwn_dependencies_txt.sh
+│ ├── code-style-idea.xml
+│ ├── license-header
+│ └── modify_license.sh
+├── CONTRIBUTING_CN.md
+├── CONTRIBUTING.md
+├── linkis-tree.txt
+├── mvnw
+├── mvnw.cmd
+├── pom.xml
+├── README_CN.md
+├── README.md
+└── scalastyle-config.xml
+
+```
+
+## 2. Installation package directory structure
+```html
+
+├── bin
+│ ├── checkEnv.sh ── environment variable detection
+│ ├── common.sh ── some public shell functions
+│ └── install.sh ── Main script for Linkis installation
+├── deploy-config
+│ ├── db.sh //Database connection configuration
+│ └── linkis-env.sh //Related environment configuration information
+├── docker
+├── helm
+├── licenses
+├── linkis-package //Microservice-related startup configuration files, dependencies, scripts, linkis-cli, etc.
+│ ├── bin
+│ ├── conf
+│ ├── db
+│ ├── lib
+│ └── sbin
+├── NOTICE
+├── LICENSE
+├── README_CN.md
+└── README.md
+
+```
+
+## 3. Directory structure after deployment
+
+
+```html
+├── bin ── linkis-cli Shell command line program used to submit tasks to Linkis
+│ ├── linkis-cli
+│ ├── linkis-cli-hive
+│ ├── linkis-cli-pre
+│ ├── linkis-cli-spark-sql
+│ ├── linkis-cli-spark-submit
+│ └── linkis-cli-sqoop
+├── conf configuration directory
+│ ├── application-eureka.yml
+│ ├── application-linkis.yml ── Microservice general yml
+│ ├── linkis-cg-engineconnmanager.properties
+│ ├── linkis-cg-engineplugin.properties
+│ ├── linkis-cg-linkismanager.properties
+│ │── linkis-cli
+│ │ ├── linkis-cli.properties
+│ │ └── log4j2.xml
+│ ├── linkis-env.sh ── linkis environment variable configuration
+│ ├── linkis-mg-gateway.properties
+│ ├── linkis.properties ── The global coordination of linkis services, all microservices will be loaded and used when starting
+│ ├── linkis-ps-publicservice.properties
+│ ├── log4j2.xml
+├── db Database DML and DDL file directory
+│ ├── linkis_ddl.sql ── database table definition SQL
+│ ├── linkis_dml.sql ── database table initialization SQL
+│ └── module ── Contains DML and DDL files of each microservice
+│ └── upgrade ── Incremental DML/DDL for each version
+├── lib lib directory
+│ ├── linkis-commons ── Public dependency packages; when most services start (except linkis-mg-gateway), this directory is loaded via the -cp path parameter
+│ ├── linkis-computation-governance ── lib directory of computing governance module
+│ ├── linkis-engineconn-plugins ── lib directory of all engine plugins
+│ ├── linkis-public-enhancements ── lib directory of public enhancement services
+│ └── linkis-spring-cloud-services ── SpringCloud lib directory
+├── logs log directory
+│ ├── linkis-cg-engineconnmanager-gc.log
+│ ├── linkis-cg-engineconnmanager.log
+│ ├── linkis-cg-engineconnmanager.out
+│ ├── linkis-cg-engineplugin-gc.log
+│ ├── linkis-cg-engineplugin.log
+│ ├── linkis-cg-engineplugin.out
+│ ├── linkis-cg-entrance-gc.log
+│ ├── linkis-cg-entrance.log
+│ ├── linkis-cg-entrance.out
+│ ├── linkis-cg-linkismanager-gc.log
+│ ├── linkis-cg-linkismanager.log
+│ ├── linkis-cg-linkismanager.out
+│ ├── linkis-cli
+│ │ ├── linkis-client.hadoop.log.20220409162400037523664
+│ │ ├── linkis-client.hadoop.log.20220409162524417944443
+│ ├── linkis-mg-eureka-gc.log
+│ ├── linkis-mg-eureka.log
+│ ├── linkis-mg-eureka.out
+│ ├── linkis-mg-gateway-gc.log
+│ ├── linkis-mg-gateway.log
+│ ├── linkis-mg-gateway.out
+│ ├── linkis-ps-publicservice-gc.log
+│ ├── linkis-ps-publicservice.log
+│ └── linkis-ps-publicservice.out
+├── pid The process ID of all microservices
+│ ├── linkis_cg-engineconnmanager.pid ── engine manager microservice
+│ ├── linkis_cg-engineconnplugin.pid ── engine plugin microservice
+│ ├── linkis_cg-entrance.pid ── engine entry microservice
+│ ├── linkis_cg-linkismanager.pid ── linkis manager microservice
+│ ├── linkis_mg-eureka.pid ── eureka microservice
+│ ├── linkis_mg-gateway.pid ──gateway microservice
+│ └── linkis_ps-publicservice.pid ── public microservice
+└── sbin ── Microservice start/stop script directory
+    ├── ext ── Start/stop scripts of each individual microservice
+    ├── linkis-daemon.sh ── Script to quickly start, stop, or restart a single microservice
+    ├── linkis-start-all.sh ── Script to start all microservices with one click
+    └── linkis-stop-all.sh ── Script to stop all microservices with one click
+```
+### 3.1 Configuration item modification
+
+After the Linkis installation is executed, all configuration items are located in the conf directory.
+If you need to modify a configuration item, restart the corresponding service after modifying the `${LINKIS_HOME}/conf/*properties` file,
+for example: `sh sbin/linkis-daemon.sh restart ps-publicservice`.
+If you modify a public configuration file (`application-eureka.yml`, `application-linkis.yml` or `linkis.properties`), you need to restart all services with `sh sbin/linkis-start-all.sh`.
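+
+A typical flow might look like the following (the service and configuration file shown are only an example):
+
+```shell
+# Modify a single service configuration, then restart only that service
+vim ${LINKIS_HOME}/conf/linkis-ps-publicservice.properties
+sh ${LINKIS_HOME}/sbin/linkis-daemon.sh restart ps-publicservice
+
+# Modify a global configuration such as linkis.properties, then restart all services
+vim ${LINKIS_HOME}/conf/linkis.properties
+sh ${LINKIS_HOME}/sbin/linkis-stop-all.sh && sh ${LINKIS_HOME}/sbin/linkis-start-all.sh
+```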
+
+### 3.2 Microservice start and stop
+
+All microservice names are as follows:
+ ```
+├── linkis-cg-engineconnmanager engine management service
+├── linkis-cg-engineplugin engine plugin management service
+├── linkis-cg-entrance computing governance entry service
+├── linkis-cg-linkismanager computing governance management service
+├── linkis-mg-eureka microservice registry service
+├── linkis-mg-gateway Linkis gateway service
+├── linkis-ps-publicservice public service
+ ```
+
+**Microservice Abbreviation**:
+
+| Abbreviation | Full name in English | Full name in Chinese |
+ |------|-------------------------|------------|
+| cg | Computation Governance | Computing Governance |
+| mg | Microservice Governance | Microservice Governance |
+| ps | Public Enhancement Service | Public Enhancement Service |
+
+
+
+```
+# Start all microservices at once:
+
+ sh linkis-start-all.sh
+
+# Shut down all microservices at once
+
+ sh linkis-stop-all.sh
+
+# Start a single microservice (the service name needs to remove the linkis prefix, such as: mg-eureka)
+
+ sh linkis-daemon.sh start service-name
+
+ For example: sh linkis-daemon.sh start mg-eureka
+
+# Shut down a single microservice
+
+ sh linkis-daemon.sh stop service-name
+
+ For example: sh linkis-daemon.sh stop mg-eureka
+
+# Restart a single microservice
+
+ sh linkis-daemon.sh restart service-name
+
+ For example: sh linkis-daemon.sh restart mg-eureka
+# View the status of a single microservice
+
+ sh linkis-daemon.sh status service-name
+
+ For example: sh linkis-daemon.sh status mg-eureka
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/images/c-debug.png b/versioned_docs/version-1.4.0/development/images/c-debug.png
new file mode 100644
index 00000000000..d5edce7fe24
Binary files /dev/null and b/versioned_docs/version-1.4.0/development/images/c-debug.png differ
diff --git a/versioned_docs/version-1.4.0/development/images/c-port.png b/versioned_docs/version-1.4.0/development/images/c-port.png
new file mode 100644
index 00000000000..2c0a8cb420f
Binary files /dev/null and b/versioned_docs/version-1.4.0/development/images/c-port.png differ
diff --git a/versioned_docs/version-1.4.0/development/images/debug.png b/versioned_docs/version-1.4.0/development/images/debug.png
new file mode 100644
index 00000000000..4c0a9e39e4e
Binary files /dev/null and b/versioned_docs/version-1.4.0/development/images/debug.png differ
diff --git a/versioned_docs/version-1.4.0/development/images/test-0.png b/versioned_docs/version-1.4.0/development/images/test-0.png
new file mode 100644
index 00000000000..3ebe85fe283
Binary files /dev/null and b/versioned_docs/version-1.4.0/development/images/test-0.png differ
diff --git a/versioned_docs/version-1.4.0/development/images/test-1.png b/versioned_docs/version-1.4.0/development/images/test-1.png
new file mode 100644
index 00000000000..3ebe85fe283
Binary files /dev/null and b/versioned_docs/version-1.4.0/development/images/test-1.png differ
diff --git a/versioned_docs/version-1.4.0/development/images/test-2.png b/versioned_docs/version-1.4.0/development/images/test-2.png
new file mode 100644
index 00000000000..95ad650c167
Binary files /dev/null and b/versioned_docs/version-1.4.0/development/images/test-2.png differ
diff --git a/versioned_docs/version-1.4.0/development/new-engine-conn.md b/versioned_docs/version-1.4.0/development/new-engine-conn.md
new file mode 100644
index 00000000000..fd8537782f4
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/new-engine-conn.md
@@ -0,0 +1,477 @@
+---
+title: Quickly Implement New Engine
+sidebar_position: 7.0
+---
+
+## 1. Linkis new engine function code implementation
+
+Implementing a new engine is actually implementing a new EngineConnPlugin (ECP) engine plugin. Specific steps are as follows:
+
+### 1.1 Create a new maven module and introduce the maven dependency of ECP
+
+![maven dep](/Images/EngineConnNew/engine_jdbc_dependency.png)
+
+```xml
+<dependency>
+    <groupId>org.apache.linkis</groupId>
+    <artifactId>linkis-engineconn-plugin-core</artifactId>
+    <version>${linkis.version}</version>
+</dependency>
+```
+
+### 1.2 Implement the main interface of ECP
+
+- **EngineConnPlugin:** When starting EngineConn, first find the corresponding EngineConnPlugin class, and use this as the entry point to obtain the implementation of other core interfaces, which is the main interface that must be implemented.
+
+- **EngineConnFactory:** Implementing the logic of how to start an engine connector and how to start an engine executor is an interface that must be implemented.
+ - Implement the createEngineConn method: return an EngineConn object, where getEngine returns an object that encapsulates the connection information with the underlying engine, and also contains the Engine type information.
+ - For engines that only support a single computing scenario, inherit SingleExecutorEngineConnFactory, implement createExecutor, and return the corresponding Executor.
+ - For engines that support multi-computing scenarios, you need to inherit MultiExecutorEngineConnFactory and implement an ExecutorFactory for each computation type. EngineConnPlugin will obtain all ExecutorFactory through reflection, and return the corresponding Executor according to the actual situation.
+- **EngineConnResourceFactory:** It is used to limit the resources required to start an engine. Before the engine starts, it will apply for resources from Linkis Manager based on this. Not required, GenericEngineResourceFactory can be used by default.
+- **EngineLaunchBuilder:** It is used to encapsulate the necessary information that EngineConnManager can parse into startup commands. Not required, you can directly inherit JavaProcessEngineConnLaunchBuilder.
+
+### 1.3 Implement the engine Executor executor logic
+
+Executor is an executor. As a real computing scene executor, it is an actual computing logic execution unit and an abstraction of various specific capabilities of the engine. It provides various services such as locking, accessing status, and obtaining logs. And according to the actual needs, Linkis provides the following derived Executor base classes by default. The class names and main functions are as follows:
+
+- **SensibleExecutor:**
+ - Executor has multiple states, allowing Executor to switch states
+ - After the Executor switches states, operations such as notifications are allowed
+- **YarnExecutor:** Refers to the Yarn type engine, which can obtain applicationId, applicationURL and queue.
+- **ResourceExecutor:** means that the engine has the ability to change resources dynamically. It provides the requestExpectedResource method, which is used to apply to the RM for new resources every time the engine wants to change its resources, and the resourceUpdate method, which is used to report the engine's actual resource usage to the RM whenever it changes.
+- **AccessibleExecutor:** is a very important Executor base class. If the user's Executor inherits this base class, it means that the Engine can be accessed. Here, it is necessary to distinguish between the state() of SensibleExecutor and the getEngineStatus() method of AccessibleExecutor: state() is used to obtain the engine status, and getEngineStatus() will obtain the Metric data of basic indicators such as the status, load, and concurrency of the engine.
+- At the same time, if AccessibleExecutor is inherited, the Engine process will be triggered to instantiate multiple EngineReceiver methods. EngineReceiver is used to process RPC requests from Entrance, EM and LinkisMaster, making the engine an accessible engine. If users have special RPC requirements, they can communicate with AccessibleExecutor by implementing the RPCService interface.
+- **ExecutableExecutor:** is a resident Executor base class. Resident Executors include: streaming applications in the production center, steps submitted to Schedulis that are specified to run in independent mode, business applications for business users, etc.
+- **StreamingExecutor:** Streaming is a streaming application, inherited from ExecutableExecutor, and needs to have the ability to diagnose, do checkpoint, collect job information, and monitor alarms.
+- **ComputationExecutor:** is a commonly used interactive engine Executor, which handles interactive execution tasks and has interactive capabilities such as status query and task kill.
+- **ConcurrentComputationExecutor:** User concurrent engine Executor, commonly used in JDBC type engines. When executing scripts, the administrator account starts the engine instance, and all users share the engine instance.
+
+## 2. Take the JDBC engine as an example to explain the implementation steps of the new engine in detail
+
+This chapter takes the JDBC engine as an example to explain the implementation process of a new engine in detail, including engine code compilation, installation, database configuration, management console engine label adaptation, the new engine script type extension in Scripts, and the task node extension of the new engine in workflows.
+
+### 2.1 Concurrency engine setting default startup user
+
+In the JDBC engine, the core class `JDBCEngineConnExecutor` inherits from the abstract class `ConcurrentComputationExecutor`, while in a computation engine the core class `XXXEngineConnExecutor` inherits from `ComputationExecutor`. This leads to the biggest difference between the two: a JDBC engine instance is started by the administrator user and shared by all users to improve the utilization of machine resources, whereas when a computation-type engine script is submitted, an engine instance is started for each user, and engine instances of different users are isolated from each other. This will not be elaborated here, because whether it is a concurrent engine or a computation engine, the additional modification process described below is the same.
+
+Correspondingly, if your new engine is a concurrent engine, then you need to pay attention to this class: AMConfiguration.scala, if your new engine is a computing engine, you can ignore it.
+
+```scala
+object AMConfiguration {
+ // If your engine is a multi-user concurrent engine, then this configuration item needs to be paid attention to
+ val MULTI_USER_ENGINE_TYPES = CommonVars("wds.linkis.multi.user.engine.types", "jdbc,ck,es,io_file,appconn")
+
+ private def getDefaultMultiEngineUser(): String = {
+ // This should be to set the startup user when the concurrent engine is pulled up. The default jvmUser is the startup user of the engine service Java process.
+ val jvmUser = Utils.getJvmUser
+    s"""{jdbc:"$jvmUser", presto: "$jvmUser", es: "$jvmUser", ck:"$jvmUser", appconn:"$jvmUser", io_file:"root"}"""
+ }
+}
+```
+
+### 2.2 New engine type extension
+
+In the class `JDBCEngineConnFactory` that implements the `ComputationSingleExecutorEngineConnFactory` interface, the following two methods need to be implemented:
+
+```scala
+override protected def getEngineConnType: EngineType = EngineType.JDBC
+
+override protected def getRunType: RunType = RunType.JDBC
+```
+
+Therefore, it is necessary to add variables corresponding to JDBC in EngineType and RunType.
+
+```scala
+// EngineType.scala is similar to the variable definition of the existing engine, adding JDBC related variables or code
+object EngineType extends Enumeration with Logging {
+ val JDBC = Value("jdbc")
+}
+
+def mapStringToEngineType(str: String): EngineType = str match {
+ case _ if JDBC.toString.equalsIgnoreCase(str) => JDBC
+}
+
+// In RunType.scala
+object RunType extends Enumeration {
+ val JDBC = Value("jdbc")
+}
+```
+
+### 2.3 Version number settings in the JDBC engine tab
+
+```scala
+// Add the version configuration of JDBC in LabelCommonConfig
+public class LabelCommonConfig {
+  public final static CommonVars<String> JDBC_ENGINE_VERSION = CommonVars.apply("wds.linkis.jdbc.engine.version", "4");
+}
+
+// Supplement the matching logic of jdbc in the init method of EngineTypeLabelCreator
+// If this step is not done, when the code is submitted to the engine, the version number will be missing from the engine tag information
+public class EngineTypeLabelCreator {
+private static void init() {
+ defaultVersion.put(EngineType.JDBC().toString(), LabelCommonConfig.JDBC_ENGINE_VERSION.getValue());
+ }
+}
+```
+
+### 2.4 Types of script files that are allowed to be opened by the script editor
+
+Follow configuration items:wds.linkis.storage.file.type
+
+```scala
+object LinkisStorageConf{
+ val FILE_TYPE = CommonVars("wds.linkis.storage.file.type", "dolphin,sql,scala,py,hql,python,out,log,text,sh,jdbc,ngql,psql,fql").getValue
+}
+```
+
+### 2.5 Configure JDBC script variable storage and parsing
+
+If this operation is not done, the variables in the JDBC script cannot be stored and parsed normally, and the code execution will fail when ${variable} is directly used in the script!
+
+![variable resolution](/Images/EngineConnNew/variable_resolution.png)
+
+
+```scala
+// Maintain the variable relationship between codeType and runType through CODE_TYPE_AND_RUN_TYPE_RELATION in the CodeAndRunTypeUtils tool class
+
+val CODE_TYPE_AND_RUN_TYPE_RELATION = CommonVars("wds.linkis.codeType.runType.relation", "sql=>sql|hql|jdbc|hive|psql|fql,python=>python|py|pyspark,java=>java,scala=>scala,shell=>sh|shell")
+```
+
+Refer to PR:https://github.com/apache/linkis/pull/2047
+
+### 2.6 Add JDBC engine text prompts or icons to the Linkis administrator console interface engine manager
+
+web/src/dss/module/resourceSimple/engine.vue
+
+```js
+methods: {
+ calssifyName(params) {
+ switch (params) {
+ case 'jdbc':
+ return 'JDBC';
+ ......
+ }
+ }
+  // icon filtering
+ supportIcon(item) {
+ const supportTypes = [
+ ......
+ { rule: 'jdbc', logo: 'fi-jdbc' },
+ ];
+ }
+}
+```
+
+The final effect presented to the user:
+
+![JDBC engine type](/Images/EngineConnNew/jdbc_engine_view.png)
+
+### 2.7 Compile, package, install and deploy the JDBC engine
+
+An example command for JDBC engine module compilation is as follows:
+
+```shell
+cd /linkis-project/linkis-engineconn-plugins/jdbc
+
+mvn clean install -DskipTests
+```
+
+When compiling a complete project, the new engine will not be added to the final tar.gz archive by default. If necessary, please modify the following files:
+
+linkis-dist/package/src/main/assembly/assembly.xml
+
+```xml
+<fileSets>
+    ......
+    <fileSet>
+        <directory>../../linkis-engineconn-plugins/jdbc/target/out/</directory>
+        <outputDirectory>lib/linkis-engineconn-plugins/</outputDirectory>
+        <includes>
+            <include>**/*</include>
+        </includes>
+    </fileSet>
+</fileSets>
+```
+
+Then run the compile command in the project root directory:
+
+```shell
+mvn clean install -DskipTests
+```
+
+After successful compilation, the complete installation package can be found under linkis-dist/target/ as apache-linkis-1.x.x-incubating-bin.tar.gz, and out.zip can be found under linkis-engineconn-plugins/jdbc/target/.
+
+Upload the out.zip file to the Linkis deployment node and extract it to the Linkis installation directory /lib/linkis-engineconn-plugins/:
+
+![engine installation](/Images/EngineConnNew/engine_set_up.png)
+
+Don't forget to delete out.zip after decompression. At this point, the engine compilation and installation are complete.
+
+### 2.8 JDBC engine database configuration
+
+Select Add Engine in the console
+
+![add engine](/Images/EngineConnNew/add_engine_conf.png)
+
+
+If you want to support engine parameter configuration on the management console, you can modify the database according to the JDBC engine SQL example below.
+
+After the engine is installed, the engine's database configuration is required before the new engine code can run. The JDBC engine is used here as an example; please adapt it to the situation of the new engine you implemented yourself.
+
+The SQL reference is as follows:
+
+```sql
+SET @JDBC_LABEL="jdbc-4";
+
+SET @JDBC_ALL=CONCAT('*-*,',@JDBC_LABEL);
+SET @JDBC_IDE=CONCAT('*-IDE,',@JDBC_LABEL);
+SET @JDBC_NODE=CONCAT('*-nodeexecution,',@JDBC_LABEL);
+
+-- JDBC
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.rm.instance', '范围:1-20,单位:个', 'jdbc引擎最大并发数', '2', 'NumInterval', '[1,20]', '0', '0', '1', '队列资源', 'jdbc');
+
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.driver', '取值范围:对应JDBC驱动名称', 'jdbc驱动名称','', 'None', '', '0', '0', '1', '数据源配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.connect.url', '例如:jdbc:hive2://127.0.0.1:10000', 'jdbc连接地址', 'jdbc:hive2://127.0.0.1:10000', 'None', '', '0', '0', '1', '数据源配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.version', '取值范围:jdbc3,jdbc4', 'jdbc版本','jdbc4', 'OFT', '[\"jdbc3\",\"jdbc4\"]', '0', '0', '1', '数据源配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.connect.max', '范围:1-20,单位:个', 'jdbc引擎最大连接数', '10', 'NumInterval', '[1,20]', '0', '0', '1', '数据源配置', 'jdbc');
+
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.auth.type', '取值范围:SIMPLE,USERNAME,KERBEROS', 'jdbc认证方式', 'USERNAME', 'OFT', '[\"SIMPLE\",\"USERNAME\",\"KERBEROS\"]', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.username', 'username', '数据库连接用户名', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.password', 'password', '数据库连接密码', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.principal', '例如:hadoop/host@KDC.COM', '用户principal', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.keytab.location', '例如:/data/keytab/hadoop.keytab', '用户keytab文件路径', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.proxy.user.property', '例如:hive.server2.proxy.user', '用户代理配置', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.engineconn.java.driver.cores', '取值范围:1-8,单位:个', 'jdbc引擎初始化核心个数', '1', 'NumInterval', '[1,8]', '0', '0', '1', 'jdbc引擎设置', 'jdbc');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.engineconn.java.driver.memory', '取值范围:1-8,单位:G', 'jdbc引擎初始化内存大小', '1g', 'Regex', '^([1-8])(G|g)$', '0', '0', '1', 'jdbc引擎设置', 'jdbc');
+
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType',@JDBC_ALL, 'OPTIONAL', 2, now(), now());
+
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
+ (select config.id as `config_key_id`, label.id AS `engine_type_label_id` FROM linkis_ps_configuration_config_key config INNER JOIN linkis_cg_manager_label label ON config.engine_conn_type = 'jdbc' and label_value = @JDBC_ALL);
+
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType',@JDBC_IDE, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType',@JDBC_NODE, 'OPTIONAL', 2, now(), now());
+
+
+
+select @label_id := id from linkis_cg_manager_label where `label_value` = @JDBC_IDE;
+insert into linkis_ps_configuration_category (`label_id`, `level`) VALUES (@label_id, 2);
+
+select @label_id := id from linkis_cg_manager_label where `label_value` = @JDBC_NODE;
+insert into linkis_ps_configuration_category (`label_id`, `level`) VALUES (@label_id, 2);
+
+
+-- jdbc default configuration
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
+ (select `relation`.`config_key_id` AS `config_key_id`, '' AS `config_value`, `relation`.`engine_type_label_id` AS `config_label_id` FROM linkis_ps_configuration_key_engine_relation relation INNER JOIN linkis_cg_manager_label label ON relation.engine_type_label_id = label.id AND label.label_value = @JDBC_ALL);
+```
+
+If you want to reset the database configuration data of the engine, the reference files are as follows, please modify and use as needed:
+
+```sql
+-- Clear the initialization data of the jdbc engine
+SET @JDBC_LABEL="jdbc-4";
+
+SET @JDBC_ALL=CONCAT('*-*,',@JDBC_LABEL);
+SET @JDBC_IDE=CONCAT('*-IDE,',@JDBC_LABEL);
+SET @JDBC_NODE=CONCAT('*-nodeexecution,',@JDBC_LABEL);
+
+delete from `linkis_ps_configuration_config_value` where `config_label_id` in
+ (select `relation`.`engine_type_label_id` AS `config_label_id` FROM `linkis_ps_configuration_key_engine_relation` relation INNER JOIN `linkis_cg_manager_label` label ON relation.engine_type_label_id = label.id AND label.label_value = @JDBC_ALL);
+
+
+delete from `linkis_ps_configuration_key_engine_relation`
+where `engine_type_label_id` in
+ (select label.id FROM `linkis_ps_configuration_config_key` config
+ INNER JOIN `linkis_cg_manager_label` label
+ ON config.engine_conn_type = 'jdbc' and label_value = @JDBC_ALL);
+
+
+delete from `linkis_ps_configuration_category`
+where `label_id` in (select id from `linkis_cg_manager_label` where `label_value` in(@JDBC_IDE, @JDBC_NODE));
+
+
+delete from `linkis_ps_configuration_config_key` where `engine_conn_type` = 'jdbc';
+
+delete from `linkis_cg_manager_label` where `label_value` in (@JDBC_ALL, @JDBC_IDE, @JDBC_NODE);
+
+```
+
+Final effect:
+
+![JDBC engine](/Images/EngineConnNew/jdbc_engine_conf_detail.png)
+
+After this configuration, when linkis-cli and Scripts submit the engine script, the tag information of the engine and the connection information of the data source can be correctly matched, and then the newly added engine can be pulled up.
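+
+For a quick sanity check, a task can be submitted to the new engine with linkis-cli, for example (the engine version and SQL here are illustrative):
+
+```shell
+sh bin/linkis-cli -submitUser hadoop -proxyUser hadoop \
+    -engineType jdbc-4 -codeType jdbc -code "show databases"
+```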
+
+### 2.9 Added JDBC script type and icon information in DSS Scripts
+
+If you use the Scripts function of DSS, you also need to make some small changes to the front-end web files in the dss project. The purpose of the changes is to support creating, opening, and executing JDBC engine script types in Scripts, as well as adapting the corresponding engine icons, fonts, etc.
+
+#### 2.9.1 scriptis.js
+
+web/src/common/config/scriptis.js
+
+```js
+{
+ rule: /\.jdbc$/i,
+ lang: 'hql',
+ executable: true,
+ application: 'jdbc',
+ runType: 'jdbc',
+ ext: '.jdbc',
+ scriptType: 'jdbc',
+ abbr: 'jdbc',
+ logo: 'fi-jdbc',
+ color: '#444444',
+ isCanBeNew: true,
+ label: 'JDBC',
+ isCanBeOpen: true
+},
+```
+
+#### 2.9.2 Script copy support
+
+web/src/apps/scriptis/module/workSidebar/workSidebar.vue
+
+```js
+copyName() {
+ let typeArr = ['......', 'jdbc']
+}
+```
+
+#### 2.9.3 Logo and font color matching
+
+web/src/apps/scriptis/module/workbench/title.vue
+
+```js
+ data() {
+ return {
+ isHover: false,
+ iconColor: {
+ 'fi-jdbc': '#444444',
+ },
+ };
+ },
+```
+
+web/src/apps/scriptis/module/workbench/modal.js
+
+```js
+let logoList = [
+ { rule: /\.jdbc$/i, logo: 'fi-jdbc' },
+];
+```
+
+web/src/components/tree/support.js
+
+```js
+export const supportTypes = [
+ // Probably useless here
+ { rule: /\.jdbc$/i, logo: 'fi-jdbc' },
+]
+```
+
+Engine icon display
+
+web/src/dss/module/resourceSimple/engine.vue
+
+```js
+methods: {
+ calssifyName(params) {
+ switch (params) {
+ case 'jdbc':
+ return 'JDBC';
+ ......
+ }
+ }
+  // icon filtering
+ supportIcon(item) {
+ const supportTypes = [
+ ......
+ { rule: 'jdbc', logo: 'fi-jdbc' },
+ ];
+ }
+}
+```
+
+web/src/dss/assets/projectIconFont/iconfont.css
+
+```css
+.fi-jdbc:before {
+ content: "\e75e";
+}
+```
+
+The control here should be:
+
+![engine icon](/Images/EngineConnNew/jdbc_engine_logo.png)
+
+Find an svg file of the engine icon
+
+web/src/components/svgIcon/svg/fi-jdbc.svg
+
+If the new engine is to be contributed to the community in the future, you need to confirm the open-source license of the svg icons, fonts, etc. used by the new engine, or obtain their copyright permission.
+
+### 2.10 Workflow adaptation of DSS
+
+The final result:
+
+![workflow adaptation](/Images/EngineConnNew/jdbc_job_flow.png)
+
+Save the definition data of the newly added JDBC engine in the dss_workflow_node table, refer to SQL:
+
+```sql
+-- Engine task node basic information definition
+insert into `dss_workflow_node` (`id`, `name`, `appconn_name`, `node_type`, `jump_url`, `support_jump`, `submit_to_scheduler`, `enable_copy`, `should_creation_before_node`, `icon`) values('18','jdbc','-1','linkis.jdbc.jdbc',NULL,'1','1','1','0','svg file content');
+
+-- The svg file corresponds to the new engine task node icon
+
+-- Classification and division of engine task nodes
+insert into `dss_workflow_node_to_group`(`node_id`,`group_id`) values (18, 2);
+
+-- Basic information (parameter attribute) binding of the engine task node
+INSERT INTO `dss_workflow_node_to_ui`(`workflow_node_id`,`ui_id`) VALUES (18,45);
+
+-- The basic information related to the engine task node is defined in the dss_workflow_node_ui table, and then displayed in the form of a form on the right side of the above figure. You can expand other basic information for the new engine, and then it will be automatically rendered by the form on the right.
+```
+
+web/src/apps/workflows/service/nodeType.js
+
+```js
+import jdbc from '../module/process/images/newIcon/jdbc.svg';
+
+const NODETYPE = {
+ ......
+ JDBC: 'linkis.jdbc.jdbc',
+}
+
+const ext = {
+ ......
+ [NODETYPE.JDBC]: 'jdbc',
+}
+
+const NODEICON = {
+ [NODETYPE.JDBC]: {
+ icon: jdbc,
+ class: {'jdbc': true}
+ },
+}
+```
+
+Add the icon of the new engine in the web/src/apps/workflows/module/process/images/newIcon/ directory
+
+web/src/apps/workflows/module/process/images/newIcon/jdbc.svg
+
+Also, when contributing to the community, please consider the license or copyright of the svg file.
+
+## 3. Chapter Summary
+
+The above content records the implementation process of the new engine, as well as some additional engine configurations that need to be done. At present, the expansion process of a new engine is still relatively cumbersome, and it is hoped that the expansion and installation of the new engine can be optimized in subsequent versions.
+
+
+
diff --git a/versioned_docs/version-1.4.0/development/new-microservice.md b/versioned_docs/version-1.4.0/development/new-microservice.md
new file mode 100644
index 00000000000..cd5e14be036
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/new-microservice.md
@@ -0,0 +1,377 @@
+---
+title: How to Develop A New Microservice
+sidebar_position: 8.0
+---
+
+> This article introduces how to develop, debug, and deploy a new microservice locally based on the existing Linkis microservice architecture, to make it easier to add new applications when needed.
+
+Mind mapping:
+
+![mind-Mapping](/Images/deployment/microservice/thinking.png)
+
+## 1. New microservice development
+
+> This article introduces the new microservice `linkis-new-microservice` as an example. How to create and register a new microservice belonging to linkis in IDEA
+
+**Software requirements**
+- jdk1.8
+- maven3.5+
+
+### 1.1 Create a new submodule
+
+**Note**: Which parent module the new sub-module is placed under is not fixed and depends on the situation; generally it is decided and confirmed by the service group it belongs to. Here is just an example.
+
+- Right click under the linkis-public-enhancements module
+
+![new-module](/Images/deployment/microservice/new-module.png)
+
+- Select maven and click Next to go to the next step
+
+![maven-module](/Images/deployment/microservice/maven-module.png)
+
+- Enter the module name and click Finish
+
+![name-module](/Images/deployment/microservice/name-module.png)
+
+- Created successfully
+
+![created-successfully](/Images/deployment/microservice/created-successfully.png)
+
+#### 1.1.1 Modify the pom.xml file of the linkis-new-microservice module
+
+**path**: linkis-public-enhancements/linkis-new-microservice/pom.xml
+
+```xml
+<!-- Add the public dependency module of linkis and the mybatis module dependency (if database operations are not involved, mybatis can be omitted) -->
+<dependency>
+    <groupId>org.apache.linkis</groupId>
+    <artifactId>linkis-module</artifactId>
+    <version>${project.version}</version>
+    <exclusions>
+        <exclusion>
+            <groupId>org.ow2.asm</groupId>
+            <artifactId>asm</artifactId>
+        </exclusion>
+    </exclusions>
+</dependency>
+
+<dependency>
+    <groupId>org.apache.linkis</groupId>
+    <artifactId>linkis-mybatis</artifactId>
+    <version>${project.version}</version>
+</dependency>
+```
+
+#### 1.1.2 Add configuration files corresponding to new services
+
+> The configuration file is named `linkis-<service name>.properties` and placed in the `linkis-dist/package/conf/` directory. When the service starts, both the general `linkis.properties` configuration file and the `linkis-<service name>.properties` configuration file are loaded
+
+Add `linkis-new-microservice.properties` configuration file
+
+**path**: linkis-dist/package/conf/linkis-new-microservice.properties
+
+``` properties
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+# http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+
+## If you do not need to provide interface Api, you do not need to add this configuration
+##restful
+wds.linkis.server.restful.scan.packages=org.apache.linkis.newmicroservice.server.restful
+
+## mybatis Configuration of data manipulation items
+wds.linkis.server.mybatis.mapperLocations=classpath*:org/apache/linkis/newmicroservice/server/dao/mapper/*.xml
+wds.linkis.server.mybatis.typeAliasesPackage=org.apache.linkis.newmicroservice.server.domain
+wds.linkis.server.mybatis.BasePackage=org.apache.linkis.newmicroservice.server.dao
+
+
+## Never use the same port as other services
+spring.server.port=9208
+
+```
+
+
+#### 1.1.3 Enable debug mode
+
+> Test mode makes it convenient to debug interfaces, since the login status is not verified
+
+**path**: linkis-dist/package/conf/linkis.properties
+
+![test-mode](/Images/deployment/microservice/test-mode.png)
+
+``` properties
+# Turn on test mode
+wds.linkis.test.mode=true
+# Specify the user to which all requests are proxied in test mode
+wds.linkis.test.user=hadoop
+
+```
+
+### 1.2 Code Development
+
+To make it easier to learn, let's take creating a simple API interface as an example.
+
+#### 1.2.1 Create a new interface class
+
+![new-microservice](/Images/deployment/microservice/new-microservice.png)
+
+``` java
+package org.apache.linkis.newmicroservice.server.restful;
+
+
+import io.swagger.annotations.ApiOperation;
+import org.apache.linkis.server.Message;
+import org.springframework.web.bind.annotation.*;
+
+import io.swagger.annotations.Api;
+
+import java.util.HashMap;
+import java.util.Map;
+
+@Api(tags = "newmicroservice")
+@RestController
+@RequestMapping(path = "/newmicroservice")
+public class NewMicroservice {
+
+
+ @ApiOperation(value = "establish", httpMethod = "GET")
+ @RequestMapping(path = "establish", method = RequestMethod.GET)
+ public Message list() {
+ Map<String,String> map=new HashMap<>();
+ map.put("NewMicroservice","Hello! This is a new microservice I registered(这是我注册的一个新的微服务)");
+ return Message.ok("").data("map", map);
+ }
+
+}
+```
+
+#### 1.2.2 Create a new startup class
+
+![maven-module](/Images/deployment/microservice/start-up.png)
+
+``` java
+
+package org.apache.linkis.newmicroservice.server;
+
+import org.apache.linkis.LinkisBaseServerApp;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+public class LinkisNewMicroserviceApplication {
+
+ private static final Log logger = LogFactory.getLog(LinkisNewMicroserviceApplication.class);
+
+ public static void main(String[] args) throws ReflectiveOperationException {
+ logger.info("Start to running LinkisNewmicroserviceApplication");
+ LinkisBaseServerApp.main(args);
+ }
+}
+```
+
+### 1.3 Start eureka service
+
+The specific steps are described in the [Debugging Guidelines](../development/debug) document and will not be repeated here
+
+
+### 1.4 Start the new microservice locally
+
+Set the startup Application of linkis-new-microservice
+
+![commissioning-service](/Images/deployment/microservice/commissioning-service.png)
+
+Parameter explanation:
+
+```shell
+[Service Name]
+linkis-new-microservice
+
+[Module Name]
+linkis-new-microservice
+
+[VM Options]
+-DserviceName=linkis-new-microservice -Xbootclasspath/a:{YourPathPrefix}/linkis/linkis-dist/package/conf
+
+[main Class]
+org.apache.linkis.newmicroservice.server.LinkisNewMicroserviceApplication
+
+[Add provided scope to classpath]
+By checking Include dependencies with “Provided” scope, you can introduce provided-level dependency packages during debugging.
+```
+
+> After the above settings are completed, the Application can be run directly. Once it is running successfully, open a browser and enter the url of the eureka registration center
+
+``` text
+ http://ip:port/
+```
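+
+The Eureka REST API can also be queried from the command line to confirm registration (a sketch; `ip:port` is the Eureka address above):
+
+```shell
+# list registered applications and check that the new service is present
+curl -s -H "Accept: application/json" http://ip:port/eureka/apps | grep -io "linkis-new-microservice" | head -n 1
+```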
+
+![new-service](/Images/deployment/microservice/new-service.png)
+
+> When the linkis-new-microservice service appears in the eureka registration center, the local registration of the new microservice is successful.
+
+### 1.5 Interface debugging with Postman
+
+**URL**: http://ip:port/api/rest_j/v1/newmicroservice/establish
+
+![postman-test](/Images/deployment/microservice/postman-test.png)
+
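+With test mode enabled (see the debug mode configuration above), the same interface can also be called from the command line without a login cookie (a sketch; replace `ip:port` with your gateway or local service address):
+
+```shell
+curl -s "http://ip:port/api/rest_j/v1/newmicroservice/establish"
+```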
+
+## 2. Package deployment
+> Packaging and deployment consists of two main stages. First, when the module is packaged by maven, the dependencies required by the module are packaged into the module's target directory `linkis-new-microservice/target/out/lib`.
+> The second step is to assemble the complete final deployment installation package; `linkis-new-microservice/target/out/lib` needs to be automatically copied to `linkis-dist/target/apache-linkis-x.x.x-incubating-bin/linkis-package/lib`
+
+### 2.1 Modify the distribution.xml under the new service
+
+**path**: linkis-public-enhancements/linkis-new-microservice/src/main/assembly/distribution.xml
+
+![new-distribution](/Images/deployment/microservice/new-distribution.png)
+
+> Since there are many dependencies that need to be excluded, only part of the code is posted here
+
+``` xml
+<excludes>
+    <exclude>antlr:antlr:jar</exclude>
+    <exclude>aopalliance:aopalliance:jar</exclude>
+    <exclude>com.fasterxml.jackson.core:jackson-annotations:jar</exclude>
+    <exclude>com.fasterxml.jackson.core:jackson-core:jar</exclude>
+</excludes>
+```
+
+> Here is why `excludes` needs to be added: the service startup script linkis-dist/package/sbin/ext/linkis-common-start loads the common lib by default
+
+![common-start](/Images/deployment/microservice/common-start.png)
+
+> Therefore, when packaging service dependencies, lib packages that already exist in the common lib can be excluded. For details, please refer to linkis-computation-governance/linkis-entrance/src/main/assembly/distribution.xml
+
+### 2.2 Modify distribution.xml under linkis-dist
+
+**path**: linkis-dist/src/main/assembly/distribution.xml
+
+
+> Add a fileSet configuration; this change mainly controls outputting the linkis-new-microservice service package when compiling and packaging
+
+![fileset](/Images/deployment/microservice/fileset.png)
+
+> Only the configuration content that needs to be added is posted here.
+
+``` xml
+<fileSet>
+    <directory>../linkis-public-enhancements/linkis-new-microservice/target/out/lib</directory>
+    <outputDirectory>linkis-package/lib/linkis-public-enhancements/linkis-new-microservice</outputDirectory>
+    <includes>
+        <include>**/*</include>
+    </includes>
+</fileSet>
+```
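+
+After a full package build, you can check that the assembled installation package contains the new service libraries (a sketch; the version in the path is an example):
+
+```shell
+# run from the project root after packaging
+ls linkis-dist/target/apache-linkis-*-bin/linkis-package/lib/linkis-public-enhancements/linkis-new-microservice
+```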
+
+### 2.3 Add a startup configuration script for the service
+
+![new-configuration](/Images/deployment/microservice/new-configuration.png)
+
+```bash
+
+#!/usr/bin/env bash
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+# http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# description: manager start cmd
+#
+# Modified for Linkis 1.0.0
+
+
+export SERVER_SUFFIX="linkis-public-enhancements/linkis-new-microservice"
+
+
+export SERVER_CLASS=org.apache.linkis.newmicroservice.server.LinkisNewMicroserviceApplication
+
+#export DEBUG_PORT=
+
+export COMMON_START_BIN=$LINKIS_HOME/sbin/ext/linkis-common-start
+if [[ ! -f "${COMMON_START_BIN}" ]]; then
+ echo "The $COMMON_START_BIN does not exist!"
+ exit 1
+else
+ sh $COMMON_START_BIN
+fi
+
+```
+
+
+### 2.4 linkis-start-all.sh configuration modification
+
+**path**: linkis-dist/package/sbin/linkis-start-all.sh
+
+![start-script](/Images/deployment/microservice/start-script.png)
+
+> Only the configuration content that needs to be added is posted here.
+
+``` text
+ ## startApp
+ #linkis-new-microservice
+ SERVER_NAME="new-microservice"
+ startApp
+```
+
+![detection-script](/Images/deployment/microservice/detection-script.png)
+
+> Only the configuration content that needs to be added is posted here.
+
+``` text
+ ##checkServer
+ #linkis-new-microservice
+ SERVER_NAME="new-microservice"
+ checkServer
+```
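+
+With the configuration script from section 2.3 in place, the new service can also be started and checked on its own (a sketch, consistent with how `linkis-start-all.sh` invokes `startApp`/`checkServer`; run on the deployment server):
+
+```shell
+sh $LINKIS_HOME/sbin/linkis-daemon.sh start new-microservice
+sh $LINKIS_HOME/sbin/linkis-daemon.sh status new-microservice
+```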
+
+### 2.5 linkis-stop-all.sh configuration modification
+
+**path**: linkis-dist/package/sbin/linkis-stop-all.sh
+
+![stop-script](/Images/deployment/microservice/stop-script.png)
+
+> Only the configuration content that needs to be added is posted here.
+
+``` text
+ ## stopApp
+ #linkis-new-microservice
+ export SERVER_NAME="new-microservice"
+ stopApp
+```
+
+### 2.6 Installation package preparation
+
+The specific steps are described in the [backend compilation](../development/build) document and will not be repeated here
+
+### 2.7 Server Deployment
+
+Single-machine deployment is used as an example here. The specific steps are described in the [Single-machine deployment](../deployment/deploy-quick) document and will not be repeated here
+
+After installation and deployment succeed, visit the eureka registration center in the browser and check whether the linkis-new-microservice service has been registered successfully. If it has, the new microservice has been created successfully.
+
+![new-service](/Images/deployment/microservice/new-service.png)
diff --git a/versioned_docs/version-1.4.0/development/swwager.md b/versioned_docs/version-1.4.0/development/swwager.md
new file mode 100644
index 00000000000..ca736a55fca
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/swwager.md
@@ -0,0 +1,261 @@
+---
+title: Swagger Annotation
+sidebar_position: 9.0
+---
+
+## 1. Scope of swagger annotations
+| API| Scope | Where to use |
+| -------- | -------- | ----- |
+|@Api|Protocol set description|Used on the controller class|
+|@ApiOperation|Protocol description|Used in controller methods|
+|@ApiImplicitParams|Non-object parameter set|Used in controller methods|
+|@ApiImplicitParam|Non-object parameter description|Used in methods of @ApiImplicitParams|
+|@ApiResponses|Response set|Used in the controller's method|
+|@ApiResponse|Response|Used in @ApiResponses|
+|@ApiModel|Describe the meaning of the returned object|Used in the returned object class|
+|@ApiModelProperty|Object property|Used on the fields of the parameter object|
+|@ApiParam|Protocol description|Used on methods, parameters, fields of classes|
+
+## 2. @Api
+Used on a class to describe the request class, and marks a Controller class as a Swagger documentation class.
+
+### 2.1 Attributes of annotations
+
+| Property Name | Property Type | Property Default Value | Property Description |
+| -------- | -------- | ----- |----- |
+|value|String|""|Description, meaningless. |
+|tags|String[]|""|Grouping|
+|basePath|String|""|Base Path|
+|protocols|String|int|Request Protocol|
+|authorizations|Authorization[]|@Authorization(value = "")|Configuration for advanced feature authentication|
+|hidden|boolean|false|Is it hidden (not displayed, the default is false)|
+
+
+### 2.2 The difference between attribute value and tags
+
+The value attribute is used to describe both the role of the class and the role of the method;
+
+The tags attribute is used for grouping both on classes and methods, but the effect of grouping is very different:
+
+When tags is used on a class, all of the class's methods are grouped under it, i.e. a copy is made for each tags value. In this case the tags value on a method is ignored; whether a method's tags match or not makes no difference.
+
+When tags is used on a method, the methods of the current class are grouped according to each method's tags values, which gives a finer granularity.
+
+### 2.3 How to use
+Note: The @Api annotation is used differently in java and scala
+
+````java
+// java
+@Api(tags = "Swagger test related interface")
+@RestController
+
+// scala
+@Api(tags = Array("Swagger test related interface"))
+@RestController
+````
+
+
+## 3. @ApiOperation
+Used in methods, to describe the request method.
+### 3.1 Attributes of annotations
+
+| Property Name | Property Type | Property Default Value | Property Description |
+| -------- | -------- | ----- |----- |
+|value|String|""|Description|
+|notes|String|""| Detailed description|
+|tags|String[]|""|Grouping|
+|response|Class<?>|Void.class|Response parameter type|
+|responseReference|String[]|""|Specifies a reference to the response type, local/remote reference, and will override any other specified response() class|
+|httpMethod|String|""|http request method, such as: GET, HEAD, POST, PUT, DELETE, OPTION, SPATCH|
+|hidden|boolean|false|whether hidden (not displayed) defaults to false|
+|code|int|200|http status code|
+|extensions|Extension[]|@Extension(properties = @ExtensionProperty(name = "", value = "")|Extension Properties|
+
+### 3.2 How to use
+
+````java
+@GetMapping("test1")
+@ApiOperation(value = "test1 interface", notes = "test1 interface detailed description")
+public ApiResult test1(@RequestParam String aa, @RequestParam String bb, @RequestParam String cc) {
+ return ApiUtil.success("success");
+}
+````
+
+## 4. @ApiImplicitParams
+
+Commonly used in methods to describe the request parameter list.
+The value attribute can contain multiple @ApiImplicitParam entries and describe each parameter in detail.
+
+### 4.1 Attributes of annotations
+| Property Name | Property Type | Property Default Value | Property Description |
+| -------- | -------- | ----- |----- |
+|value|String|""|Description|
+
+## 5. @ApiImplicitParam
+
+Used in methods to describe request parameters. When multiple parameters need to be described, it is used as a property of @ApiImplicitParams.
+
+### 5.1 Attributes of annotations
+| Property Name | Property Type | Property Default Value | Property Description |
+| -------- | -------- | ----- |----- |
+|value|String|""|Description|
+|name|String|""|Parameter Description|
+|defaultValue|String|""|default value|
+|allowableValues|String|""|Parameter allowable values|
+|required|boolean|false|Required, default false|
+|access|String|""|Parameter Filter|
+|allowMultiple|boolean|false|Whether the parameter can accept multiple values by appearing multiple times, the default is not allowed|
+|dataType|String|""|The data type of the parameter, which can be a class name or a primitive data type|
+|dataTypeClass|Class<?>|Void.class|The data type of the parameter, overriding dataType if provided|
+|paramType|String|""|Parameter type, valid values are path, query, body, header, form|
+|example|String|""|Parameter example of non-body type|
+|examples|Example|@Example(value = @ExampleProperty(mediaType = "", value = ""))|Parameter example of body type|
+|type|String|""|Add functionality to override detected types|
+|format|String|""|Add the function to provide custom format format|
+|readOnly|boolean|false|Adds features designated as read-only|
+
+### 5.2 How to use
+
+````java
+@GetMapping("test1")
+@ApiOperation(value = "test1 interface", notes = "test1 interface detailed description")
+@ApiImplicitParams(value = {
+ @ApiImplicitParam(name = "aa",value = "aa description",defaultValue = "1",allowableValues = "1,2,3",required = true),
+ @ApiImplicitParam(name = "bb",value = "bb description",defaultValue = "1",allowableValues = "1,2,3",required = true),
+ @ApiImplicitParam(name = "cc",value = "Description of cc",defaultValue = "2",allowableValues = "1,2,3",required = true),
+
+})
+````
+
+## 6. @ApiParam
+
+Used on methods, parameters, and class fields to describe request parameters.
+
+### 6.1 Attributes of annotations
+| Property Name | Property Type | Property Default Value | Property Description |
+| -------- | -------- | ----- |----- |
+|value|String|""|Description|
+|name|String|""|Parameter Description|
+|defaultValue|String|""|default value|
+|allowableValues|String|""|Parameter allowable values|
+|required|boolean|false|Required, default false|
+|access|String|""|Parameter Filter|
+|allowMultiple|boolean|false|Whether the parameter can accept multiple values by appearing multiple times, the default is not allowed|
+|dataType|String|""|The data type of the parameter, which can be a class name or a primitive data type|
+|dataTypeClass|Class<?>|Void.class|The data type of the parameter, overriding dataType if provided|
+|paramType|String|""|Parameter type, valid values are path, query, body, header, form|
+|example|String|""|Parameter example of non-body type|
+|examples|Example|@Example(value = @ExampleProperty(mediaType = "", value = ""))|Parameter example of body type|
+|type|String|""|Add functionality to override detected types|
+|format|String|""|Add the function to provide custom format format|
+|readOnly|boolean|false|Adds features designated as read-only|
+
+### 6.2 How to use
+
+````java
+@GetMapping("test2")
+@ApiOperation(value = "test2 interface", notes = "test2 interface detailed description")
+public ApiResult test2(@ApiParam(value = "aa description") @RequestParam String aa, @ApiParam(value = "bb description") @RequestParam String bb) {
+ return ApiUtil.success(new TestRes());
+}
+````
+
+## 7. @ApiModel
+
+Used in classes to describe requests, response classes, and entity classes.
+
+### 7.1 Attributes of annotations
+| Property Name | Property Type | Property Default Value | Property Description |
+| -------- | -------- | ----- |----- |
+|value|String|""| is an alternative name to provide the model, by default, the class name is used|
+|description|String|""|Class description|
+|parent|Class<?>|Void.class|Provides a parent class for the model to allow describing inheritance relationships|
+|discriminator|String|""|Supports model inheritance and polymorphism; the name of the discriminator field can be used to assert which subtype to use|
+|subTypes|Class<?>[]|{}|Array of subtypes inherited from this model|
+|reference|String|""|Specifies a reference to the corresponding type definition, overriding any other metadata specified|
+
+## 8. @ApiModelProperty
+
+Used on fields of request classes, response classes and entity classes to describe the property.
+
+### 8.1 Attributes of annotations
+| Property Name | Property Type | Property Default Value | Property Description |
+| -------- | -------- | ----- |----- |
+|value|String|""|Attribute Description|
+|name|String|""|Override property name|
+|allowableValues|String|""|Parameter allowable values|
+|access|String|""|Filter Attribute|
+|required|boolean|false|Required, default false|
+|dataType|String|""|The data type of the parameter, which can be a class name or a primitive data type|
+|hidden|boolean|false| Hidden|
+|readOnly|String|""|Add functionality designated as read-only|
+|reference|String|""|Specifies a reference to the corresponding type definition, overriding any other metadata specified|
+|allowEmptyValue|boolean|false|Allow empty values|
+|example|String|""|Example value for attribute|
+
+### 8.2 How to use
+
+Note: The difference between java and scala in the use of @ApiModelProperty annotation
+
+````java
+// java entity class
+@Data
+@ApiModel(description = "Test request class")
+public class TestReq {
+
+    @ApiModelProperty(value = "User ID", required = true)
+    private Long userId;
+    @ApiModelProperty(value = "Username", example = "Zhang San")
+    private String username;
+}
+
+// scala entity class
+import scala.annotation.meta.field
+
+@ApiModel(description = "Test response class")
+class TestRes {
+    @(ApiModelProperty @field)("User ID")
+    var userId: Long = _
+    @(ApiModelProperty @field)("Username")
+    var username: String = _
+}
+````
+
+
+## 9. @ApiResponses
+
+Used on methods and classes to describe the response status code list.
+
+### 9.1 Attributes of annotations
+| Property Name | Property Type | Property Default Value | Property Description |
+| -------- | -------- | ----- |----- |
+|value|ApiResponse[]|""|Description of response status code list|
+
+## 10. @ApiResponse
+
+Used in the method to describe the response status code. Generally used as a property of @ApiResponses.
+
+### 10.1 Attributes of annotations
+| Property Name | Property Type | Property Default Value | Property Description |
+| -------- | -------- | ----- |----- |
+|code|int|""|Response HTTP Status Code|
+|message|String|""|Description of the response|
+|response|Class<?>|Void.class|An optional response class used to describe the message payload, corresponding to the schema field of the response message object|
+|reference|String|""|Specifies a reference to the response type, the specified application can make a local reference, or a remote reference, will be used as is, and will override any specified response() class|
+|responseHeaders|ResponseHeader[]|@ResponseHeader(name = "", response = Void.class)|List of possible response headers|
+|responseContainer|String|""|Declare the container of the response, valid values are List, Set, Map, any other value will be ignored|
+
+
+### 10.2 How to use
+
+````java
+@GetMapping("test2")
+@ApiOperation(value = "test2 interface", notes = "test2 interface detailed description")
+@ApiResponses(value = {
+ @ApiResponse(code = 200, message = "Request successful", responseHeaders = {@ResponseHeader(name = "header1", description = "description of header1",response = String.class)}),
+ @ApiResponse(code = 401, message = "No permission"),
+ @ApiResponse(code = 403, message = "Access forbidden")
+})
+public ApiResult test2(@ApiParam(value = "aa description") @RequestParam String aa, @ApiParam(value = "bb description") @RequestParam String bb) {
+ return ApiUtil.success(new TestRes());
+}
+
+````
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/table/_category_.json b/versioned_docs/version-1.4.0/development/table/_category_.json
new file mode 100644
index 00000000000..f513fce7bc2
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/table/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Table Structure",
+ "position": 12.0
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/development/table/all.md b/versioned_docs/version-1.4.0/development/table/all.md
new file mode 100644
index 00000000000..0268ce99e2c
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/table/all.md
@@ -0,0 +1,5 @@
+---
+title: Tables Message
+sidebar_position: 2
+---
+## todo
diff --git a/versioned_docs/version-1.4.0/development/table/udf-table.md b/versioned_docs/version-1.4.0/development/table/udf-table.md
new file mode 100644
index 00000000000..1bb6786c6db
--- /dev/null
+++ b/versioned_docs/version-1.4.0/development/table/udf-table.md
@@ -0,0 +1,99 @@
+---
+title: UDF Table Structure
+sidebar_position: 2
+---
+
+## 1. linkis_ps_udf_baseinfo
+
+Basic information table of udf functions, which stores basic information such as the udf name and type
+
+| number | name | description | type | key | empty | extra | default value |
+|------ |------ |------ |------ |------ |------ |------ |------ |
+| 1 | `id` | primary key auto-increment id | bigint(20) | PRI | NO | auto_increment | |
+| 2 | `create_user` | create user | varchar(50) | | NO | | |
+| 3 | `udf_name` | udf name | varchar(255) | | NO | | |
+| 4 | `udf_type` | udf type | int(11) | | YES | | 0 |
+| 5 | `tree_id` | id of linkis_ps_udf_tree | bigint(20) | | NO | | |
+| 6 | `create_time` | creation time | timestamp | | NO | on update CURRENT_TIMESTAMP | CURRENT_TIMESTAMP |
+| 7 | `update_time` | update time | timestamp | | NO | | CURRENT_TIMESTAMP |
+| 8 | `sys` | source system | varchar(255) | | NO | | ide |
+| 9 | `cluster_name` | Cluster name, not used yet, default is all | varchar(255) | | NO | | |
+| 10 | `is_expire` | Expired or not | bit(1) | | YES | | |
+| 11 | `is_shared` | Is it shared | bit(1) | | YES | | |
+
+
+udf_type
+````
+udf_type 0: udf function - generic
+udf_type 2: udf function - spark
+
+udf_type 3: custom function - python function
+udf_type 4: custom function - scala function
+````
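+
+For example, the distribution of registered functions by type can be checked directly against this table (a sketch; database name and credentials depend on your deployment):
+
+```shell
+mysql -u ${username} -p ${database} \
+  -e "SELECT udf_type, COUNT(*) AS total FROM linkis_ps_udf_baseinfo GROUP BY udf_type;"
+```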
+
+## 2. linkis_ps_udf_manager
+
+Administrator user table for udf functions. Administrators have sharing permission; only udf administrators see the share entry on the front end
+
+| number | name | description | type | key | empty | extra | default value |
+|------ |------ |------ |------ |------ |------ |------ |------ |
+| 1 | `id` | | bigint(20) | PRI | NO | auto_increment | |
+| 2 | `user_name` | | varchar(20) | | YES | | |
+
+## 3. linkis_ps_udf_shared_info
+
+udf shared record table
+
+| number | name | description | type | key | empty | extra | default value |
+|------ |------ |------ |------ |------ |------ |------ |------ |
+| 1 | `id` | | bigint(20) | PRI | NO | auto_increment | |
+| 2 | `udf_id` | id of linkis_ps_udf_baseinfo | bigint(20) | | NO | | |
+| 3 | `user_name` | username used by the share | varchar(50) | | NO | | |
+
+## 4. linkis_ps_udf_tree
+
+Tree-level record table for udf classification
+
+| number | name | description | type | key | empty | extra | default value |
+|------ |------ |------ |------ |------ |------ |------ |------ |
+| 1 | `id` | | bigint(20) | PRI | NO | auto_increment | |
+| 2 | `parent` | parent category | bigint(20) | | NO | | |
+| 3 | `name` | Class name of the function | varchar(100) | | YES | | |
+| 4 | `user_name` | username | varchar(50) | | NO | | |
+| 5 | `description` | description information | varchar(255) | | YES | | |
+| 6 | `create_time` | | timestamp | | NO | on update CURRENT_TIMESTAMP | CURRENT_TIMESTAMP |
+| 7 | `update_time` | | timestamp | | NO | | CURRENT_TIMESTAMP |
+| 8 | `category` | category distinction udf / function | varchar(50) | | YES | | |
+
+## 5. linkis_ps_udf_user_load
+
+Records whether a udf is loaded by default for a user
+
+| number | name | description | type | key | empty | extra | default value |
+|------ |------ |------ |------ |------ |------ |------ |------ |
+| 1 | `id` | | bigint(20) | PRI | NO | auto_increment | |
+| 2 | `udf_id` | id of linkis_ps_udf_baseinfo | int(11) | | NO | | |
+| 3 | `user_name` | user owned | varchar(50) | | NO | | |
+
+## 6. linkis_ps_udf_version
+
+udf version information table
+
+| number | name | description | type | key | empty | extra | default value |
+|------ |------ |------ |------ |------ |------ |------ |------ |
+| 1 | `id` | | bigint(20) | PRI | NO | auto_increment | |
+| 2 | `udf_id` | id of linkis_ps_udf_baseinfo | bigint(20) | | NO | | |
+| 3 | `path` | The local path of the uploaded script/jar package | varchar(255) | | NO | | |
+| 4 | `bml_resource_id` | Material resource id in bml | varchar(50) | | NO | | |
+| 5 | `bml_resource_version` | bml material version | varchar(20) | | NO | | |
+| 6 | `is_published` | whether to publish | bit(1) | | YES | | |
+| 7 | `register_format` | registration format | varchar(255) | | YES | | |
+| 8 | `use_format` | use format | varchar(255) | | YES | | |
+| 9 | `description` | Version description | varchar(255) | | NO | | |
+| 10 | `create_time` | | timestamp | | NO | on update CURRENT_TIMESTAMP | CURRENT_TIMESTAMP |
+| 11 | `md5` | | varchar(100) | | YES | | |
+
+
+## ER diagram
+
+![image](/Images-zh/table/udf.png)
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/engine-usage/_category_.json b/versioned_docs/version-1.4.0/engine-usage/_category_.json
new file mode 100644
index 00000000000..a682853ef7e
--- /dev/null
+++ b/versioned_docs/version-1.4.0/engine-usage/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Engine Usage",
+ "position": 5.0
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/engine-usage/elasticsearch.md b/versioned_docs/version-1.4.0/engine-usage/elasticsearch.md
new file mode 100644
index 00000000000..f8eda6a1c84
--- /dev/null
+++ b/versioned_docs/version-1.4.0/engine-usage/elasticsearch.md
@@ -0,0 +1,243 @@
+---
+title: ElasticSearch Engine
+sidebar_position: 11
+---
+
+This article mainly introduces the installation, usage and configuration of the `ElasticSearch` engine plugin in `Linkis`.
+
+## 1. Preliminary work
+### 1.1 Engine installation
+
+If you want to use the `ElasticSearch` engine on your `Linkis` service, you need to install the `ElasticSearch` service and make sure the service is available.
+
+### 1.2 Service Authentication
+Use the following command to verify whether the `ElasticSearch` engine service is available. If the service has enabled user authentication, you need to add `--user username:password`
+```
+curl [--user username:password] http://ip:port/_cluster/health?pretty
+```
+The following output means that the `ElasticSearch` service is available, note that the cluster `status` is `green`
+```json
+{
+ "cluster_name" : "docker-cluster",
+ "status" : "green",
+ "timed_out" : false,
+ "number_of_nodes" : 1,
+ "number_of_data_nodes" : 1,
+ "active_primary_shards" : 7,
+ "active_shards" : 7,
+ "relocating_shards" : 0,
+ "initializing_shards" : 0,
+ "unassigned_shards" : 0,
+ "delayed_unassigned_shards" : 0,
+ "number_of_pending_tasks" : 0,
+ "number_of_in_flight_fetch" : 0,
+ "task_max_waiting_in_queue_millis" : 0,
+ "active_shards_percent_as_number" : 100.0
+}
+```
+## 2. Engine plugin installation
+
+### 2.1 Engine plugin preparation (choose one) [non-default engine](./overview.md)
+
+Method 1: Download the engine plug-in package directly
+
+[Linkis Engine Plugin Download](https://linkis.apache.org/zh-CN/blog/2022/04/15/how-to-download-engineconn-plugin)
+
+Method 2: Compile the engine plug-in separately (maven environment is required)
+
+```
+# compile
+cd ${linkis_code_dir}/linkis-engineconn-plugins/elasticsearch/
+mvn clean install
+# The compiled engine plug-in package is located in the following directory
+${linkis_code_dir}/linkis-engineconn-plugins/elasticsearch/target/out/
+```
+
+[EngineConnPlugin Engine Plugin Installation](../deployment/install-engineconn.md)
+
+### 2.2 Upload and load engine plugins
+
+Upload the engine plug-in package in 2.1 to the engine directory of the server
+```bash
+${LINKIS_HOME}/lib/linkis-engineplugins
+```
+The directory structure after uploading is as follows
+```
+linkis-engineconn-plugins/
+├── elasticsearch
+│ ├── dist
+│ │ └── 7.6.2
+│ │ ├── conf
+│ │ └── lib
+│ └── plugin
+│ └── 7.6.2
+```
+### 2.3 Engine refresh
+
+#### 2.3.1 Restart and refresh
+Refresh the engine by restarting the `linkis-cg-linkismanager` service
+```bash
+cd ${LINKIS_HOME}/sbin
+sh linkis-daemon.sh restart cg-linkismanager
+```
+
+#### 2.3.2 Check if the engine is refreshed successfully
+You can check whether the `last_update_time` of the `linkis_cg_engine_conn_plugin_bml_resources` table in the database is the time when the refresh was triggered.
+
+```sql
+#Login to the linkis database
+select * from linkis_cg_engine_conn_plugin_bml_resources;
+```
+
+## 3. Engine usage
+
+### 3.1 Submit tasks through `Linkis-cli`
+**`-codeType` parameter description**
+- `essql`: Execute `ElasticSearch` engine tasks through `SQL` scripts
+- `esjson`: Execute `ElasticSearch` engine tasks through `JSON` script
+
+**`essql` method example**
+
+**Note:** Using this form, the `ElasticSearch` service must install the SQL plug-in, please refer to the installation method: https://github.com/NLPchina/elasticsearch-sql#elasticsearch-762
+```shell
+ sh ./bin/linkis-cli -submitUser Hadoop \
+ -engineType elasticsearch-7.6.2 -codeType essql \
+ -code '{"sql": "select * from kibana_sample_data_ecommerce limit 10' \
+ -runtimeMap linkis.es.http.method=GET \
+ -runtimeMap linkis.es.http.endpoint=/_sql \
+ -runtimeMap linkis.es.datasource=hadoop \
+ -runtimeMap linkis.es.cluster=127.0.0.1:9200
+```
+
+**`esjson` style example**
+```shell
+sh ./bin/linkis-cli -submitUser Hadoop \
+-engineType elasticsearch-7.6.2 -codeType esjson \
+-code '{"query": {"match": {"order_id": "584677"}}}' \
+-runtimeMap linkis.es.http.method=GET \
+-runtimeMap linkis.es.http.endpoint=/kibana_sample_data_ecommerce/_search \
+-runtimeMap linkis.es.datasource=hadoop \
+-runtimeMap linkis.es.cluster=127.0.0.1:9200
+```
+
+More `Linkis-Cli` command parameter reference: [`Linkis-Cli` usage](../user-guide/linkiscli-manual.md)
+
+## 4. Engine configuration instructions
+
+### 4.1 Default Configuration Description
+
+| Configuration | Default | Required | Description |
+| ------------------------ | ------------------- | ---| ------------------------------------------- |
+| linkis.es.cluster | 127.0.0.1:9200 | yes | ElasticSearch cluster, multiple nodes separated by commas |
+| linkis.es.datasource | hadoop | yes | ElasticSearch datasource |
+| linkis.es.username | none | no | ElasticSearch cluster username |
+| linkis.es.password | none | no | ElasticSearch cluster password |
+| linkis.es.auth.cache | false | No | Whether the client caches authentication |
+| linkis.es.sniffer.enable | false | No | Whether the client enables sniffer |
+| linkis.es.http.method | GET |No| Call method |
+| linkis.es.http.endpoint | /_search | No | Endpoint called by JSON script |
+| linkis.es.sql.endpoint | /_sql | No | Endpoint called by SQL script |
+| linkis.es.sql.format | {"query":"%s"} |No | Template called by SQL script, %s is replaced with SQL as the request body to request Es cluster |
+| linkis.es.headers.* | None | No | Client Headers Configuration |
+| linkis.engineconn.concurrent.limit | 100 | No | Maximum engine concurrency |
+
+### 4.2 Configuration modification
+If the default parameters do not meet your needs, the following ways can be used to configure some basic parameters
+
+#### 4.2.1 Management console configuration
+
+![](./images/es-manage.png)
+
+Note: After modifying the configuration under the `IDE` tag, you need to specify `-creator IDE` to take effect (other tags are similar), such as:
+
+```shell
+sh ./bin/linkis-cli -creator IDE -submitUser hadoop \
+-engineType elasticsearch-7.6.2 -codeType esjson \
+-code '{"query": {"match": {"order_id": "584677"}}}' \
+-runtimeMap linkis.es.http.method=GET \
+-runtimeMap linkis.es.http.endpoint=/kibana_sample_data_ecommerce/_search
+```
+
+#### 4.2.2 Task interface configuration
+Tasks submitted through the task interface can be configured via the parameter `params.configuration.runtime`
+
+Example of http request parameters
+```json
+{
+ "executionContent": {"code": "select * from kibana_sample_data_ecommerce limit 10;", "runType": "essql"},
+ "params": {
+ "variable": {},
+ "configuration": {
+ "runtime": {
+ "linkis.es.cluster":"http://127.0.0.1:9200",
+ "linkis.es.datasource":"hadoop",
+ "linkis.es.username":"",
+ "linkis.es.password":""
+ }
+ }
+ },
+ "labels": {
+ "engineType": "elasticsearch-7.6.2",
+ "userCreator": "hadoop-IDE"
+ }
+}
+```
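+
+For reference, such a request can be submitted through the gateway with `curl` (a sketch; the gateway address is an example, and authentication such as a login cookie or token header still has to be handled by the caller):
+
+```shell
+curl -s -X POST "http://127.0.0.1:9001/api/rest_j/v1/entrance/submit" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "executionContent": {"code": "select * from kibana_sample_data_ecommerce limit 10;", "runType": "essql"},
+        "params": {"configuration": {"runtime": {"linkis.es.cluster": "http://127.0.0.1:9200", "linkis.es.datasource": "hadoop"}}},
+        "labels": {"engineType": "elasticsearch-7.6.2", "userCreator": "hadoop-IDE"}
+      }'
+```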
+
+#### 4.2.3 File Configuration
+Configure by modifying the `linkis-engineconn.properties` file in the directory `${LINKIS_HOME}/lib/linkis-engineconn-plugins/elasticsearch/dist/7.6.2/conf/`, as shown below:
+
+![](./images/es-config.png)
+
+### 4.3 Engine-related data tables
+
+`Linkis` manages engines through labels, and the data tables involved are shown below.
+
+```
+linkis_ps_configuration_config_key: key and default values of configuration parameters inserted into the engine
+linkis_cg_manager_label: Insert engine label such as: elasticsearch-7.6.2
+linkis_ps_configuration_category: Insert the directory association of the engine
+linkis_ps_configuration_config_value: The configuration that the insertion engine needs to display
+linkis_ps_configuration_key_engine_relation: The relationship between the configuration item and the engine
+```
+
+The initial data related to the engine in the table is as follows
+
+```sql
+-- set variable
+SET @ENGINE_LABEL="elasticsearch-7.6.2";
+SET @ENGINE_ALL=CONCAT('*-*,',@ENGINE_LABEL);
+SET @ENGINE_IDE=CONCAT('*-IDE,',@ENGINE_LABEL);
+SET @ENGINE_NAME="elasticsearch";
+
+-- engine label
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @ENGINE_ALL, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @ENGINE_IDE, 'OPTIONAL', 2, now(), now());
+
+select @label_id := id from `linkis_cg_manager_label` where label_value = @ENGINE_IDE;
+insert into `linkis_ps_configuration_category` (`label_id`, `level`) VALUES (@label_id, 2);
+
+-- configuration key
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.es.cluster', 'eg: http://127.0.0.1:9200', 'connection address', 'http://127.0.0.1:9200', 'None', '', @ENGINE_NAME , 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.es.datasource', 'Connection Alias', 'Connection Alias', 'hadoop', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource Configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.es.username', 'username', 'ES cluster username', 'No', 'None', '', @ENGINE_NAME, 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.es.password', 'password', 'ES cluster password', 'None', 'None', '', @ENGINE_NAME, 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.es.auth.cache', 'Does the client cache authentication', 'Does the client cache authentication', 'false', 'None', '', @ENGINE_NAME, 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.es.sniffer.enable', 'Whether the client enables sniffer', 'Whether the client enables sniffer', 'false', 'None', '', @ENGINE_NAME, 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.es.http.method', 'call method', 'HTTP request method', 'GET', 'None', '', @ENGINE_NAME, 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.es.http.endpoint', '/_search', 'JSON script Endpoint', '/_search', 'None', '', @ENGINE_NAME, 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.es.sql.endpoint', '/_sql', 'SQL script Endpoint', '/_sql', 'None', '', @ENGINE_NAME, 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.es.sql.format', 'The template called by the SQL script, replace %s with SQL as the request body to request the Es cluster', 'request body', '{"query":"%s"}', 'None', '', @ENGINE_NAME, 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.es.headers.*', 'Client Headers Configuration', 'Client Headers Configuration', 'None', 'None', '', @ENGINE_NAME, 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.engineconn.concurrent.limit', 'engine max concurrency', 'engine max concurrency', '100', 'None', '', @ENGINE_NAME, 0, 0, 1, 'data source conf') ;
+
+-- key engine relation
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
+(select config.id as config_key_id, label.id AS engine_type_label_id FROM `linkis_ps_configuration_config_key` config
+INNER JOIN `linkis_cg_manager_label` label ON config.engine_conn_type = @ENGINE_NAME and label_value = @ENGINE_ALL);
+
+-- engine default configuration
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
+(select relation.config_key_id AS config_key_id, '' AS config_value, relation.engine_type_label_id AS config_label_id FROM `linkis_ps_configuration_key_engine_relation` relation
+INNER JOIN `linkis_cg_manager_label` label ON relation.engine_type_label_id = label.id AND label.label_value = @ENGINE_ALL);
+
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/engine-usage/flink.md b/versioned_docs/version-1.4.0/engine-usage/flink.md
new file mode 100644
index 00000000000..cd876ef8419
--- /dev/null
+++ b/versioned_docs/version-1.4.0/engine-usage/flink.md
@@ -0,0 +1,191 @@
+---
+title: Flink Engine
+sidebar_position: 8
+---
+
+This article mainly introduces the installation, use and configuration of the `flink` engine plugin in `Linkis`.
+
+## 1. Preliminary work
+### 1.1 Engine environment configuration
+
+If you want to use the `Flink` engine on your server, you need to ensure that the following environment variables are set correctly and that the user who started the engine has these environment variables.
+
+### 1.2 Engine Verification
+
+It is strongly recommended that you check these environment variables for the executing user before executing `flink` tasks. The check can be done as follows
+```
+sudo su - ${username}
+echo ${JAVA_HOME}
+echo ${FLINK_HOME}
+```
+
+| Environment variable name | Environment variable content | Remarks |
+|-----------------|----------------|-------------------------------------------|
+| JAVA_HOME | JDK installation path | Required |
+| HADOOP_HOME | Hadoop installation path | Required |
+| HADOOP_CONF_DIR | Hadoop configuration path | Linkis starts the Flink on yarn mode adopted by the Flink engine, so yarn support is required. |
+| FLINK_HOME | Flink installation path | Required |
+| FLINK_CONF_DIR | Flink configuration path | Required, such as ${FLINK_HOME}/conf |
+| FLINK_LIB_DIR | Flink package path | Required, ${FLINK_HOME}/lib |
+
+
+## 2. Engine plugin installation
+
+### 2.1 Engine plugin preparation (choose one) [non-default engine](./overview.md)
+
+Method 1: Download the engine plug-in package directly
+
+[Linkis Engine Plugin Download](https://linkis.apache.org/zh-CN/blog/2022/04/15/how-to-download-engineconn-plugin)
+
+Method 2: Compile the engine plug-in separately (requires a `maven` environment)
+
+```
+# compile
+cd ${linkis_code_dir}/linkis-engineconn-plugins/flink/
+mvn clean install
+# The compiled engine plug-in package is located in the following directory
+${linkis_code_dir}/linkis-engineconn-plugins/flink/target/out/
+```
+
+[EngineConnPlugin engine plugin installation](../deployment/install-engineconn.md)
+
+### 2.2 Upload and load engine plugins
+
+Upload the engine plug-in package in 2.1 to the engine directory of the server
+```bash
+${LINKIS_HOME}/lib/linkis-engineplugins
+```
+The directory structure after uploading is as follows
+```
+linkis-engineconn-plugins/
+├── flink
+│ ├── dist
+│ │ └── 1.12.2
+│ │ ├── conf
+│ │ └── lib
+│ └── plugin
+│ └── 1.12.2
+```
+### 2.3 Engine refresh
+
+#### 2.3.1 Restart and refresh
+Refresh the engine by restarting the `linkis-cg-linkismanager` service
+```bash
+cd ${LINKIS_HOME}/sbin
+sh linkis-daemon.sh restart cg-linkismanager
+```
+
+#### 2.3.2 Check if the engine is refreshed successfully
+You can check whether the `last_update_time` of the `linkis_cg_engine_conn_plugin_bml_resources` table in the database is the time when the refresh was triggered.
+
+```sql
+#Login to the linkis database
+select * from linkis_cg_engine_conn_plugin_bml_resources;
+```
+
+
+## 3. Use of Flink engine
+
+The `Flink` engine of `Linkis` is started by `flink on yarn`, so the queue used by the user needs to be specified, as shown in the figure below.
+
+![yarn](./images/yarn-conf.png)
+
+### 3.1 Submit tasks through `Linkis-cli`
+
+```shell
+sh ./bin/linkis-cli -engineType flink-1.12.2 \
+-codeType sql -code "show tables" \
+-submitUser hadoop -proxyUser hadoop
+```
+
+More `Linkis-Cli` command parameter reference: [`Linkis-Cli` usage](../user-guide/linkiscli-manual.md)
+
+### 3.2 Submitting tasks via `ComputationEngineConn`
+
+`FlinkSQL` supports a variety of data sources, such as `binlog`, `kafka`, `hive`, etc. If you want to use these data sources in `Flink` code, you need to put the corresponding `connector` plugin `jar` packages into the `lib` directory of the `flink` engine and restart the `EnginePlugin` service of `Linkis`. For example, if you want to use `binlog` as a data source in your `FlinkSQL`, you need to place `flink-connector-mysql-cdc-1.1.1.jar` in the `lib` directory of the `flink` engine.
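+
+For example, a `connector` jar can be placed into the engine material directory and the plugin manager restarted as follows (a sketch; the paths follow the directory layout from section 2.2 and should be adjusted to your deployment):
+
+```shell
+# copy the connector into the flink engine lib and restart linkis-cg-linkismanager so the plugin is reloaded
+cp flink-connector-mysql-cdc-1.1.1.jar ${LINKIS_HOME}/lib/linkis-engineconn-plugins/flink/dist/1.12.2/lib/
+sh ${LINKIS_HOME}/sbin/linkis-daemon.sh restart cg-linkismanager
+```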
+
+In order to facilitate sampling and debugging, we have added the `fql` script type in `Scriptis`, which is specially used to execute `FlinkSQL`. But you need to ensure that your `DSS` has been upgraded to `DSS1.0.0`. After upgrading to `DSS1.0.0`, you can directly enter `Scriptis` to create a new `fql` script for editing and execution.
+
+Writing example of `FlinkSQL`, taking `binlog` as an example
+```sql
+CREATE TABLE mysql_binlog (
+ id INT NOT NULL,
+ name STRING,
+ age INT
+) WITH (
+ 'connector' = 'mysql-cdc',
+ 'hostname' = 'ip',
+ 'port' = 'port',
+ 'username' = 'username',
+ 'password' = 'password',
+ 'database-name' = 'dbname',
+ 'table-name' = 'tablename',
+ 'debezium.snapshot.locking.mode' = 'none' -- recommended, otherwise a table lock is required
+);
+select * from mysql_binlog where id > 10;
+```
+When the `select` syntax is used for debugging in `Scriptis`, the `Flink` engine has an automatic `cancel` mechanism: once the specified time elapses or the number of sampled rows reaches the specified limit, the `Flink` engine actively cancels the task and persists the result set obtained so far; the front end then calls the open-result-set interface to display the result set.
+
+### 3.3 Submitting tasks via `OnceEngineConn`
+
+`OnceEngineConn` is used to formally start `Flink` streaming applications. It calls the `createEngineConn` interface of `LinkisManager` through `LinkisManagerClient` and sends the code to the created `Flink` engine, which then starts executing. This method can be called by other systems, such as `Streamis`. The usage of the `Client` is also very simple: first create a `maven` project, or introduce the following dependency in your project.
+```xml
+<dependency>
+    <groupId>org.apache.linkis</groupId>
+    <artifactId>linkis-computation-client</artifactId>
+    <version>${linkis.version}</version>
+</dependency>
+```
+Then create a `scala` test file and run it; the example below parses `binlog` data and inserts it into a table in another mysql database. Note that you must create a `resources` directory in the `maven` project and place a `linkis.properties` file in it, specifying the `gateway` address and `api` version of `linkis`, such as
+```properties
+wds.linkis.server.version=v1
+wds.linkis.gateway.url=http://ip:9001/
+```
+```scala
+object OnceJobTest {
+ def main(args: Array[String]): Unit = {
+ val sql = """CREATE TABLE mysql_binlog (
+ | id INT NOT NULL,
+ | name STRING,
+ | age INT
+ |) WITH (
+ | 'connector' = 'mysql-cdc',
+ | 'hostname' = 'ip',
+ | 'port' = 'port',
+ | 'username' = '${username}',
+ | 'password' = '${password}',
+ | 'database-name' = '${database}',
+ | 'table-name' = '${tablename}',
+ | 'debezium.snapshot.locking.mode' = 'none'
+ |);
+ |CREATE TABLE sink_table (
+ | id INT NOT NULL,
+ | name STRING,
+ | age INT,
+ | primary key(id) not enforced
+ |) WITH (
+ | 'connector' = 'jdbc',
+ | 'url' = 'jdbc:mysql://${ip}:port/${database}',
+ | 'table-name' = '${tablename}',
+ | 'driver' = 'com.mysql.jdbc.Driver',
+ | 'username' = '${username}',
+ | 'password' = '${password}'
+ |);
+ |INSERT INTO sink_table SELECT id, name, age FROM mysql_binlog;
+ |""".stripMargin
+ val onceJob = SimpleOnceJob.builder().setCreateService("Flink-Test").addLabel(LabelKeyUtils.ENGINE_TYPE_LABEL_KEY, "flink-1.12.2")
+ .addLabel(LabelKeyUtils.USER_CREATOR_LABEL_KEY, "hadoop-Streamis").addLabel(LabelKeyUtils.ENGINE_CONN_MODE_LABEL_KEY, "once")
+ .addStartupParam(Configuration.IS_TEST_MODE.key, true)
+ // .addStartupParam("label." + LabelKeyConstant.CODE_TYPE_KEY, "sql")
+ .setMaxSubmitTime(300000)
+ .addExecuteUser("hadoop").addJobContent("runType", "sql").addJobContent("code", sql).addSource("jobName", "OnceJobTest")
+ .build()
+ onceJob.submit()
+ println(onceJob.getId)
+ onceJob.waitForCompleted()
+ System.exit(0)
+ }
+}
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/engine-usage/hive.md b/versioned_docs/version-1.4.0/engine-usage/hive.md
new file mode 100644
index 00000000000..aac1f8d8556
--- /dev/null
+++ b/versioned_docs/version-1.4.0/engine-usage/hive.md
@@ -0,0 +1,291 @@
+---
+title: Hive Engine
+sidebar_position: 2
+---
+
+This article mainly introduces the installation, usage and configuration of the `Hive` engine plugin in `Linkis`.
+
+## 1. Preliminary work
+### 1.1 Environment configuration before engine use
+
+If you want to use the `hive` engine on your server, you need to ensure that the following environment variables have been set correctly and the engine startup user has these environment variables.
+
+It is strongly recommended that you check these environment variables for the executing user before executing `hive` tasks.
+
+| Environment variable name | Environment variable content | Remarks |
+|-----------------|----------------|------|
+| JAVA_HOME | JDK installation path | Required |
+| HADOOP_HOME | Hadoop installation path | Required |
+| HADOOP_CONF_DIR | Hadoop configuration path | required |
+| HIVE_CONF_DIR | Hive configuration path | required |
+
+### 1.2 Environment verification
+```
+# connect to hive
+bin/hive
+
+# test command
+show databases;
+
+# Being able to connect successfully and list databases normally means the environment is configured correctly
+hive (default)> show databases;
+OK
+databases_name
+default
+```
+
+## 2. Engine plugin installation [default engine](./overview.md)
+
+The binary installation package released by `linkis` includes the `Hive` engine plug-in by default, and users do not need to install it additionally.
+
+`Hive` versions `1.x` and `2.x` are supported, and `hive on MapReduce` is supported by default. If you want to switch to `Hive on Tez`, you need to modify it according to this `pr`.
+
+
+
+The `hive` version supported by default is 3.1.3. If you want to use another `hive` version, find the `linkis-engineplugin-hive` module, modify the `<hive.version>` tag, and then compile this module separately.
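+
+After changing the version, the hive engine plugin can be rebuilt on its own, following the same pattern as the other engine plugins (a sketch; a maven environment is required):
+
+```shell
+cd ${linkis_code_dir}/linkis-engineconn-plugins/hive/
+mvn clean install
+# the rebuilt plugin package is produced under the target/out/ directory of this module
+```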
+
+[EngineConnPlugin engine plugin installation](../deployment/install-engineconn.md)
+
+## 3. Engine usage
+
+### 3.1 Submitting tasks via `Linkis-cli`
+
+```shell
+sh ./bin/linkis-cli -engineType hive-3.1.3 \
+-codeType hql -code "show databases" \
+-submitUser hadoop -proxyUser hadoop
+```
+
+More `Linkis-Cli` command parameter reference: [`Linkis-Cli` usage](../user-guide/linkiscli-manual.md)
+
+### 3.2 Submit tasks through Linkis SDK
+
+`Linkis` provides `SDK` of `Java` and `Scala` to submit tasks to `Linkis` server. For details, please refer to [JAVA SDK Manual](../user-guide/sdk-manual.md).
+For the `Hive` task, you only need to modify `EngineConnType` and `CodeType` parameters in `Demo`:
+
+```java
+Map<String, Object> labels = new HashMap<>();
+labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "hive-3.1.3"); // required engineType Label
+labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// required execute user and creator
+labels.put(LabelKeyConstant.CODE_TYPE_KEY, "hql"); // required codeType
+```
+
+## 4. Engine configuration instructions
+
+### 4.1 Default Configuration Description
+| Configuration | Default | Required | Description |
+| ------------------------ | ------------------- | ---| ------------------------------------------- |
+| wds.linkis.rm.instance | 10 | no | engine maximum concurrency |
+| wds.linkis.engineconn.java.driver.memory | 1g | No | engine initialization memory size |
+| wds.linkis.engineconn.max.free.time | 1h | no | engine idle exit time |
+
+### 4.2 Queue resource configuration
+The `MapReduce` task of `hive` needs to use `yarn` resources, so a queue needs to be set
+
+![yarn](./images/yarn-conf.png)
+
+### 4.3 Configuration modification
+If the default parameters are not satisfied, there are the following ways to configure some basic parameters
+
+#### 4.3.1 Management Console Configuration
+
+![hive](./images/hive-config.png)
+
+Note: After modifying the configuration under the `IDE` tag, you need to specify `-creator IDE` to take effect (other tags are similar), such as:
+
+```shell
+sh ./bin/linkis-cli -creator IDE \
+-engineType hive-3.1.3 -codeType hql \
+-code "show databases" \
+-submitUser hadoop -proxyUser hadoop
+```
+
+#### 4.3.2 Task interface configuration
+Tasks submitted through the task interface can be configured via the parameter `params.configuration.runtime`
+
+Example of http request parameters
+```json
+{
+ "executionContent": {"code": "show databases;", "runType": "sql"},
+ "params": {
+ "variable": {},
+ "configuration": {
+ "runtime": {
+ "wds.linkis.rm.instance":"10"
+ }
+ }
+ },
+ "labels": {
+ "engineType": "hive-3.1.3",
+ "userCreator": "hadoop-IDE"
+ }
+}
+```
+
+### 4.4 Engine related data table
+
+`Linkis` is managed through engine tags, and the data table information involved is as follows.
+
+```
+linkis_ps_configuration_config_key: Insert the key and default values of the configuration parameters of the engine
+linkis_cg_manager_label: insert engine label such as: hive-3.1.3
+linkis_ps_configuration_category: Insert the directory association of the engine
+linkis_ps_configuration_config_value: The configuration that the insertion engine needs to display
+linkis_ps_configuration_key_engine_relation: The relationship between the configuration item and the engine
+```
+
+The initial data related to the engine in the table is as follows
+
+```sql
+-- set variable
+SET @HIVE_LABEL="hive-3.1.3";
+SET @HIVE_ALL=CONCAT('*-*,',@HIVE_LABEL);
+SET @HIVE_IDE=CONCAT('*-IDE,',@HIVE_LABEL);
+
+-- engine label
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @HIVE_ALL, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @HIVE_IDE, 'OPTIONAL', 2, now(), now());
+
+select @label_id := id from linkis_cg_manager_label where `label_value` = @HIVE_IDE;
+insert into linkis_ps_configuration_category (`label_id`, `level`) VALUES (@label_id, 2);
+
+-- configuration key
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.rm.instance', 'range: 1-20, unit: piece', 'hive engine maximum concurrent number', '10', 'NumInterval', '[1,20]', '0 ', '0', '1', 'Queue resource', 'hive');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.engineconn.java.driver.memory', 'Value range: 1-10, unit: G', 'hive engine initialization memory size', '1g', 'Regex', '^([1-9]|10)(G|g)$', '0', '0', '1', 'hive engine settings', 'hive');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('hive.client.java.opts', 'hive client process parameters', 'jvm parameters when the hive engine starts','', 'None', NULL, '1', '1', '1', 'hive engine settings', 'hive');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('mapred.reduce.tasks', 'Range: -1-10000, unit: number', 'reduce number', '-1', 'NumInterval', '[-1,10000]', '0', '1', '1', 'hive resource settings', 'hive');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.engineconn.max.free.time', 'Value range: 3m,15m,30m,1h,2h', 'Engine idle exit time','1h', 'OFT', '[\ "1h\",\"2h\",\"30m\",\"15m\",\"3m\"]', '0', '0', '1', 'hive engine settings', ' hive');
+
+-- key engine relation
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
+(select config.id as `config_key_id`, label.id AS `engine_type_label_id` FROM linkis_ps_configuration_config_key config
+INNER JOIN linkis_cg_manager_label label ON config.engine_conn_type = 'hive' and label_value = @HIVE_ALL);
+
+-- engine default configuration
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
+(select `relation`.`config_key_id` AS `config_key_id`, '' AS `config_value`, `relation`.`engine_type_label_id` AS `config_label_id` FROM linkis_ps_configuration_key_engine_relation relation
+INNER JOIN linkis_cg_manager_label label ON relation.engine_type_label_id = label.id AND label.label_value = @HIVE_ALL);
+```
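+
+As an optional sanity check, the inserted records can be queried back from the metadata database. This is only a sketch: it assumes a reachable `mysql` client and that the `Linkis` metadata database is named `linkis`; adjust host, user and database name to your setup.
+```shell
+# Prompts for the password, then lists the hive configuration keys inserted above
+mysql -h 127.0.0.1 -u root -p -D linkis \
+  -e "SELECT name, default_value, engine_conn_type FROM linkis_ps_configuration_config_key WHERE engine_conn_type = 'hive';"
+```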
+
+## 5. Modifying the Hive log display
+By default, the log interface does not display the `application_id` or the number of completed `task`s; users can adjust the log output as needed.
+The code blocks that need to be modified in the engine's `log4j2-engineconn.xml/log4j2.xml` configuration file are as follows:
+1. Add the following under the `appenders` component
+```xml
+
+
+
+```
+2. Add the following under the `root` component
+```xml
+
+```
+3. Add the following under the `loggers` component
+```xml
+
+
+
+```
+After making the above modifications, the log includes `task` progress information, displayed in the following style:
+```
+2022-04-08 11:06:50.228 INFO [Linkis-Default-Scheduler-Thread-3] SessionState 1111 printInfo - Status: Running (Executing on YARN cluster with App id application_1631114297082_432445)
+2022-04-08 11:06:50.248 INFO [Linkis-Default-Scheduler-Thread-3] SessionState 1111 printInfo - Map 1: -/- Reducer 2: 0/1
+2022-04-08 11:06:52.417 INFO [Linkis-Default-Scheduler-Thread-3] SessionState 1111 printInfo - Map 1: 0/1 Reducer 2: 0/1
+2022-04-08 11:06:55.060 INFO [Linkis-Default-Scheduler-Thread-3] SessionState 1111 printInfo - Map 1: 0(+1)/1 Reducer 2: 0/1
+2022-04-08 11:06:57.495 INFO [Linkis-Default-Scheduler-Thread-3] SessionState 1111 printInfo - Map 1: 1/1 Reducer 2: 0(+1)/1
+2022-04-08 11:06:57.899 INFO [Linkis-Default-Scheduler-Thread-3] SessionState 1111 printInfo - Map 1: 1/1 Reducer 2: 1/1
+```
+
+An example of a complete `xml` configuration file is as follows:
+```xml
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/check-seatunnel.png b/versioned_docs/version-1.4.0/engine-usage/images/check-seatunnel.png
new file mode 100644
index 00000000000..982c227195b
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/check-seatunnel.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/datasourceconntest.png b/versioned_docs/version-1.4.0/engine-usage/images/datasourceconntest.png
new file mode 100644
index 00000000000..2c25accb032
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/datasourceconntest.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/datasourcemanage.png b/versioned_docs/version-1.4.0/engine-usage/images/datasourcemanage.png
new file mode 100644
index 00000000000..f6be867c90d
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/datasourcemanage.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/es-config.png b/versioned_docs/version-1.4.0/engine-usage/images/es-config.png
new file mode 100644
index 00000000000..d06c2b878ba
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/es-config.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/es-manage.png b/versioned_docs/version-1.4.0/engine-usage/images/es-manage.png
new file mode 100644
index 00000000000..f4a4616a528
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/es-manage.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/historical_information.png b/versioned_docs/version-1.4.0/engine-usage/images/historical_information.png
new file mode 100644
index 00000000000..6c10cd71b7c
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/historical_information.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/hive-config.png b/versioned_docs/version-1.4.0/engine-usage/images/hive-config.png
new file mode 100644
index 00000000000..f9b15cc8b5d
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/hive-config.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/hive-run.png b/versioned_docs/version-1.4.0/engine-usage/images/hive-run.png
new file mode 100644
index 00000000000..287b1abfdef
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/hive-run.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/jdbc-config.png b/versioned_docs/version-1.4.0/engine-usage/images/jdbc-config.png
new file mode 100644
index 00000000000..b9ef60ee794
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/jdbc-config.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/jdbc-run.png b/versioned_docs/version-1.4.0/engine-usage/images/jdbc-run.png
new file mode 100644
index 00000000000..fe51598b235
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/jdbc-run.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/job_state.png b/versioned_docs/version-1.4.0/engine-usage/images/job_state.png
new file mode 100644
index 00000000000..cadf1d1cd90
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/job_state.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/muti-data-source-usage.png b/versioned_docs/version-1.4.0/engine-usage/images/muti-data-source-usage.png
new file mode 100644
index 00000000000..cc017a53935
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/muti-data-source-usage.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/new_pipeline_script.png b/versioned_docs/version-1.4.0/engine-usage/images/new_pipeline_script.png
new file mode 100644
index 00000000000..8a1b59ce29e
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/new_pipeline_script.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/openlookeng-config.png b/versioned_docs/version-1.4.0/engine-usage/images/openlookeng-config.png
new file mode 100644
index 00000000000..c30b764ed52
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/openlookeng-config.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/pipeline-conf.png b/versioned_docs/version-1.4.0/engine-usage/images/pipeline-conf.png
new file mode 100644
index 00000000000..b531d31f790
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/pipeline-conf.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/presto-console.png b/versioned_docs/version-1.4.0/engine-usage/images/presto-console.png
new file mode 100644
index 00000000000..f39242cf298
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/presto-console.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/presto-file.png b/versioned_docs/version-1.4.0/engine-usage/images/presto-file.png
new file mode 100644
index 00000000000..49c00b99656
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/presto-file.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/presto-psql.png b/versioned_docs/version-1.4.0/engine-usage/images/presto-psql.png
new file mode 100644
index 00000000000..505f0a7a8c1
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/presto-psql.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/pyspakr-run.png b/versioned_docs/version-1.4.0/engine-usage/images/pyspakr-run.png
new file mode 100644
index 00000000000..c80c85bae00
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/pyspakr-run.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/python-conf.png b/versioned_docs/version-1.4.0/engine-usage/images/python-conf.png
new file mode 100644
index 00000000000..3417af1c1e1
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/python-conf.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/python-config.png b/versioned_docs/version-1.4.0/engine-usage/images/python-config.png
new file mode 100644
index 00000000000..a6c6f08b833
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/python-config.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/python-run.png b/versioned_docs/version-1.4.0/engine-usage/images/python-run.png
new file mode 100644
index 00000000000..65467afca15
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/python-run.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/queue-set.png b/versioned_docs/version-1.4.0/engine-usage/images/queue-set.png
new file mode 100644
index 00000000000..46f7e2c40bd
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/queue-set.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/scala-run.png b/versioned_docs/version-1.4.0/engine-usage/images/scala-run.png
new file mode 100644
index 00000000000..7c01aadcdf8
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/scala-run.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/shell-run.png b/versioned_docs/version-1.4.0/engine-usage/images/shell-run.png
new file mode 100644
index 00000000000..734bdb22dce
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/shell-run.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/spark-conf.png b/versioned_docs/version-1.4.0/engine-usage/images/spark-conf.png
new file mode 100644
index 00000000000..0b7ce439f5e
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/spark-conf.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/sparksql-run.png b/versioned_docs/version-1.4.0/engine-usage/images/sparksql-run.png
new file mode 100644
index 00000000000..f0b1d1bcaf2
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/sparksql-run.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/to_write.png b/versioned_docs/version-1.4.0/engine-usage/images/to_write.png
new file mode 100644
index 00000000000..e75ab0638e4
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/to_write.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/trino-config.png b/versioned_docs/version-1.4.0/engine-usage/images/trino-config.png
new file mode 100644
index 00000000000..b6dc459a2f1
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/trino-config.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/workflow.png b/versioned_docs/version-1.4.0/engine-usage/images/workflow.png
new file mode 100644
index 00000000000..3a5919f2594
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/workflow.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/images/yarn-conf.png b/versioned_docs/version-1.4.0/engine-usage/images/yarn-conf.png
new file mode 100644
index 00000000000..46f7e2c40bd
Binary files /dev/null and b/versioned_docs/version-1.4.0/engine-usage/images/yarn-conf.png differ
diff --git a/versioned_docs/version-1.4.0/engine-usage/impala.md b/versioned_docs/version-1.4.0/engine-usage/impala.md
new file mode 100644
index 00000000000..8fa24bf0b03
--- /dev/null
+++ b/versioned_docs/version-1.4.0/engine-usage/impala.md
@@ -0,0 +1,219 @@
+---
+title: Impala
+sidebar_position: 12
+---
+
+This article mainly introduces the installation, usage and configuration of the `Impala` engine plugin in `Linkis`.
+
+## 1. Pre-work
+
+### 1.1 Environment installation
+
+If you want to use the Impala engine on your server, you need to prepare the Impala service and provide connection information, such as the connection address of the Impala cluster, SASL user name and password, etc.
+
+### 1.2 Environment verification
+
+Execute the impala-shell command to get the following output, indicating that the impala service is available.
+```
+[root@8f43473645b1 /]# impala-shell
+Starting Impala Shell without Kerberos authentication
+Connected to 8f43473645b1:21000
+Server version: impalad version 2.12.0-cdh5.15.0 RELEASE (build 23f574543323301846b41fa5433690df32efe085)
+**********************************************************************************
+Welcome to the Impala shell.
+(Impala Shell v2.12.0-cdh5.15.0 (23f5745) built on Thu May 24 04:07:31 PDT 2018)
+
+When pretty-printing is disabled, you can use the '--output_delimiter' flag to set
+the delimiter for fields in the same row. The default is ','.
+**********************************************************************************
+[8f43473645b1:21000] >
+```
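+
+A non-interactive variant of the same check is convenient in scripts; the host and port below are placeholders for one of your `impalad` nodes.
+```shell
+# -i selects the impalad to connect to, -q runs a single query and exits
+impala-shell -i 127.0.0.1:21000 -q 'show databases;'
+```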
+
+## 2. Engine plugin deployment
+
+To compile the `Impala` engine, the `Linkis` project needs to be fully compiled first; the default installation and deployment package released by `Linkis` does not include this engine plugin.
+
+### 2.1 Engine plugin preparation (choose one) [non-default engine](./overview.md)
+
+Method 1: Download the engine plug-in package directly
+
+[Linkis Engine Plugin Download](https://linkis.apache.org/zh-CN/blog/2022/04/15/how-to-download-engineconn-plugin)
+
+Method 2: Compile the engine plug-in separately (requires `maven` environment)
+
+```
+# compile
+cd ${linkis_code_dir}/linkis-engineconn-plugins/impala/
+mvn clean install
+# The compiled engine plug-in package is located in the following directory
+${linkis_code_dir}/linkis-engineconn-plugins/impala/target/out/
+```
+[EngineConnPlugin Engine Plugin Installation](../deployment/install-engineconn.md)
+
+### 2.2 Upload and load engine plugins
+
+Upload the engine package in 2.1 to the engine directory of the server
+```bash
+${LINKIS_HOME}/lib/linkis-engineplugins
+```
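+
+For example, if the plugin was built on the same machine as the `Linkis` server, copying it over might look like the sketch below (use `scp`/`rsync` when the build host differs); the paths are the ones used elsewhere in this document.
+```shell
+# Copy the built impala plugin directory into the Linkis engine plugin directory
+cp -r ${linkis_code_dir}/linkis-engineconn-plugins/impala/target/out/impala \
+      ${LINKIS_HOME}/lib/linkis-engineplugins/
+```
+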
+The directory structure after uploading is as follows
+```
+linkis-engineconn-plugins/
+├── impala
+│ ├── dist
+│ │ └── 3.4.0
+│ │ ├── conf
+│ │ └── lib
+│ └── plugin
+│ └── 3.4.0
+```
+
+### 2.3 Engine refresh
+
+#### 2.3.1 Restart and refresh
+Refresh the engine by restarting the `linkis-cg-linkismanager` service
+```bash
+cd ${LINKIS_HOME}/sbin
+sh linkis-daemon.sh restart cg-linkismanager
+```
+
+#### 2.3.2 Check whether the engine is refreshed successfully
+You can check whether the `last_update_time` of the `linkis_cg_engine_conn_plugin_bml_resources` table in the database is the time when the refresh was triggered.
+
+```sql
+#login to `linkis` database
+select * from linkis_cg_engine_conn_plugin_bml_resources;
+```
+
+## 3 Engine usage
+
+### 3.1 Submit tasks through `Linkis-cli`
+
+```shell
+sh ./bin/linkis-cli -submitUser impala \
+-engineType impala-3.4.0 -code 'show databases;' \
+-runtimeMap linkis.impala.servers=127.0.0.1:21050
+```
+
+More `Linkis-Cli` command parameter reference: [Linkis-Cli usage](../user-guide/linkiscli-manual.md)
+
+## 4. Engine configuration instructions
+
+### 4.1 Default Configuration Description
+
+| Configuration | Default | Required | Description |
+| ----------------------------------------- | --------------------- | -------- | ---------------------------------------------------- |
+| linkis.impala.default.limit | 5000 | Yes | Limit on the number of rows returned in the query result set |
+| linkis.impala.engine.user | ${HDFS_ROOT_USER} | Yes | Default engine startup user |
+| linkis.impala.user.isolation.mode | false | Yes | Start the engine in multi-user mode |
+| linkis.impala.servers | 127.0.0.1:21050 | Yes | Impala server addresses, separated by ',' |
+| linkis.impala.maxConnections | 10 | Yes | Maximum number of connections to each Impala server |
+| linkis.impala.ssl.enable | false | Yes | Whether to enable SSL connections |
+| linkis.impala.ssl.keystore.type | JKS | No | SSL Keystore type |
+| linkis.impala.ssl.keystore | null | No | SSL Keystore path |
+| linkis.impala.ssl.keystore.password | null | No | SSL Keystore password |
+| linkis.impala.ssl.truststore.type | JKS | No | SSL Truststore type |
+| linkis.impala.ssl.truststore | null | No | SSL Truststore path |
+| linkis.impala.ssl.truststore.password | null | No | SSL Truststore password |
+| linkis.impala.sasl.enable | false | Yes | Whether to enable SASL authentication |
+| linkis.impala.sasl.mechanism | PLAIN | No | SASL mechanism |
+| linkis.impala.sasl.authorizationId | null | No | SASL AuthorizationId |
+| linkis.impala.sasl.protocol | LDAP | No | SASL protocol |
+| linkis.impala.sasl.properties | null | No | SASL properties: key1=value1,key2=value2 |
+| linkis.impala.sasl.username | ${impala.engine.user} | No | SASL username |
+| linkis.impala.sasl.password | null | No | SASL password |
+| linkis.impala.sasl.password.cmd | null | No | Command used to obtain the SASL password |
+| linkis.impala.heartbeat.seconds | 1 | Yes | Task status update interval |
+| linkis.impala.query.timeout.seconds | 0 | No | Task execution timeout |
+| linkis.impala.query.batchSize | 1000 | Yes | Result set fetch batch size |
+| linkis.impala.query.options | null | No | Query submission parameters: key1=value1,key2=value2 |
+
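+For illustration, some of the parameters above can also be passed at submission time via `-runtimeMap`, as in the sketch below; the server address, user name and password command are placeholders.
+```shell
+sh ./bin/linkis-cli -submitUser hadoop -proxyUser hadoop \
+-engineType impala-3.4.0 -code 'show databases;' \
+-runtimeMap linkis.impala.servers=127.0.0.1:21050 \
+-runtimeMap linkis.impala.sasl.enable=true \
+-runtimeMap linkis.impala.sasl.username=hadoop \
+-runtimeMap linkis.impala.sasl.password.cmd="cat /path/to/impala_passwd"
+```
+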
+### 4.2 Configuration modification
+
+If the default parameters do not meet your needs, the following ways are available to configure some basic parameters
+
+#### 4.2.1 Task interface configuration
+When submitting a task through the task interface, configure it via the parameter `params.configuration.runtime`.
+
+Example of http request parameters:
+```json
+{
+ "executionContent": {"code": "show databases;", "runType": "sql"},
+ "params": {
+ "variable": {},
+ "configuration": {
+ "runtime": {
+ "linkis.impala.servers"="127.0.0.1:21050"
+ }
+ }
+ },
+ "labels": {
+ "engineType": "impala-3.4.0",
+ "userCreator": "hadoop-IDE"
+ }
+}
+```
+
+### 4.3 Engine related data table
+
+`Linkis` manages engines through labels, and the data tables involved are as follows.
+
+```
+linkis_ps_configuration_config_key: the keys and default values of the engine's configuration parameters
+linkis_cg_manager_label: the engine labels, such as impala-3.4.0
+linkis_ps_configuration_category: the category (directory) association of the engine configuration
+linkis_ps_configuration_config_value: the configuration values displayed for the engine
+linkis_ps_configuration_key_engine_relation: the relationship between configuration keys and the engine
+```
+
+The initial data related to the engine in the table is as follows
+
+
+```sql
+-- set variable
+SET @ENGINE_LABEL="impala-3.4.0";
+SET @ENGINE_IDE=CONCAT('*-IDE,',@ENGINE_LABEL);
+SET @ENGINE_ALL=CONCAT('*-*,',@ENGINE_LABEL);
+SET @ENGINE_NAME="impala";
+
+-- add impala engine to IDE
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @ENGINE_ALL, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @ENGINE_IDE, 'OPTIONAL', 2, now(), now());
+select @label_id := id from `linkis_cg_manager_label` where label_value = @ENGINE_IDE;
+insert into `linkis_ps_configuration_category` (`label_id`, `level`) VALUES (@label_id, 2);
+
+-- insert configuration key
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.default.limit', 'Result set limit of query', 'Result set limit', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.engine.user', 'Default engine startup user', 'Default startup user', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.user.isolation.mode', 'Start the engine in multi-user mode', 'Multi-user mode', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.servers', 'Impala server address', 'Service address', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.maxConnections', 'Maximum number of connections to each Impala server', 'Maximum number of connections', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.enable', 'Enable SSL connection', 'Enable SSL', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.keystore.type', 'SSL Keystore type', 'SSL Keystore type', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.keystore', 'SSL Keystore path', 'SSL Keystore path', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.keystore.password', 'SSL Keystore password', 'SSL Keystore password', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.truststore.type', 'SSL Truststore type', 'SSL Truststore type', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.truststore', 'SSL Truststore path', 'SSL Truststore path', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.truststore.password', 'SSL Truststore password', 'SSL Truststore password', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.enable', 'Whether to enable SASL authentication', 'Enable SASL', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.mechanism', 'SASL Mechanism', 'SASL Mechanism', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.authorizationId', 'SASL AuthorizationId', 'SASL AuthorizationId', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.protocol', 'SASL Protocol', 'SASL Protocol', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.properties', 'SASL Properties: key1=value1,key2=value2', 'SASL Properties', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.username', 'SASL Username', 'SASL Username', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.password', 'SASL Password', 'SASL Password', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.password.cmd', 'SASL Password get command', 'SASL Password get command', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.heartbeat.seconds', 'Task status update interval', 'Task status update interval', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.query.timeout.seconds', 'Task execution timeout', 'Task execution timeout', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.query.batchSize', 'Result set fetch batch size', 'Result set fetch batch size', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.query.options', 'Query submission parameters: key1=value1,key2=value2', 'Query submission parameters', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+-- impala engine -*
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
+(select config.id as config_key_id, label.id AS engine_type_label_id FROM `linkis_ps_configuration_config_key` config
+INNER JOIN `linkis_cg_manager_label` label ON config.engine_conn_type = @ENGINE_NAME and label_value = @ENGINE_ALL);
+-- impala engine default configuration
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
+(select relation.config_key_id AS config_key_id, '' AS config_value, relation.engine_type_label_id AS config_label_id FROM `linkis_ps_configuration_key_engine_relation` relation
+INNER JOIN `linkis_cg_manager_label` label ON relation.engine_type_label_id = label.id AND label.label_value = @ENGINE_ALL);
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/engine-usage/jdbc.md b/versioned_docs/version-1.4.0/engine-usage/jdbc.md
new file mode 100644
index 00000000000..d579aae88b8
--- /dev/null
+++ b/versioned_docs/version-1.4.0/engine-usage/jdbc.md
@@ -0,0 +1,275 @@
+---
+title: JDBC Engine
+sidebar_position: 7
+---
+
+This article mainly introduces the installation, use and configuration of the `JDBC` engine plugin in `Linkis`.
+
+## 1. Preliminary work
+### 1.1 Environment Installation
+
+If you want to use `JDBC` engine on your server, you need to prepare `JDBC` connection information, such as `MySQL` database connection address, username and password, etc.
+
+### 1.2 Environment verification (take `MySQL` as an example)
+```
+mysql -uroot -P 3306 -h 127.0.0.1 -p123456
+```
+If the following information is output, the `JDBC` connection information is available
+```
+mysql: [Warning] Using a password on the command line interface can be insecure.
+Welcome to the MySQL monitor. Commands end with ; or \g.
+Your MySQL connection id is 9
+Server version: 5.7.39 MySQL Community Server (GPL)
+
+Copyright (c) 2000, 2022, Oracle and/or its affiliates.
+
+Oracle is a registered trademark of Oracle Corporation and/or its
+affiliates. Other names may be trademarks of their respective
+owners.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+mysql>
+```
+
+## 2. Engine plugin installation
+
+### 2.1 Engine plugin preparation (choose one) [non-default engine](./overview.md)
+
+Method 1: Download the engine plug-in package directly
+
+[`Linkis` engine plugin download](https://linkis.apache.org/zh-CN/blog/2022/04/15/how-to-download-engineconn-plugin)
+
+Method 2: Compile the engine plug-in separately (requires a `maven` environment)
+
+```
+# compile
+cd ${linkis_code_dir}/linkis-engineconn-plugins/jdbc/
+mvn clean install
+# The compiled engine plug-in package is located in the following directory
+${linkis_code_dir}/linkis-engineconn-plugins/jdbc/target/out/
+```
+
+[`EngineConnPlugin` engine plugin installation](../deployment/install-engineconn.md)
+
+### 2.2 Upload and load engine plugins
+
+Upload the engine plug-in package in 2.1 to the engine directory of the server
+```bash
+${LINKIS_HOME}/lib/linkis-engineplugins
+```
+The directory structure after uploading is as follows
+```
+linkis-engineconn-plugins/
+├── jdbc
+│ ├── dist
+│ │ └── 4
+│ │ ├── conf
+│ │ └── lib
+│ └── plugin
+│ └── 4
+```
+
+### 2.3 Engine refresh
+
+#### 2.3.1 Restart and refresh
+Refresh the engine by restarting the `linkis-cg-linkismanager` service
+```bash
+cd ${LINKIS_HOME}/sbin
+sh linkis-daemon.sh restart cg-linkismanager
+```
+
+#### 2.3.2 Check whether the engine is refreshed successfully
+
+You can check whether the `last_update_time` of the `linkis_cg_engine_conn_plugin_bml_resources` table in the database is the time when the refresh was triggered.
+
+```sql
+#Login to the `linkis` database
+select * from linkis_cg_engine_conn_plugin_bml_resources;
+```
+
+## 3. Engine usage
+
+### 3.1 Submit tasks through `Linkis-cli`
+
+```shell
+sh ./bin/linkis-cli -engineType jdbc-4 \
+-codeType jdbc -code "show tables" \
+-submitUser hadoop -proxyUser hadoop \
+-runtimeMap wds.linkis.jdbc.connect.url=jdbc:mysql://127.0.0.1:3306/linkis_db \
+-runtimeMap wds.linkis.jdbc.driver=com.mysql.jdbc.Driver \
+-runtimeMap wds.linkis.jdbc.username=test \
+-runtimeMap wds.linkis.jdbc.password=123456
+```
+
+More `Linkis-Cli` command parameter reference: [`Linkis-Cli` usage](../user-guide/linkiscli-manual.md)
+
+### 3.2 Submitting tasks through `Linkis SDK`
+
+`Linkis` provides `SDK` of `Java` and `Scala` to submit tasks to `Linkis` server. For details, please refer to [JAVA SDK Manual](../user-guide/sdk-manual.md). For the `JDBC` task, you only need to modify `EngineConnType` and `CodeType` parameters in `Demo`:
+
+```java
+Map<String, Object> labels = new HashMap<>();
+labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "jdbc-4"); // required engineType Label
+labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// required execute user and creator
+labels.put(LabelKeyConstant.CODE_TYPE_KEY, "jdbc"); // required codeType
+```
+
+### 3.3 Multiple data source support
+Starting from `Linkis 1.2.0`, the `JDBC` engine supports multiple data sources. First, the different data sources can be managed in the console: log in to the management console --> Data source management --> Add data source
+
+![](./images/datasourcemanage.png)
+
+Figure 3-3 Data source management
+
+![](./images/datasourceconntest.png)
+
+Figure 3-4 Data source connection test
+
+After the data source is added, you can use the multi-data source switching function of the `JDBC` engine. There are two ways:
+1. Specify the data source name through the task interface parameters
+Example parameters:
+```json
+{
+ "executionContent": {
+ "code": "show databases",
+ "runType": "jdbc"
+ },
+ "params": {
+ "variable": {},
+ "configuration": {
+ "startup": {},
+ "runtime": {
+ "wds.linkis.engine.runtime.datasource": "test_mysql"
+ }
+ }
+ },
+ "source": {
+ "scriptPath": ""
+ },
+ "labels": {
+ "engineType": "jdbc-4",
+ "userCreator": "hadoop-IDE"
+ }
+}
+```
+
+The parameter `wds.linkis.engine.runtime.datasource` is a configuration item with a fixed name; do not change the name definition at will (a `linkis-cli` sketch of the same setting follows this list)
+
+2. Use the drop-down in the `Scripts` submission entry of `DSS` to select the data source to submit to, as shown in the figure below:
+![](./images/muti-data-source-usage.png)
+Currently `dss-1.1.0` does not yet support drop-down selection of the data source name; a `PR` is under development. You can wait for a subsequent release or follow the related `PR`:
+(https://github.com/WeBankFinTech/DataSphereStudio/issues/940)
+
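+A sketch of driving the same switch from `linkis-cli` is shown below; it assumes `-runtimeMap` entries are mapped onto `params.configuration.runtime` as in the earlier examples, and `test_mysql` is the data source name created in the console.
+```shell
+sh ./bin/linkis-cli -engineType jdbc-4 \
+-codeType jdbc -code "show tables" \
+-submitUser hadoop -proxyUser hadoop \
+-runtimeMap wds.linkis.engine.runtime.datasource=test_mysql
+```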
+
+Function description of multiple data sources:
+
+1) In previous versions, the `JDBC` engine's support for data sources was incomplete; in particular, when used with Scripts, the `JDBC` script type could only bind the single set of `JDBC` engine parameters configured in the console.
+When multiple data sources had to be switched, the only option was to modify the connection parameters of the `JDBC` engine, which was cumbersome.
+
+2) Combined with data source management, the multi-data-source switching function of the `JDBC` engine allows jobs to be submitted to different `JDBC` services simply by setting a data source name. Ordinary users do not need to
+maintain the connection information of the data sources, which avoids complicated configuration and also meets the security requirements for data source connection passwords and other settings.
+
+3) Data sources defined in multi-data-source management can only be loaded by the `JDBC` engine after they have been published and have not expired; otherwise, the corresponding type of exception message is returned to the user.
+
+4) The loading priority of `JDBC` engine parameters is: task submission parameters > data source selection parameters > console `JDBC` engine parameters
+
+
+## 4. Engine configuration instructions
+
+### 4.1 Default Configuration Description
+
+| Configuration | Default | Required | Description |
+| ------------------------ | ------------------- | ---| ------------------------------------------- |
+| wds.linkis.jdbc.connect.url | jdbc:mysql://127.0.0.1:10000 | Yes | JDBC connection address |
+| wds.linkis.jdbc.driver | (none) | Yes | JDBC connection driver |
+| wds.linkis.jdbc.username | (none) | Yes | Database connection username |
+| wds.linkis.jdbc.password | (none) | Yes | Database connection password |
+| wds.linkis.jdbc.connect.max | 10 | No | Maximum number of JDBC engine connections |
+| wds.linkis.jdbc.version | jdbc4 | No | JDBC version |
+
+### 4.2 Configuration modification
+If the default parameters do not meet your needs, the following ways are available to configure some basic parameters
+
+
+#### 4.2.1 Management console configuration
+
+![jdbc](./images/jdbc-config.png)
+
+Note: After modifying the configuration under the `IDE` tag, you need to specify `-creator IDE` to take effect (other tags are similar), such as:
+
+```shell
+sh ./bin/linkis-cli -creator IDE \
+-engineType jdbc-4 -codeType jdbc \
+-code "show tables" \
+-submitUser hadoop -proxyUser hadoop
+```
+
+#### 4.2.2 Task interface configuration
+When submitting a task through the task interface, configure it via the parameter `params.configuration.runtime`.
+
+Example of http request parameters:
+```json
+{
+ "executionContent": {"code": "show databases;", "runType": "jdbc"},
+ "params": {
+ "variable": {},
+ "configuration": {
+ "runtime": {
+ "wds.linkis.jdbc.connect.url":"jdbc:mysql://127.0.0.1:3306/test",
+ "wds.linkis.jdbc.driver":"com.mysql.jdbc.Driver",
+ "wds.linkis.jdbc.username":"test",
+ "wds.linkis.jdbc.password":"test23"
+ }
+ }
+ },
+ "labels": {
+ "engineType": "jdbc-4",
+ "userCreator": "hadoop-IDE"
+ }
+}
+```
+### 4.3 Engine related data table
+
+`Linkis` manages engines through labels, and the data tables involved are as follows.
+
+```
+linkis_ps_configuration_config_key: the keys and default values of the engine's configuration parameters
+linkis_cg_manager_label: the engine labels, such as jdbc-4
+linkis_ps_configuration_category: the category (directory) association of the engine configuration
+linkis_ps_configuration_config_value: the configuration values displayed for the engine
+linkis_ps_configuration_key_engine_relation: the relationship between configuration keys and the engine
+```
+
+The initial data related to the engine in the table is as follows
+
+```sql
+-- set variable
+SET @JDBC_LABEL="jdbc-4";
+SET @JDBC_ALL=CONCAT('*-*,',@JDBC_LABEL);
+SET @JDBC_IDE=CONCAT('*-IDE,',@JDBC_LABEL);
+
+-- engine label
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @JDBC_ALL, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @JDBC_IDE, 'OPTIONAL', 2, now(), now());
+
+select @label_id := id from linkis_cg_manager_label where `label_value` = @JDBC_IDE;
+insert into linkis_ps_configuration_category (`label_id`, `level`) VALUES (@label_id, 2);
+
+-- configuration key
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.connect.url', 'For example: jdbc:mysql://127.0.0.1:10000', 'jdbc connection address', 'jdbc:mysql://127.0.0.1:10000', 'Regex', '^\\s*jdbc:\\w+://([^:]+)(:\\d+)(/[^\\?]+)?(\\?\\S*)?$', '0', '0', '1', 'Datasource configuration', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.driver', 'For example: com.mysql.jdbc.Driver', 'jdbc connection driver', '', 'None', '', '0', '0', '1', 'User configuration', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.version', 'Value range: jdbc3,jdbc4', 'jdbc version', 'jdbc4', 'OFT', '[\"jdbc3\",\"jdbc4\"]', '0', '0', '1', 'User configuration', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.username', 'username', 'Database connection username', '', 'None', '', '0', '0', '1', 'User configuration', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.password', 'password', 'Database connection password', '', 'None', '', '0', '0', '1', 'User configuration', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.connect.max', 'range: 1-20, unit: piece', 'jdbc engine maximum number of connections', '10', 'NumInterval', '[1,20]', '0', '0', '1', 'Datasource configuration', 'jdbc');
+
+-- key engine relation
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
+(select config.id as `config_key_id`, label.id AS `engine_type_label_id` FROM linkis_ps_configuration_config_key config
+INNER JOIN linkis_cg_manager_label label ON config.engine_conn_type = 'jdbc' and label_value = @JDBC_ALL);
+
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
+(select `relation`.`config_key_id` AS `config_key_id`, '' AS `config_value`, `relation`.`engine_type_label_id` AS `config_label_id` FROM linkis_ps_configuration_key_engine_relation relation
+INNER JOIN linkis_cg_manager_label label ON relation.engine_type_label_id = label.id AND label.label_value = @JDBC_ALL);
+```
diff --git a/versioned_docs/version-1.4.0/engine-usage/openlookeng.md b/versioned_docs/version-1.4.0/engine-usage/openlookeng.md
new file mode 100644
index 00000000000..cfc29676cdd
--- /dev/null
+++ b/versioned_docs/version-1.4.0/engine-usage/openlookeng.md
@@ -0,0 +1,211 @@
+---
+title: OpenLooKeng Engine
+sidebar_position: 8
+---
+
+This article mainly introduces the installation, usage and configuration of the `openLooKeng` engine plugin in `Linkis`.
+
+## 1. Environmental Requirements
+
+### 1.1 Environment Installation
+
+If you wish to deploy the `openLooKeng` engine, you need to prepare a working `openLooKeng` environment.
+
+### 1.2 Service Authentication
+
+```shell
+# Prepare hetu-cli
+wget https://download.openlookeng.io/1.5.0/hetu-cli-1.5.0-executable.jar
+mv hetu-cli-1.5.0-executable.jar hetu-cli
+chmod +x hetu-cli
+
+# link service
+./hetu-cli --server 127.0.0.1:9090 --catalog tpcds --schema default
+
+# Execute query statement
+lk:default> select d_date_sk, d_date_id, d_date, d_month_seq from tpcds.sf1.date_dim order by d_date limit 5;
+
+# Get the following output to represent the service is available
+ d_date_sk | d_date_id | d_date | d_month_seq
+-----------+------------------+------------+-------------
+ 2415022 | AAAAAAAAOKJNECAA | 1900-01-02 | 0
+ 2415023 | AAAAAAAAPKJNECAA | 1900-01-03 | 0
+ 2415024 | AAAAAAAAALJNECAA | 1900-01-04 | 0
+ 2415025 | AAAAAAAABLJNECAA | 1900-01-05 | 0
+ 2415026 | AAAAAAAACLJNECAA | 1900-01-06 | 0
+(5 rows)
+
+Query 20221110_043803_00011_m9gmv, FINISHED, 1 node
+Splits: 33 total, 33 done (100.00%)
+0:00 [73K rows, 0B] [86.8K rows/s, 0B/s]
+```
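+
+Because `openLooKeng` is based on Presto, its coordinator normally also exposes the Presto-style REST endpoint, so a quick reachability check without `hetu-cli` might look like the sketch below; treat the endpoint as an assumption and verify it against your `openLooKeng` version.
+```shell
+# Expect a small JSON document describing the coordinator if the service is reachable
+curl http://127.0.0.1:9090/v1/info
+```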
+
+## 2. Engine plugin installation
+
+### 2.1 Engine plugin preparation (choose one) [non-default engine](./overview.md)
+
+Method 1: Download the engine plug-in package directly
+
+[Linkis Engine Plugin Download](https://linkis.apache.org/zh-CN/blog/2022/04/15/how-to-download-engineconn-plugin)
+
+Method 2: Compile the engine plug-in separately (requires a `maven` environment)
+
+```
+# compile
+cd ${linkis_code_dir}/linkis-engineconn-plugins/openlookeng/
+mvn clean install
+# The compiled engine plug-in package is located in the following directory
+${linkis_code_dir}/linkis-engineconn-plugins/openlookeng/target/out/
+```
+[EngineConnPlugin Engine Plugin Installation](../deployment/install-engineconn.md)
+
+### 2.2 Upload and load engine plugins
+
+Upload the engine plug-in package in 2.1 to the engine directory of the server
+```bash
+${LINKIS_HOME}/lib/linkis-engineplugins
+```
+The directory structure after uploading is as follows
+```
+linkis-engineconn-plugins/
+├── openlookeng
+│ ├── dist
+│ │ └── 1.5.0
+│ │ ├── conf
+│ │ └── lib
+│ └── plugin
+│ └── 1.5.0
+```
+
+### 2.3 Engine refresh
+
+#### 2.3.1 Restart and refresh
+Refresh the engine by restarting the `linkis-cg-linkismanager` service
+```bash
+cd ${LINKIS_HOME}/sbin
+sh linkis-daemon.sh restart cg-linkismanager
+```
+
+#### 2.3.2 Check whether the engine is refreshed successfully
+You can check whether the `last_update_time` of the `linkis_cg_engine_conn_plugin_bml_resources` table in the database is the time when the refresh was triggered.
+
+```sql
+#Login to the `linkis` database
+select * from linkis_cg_engine_conn_plugin_bml_resources;
+```
+
+## 3. Engine usage
+
+### 3.1 Submitting tasks via `Linkis-cli`
+
+```shell
+sh ./bin/linkis-cli -engineType openlookeng-1.5.0 \
+-codeType sql -code 'select * from tpcds.sf1.date_dim;' \
+-submitUser hadoop -proxyUser hadoop \
+-runtimeMap linkis.openlookeng.url=http://127.0.0.1:8080
+```
+
+More `Linkis-Cli` command parameter reference: [Linkis-Cli usage](../user-guide/linkiscli-manual.md)
+
+### 3.2 Submitting tasks through `Linkis SDK`
+
+`Linkis` provides `SDK` of `Java` and `Scala` to submit tasks to `Linkis` server. For details, please refer to [JAVA SDK Manual](../user-guide/sdk-manual.md).
+For `openLooKeng` tasks you only need to modify the `EngineConnType` and `CodeType` parameters in the `Demo`:
+
+```java
+Map<String, Object> labels = new HashMap<>();
+labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "openlookeng-1.5.0"); // required engineType Label
+labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// required execute user and creator
+labels.put(LabelKeyConstant.CODE_TYPE_KEY, "sql"); // required codeType
+```
+
+## 4. Engine configuration instructions
+
+### 4.1 Default Configuration Description
+| Configuration | Default | Required | Description |
+| ------------------------ | ------------------- | ---| ------------------------------------------- |
+| linkis.openlookeng.url | http://127.0.0.1:8080 | Yes | Connection address |
+| linkis.openlookeng.catalog | system | Yes | Catalog |
+| linkis.openlookeng.source | global | No | Source |
+
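+For illustration, the catalog and source defaults above can be overridden at submission time via `-runtimeMap`, as in the sketch below; the query assumes the `tpcds` catalog used in section 1.2.
+```shell
+sh ./bin/linkis-cli -engineType openlookeng-1.5.0 \
+-codeType sql -code 'select * from sf1.date_dim limit 5;' \
+-submitUser hadoop -proxyUser hadoop \
+-runtimeMap linkis.openlookeng.url=http://127.0.0.1:9090 \
+-runtimeMap linkis.openlookeng.catalog=tpcds
+```
+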
+### 4.2 Configuration modification
+If the default parameters do not meet your needs, the following ways are available to configure some basic parameters
+
+#### 4.2.1 Management console configuration
+
+![](./images/openlookeng-config.png)
+
+Note: After modifying the configuration under the IDE label, you need to specify -creator IDE to take effect (other labels are similar), such as:
+
+```shell
+sh ./bin/linkis-cli -creator IDE \
+-engineType openlookeng-1.5.0 -codeType sql \
+-code 'select * from tpcds.sf1.date_dim;' \
+-submitUser hadoop -proxyUser hadoop
+```
+
+#### 4.2.2 Task interface configuration
+When submitting a task through the task interface, configure it via the parameter `params.configuration.runtime`.
+
+Example of http request parameters:
+```json
+{
+ "executionContent": {"code": "select * from tpcds.sf1.date_dim;", "runType": "sql"},
+ "params": {
+ "variable": {},
+ "configuration": {
+ "runtime": {
+ "linkis.openlookeng.url":"http://127.0.0.1:9090"
+ }
+ }
+ },
+ "labels": {
+ "engineType": "openlookeng-1.5.0",
+ "userCreator": "hadoop-IDE"
+ }
+}
+```
+
+### 4.3 Engine related data table
+
+`Linkis` manages engines through labels, and the data tables involved are as follows.
+
+```
+linkis_ps_configuration_config_key: the keys and default values of the engine's configuration parameters
+linkis_cg_manager_label: the engine labels, such as openlookeng-1.5.0
+linkis_ps_configuration_category: the category (directory) association of the engine configuration
+linkis_ps_configuration_config_value: the configuration values displayed for the engine
+linkis_ps_configuration_key_engine_relation: the relationship between configuration keys and the engine
+```
+
+The initial data related to the engine in the table is as follows
+
+```sql
+-- set variable
+SET @OPENLOOKENG_LABEL="openlookeng-1.5.0";
+SET @OPENLOOKENG_ALL=CONCAT('*-*,',@OPENLOOKENG_LABEL);
+SET @OPENLOOKENG_IDE=CONCAT('*-IDE,',@OPENLOOKENG_LABEL);
+
+-- engine label
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @OPENLOOKENG_ALL, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @OPENLOOKENG_IDE, 'OPTIONAL', 2, now(), now());
+
+select @label_id := id from linkis_cg_manager_label where `label_value` = @OPENLOOKENG_IDE;
+insert into linkis_ps_configuration_category (`label_id`, `level`) VALUES (@label_id, 2);
+
+-- configuration key
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.openlookeng.url', 'eg: http://127.0.0.1:8080', 'connection address', 'http://127.0.0.1:8080', 'Regex', '^\\s*http://([^:]+)(:\\d+)(/[^\\?]+)?(\\?\\S*)?$', 'openlookeng', 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.openlookeng.catalog', 'catalog', 'catalog', 'system', 'None', '', 'openlookeng', 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.openlookeng.source', 'source', 'source', 'global', 'None', '', 'openlookeng', 0, 0, 1, 'data source conf');
+
+-- key engine relation
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
+(select config.id as `config_key_id`, label.id AS `engine_type_label_id` FROM linkis_ps_configuration_config_key config
+INNER JOIN linkis_cg_manager_label label ON config.engine_conn_type = 'openlookeng' and label_value = @OPENLOOKENG_ALL);
+
+-- openlookeng default configuration
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
+(select `relation`.`config_key_id` AS `config_key_id`, '' AS `config_value`, `relation`.`engine_type_label_id` AS `config_label_id` FROM linkis_ps_configuration_key_engine_relation relation
+INNER JOIN linkis_cg_manager_label label ON relation.engine_type_label_id = label.id AND label.label_value = @OPENLOOKENG_ALL);
+
+```
diff --git a/versioned_docs/version-1.4.0/engine-usage/overview.md b/versioned_docs/version-1.4.0/engine-usage/overview.md
new file mode 100644
index 00000000000..0ba409a4d46
--- /dev/null
+++ b/versioned_docs/version-1.4.0/engine-usage/overview.md
@@ -0,0 +1,27 @@
+---
+title: Overview
+sidebar_position: 0
+---
+## 1 Overview
+As a powerful computing middleware, Linkis can easily interface with different computing engines. By shielding the usage details of each computing engine, Linkis provides a unified user interface upwards,
+which greatly reduces the cost of deploying and operating a big data platform based on Linkis. At present, Linkis has integrated several mainstream computing engines, which basically cover production data requirements.
+To provide better scalability, Linkis also offers interfaces for integrating new engines, which can be used to connect additional computing engines.
+
+An engine is a component that provides users with data processing and analysis capabilities. The engines currently integrated with Linkis include mainstream big data computing engines such as Spark, Hive, and Presto, as well as engines with scripting capabilities such as Python and Shell.
+DataSphereStudio is a one-stop data application platform built on Linkis. In DataSphereStudio, users can easily use the engines supported by Linkis to complete interactive data analysis tasks and workflow tasks.
+
+Supported engines and version information are as follows:
+
+| Engine | Default Engine | Default Version |
+|--------------| -- | ---- |
+| [Spark](./spark.md) | Yes | 3.2.1 |
+| [Hive](./hive.md) | Yes | 3.1.3 |
+| [Python](./python.md) | Yes | python2 |
+| [Shell](./shell.md) | Yes | 1 |
+| [JDBC](./jdbc.md) | No | 4 |
+| [Flink](./flink.md) | No | 1.12.2 |
+| [openLooKeng](./openlookeng.md) | No | 1.5.0 |
+| [Pipeline](./pipeline.md) | No | 1 |
+| [Presto](./presto.md) | No | 0.234 |
+| [Sqoop](./sqoop.md) | No | 1.4.6 |
+| [Elasticsearch](./elasticsearch.md) | No | 7.6.2 |
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/engine-usage/pipeline.md b/versioned_docs/version-1.4.0/engine-usage/pipeline.md
new file mode 100644
index 00000000000..b1fdbbcf556
--- /dev/null
+++ b/versioned_docs/version-1.4.0/engine-usage/pipeline.md
@@ -0,0 +1,175 @@
+---
+title: Pipeline Engine
+sidebar_position: 10
+---
+The `Pipeline` engine is mainly used to import and export files. This article mainly introduces the installation, use and configuration of the `Pipeline` engine plugin in `Linkis`.
+
+## 1. Engine plugin installation
+
+### 1.1 Engine plugin preparation (choose one) [non-default engine](./overview.md)
+
+Method 1: Download the engine plug-in package directly
+
+[Linkis Engine Plugin Download](https://linkis.apache.org/zh-CN/blog/2022/04/15/how-to-download-engineconn-plugin)
+
+Method 2: Compile the engine plug-in separately (maven environment is required)
+
+```
+# compile
+cd ${linkis_code_dir}/linkis-engineconn-plugins/pipeline/
+mvn clean install
+# The compiled engine plug-in package is located in the following directory
+${linkis_code_dir}/linkis-engineconn-plugins/pipeline/target/out/
+```
+[EngineConnPlugin engine plugin installation](../deployment/install-engineconn.md)
+
+### 1.2 Uploading and loading of engine plugins
+
+Upload the engine plug-in package in 1.1 to the engine directory of the server
+```bash
+${LINKIS_HOME}/lib/linkis-engineplugins
+```
+The directory structure after uploading is as follows
+```
+linkis-engineconn-plugins/
+├── pipeline
+│ ├── dist
+│ │ └── 1
+│ │ ├── conf
+│ │ └── lib
+│ └── plugin
+│ └── 1
+```
+### 1.3 Engine refresh
+
+#### 1.3.1 Restart and refresh
+Refresh the engine by restarting the `linkis-cg-linkismanager` service
+```bash
+cd ${LINKIS_HOME}/sbin
+sh linkis-daemon.sh restart cg-linkismanager
+```
+
+#### 1.3.2 Check whether the engine is refreshed successfully
+You can check whether the `last_update_time` of the `linkis_cg_engine_conn_plugin_bml_resources` table in the database is the time when the refresh was triggered.
+
+```sql
+#Log in to the linkis database
+select * from linkis_cg_engine_conn_plugin_bml_resources;
+```
+
+## 2 Engine usage
+
+Because the `pipeline` engine is mainly used to import and export files, we take importing a file from A to B as the introductory example.
+
+### 2.1 Submit tasks through `Linkis-cli`
+
+```shell
+sh bin/linkis-cli -submitUser hadoop \
+-engineType pipeline-1 -codeType pipeline \
+-code "from hdfs:///000/000/000/A.dolphin to file:///000/000/000/B.csv"
+```
+The statement `from hdfs:///000/000/000/A.dolphin to file:///000/000/000/B.csv` specifies the source and destination of the copy: the result set file `A.dolphin` on `hdfs` is exported to the local file `B.csv`.
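+
+A hedged sketch of the opposite direction (importing a local file into `HDFS`), assuming the `from ... to ...` syntax is symmetric; please verify this against the pipeline engine of your version before relying on it:
+
+```shell
+# hedged sketch: import a local csv file into HDFS with the pipeline engine
+sh bin/linkis-cli -submitUser hadoop \
+-engineType pipeline-1 -codeType pipeline \
+-code "from file:///000/000/000/B.csv to hdfs:///000/000/000/B.csv"
+```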
+
+More `Linkis-Cli` command parameter reference: [Linkis-Cli usage](../user-guide/linkiscli-manual.md)
+
+## 3. Engine configuration instructions
+
+### 3.1 Default configuration description
+
+| Configuration | Default | Required | Description |
+| ----------------------------------------- | ------- | -------- | ---------------------------------------------- |
+| pipeline.output.mold | csv | No | Result set export type |
+| pipeline.field.split | , | No | CSV separator |
+| pipeline.output.charset | gbk | No | Result set export character set |
+| pipeline.output.isoverwrite | true | No | Whether to overwrite the target file |
+| wds.linkis.rm.instance | 3 | No | Maximum number of concurrent pipeline engines |
+| pipeline.output.shuffle.null.type | NULL | No | Null value replacement |
+| wds.linkis.engineconn.java.driver.memory | 2g | No | Pipeline engine initialization memory size |
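+
+To try one of these values for a single submission, the sketch below assumes that, as with the `Presto` engine described in this documentation, runtime parameters can be passed from `Linkis-cli` with `-runtimeMap`:
+
+```shell
+# hedged sketch: override the result set export character set for one submission
+sh bin/linkis-cli -submitUser hadoop \
+-engineType pipeline-1 -codeType pipeline \
+-runtimeMap pipeline.output.charset=utf-8 \
+-code "from hdfs:///000/000/000/A.dolphin to file:///000/000/000/B.csv"
+```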
+
+### 3.2 Configuration modification
+If the default parameters do not meet your needs, you can adjust the basic parameters in the following ways
+
+#### 3.2.1 Management console configuration
+
+![](./images/pipeline-conf.png)
+
+Note: After modifying the configuration under the `IDE` tag, you need to specify `-creator IDE` to take effect (other tags are similar), such as:
+
+```shell
+sh bin/linkis-cli -creator IDE \
+-submitUser hadoop \
+-engineType pipeline-1 \
+-codeType pipeline \
+-code "from hdfs:///000/000/000/A.dolphin to file:///000/000/000/B.csv"
+```
+
+#### 3.2.2 Task interface configuration
+When submitting a task through the task REST interface, configure it via the parameter `params.configuration.runtime`
+
+```shell
+Example of http request parameters
+{
+ "executionContent": {"code": "from hdfs:///000/000/000/A.dolphin to file:///000/000/000/B.csv", "runType": "pipeline"},
+ "params": {
+ "variable": {},
+ "configuration": {
+ "runtime": {
+ "pipeline.output.mold":"csv",
+ "pipeline.output.charset":"gbk"
+ }
+ }
+ },
+ "labels": {
+ "engineType": "pipeline-1",
+ "userCreator": "hadoop-IDE"
+ }
+}
+```
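+
+A sketch of posting such a payload with `curl` is shown below; the gateway address, port, `entrance` submit path, and cookie handling are assumptions based on a typical deployment, so check the task REST API documentation of your own installation:
+
+```shell
+# hedged sketch: save the JSON above as pipeline-task.json (hypothetical file name)
+# and submit it through the gateway; address, port and session cookie are placeholders
+curl -X POST "http://127.0.0.1:9001/api/rest_j/v1/entrance/submit" \
+  -H "Content-Type: application/json" \
+  -H "Cookie: ${LINKIS_SESSION_COOKIE}" \
+  -d @pipeline-task.json
+```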
+
+### 3.3 Engine-related data tables
+
+Engine configurations in `Linkis` are managed through engine labels; the related data tables are listed below.
+
+```
+linkis_ps_configuration_config_key: key and default values of configuration parameters inserted into the engine
+linkis_cg_manager_label: insert engine label such as: pipeline-1
+linkis_ps_configuration_category: The directory association relationship of the insertion engine
+linkis_ps_configuration_config_value: Insert the configuration that the engine needs to display
+linkis_ps_configuration_key_engine_relation: The relationship between the configuration item and the engine
+```
+
+The initial data related to the engine in the table is as follows
+
+```sql
+-- set variable
+SET @PIPELINE_LABEL="pipeline-1";
+SET @PIPELINE_ALL=CONCAT('*-*,',@PIPELINE_LABEL);
+SET @PIPELINE_IDE=CONCAT('*-IDE,',@PIPELINE_LABEL);
+
+-- engine label
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @PIPELINE_ALL, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @PIPELINE_IDE, 'OPTIONAL', 2, now(), now());
+
+select @label_id := id from linkis_cg_manager_label where `label_value` = @PIPELINE_IDE;
+insert into linkis_ps_configuration_category (`label_id`, `level`) VALUES (@label_id, 2);
+
+-- configuration key
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('pipeline.output.mold', 'Value range: csv or excel', 'Result set export type', 'csv', 'OFT', '[\"csv\",\"excel\"]', '0', '0', '1', 'pipeline engine settings', 'pipeline');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('pipeline.field.split', 'Value range: , or \\t', 'CSV separator', ',', 'OFT', '[\",\",\"\\\\t\"]', '0', '0', '1', 'pipeline engine settings', 'pipeline');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('pipeline.output.charset', 'Value range: utf-8 or gbk', 'Result set export character set', 'gbk', 'OFT', '[\"utf-8\",\"gbk\"]', '0', '0', '1', 'pipeline engine settings', 'pipeline');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('pipeline.output.isoverwrite', 'Value range: true or false', 'Whether to overwrite', 'true', 'OFT', '[\"true\",\"false\"]', '0', '0', '1', 'pipeline engine settings', 'pipeline');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.rm.instance', 'Value range: 1-3, unit: instances', 'Maximum number of concurrent pipeline engines', '3', 'NumInterval', '[1,3]', '0', '0', '1', 'pipeline engine settings', 'pipeline');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.engineconn.java.driver.memory', 'Value range: 1-10, unit: G', 'Pipeline engine initialization memory size', '2g', 'Regex', '^([1-9]|10)(G|g)$', '0', '0', '1', 'pipeline resource settings', 'pipeline');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('pipeline.output.shuffle.null.type', 'Value range: NULL or BLANK', 'Null value replacement', 'NULL', 'OFT', '[\"NULL\",\"BLANK\"]', '0', '0', '1', 'pipeline engine settings', 'pipeline');
+
+-- key engine relation
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
+(select config.id as `config_key_id`, label.id AS `engine_type_label_id` FROM linkis_ps_configuration_config_key config
+INNER JOIN linkis_cg_manager_label label ON config.engine_conn_type = 'pipeline' and label_value = @PIPELINE_ALL);
+
+-- engine default configuration
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
+(select `relation`.`config_key_id` AS `config_key_id`, '' AS `config_value`, `relation`.`engine_type_label_id` AS `config_label_id` FROM linkis_ps_configuration_key_engine_relation relation
+INNER JOIN linkis_cg_manager_label label ON relation.engine_type_label_id = label.id AND label.label_value = @PIPELINE_ALL);
+
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/engine-usage/presto.md b/versioned_docs/version-1.4.0/engine-usage/presto.md
new file mode 100644
index 00000000000..ac2959d9def
--- /dev/null
+++ b/versioned_docs/version-1.4.0/engine-usage/presto.md
@@ -0,0 +1,231 @@
+---
+title: Presto Engine
+sidebar_position: 11
+---
+
+This article mainly introduces the installation, usage and configuration of the `Presto` engine plugin in `Linkis`.
+
+
+## 1. Preliminary work
+
+### 1.1 Engine installation
+
+If you want to use the `Presto` engine on your `Linkis` service, you need to install the `Presto` service and make sure the service is available.
+
+### 1.2 Service Authentication
+
+```shell
+# prepare presto-cli
+wget https://repo1.maven.org/maven2/com/facebook/presto/presto-cli/0.234/presto-cli-0.234-executable.jar
+mv presto-cli-0.234-executable.jar presto-cli
+chmod +x presto-cli
+
+# execute task
+./presto-cli --server localhost:8082 --execute 'show tables from system.jdbc'
+
+# Get the following output to indicate that the service is available
+"attributes"
+"catalogs"
+"columns"
+"procedure_columns"
+"procedures"
+"pseudo_columns"
+"schemas"
+"super_tables"
+"super_types"
+"table_types"
+"tables"
+"types"
+"udts"
+```
+
+## 2. Engine plugin deployment
+
+### 2.1 Engine plugin preparation (choose one) [non-default engine](./overview.md)
+
+Method 1: Download the engine plug-in package directly
+
+[Linkis Engine Plugin Download](https://linkis.apache.org/zh-CN/blog/2022/04/15/how-to-download-engineconn-plugin)
+
+Method 2: Compile the engine plug-in separately (maven environment is required)
+
+```
+# compile
+cd ${linkis_code_dir}/linkis-engineconn-plugins/presto/
+mvn clean install
+# The compiled engine plug-in package is located in the following directory
+${linkis_code_dir}/linkis-engineconn-plugins/presto/target/out/
+```
+[EngineConnPlugin Engine Plugin Installation](../deployment/install-engineconn.md)
+
+### 2.2 Upload and load engine plugins
+
+Upload the engine package in 2.1 to the engine directory of the server
+```bash
+${LINKIS_HOME}/lib/linkis-engineplugins
+```
+The directory structure after uploading is as follows
+```
+linkis-engineconn-plugins/
+├── presto
+│ ├── dist
+│ │ └── 0.234
+│ │ ├── conf
+│ │ └── lib
+│ └── plugin
+│ └── 0.234
+```
+
+### 2.3 Engine refresh
+
+#### 2.3.1 Restart and refresh
+Refresh the engine by restarting the `linkis-cg-linkismanager` service
+```bash
+cd ${LINKIS_HOME}/sbin
+sh linkis-daemon.sh restart cg-linkismanager
+```
+
+#### 2.3.2 Check if the engine is refreshed successfully
+You can check whether the `last_update_time` of the `linkis_engine_conn_plugin_bml_resources` table in the database is the time to trigger the refresh.
+
+```sql
+-- log in to the linkis database
+select * from linkis_cg_engine_conn_plugin_bml_resources;
+```
+
+## 3 The use of the engine
+
+### 3.1 Submit tasks through `Linkis-cli`
+
+```shell
+ sh ./bin/linkis-cli -engineType presto-0.234 \
+ -codeType psql -code 'show tables;' \
+ -submitUser hadoop -proxyUser hadoop
+```
+
+If the parameters have not been configured through the management console, task interface, or configuration file (see 4.2 for the configuration methods), they can be passed through the `-runtimeMap` option of the `Linkis-cli` client
+
+```shell
+sh ./bin/linkis-cli -engineType presto-0.234 \
+-codeType psql -code 'show tables;' \
+-runtimeMap wds.linkis.presto.url=http://127.0.0.1:8080 \
+-runtimeMap wds.linkis.presto.catalog=hive \
+-runtimeMap wds.linkis.presto.schema=default \
+-submitUser hadoop -proxyUser hadoop
+```
+
+More `Linkis-Cli` command parameter reference: [Linkis-Cli usage](../user-guide/linkiscli-manual.md)
+
+## 4. Engine configuration instructions
+
+### 4.1 Default Configuration Description
+
+| Configuration | Default | Description | Required |
+| -------------------------------------- | --------------------- | ---------------------------------------------- | -------- |
+| wds.linkis.presto.url | http://127.0.0.1:8080 | Presto cluster connection address | true |
+| wds.linkis.presto.username | default | Presto cluster username | false |
+| wds.linkis.presto.password | none | Presto cluster password | false |
+| wds.linkis.presto.catalog | system | Catalog to query | true |
+| wds.linkis.presto.schema | None | Schema to query | true |
+| wds.linkis.presto.source | global | Source used for the query | false |
+| presto.session.query_max_total_memory | 8GB | Maximum total memory a query may use | false |
+| wds.linkis.presto.http.connectTimeout | 60 | Presto client connect timeout (unit: seconds) | false |
+| wds.linkis.presto.http.readTimeout | 60 | Presto client read timeout (unit: seconds) | false |
+| wds.linkis.engineconn.concurrent.limit | 100 | Maximum number of concurrent Presto engines | false |
+
+### 4.2 Configuration modification
+
+If the default parameters do not meet your needs, you can adjust the basic parameters in the following ways
+
+#### 4.2.1 Management console configuration
+
+![](./images/presto-console.png)
+
+Note: After modifying the configuration under the `IDE` tag, you need to specify `-creator IDE` to take effect (other tags are similar), such as:
+
+```shell
+sh ./bin/linkis-cli -creator IDE \
+-engineType presto-0.234 -codeType psql \
+-code 'show tables;' \
+-submitUser hadoop -proxyUser hadoop
+```
+
+#### 4.2.2 Task interface configuration
+When submitting a task through the task REST interface, configure it via the parameter `params.configuration.runtime`
+
+```shell
+Example of http request parameters
+{
+ "executionContent": {"code": "show teblas;", "runType": "psql"},
+ "params": {
+ "variable": {},
+ "configuration": {
+ "runtime": {
+ "wds.linkis.presto.url":"http://127.0.0.1:9090",
+ "wds.linkis.presto.catalog ":"hive",
+ "wds.linkis.presto.schema ":"default",
+ "wds.linkis.presto.source ":""
+ }
+ }
+ },
+ "source": {"scriptPath": "file:///mnt/bdp/hadoop/1.sql"},
+ "labels": {
+ "engineType": "presto-0.234",
+ "userCreator": "hadoop-IDE"
+ }
+}
+```
+
+#### 4.2.3 File Configuration
+Configure by modifying the `linkis-engineconn.properties` file in the directory `install path/lib/linkis-engineconn-plugins/presto/dist/0.234/conf/`, as shown below:
+
+![](./images/presto-file.png)
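+
+In case the screenshot is unavailable, the sketch below shows what such an edit could look like; the values are placeholders and the keys come from the table in 4.1:
+
+```shell
+# hedged sketch: append presto connection settings to the engine's properties file
+cd ${LINKIS_HOME}/lib/linkis-engineconn-plugins/presto/dist/0.234/conf/
+cat >> linkis-engineconn.properties <<'EOF'
+wds.linkis.presto.url=http://127.0.0.1:8080
+wds.linkis.presto.catalog=hive
+wds.linkis.presto.schema=default
+EOF
+```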
+
+### 4.3 Engine-related data tables
+
+Engine configurations in `Linkis` are managed through engine labels; the related data tables are listed below.
+
+```
+linkis_ps_configuration_config_key: key and default values of configuration parameters inserted into the engine
+linkis_cg_manager_label: Insert engine label such as: presto-0.234
+linkis_ps_configuration_category: The directory association relationship of the insertion engine
+linkis_ps_configuration_config_value: Insert the configuration that the engine needs to display
+linkis_ps_configuration_key_engine_relation: The relationship between the configuration item and the engine
+```
+
+The initial data related to the engine in the table is as follows
+
+
+```sql
+-- set variable
+SET @PRESTO_LABEL="presto-0.234";
+SET @PRESTO_ALL=CONCAT('*-*,',@PRESTO_LABEL);
+SET @PRESTO_IDE=CONCAT('*-IDE,',@PRESTO_LABEL);
+SET @PRESTO_NAME="presto";
+
+-- engine label
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType',@PRESTO_ALL, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType',@PRESTO_IDE, 'OPTIONAL', 2, now(), now());
+
+select @label_id := id from `linkis_cg_manager_label` where `label_value` = @PRESTO_IDE;
+insert into `linkis_ps_configuration_category` (`label_id`, `level`) VALUES (@label_id, 2);
+
+-- configuration key
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('wds.linkis.presto.url', 'Presto cluster connection', 'presto connection address', 'http://127.0.0.1:8080', 'None', NULL, @PRESTO_NAME, 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('wds.linkis.presto.catalog', 'Catalog for the query', 'catalog of the presto connection', 'hive', 'None', NULL, @PRESTO_NAME, 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('wds.linkis.presto.schema', 'Schema for the query', 'schema of the database connection', '', 'None', NULL, @PRESTO_NAME, 0, 0, 1, 'data source conf');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('wds.linkis.presto.source', 'Source used for the query', 'source of the database connection', '', 'None', NULL, @PRESTO_NAME, 0, 0, 1, 'data source conf');
+
+-- key engine relation
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
+(select config.id as `config_key_id`, label.id AS `engine_type_label_id` FROM linkis_ps_configuration_config_key config
+INNER JOIN linkis_cg_manager_label label ON config.engine_conn_type = @PRESTO_NAME and label_value = @PRESTO_ALL);
+
+-- engine default configuration
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
+(select `relation`.`config_key_id` AS `config_key_id`, '' AS `config_value`, `relation`.`engine_type_label_id` AS `config_label_id` FROM linkis_ps_configuration_key_engine_relation relation
+INNER JOIN linkis_cg_manager_label label ON relation.engine_type_label_id = label.id AND label.label_value = @PRESTO_ALL);
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/engine-usage/python.md b/versioned_docs/version-1.4.0/engine-usage/python.md
new file mode 100644
index 00000000000..905ad56ad67
--- /dev/null
+++ b/versioned_docs/version-1.4.0/engine-usage/python.md
@@ -0,0 +1,182 @@
+---
+title: Python Engine
+sidebar_position: 5
+---
+
+This article mainly introduces the installation, use and configuration of the `Python` engine plugin in `Linkis`.
+
+## 1. Preliminary work
+### 1.1 Environment Installation
+
+If you want to use the `python` engine on your server, you need to ensure that the user's `PATH` has the `python` execution directory and execution permissions.
+
+### 1.2 Environment verification
+```
+python --version
+```
+Normal output of `Python` version information means `Python` environment is available
+```
+Python 3.6.0
+```
+
+## 2. Engine plugin installation [default engine](./overview.md)
+
+The binary installation package released by `linkis` includes the `Python` engine plug-in by default, and users do not need to install it additionally.
+
+[EngineConnPlugin Engine Plugin Installation](../deployment/install-engineconn.md)
+
+## 3. Engine usage
+
+### 3.1 Submitting tasks via `Linkis-cli`
+
+```shell
+sh ./bin/linkis-cli -engineType python-python2 \
+-codeType python -code "print(\"hello\")" \
+-submitUser hadoop -proxyUser hadoop
+```
+More `Linkis-Cli` command parameter reference: [Linkis-Cli usage](../user-guide/linkiscli-manual.md)
+
+### 3.2 Submit tasks through `Linkis SDK`
+
+`Linkis` provides `Java` and `Scala` `SDK`s to submit tasks to the `Linkis` server. For details, please refer to the [JAVA SDK Manual](../user-guide/sdk-manual.md). For `Python` tasks, you only need to modify the `EngineConnType` and `CodeType` parameters.
+
+```java
+Map labels = new HashMap();
+labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "python-python2"); // required engineType Label
+labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// required execute user and creator
+labels.put(LabelKeyConstant.CODE_TYPE_KEY, "python"); // required codeType
+```
+
+## 4. Engine configuration instructions
+
+### 4.1 Configuration modification
+The `Python` engine plug-in supports both `python2` and `python3`. Switching the `Python` version only requires a configuration change; there is no need to recompile the engine. The `Python` engine supports a variety of configuration methods, described below.
+
+#### 4.1.1 Explicit specification via command parameters (effective only for the current command)
+
+```shell
+# Method 1: switch the version when submitting with linkis-cli by adding python.version=python3 at the end of the command (python3: the name of the file created for the soft link, which can be customized)
+sh ./bin/linkis-cli -engineType python-python2 \
+-codeType python -code "print(\"hello\")" \
+-submitUser hadoop -proxyUser hadoop \
+-confMap python.version=python3
+
+# Method 2: switch the version when submitting with linkis-cli by adding the full path python.version=/usr/bin/python (/usr/bin/python: the path of the file created for the soft link)
+sh ./bin/linkis-cli -engineType python-python2 \
+-codeType python -code "print(\"hello\")" \
+-submitUser hadoop -proxyUser hadoop \
+-confMap python.version=/usr/bin/python
+
+```
+
+#### 4.1.2 Management console configuration
+
+![](./images/python-config.png)
+
+Note: After modifying the configuration under the IDE tag, you need to specify `-creator IDE` to take effect (other tags are similar), such as:
+
+```shell
+sh ./bin/linkis-cli -creator IDE -engineType \
+python-python2 -codeType python -code "print(\"hello\")" \
+-submitUser hadoop -proxyUser hadoop \
+-confMap python.version=python3
+```
+
+#### 4.1.3 Task interface configuration
+When submitting a task through the task REST interface, configure it via the parameter `params.configuration.runtime`
+
+```shell
+Example of http request parameters
+{
+ "executionContent": {"code": "print(\"hello\")", "runType": "python"},
+ "params": {
+ "variable": {},
+ "configuration": {
+ "runtime": {
+ "python.version":"python2",
+ "wds.linkis.engineconn.max.free.time":"1h"
+ }
+ }
+ },
+ "labels": {
+ "engineType": "python-python2",
+ "userCreator": "IDE"
+ }
+}
+```
+
+#### 4.1.4 File configuration
+Configure by modifying the `linkis-engineconn.properties` file in the directory `${LINKIS_HOME}/lib/linkis-engineconn-plugins/python/dist/python2/conf/`, as shown below:
+
+![](./images/python-conf.png)
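+
+A sketch of the same change made directly in the file; the value is illustrative and `python.version` is the key described in the engine's configuration data below:
+
+```shell
+# hedged sketch: switch the engine's default python version in the properties file
+cd ${LINKIS_HOME}/lib/linkis-engineconn-plugins/python/dist/python2/conf/
+cat >> linkis-engineconn.properties <<'EOF'
+python.version=python3
+EOF
+```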
+
+### 4.3 Engine-related data tables
+
+Engine configurations in `Linkis` are managed through engine labels; the related data tables are listed below.
+
+```
+linkis_ps_configuration_config_key: Insert the key and default values of the configuration parameters of the engine
+linkis_cg_manager_label: Insert engine label such as: python-python2
+linkis_ps_configuration_category: Insert the directory association of the engine
+linkis_ps_configuration_config_value: The configuration that the insertion engine needs to display
+linkis_ps_configuration_key_engine_relation: The relationship between the configuration item and the engine
+```
+
+The initial data related to the engine in the table is as follows
+
+```sql
+-- set variable
+SET @PYTHON_LABEL="python-python2";
+SET @PYTHON_ALL=CONCAT('*-*,',@PYTHON_LABEL);
+SET @PYTHON_IDE=CONCAT('*-IDE,',@PYTHON_LABEL);
+
+-- engine label
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @PYTHON_ALL, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @PYTHON_IDE, 'OPTIONAL', 2, now(), now());
+
+select @label_id := id from linkis_cg_manager_label where `label_value` = @PYTHON_IDE;
+insert into linkis_ps_configuration_category (`label_id`, `level`) VALUES (@label_id, 2);
+
+-- configuration key
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.rm.client.memory.max', 'Value range: 1-100, unit: G', 'Python driver memory upper limit', '20G', 'Regex', '^([1-9]\\d{0,1}|100)(G|g)$', '0', '0', '1', 'queue resource', 'python');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.rm.client.core.max', 'Value range: 1-128, unit: cores', 'Python driver core number upper limit', '10', 'Regex', '^(?:[1-9]\\d?|[1234]\\d{2}|128)$', '0', '0', '1', 'queue resource', 'python');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.rm.instance', 'Value range: 1-20, unit: instances', 'Maximum number of concurrent Python engines', '10', 'NumInterval', '[1,20]', '0', '0', '1', 'queue resource', 'python');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.engineconn.java.driver.memory', 'Value range: 1-2, unit: G', 'Python engine initialization memory size', '1g', 'Regex', '^([1-2])(G|g)$', '0', '0', '1', 'python engine settings', 'python');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('python.version', 'Value range: python2,python3', 'Python version', 'python2', 'OFT', '[\"python3\",\"python2\"]', '0', '0', '1', 'python engine settings', 'python');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.engineconn.max.free.time', 'Value range: 3m,15m,30m,1h,2h', 'Engine idle exit time', '1h', 'OFT', '[\"1h\",\"2h\",\"30m\",\"15m\",\"3m\"]', '0', '0', '1', 'python engine settings', 'python');
+
+-- key engine relation
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
+(select config.id as `config_key_id`, label.id AS `engine_type_label_id` FROM linkis_ps_configuration_config_key config
+INNER JOIN linkis_cg_manager_label label ON config.engine_conn_type = 'python' and label_value = @PYTHON_ALL);
+
+-- engine default configuration
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
+(select `relation`.`config_key_id` AS `config_key_id`, '' AS `config_value`, `relation`.`engine_type_label_id` AS `config_label_id` FROM linkis_ps_configuration_key_engine_relation relation
+INNER JOIN linkis_cg_manager_label label ON relation.engine_type_label_id = label.id AND label.label_value = @PYTHON_ALL);
+```
+
+
+### 4.4 Other `python` code demos
+
+```python
+import pandas as pd
+
+data = {'name': ['aaaaaa', 'bbbbbb', 'cccccc'], 'pay': [4000, 5000, 6000]}
+frame = pd.DataFrame(data)
+show.show(frame)
+
+
+print('new result')
+
+from matplotlib import pyplot as plt
+
+x=[4,8,10]
+y=[12,16,6]
+x2=[6,9,11]
+y2=[6,15,7]
+plt.bar(x,y,color='r',align='center')
+plt.bar(x2,y2,color='g',align='center')
+plt.show()
+
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/engine-usage/seatunnel.md b/versioned_docs/version-1.4.0/engine-usage/seatunnel.md
new file mode 100644
index 00000000000..4de3b20b4c9
--- /dev/null
+++ b/versioned_docs/version-1.4.0/engine-usage/seatunnel.md
@@ -0,0 +1,254 @@
+---
+title: Seatunnel Engine
+sidebar_position: 14
+---
+
+This article mainly introduces the installation, usage and configuration of the `Seatunnel` engine plugin in `Linkis`.
+
+## 1. Pre-work
+
+### 1.1 Engine installation
+
+If you want to use the `Seatunnel` engine on your `Linkis` service, you need to install the `Seatunnel` engine. Moreover, `Seatunnel` depends on a `Spark` or `Flink` environment, so before using the `linkis-seatunnel` engine it is strongly recommended to verify that `Seatunnel` runs correctly in your local environment.
+
+`Seatunnel 2.1.2` download address: https://dlcdn.apache.org/seatunnel/2.1.2/apache-seatunnel-incubating-2.1.2-bin.tar.gz
+
+| Environment variable name | Environment variable content | Required |
+|---------------------------|------------------------------|----------------------------------------|
+| JAVA_HOME | JDK installation path | Required |
+| SEATUNNEL_HOME | Seatunnel installation path | Required |
+| SPARK_HOME | Spark installation path | Required when Seatunnel runs on Spark |
+| FLINK_HOME | Flink installation path | Required when Seatunnel runs on Flink |
+
+Table 1-1 Environment configuration list
+
+| Linkis variable name | Variable content | Required |
+| ---------------------------------------- | --------------------------- | -------- |
+| wds.linkis.engine.seatunnel.plugin.home | Seatunnel installation path | Yes |
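+
+Before starting the engine, you may want to set and check these variables for the deployment user; the sketch below uses placeholder paths:
+
+```shell
+# hedged sketch: export the variables from Table 1-1 (all paths are placeholders)
+export JAVA_HOME=/usr/local/jdk1.8.0
+export SEATUNNEL_HOME=/opt/seatunnel/apache-seatunnel-incubating-2.1.2
+export SPARK_HOME=/opt/spark   # needed when Seatunnel runs on Spark
+export FLINK_HOME=/opt/flink   # needed when Seatunnel runs on Flink
+```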
+
+### 1.2 Engine Environment Verification
+
+Take the execution of `Spark` tasks as an example
+
+```shell
+cd $SEATUNNEL_HOME
+./bin/start-seatunnel-spark.sh --master local[4] --deploy-mode client --config ./config/spark.batch.conf.template
+```
+The output is as follows:
+
+![](./images/check-seatunnel.png)
+
+## 2. Engine plugin deployment
+
+### 2.1 Engine plugin preparation (choose one) [non-default engine](./overview.md)
+
+Method 1: Download the engine plug-in package directly
+
+[Linkis Engine Plugin Download](https://linkis.apache.org/zh-CN/blog/2022/04/15/how-to-download-engineconn-plugin)
+
+Method 2: Compile the engine plug-in separately (requires `maven` environment)
+
+```
+# compile
+cd ${linkis_code_dir}/linkis-engineconn-plugins/seatunnel/
+mvn clean install
+# The compiled engine plug-in package is located in the following directory
+${linkis_code_dir}/linkis-engineconn-plugins/seatunnel/target/out/
+```
+[EngineConnPlugin Engine Plugin Installation](../deployment/install-engineconn.md)
+
+### 2.2 Upload and load engine plugins
+
+Upload the engine package in 2.1 to the engine directory of the server
+```bash
+${LINKIS_HOME}/lib/linkis-engineplugins
+```
+The directory structure after uploading is as follows
+```
+linkis-engineconn-plugins/
+├── seatunnel
+│ ├── dist
+│ │ └── 2.1.2
+│ │ ├── conf
+│ │ └── lib
+│ └── plugin
+│ └── 2.1.2
+```
+
+### 2.3 Engine refresh
+
+#### 2.3.1 Restart and refresh
+Refresh the engine by restarting the `linkis-cg-linkismanager` service
+```bash
+cd ${LINKIS_HOME}/sbin
+sh linkis-daemon.sh restart cg-linkismanager
+```
+
+#### 2.3.2 Check whether the engine is refreshed successfully
+You can check whether the `last_update_time` of the `linkis_engine_conn_plugin_bml_resources` table in the database is the time to trigger the refresh.
+
+```sql
+-- log in to the linkis database
+select * from linkis_cg_engine_conn_plugin_bml_resources;
+```
+
+## 3. Engine usage
+
+### 3.1 Submit tasks through `Linkis-cli`
+
+
+```shell
+sh ./bin/linkis-cli --mode once -code 'test' -engineType seatunnel-2.1.2 -codeType sspark -labelMap userCreator=hadoop-seatunnel -labelMap engineConnMode=once -jobContentMap code='env {
+ spark.app.name = "SeaTunnel"
+ spark.executor.instances = 2
+ spark.executor.cores = 1
+ spark.executor.memory = "1g"
+ }
+ source {
+ Fake {
+ result_table_name = "my_dataset"
+ }
+ }
+ transform {}
+ sink {Console {}}' -jobContentMap master=local[4] -jobContentMap deploy-mode=client -sourceMap jobName=OnceJobTest -submitUser hadoop -proxyUser hadoop
+```
+
+### 3.2 Submit tasks through OnceEngineConn
+
+`OnceEngineConn` calls the `createEngineConn` interface of `LinkisManager` through `LinkisManagerClient` and sends the code to the created `Seatunnel` engine, which then starts executing it. Using the client is straightforward: create a new maven project, or add the following dependency to your project.
+
+```xml
+<dependency>
+    <groupId>org.apache.linkis</groupId>
+    <artifactId>linkis-computation-client</artifactId>
+    <version>${linkis.version}</version>
+</dependency>
+```
+
+**Example Code**
+```java
+package org.apache.linkis.computation.client;
+import org.apache.linkis.common.conf.Configuration;
+import org.apache.linkis.computation.client.once.simple.SubmittableSimpleOnceJob;
+import org.apache.linkis.computation.client.utils.LabelKeyUtils;
+public class SeatunnelOnceJobTest {
+ public static void main(String[] args) {
+ LinkisJobClient.config().setDefaultServerUrl("http://ip:9001");
+ String code =
+ "\n"
+ + "env {\n"
+ + " spark.app.name = \"SeaTunnel\"\n"
+ + "spark.executor.instances = 2\n"
+ + "spark.executor.cores = 1\n"
+ + " spark.executor.memory = \"1g\"\n"
+ + "}\n"
+ + "\n"
+ + "source {\n"
+ + "Fake {\n"
+ + " result_table_name = \"my_dataset\"\n"
+ + " }\n"
+ + "\n"
+ + "}\n"
+ + "\n"
+ + "transform {\n"
+ + "}\n"
+ + "\n"
+ + "sink {\n"
+ + " Console {}\n"
+ + "}";
+ SubmittableSimpleOnceJob onceJob =
+ LinkisJobClient.once()
+ .simple()
+ .builder()
+ .setCreateService("seatunnel-Test")
+ .setMaxSubmitTime(300000)
+ .addLabel(LabelKeyUtils.ENGINE_TYPE_LABEL_KEY(), "seatunnel-2.1.2")
+ .addLabel(LabelKeyUtils.USER_CREATOR_LABEL_KEY(), "hadoop-seatunnel")
+ .addLabel(LabelKeyUtils.ENGINE_CONN_MODE_LABEL_KEY(), "once")
+ .addStartupParam(Configuration.IS_TEST_MODE().key(), true)
+ .addExecuteUser("hadoop")
+ .addJobContent("runType", "sspark")
+ .addJobContent("code", code)
+ .addJobContent("master", "local[4]")
+ .addJobContent("deploy-mode", "client")
+ .addSource("jobName", "OnceJobTest")
+ .build();
+ onceJob.submit();
+ System.out.println(onceJob.getId());
+ onceJob.waitForCompleted();
+ System.out.println(onceJob.getStatus());
+ LinkisJobMetrics jobMetrics = onceJob.getJobMetrics();
+ System.out.println(jobMetrics.getMetrics());
+ }
+}
+```
+## 4. Engine configuration instructions
+
+### 4.1 Default Configuration Description
+
+| Configuration | Default | Description | Required |
+| ---------------------------------------- | --------------------- | --------------------------- | -------- |
+| wds.linkis.engine.seatunnel.plugin.home | /opt/linkis/seatunnel | Seatunnel installation path | true |
+### 4.2 Configuration modification
+
+If the default parameters do not meet your needs, you can adjust the basic parameters in the following ways
+
+#### 4.2.1 Client Configuration Parameters
+
+```shell
+sh ./bin/linkis-cli --mode once -code 'test' \
+-engineType seatunnel-2.1.2 -codeType sspark \
+-labelMap userCreator=hadoop-seatunnel -labelMap engineConnMode=once \
+-jobContentMap code='env {
+ spark.app.name = "SeaTunnel"
+ spark.executor.instances = 2
+ spark.executor.cores = 1
+ spark.executor.memory = "1g"
+ }
+ source {
+ Fake {
+ result_table_name = "my_dataset"
+ }
+ }
+ transform {}
+ sink {Console {}}' -jobContentMap master=local[4] \
+ -jobContentMap deploy-mode=client \
+ -sourceMap jobName=OnceJobTest \
+ -runtimeMap wds.linkis.engine.seatunnel.plugin.home=/opt/linkis/seatunnel \
+ -submitUser hadoop -proxyUser hadoop
+```
+
+#### 4.2.2 Task interface configuration
+When submitting a task through the task REST interface, configure it via the parameter `params.configuration.runtime`
+
+```shell
+Example of http request parameters
+{
+ "executionContent": {"code": 'env {
+ spark.app.name = "SeaTunnel"
+ spark.executor.instances = 2
+ spark.executor.cores = 1
+ spark.executor.memory = "1g"
+ }
+ source {
+ Fake {
+ result_table_name = "my_dataset"
+ }
+ }
+ transform {}
+ sink {Console {}}',
+ "runType": "sql"},
+ "params": {
+ "variable": {},
+ "configuration": {
+ "runtime": {
+ "wds.linkis.engine.seatunnel.plugin.home":"/opt/linkis/seatunnel"
+ }
+ }
+ },
+ "labels": {
+ "engineType": "seatunnel-2.1.2",
+ "userCreator": "hadoop-IDE"
+ }
+}
+```
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/engine-usage/shell.md b/versioned_docs/version-1.4.0/engine-usage/shell.md
new file mode 100644
index 00000000000..4fe69970e9d
--- /dev/null
+++ b/versioned_docs/version-1.4.0/engine-usage/shell.md
@@ -0,0 +1,55 @@
+---
+title: Shell Engine
+sidebar_position: 6
+---
+
+This article mainly introduces the installation, usage and configuration of the `Shell` engine plug-in in `Linkis`.
+
+## 1. Preliminary work
+
+### 1.1 Environment installation
+If you want to use the `shell` engine on your server, you need to ensure that the user's `PATH` has the executable directory and execution permission of `bash`.
+
+### 1.2 Environment verification
+```
+echo $SHELL
+```
+The following information is output to indicate that the shell environment is available
+```
+/bin/bash
+```
+or
+```
+/bin/sh
+```
+
+## 2. Engine plugin installation [default engine](./overview.md)
+
+The `Shell` engine plugin is included in the binary installation package released by `linkis` by default, and users do not need to install it additionally.
+
+[EngineConnPlugin engine plugin installation](../deployment/install-engineconn.md)
+
+## 3. Engine usage
+
+### 3.1 Submit tasks through `Linkis-cli`
+
+```shell
+sh ./bin/linkis-cli -engineType shell-1 \
+-codeType shell -code "echo \"hello\" " \
+-submitUser hadoop -proxyUser hadoop
+```
+More `Linkis-Cli` command parameter reference: [Linkis-Cli usage](../user-guide/linkiscli-manual.md)
+
+### 3.2 Submit tasks through Linkis SDK
+
+`Linkis` provides `SDK` for `Java` and `Scala` to submit tasks to the `Linkis` server. For details, please refer to [JAVA SDK Manual](../user-guide/sdk-manual.md). For the `Shell` task you only need to modify the `EngineConnType` and `CodeType` parameters in the `Demo`:
+
+```java
+Map labels = new HashMap();
+labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "shell-1"); // required engineType Label
+labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// required execute user and creator
+labels.put(LabelKeyConstant.CODE_TYPE_KEY, "shell"); // required codeType
+```
+## 4. Engine configuration instructions
+
+For the `shell` engine, you generally only need to adjust the maximum memory of the engine `JVM`.
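+
+As an illustration, the sketch below assumes that the `wds.linkis.engineconn.java.driver.memory` property used by the other engines in this documentation also applies to the shell engine; treat it as a sketch rather than a guaranteed setting:
+
+```shell
+# hedged sketch: pass the engine JVM memory as a startup parameter via -confMap
+sh ./bin/linkis-cli -engineType shell-1 \
+-codeType shell -code "echo \"hello\"" \
+-submitUser hadoop -proxyUser hadoop \
+-confMap wds.linkis.engineconn.java.driver.memory=2g
+```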
\ No newline at end of file
diff --git a/versioned_docs/version-1.4.0/engine-usage/spark.md b/versioned_docs/version-1.4.0/engine-usage/spark.md
new file mode 100644
index 00000000000..2c9c90b56a0
--- /dev/null
+++ b/versioned_docs/version-1.4.0/engine-usage/spark.md
@@ -0,0 +1,289 @@
+---
+title: Spark Engine
+sidebar_position: 1
+---
+
+This article mainly introduces the installation, use and configuration of the `Spark` engine plugin in `Linkis`.
+
+## 1. Preliminary work
+### 1.1 Engine installation
+
+If you wish to use the `spark` engine on your server, you need to ensure that the following environment variables are set correctly and that the engine's starting user has these environment variables.
+
+It is strongly recommended that you check these environment variables for the executing user before executing a `spark` job.
+
+| Environment variable name | Environment variable content | Remarks |
+|---------------------------|------------------------------|------------------------------------------------------------------|
+| JAVA_HOME | JDK installation path | Required |
+| HADOOP_HOME | Hadoop installation path | Required |
+| HADOOP_CONF_DIR | Hadoop configuration path | Required |
+| HIVE_CONF_DIR | Hive configuration path | Required |
+| SPARK_HOME | Spark installation path | Required |
+| SPARK_CONF_DIR | Spark configuration path | Required |
+| python | python | It is recommended to use anaconda's python as the default python |
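+
+One quick way to check them for the executing user is sketched below; `hadoop` stands for your executing user, and a login shell via `sudo -u ... bash -l` is only one possible way to inspect the environment:
+
+```shell
+# hedged sketch: check that the required variables are visible in the executing user's login shell
+sudo -u hadoop bash -l -c "env | grep -E 'JAVA_HOME|HADOOP_HOME|HADOOP_CONF_DIR|HIVE_CONF_DIR|SPARK_HOME|SPARK_CONF_DIR'"
+```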
+
+### 1.2 Environment verification
+Verify that `Spark` is successfully installed by `pyspark`
+```
+pyspark
+
+#After entering the pyspark virtual environment, the spark logo appears, indicating that the environment is successfully installed
+Welcome to
+ ______
+ /__/__ ___ _____/ /__
+ _\ \/ _ \/ _ `/ __/ '_/
+ /__ / .__/\_,_/_/ /_/\_\ version 3.2.1
+ /_/
+
+Using Python version 2.7.13 (default, Sep 30 2017 18:12:43)
+SparkSession available as 'spark'.
+```
+
+## 2. Engine plugin installation [default engine](./overview.md)
+
+The `Spark` engine plugin is included in the binary installation package released by `linkis` by default, and users do not need to install it additionally.
+
+In theory, `Linkis` supports all versions of `spark` 2.x and above. The default supported version is `Spark 3.2.1`. If you want to use another version of `spark`, such as `spark 2.1.0`, you only need to change the `spark` version in the `maven` configuration of the `linkis-engineplugin-spark` module to 2.1.0 and then compile this module separately.
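+
+The corresponding compile step is sketched below, assuming the module layout matches the other engine plugins described in this documentation:
+
+```shell
+# hedged sketch: recompile only the spark engine plugin after changing its spark version
+cd ${linkis_code_dir}/linkis-engineconn-plugins/spark/
+mvn clean install
+# the compiled plugin package should then be under
+# ${linkis_code_dir}/linkis-engineconn-plugins/spark/target/out/
+```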
+
+[EngineConnPlugin engine plugin installation](../deployment/install-engineconn.md)
+
+## 3. Using the `spark` engine
+
+### 3.1 Submitting tasks via `Linkis-cli`
+
+```shell
+# codeType correspondence py-->pyspark sql-->sparkSQL scala-->Spark scala
+sh ./bin/linkis-cli -engineType spark-3.2.1 -codeType sql -code "show databases" -submitUser hadoop -proxyUser hadoop
+
+# You can specify the yarn queue in the submission parameter by -confMap wds.linkis.yarnqueue=dws
+sh ./bin/linkis-cli -engineType spark-3.2.1 -codeType sql -confMap wds.linkis.yarnqueue=dws -code "show databases" -submitUser hadoop -proxyUser hadoop
+```
+More `Linkis-Cli` command parameter reference: [Linkis-Cli usage](../user-guide/linkiscli-manual.md)
+
+### 3.2 Submitting tasks through `Linkis SDK`
+
+`Linkis` provides `SDK` of `Java` and `Scala` to submit tasks to `Linkis` server. For details, please refer to [JAVA SDK Manual](../user-guide/sdk-manual.md).
+For `Spark` tasks you only need to modify the `EngineConnType` and `CodeType` parameters in `Demo`:
+
+```java
+Map labels = new HashMap