diff --git a/AudioQnA/kubernetes/intel/README.md b/AudioQnA/kubernetes/intel/README.md
index b68907642..27948ed8b 100644
--- a/AudioQnA/kubernetes/intel/README.md
+++ b/AudioQnA/kubernetes/intel/README.md
@@ -7,14 +7,14 @@
## Deploy On Xeon
```
-cd GenAIExamples/AudioQnA/kubernetes/manifests/xeon
+cd GenAIExamples/AudioQnA/kubernetes/intel/cpu/xeon/manifests
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" audioqna.yaml
kubectl apply -f audioqna.yaml
```
## Deploy On Gaudi
```
-cd GenAIExamples/AudioQnA/kubernetes/manifests/gaudi
+cd GenAIExamples/AudioQnA/kubernetes/intel/hpu/gaudi/manifests
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" audioqna.yaml
kubectl apply -f audioqna.yaml
diff --git a/ChatQnA/README.md b/ChatQnA/README.md
index 3d72e0161..258e12674 100644
--- a/ChatQnA/README.md
+++ b/ChatQnA/README.md
@@ -123,7 +123,7 @@ Currently we support two ways of deploying ChatQnA services with docker compose:
docker pull opea/chatqna-conversation-ui:latest
```
-2. Using the docker images `built from source`: [Guide](docker/xeon/README.md)
+2. Using the docker images `built from source`: [Guide](docker_compose/intel/cpu/xeon/README.md)
-> Note: The **opea/chatqna-without-rerank:latest** docker image has not been published yet, users need to build this docker image from source.
+> Note: The **opea/chatqna-without-rerank:latest** docker image has not been published yet; users need to build it from source.
@@ -139,7 +139,7 @@ By default, the embedding, reranking and LLM models are set to a default value a
-Change the `xxx_MODEL_ID` in `docker/xxx/set_env.sh` for your needs.
+Change the `xxx_MODEL_ID` in `docker_compose/xxx/set_env.sh` for your needs.
-For customers with proxy issues, the models from [ModelScope](https://www.modelscope.cn/models) are also supported in ChatQnA. Refer to [this readme](docker/xeon/README.md) for details.
+For customers with proxy issues, the models from [ModelScope](https://www.modelscope.cn/models) are also supported in ChatQnA. Refer to [this readme](docker_compose/intel/cpu/xeon/README.md) for details.
### Setup Environment Variable
@@ -202,11 +202,11 @@ Refer to the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) for more in
### Deploy ChatQnA on NVIDIA GPU
```bash
-cd GenAIExamples/ChatQnA/docker/gpu/
+cd GenAIExamples/ChatQnA/docker_compose/nvidia/gpu/
docker compose up -d
```
-Refer to the [NVIDIA GPU Guide](./docker/gpu/README.md) for more instructions on building docker images from source.
+Refer to the [NVIDIA GPU Guide](./docker_compose/nvidia/gpu/README.md) for more instructions on building docker images from source.
### Deploy ChatQnA into Kubernetes on Xeon & Gaudi with GMC
@@ -214,7 +214,7 @@ Refer to the [Kubernetes Guide](./kubernetes/intel/README_gmc.md) for instructio
### Deploy ChatQnA into Kubernetes on Xeon & Gaudi without GMC
-Refer to the [Kubernetes Guide](./kubernetes/kubernetes/intel/README.md) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi without GMC.
+Refer to the [Kubernetes Guide](./kubernetes/intel/README.md) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi without GMC.
### Deploy ChatQnA into Kubernetes using Helm Chart
@@ -224,7 +224,7 @@ Refer to the [ChatQnA helm chart](https://github.com/opea-project/GenAIInfra/tre
### Deploy ChatQnA on AI PC
-Refer to the [AI PC Guide](./docker/aipc/README.md) for instructions on deploying ChatQnA on AI PC.
+Refer to the [AI PC Guide](./docker_compose/intel/cpu/aipc/README.md) for instructions on deploying ChatQnA on AI PC.
### Deploy ChatQnA on Red Hat OpenShift Container Platform (RHOCP)
diff --git a/ChatQnA/docker_compose/intel/cpu/aipc/README.md b/ChatQnA/docker_compose/intel/cpu/aipc/README.md
index e8fd2b0b3..3c28d1c10 100644
--- a/ChatQnA/docker_compose/intel/cpu/aipc/README.md
+++ b/ChatQnA/docker_compose/intel/cpu/aipc/README.md
@@ -159,7 +159,7 @@ Note: Please replace with `host_ip` with you external IP address, do not use loc
> Before running the docker compose command, you need to be in the folder that has the docker compose yaml file
```bash
-cd GenAIExamples/ChatQnA/docker/aipc/
+cd GenAIExamples/ChatQnA/docker_compose/intel/cpu/aipc/
docker compose up -d
-# let ollama service runs
+# let the ollama service run
diff --git a/ChatQnA/docker_compose/intel/cpu/xeon/README.md b/ChatQnA/docker_compose/intel/cpu/xeon/README.md
index ee60f7143..3096a5e65 100644
--- a/ChatQnA/docker_compose/intel/cpu/xeon/README.md
+++ b/ChatQnA/docker_compose/intel/cpu/xeon/README.md
@@ -147,7 +147,7 @@ cd ..
Build frontend Docker image via below command:
```bash
-cd GenAIExamples/ChatQnA/
+cd GenAIExamples/ChatQnA/ui
docker build --no-cache -t opea/chatqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
cd ../../../..
```
diff --git a/ChatQnA/docker_compose/intel/cpu/xeon/README_qdrant.md b/ChatQnA/docker_compose/intel/cpu/xeon/README_qdrant.md
index 6a738dce8..9eb23ecc5 100644
--- a/ChatQnA/docker_compose/intel/cpu/xeon/README_qdrant.md
+++ b/ChatQnA/docker_compose/intel/cpu/xeon/README_qdrant.md
@@ -85,7 +85,7 @@ docker build --no-cache -t opea/retriever-qdrant:latest --build-arg https_proxy=
### 3. Build Rerank Image
```bash
-docker build --no-cache -t opea/reranking-tei:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/reranks/tei/Dockerfile .`
+docker build --no-cache -t opea/reranking-tei:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/reranks/tei/Dockerfile .
```
### 4. Build LLM Image
@@ -117,7 +117,7 @@ cd ../../..
Build frontend Docker image via below command:
```bash
-cd GenAIExamples/ChatQnA/
+cd GenAIExamples/ChatQnA/ui
docker build --no-cache -t opea/chatqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
cd ../../../..
```
diff --git a/ChatQnA/docker_compose/intel/hpu/gaudi/README.md b/ChatQnA/docker_compose/intel/hpu/gaudi/README.md
index af854e847..26b34a8e9 100644
--- a/ChatQnA/docker_compose/intel/hpu/gaudi/README.md
+++ b/ChatQnA/docker_compose/intel/hpu/gaudi/README.md
@@ -128,7 +128,7 @@ cd ../..
Construct the frontend Docker image using the command below:
```bash
-cd GenAIExamples/ChatQnA/
+cd GenAIExamples/ChatQnA/ui
docker build --no-cache -t opea/chatqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
cd ../../../..
```
@@ -150,7 +150,7 @@ cd ../../../..
To fortify AI initiatives in production, Guardrails microservice can secure model inputs and outputs, building Trustworthy, Safe, and Secure LLM-based Applications.
```bash
-cd GenAIExamples/ChatQnA/docker
+cd GenAIComps
docker build -t opea/guardrails-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/guardrails/llama_guard/langchain/Dockerfile .
-cd ../../..
+cd ..
```
diff --git a/ChatQnA/docker_compose/nvidia/gpu/README.md b/ChatQnA/docker_compose/nvidia/gpu/README.md
index f5285869e..17b7dfd5e 100644
--- a/ChatQnA/docker_compose/nvidia/gpu/README.md
+++ b/ChatQnA/docker_compose/nvidia/gpu/README.md
@@ -59,7 +59,7 @@ cd ../../..
Construct the frontend Docker image using the command below:
```bash
-cd GenAIExamples/ChatQnA/
+cd GenAIExamples/ChatQnA/ui
docker build --no-cache -t opea/chatqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
cd ../../../..
```
@@ -132,7 +132,7 @@ Note: Please replace with `host_ip` with you external IP address, do **NOT** use
### Start all the services Docker Containers
```bash
-cd GenAIExamples/ChatQnA/docker/gpu/
+cd GenAIExamples/ChatQnA/docker_compose/nvidia/gpu/
docker compose up -d
```
diff --git a/ChatQnA/kubernetes/intel/README.md b/ChatQnA/kubernetes/intel/README.md
index cb4ff7e12..86dde2c54 100644
--- a/ChatQnA/kubernetes/intel/README.md
+++ b/ChatQnA/kubernetes/intel/README.md
@@ -11,7 +11,7 @@
## Deploy On Xeon
```
-cd GenAIExamples/ChatQnA/kubernetes/manifests/xeon
+cd GenAIExamples/ChatQnA/kubernetes/intel/cpu/xeon/manifests
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" chatqna.yaml
kubectl apply -f chatqna.yaml
@@ -20,7 +20,7 @@ kubectl apply -f chatqna.yaml
## Deploy On Gaudi
```
-cd GenAIExamples/ChatQnA/kubernetes/manifests/gaudi
+cd GenAIExamples/ChatQnA/kubernetes/intel/hpu/gaudi/manifests
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" chatqna.yaml
kubectl apply -f chatqna.yaml
diff --git a/ChatQnA/tests/test_manifest_on_gaudi.sh b/ChatQnA/tests/test_manifest_on_gaudi.sh
index 6eb6f9b9c..a012981d7 100755
--- a/ChatQnA/tests/test_manifest_on_gaudi.sh
+++ b/ChatQnA/tests/test_manifest_on_gaudi.sh
@@ -166,7 +166,7 @@ case "$1" in
if [ $ret -ne 0 ]; then
exit $ret
fi
- pushd ChatQnA/kubernetes/manifests/gaudi
+ pushd ChatQnA/kubernetes/intel/hpu/gaudi/manifests
set +e
install_and_validate_chatqna_guardrail
popd
diff --git a/ChatQnA/tests/test_manifest_on_xeon.sh b/ChatQnA/tests/test_manifest_on_xeon.sh
index e51addad2..fbdb25e16 100755
--- a/ChatQnA/tests/test_manifest_on_xeon.sh
+++ b/ChatQnA/tests/test_manifest_on_xeon.sh
@@ -166,7 +166,7 @@ case "$1" in
if [ $ret -ne 0 ]; then
exit $ret
fi
- pushd ChatQnA/kubernetes/manifests/xeon
+ pushd ChatQnA/kubernetes/intel/cpu/xeon/manifests
set +e
install_and_validate_chatqna_guardrail
popd
diff --git a/CodeGen/kubernetes/intel/README.md b/CodeGen/kubernetes/intel/README.md
index 9a4383983..be18003b8 100644
--- a/CodeGen/kubernetes/intel/README.md
+++ b/CodeGen/kubernetes/intel/README.md
@@ -12,7 +12,7 @@
## Deploy On Xeon
```
-cd GenAIExamples/CodeGen/kubernetes/manifests/xeon
+cd GenAIExamples/CodeGen/kubernetes/intel/cpu/xeon/manifests
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
export MODEL_ID="meta-llama/CodeLlama-7b-hf"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" codegen.yaml
@@ -23,7 +23,7 @@ kubectl apply -f codegen.yaml
## Deploy On Gaudi
```
-cd GenAIExamples/CodeGen/kubernetes/manifests/gaudi
+cd GenAIExamples/CodeGen/kubernetes/intel/hpu/gaudi/manifests
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" codegen.yaml
kubectl apply -f codegen.yaml
diff --git a/CodeGen/kubernetes/intel/cpu/xeon/manifest/README_react_ui.md b/CodeGen/kubernetes/intel/cpu/xeon/manifest/README_react_ui.md
index 01ed0becf..c9d2295be 100644
--- a/CodeGen/kubernetes/intel/cpu/xeon/manifest/README_react_ui.md
+++ b/CodeGen/kubernetes/intel/cpu/xeon/manifest/README_react_ui.md
@@ -17,7 +17,7 @@ Before deploying the react-codegen.yaml file, ensure that you have the following
```
# You may set the HUGGINGFACEHUB_API_TOKEN via method:
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
- cd GenAIExamples/CodeGen/kubernetes/manifests/xeon/ui/
+ cd GenAIExamples/CodeGen/kubernetes/intel/cpu/xeon/manifests/ui/
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" react-codegen.yaml
```
b. Set the proxies based on your network configuration
diff --git a/CodeTrans/kubernetes/intel/README.md b/CodeTrans/kubernetes/intel/README.md
index 5edc148cb..9d6e63f8b 100644
--- a/CodeTrans/kubernetes/intel/README.md
+++ b/CodeTrans/kubernetes/intel/README.md
@@ -21,7 +21,7 @@ Change the `MODEL_ID` in `codetrans.yaml` for your needs.
## Deploy On Xeon
```bash
-cd GenAIExamples/CodeTrans/kubernetes/manifests/xeon
+cd GenAIExamples/CodeTrans/kubernetes/intel/cpu/xeon/manifests
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" codetrans.yaml
kubectl apply -f codetrans.yaml
@@ -30,7 +30,7 @@ kubectl apply -f codetrans.yaml
## Deploy On Gaudi
```bash
-cd GenAIExamples/CodeTrans/kubernetes/manifests/gaudi
+cd GenAIExamples/CodeTrans/kubernetes/intel/hpu/gaudi/manifests
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" codetrans.yaml
kubectl apply -f codetrans.yaml
diff --git a/DocSum/README.md b/DocSum/README.md
index 9fbe6d9fa..540bb2558 100644
--- a/DocSum/README.md
+++ b/DocSum/README.md
@@ -21,7 +21,7 @@ Currently we support two ways of deploying Document Summarization services with
docker pull opea/docsum:latest
```
-2. Start services using the docker images `built from source`: [Guide](./docker)
+2. Start services using the docker images `built from source`: [Guide](./docker_compose)
### Required Models
diff --git a/DocSum/kubernetes/intel/README.md b/DocSum/kubernetes/intel/README.md
index ba0d012f8..dc81ee35e 100644
--- a/DocSum/kubernetes/intel/README.md
+++ b/DocSum/kubernetes/intel/README.md
@@ -11,7 +11,7 @@
## Deploy On Xeon
```
-cd GenAIExamples/DocSum/kubernetes/manifests/xeon
+cd GenAIExamples/DocSum/kubernetes/intel/cpu/xeon/manifests
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" docsum.yaml
kubectl apply -f docsum.yaml
@@ -20,7 +20,7 @@ kubectl apply -f docsum.yaml
## Deploy On Gaudi
```
-cd GenAIExamples/DocSum/kubernetes/manifests/gaudi
+cd GenAIExamples/DocSum/kubernetes/intel/hpu/gaudi/manifests
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" docsum.yaml
kubectl apply -f docsum.yaml
diff --git a/DocSum/kubernetes/intel/cpu/xeon/manifest/ui/README.md b/DocSum/kubernetes/intel/cpu/xeon/manifest/ui/README.md
index a1fffd4b7..de7419bc9 100644
--- a/DocSum/kubernetes/intel/cpu/xeon/manifest/ui/README.md
+++ b/DocSum/kubernetes/intel/cpu/xeon/manifest/ui/README.md
@@ -16,7 +16,7 @@ Before deploying the react-docsum.yaml file, ensure that you have the following
```
# You may set the HUGGINGFACEHUB_API_TOKEN via method:
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
- cd GenAIExamples/DocSum/kubernetes/manifests/xeon/ui/
+ cd GenAIExamples/DocSum/kubernetes/intel/cpu/xeon/manifests/ui/
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" react-docsum.yaml
```
b. Set the proxies based on your network configuration
diff --git a/FaqGen/kubernetes/intel/README.md b/FaqGen/kubernetes/intel/README.md
index 360691b5e..461941b33 100644
--- a/FaqGen/kubernetes/intel/README.md
+++ b/FaqGen/kubernetes/intel/README.md
@@ -17,7 +17,7 @@ If use gated models, you also need to provide [huggingface token](https://huggin
## Deploy On Xeon
```
-cd GenAIExamples/FaqGen/kubernetes/manifests/xeon
+cd GenAIExamples/FaqGen/kubernetes/intel/cpu/xeon/manifests
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" faqgen.yaml
kubectl apply -f faqgen.yaml
@@ -26,7 +26,7 @@ kubectl apply -f faqgen.yaml
## Deploy On Gaudi
```
-cd GenAIExamples/FaqGen/kubernetes/manifests/gaudi
+cd GenAIExamples/FaqGen/kubernetes/intel/hpu/gaudi/manifests
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" faqgen.yaml
kubectl apply -f faqgen.yaml
diff --git a/FaqGen/kubernetes/intel/cpu/xeon/manifest/README_react_ui.md b/FaqGen/kubernetes/intel/cpu/xeon/manifest/README_react_ui.md
index a3817e695..ff768c4ac 100644
--- a/FaqGen/kubernetes/intel/cpu/xeon/manifest/README_react_ui.md
+++ b/FaqGen/kubernetes/intel/cpu/xeon/manifest/README_react_ui.md
@@ -16,7 +16,7 @@ Before deploying the react-faqgen.yaml file, ensure that you have the following
```
# You may set the HUGGINGFACEHUB_API_TOKEN via method:
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
- cd GenAIExamples/FaqGen/kubernetes/manifests/xeon/ui/
+ cd GenAIExamples/FaqGen/kubernetes/intel/cpu/xeon/manifests/ui/
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" react-faqgen.yaml
```
b. Set the proxies based on your network configuration
diff --git a/ProductivitySuite/README.md b/ProductivitySuite/README.md
index 2d3eccea4..f7c2051d9 100644
--- a/ProductivitySuite/README.md
+++ b/ProductivitySuite/README.md
@@ -20,4 +20,4 @@ Refer to the [Keycloak Configuration Guide](./docker_compose/intel/cpu/xeon/keyc
Refer to the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) for more instructions on building docker images from source and running the application via docker compose.
-Refer to the [Xeon Kubernetes Guide](./kubernetes/manifests/README.md) for more instruction on deploying the application via kubernetes.
+Refer to the [Xeon Kubernetes Guide](./kubernetes/intel/README.md) for more instructions on deploying the application via Kubernetes.
diff --git a/ProductivitySuite/kubernetes/intel/README.md b/ProductivitySuite/kubernetes/intel/README.md
index b0ef3ef14..f4e058a71 100644
--- a/ProductivitySuite/kubernetes/intel/README.md
+++ b/ProductivitySuite/kubernetes/intel/README.md
@@ -27,7 +27,7 @@ To begin with, ensure that you have following prerequisites in place:
```
# You may set the HUGGINGFACEHUB_API_TOKEN via method:
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
- cd GenAIExamples/ProductivitySuite/kubernetes/manifests/xeon/
+ cd GenAIExamples/ProductivitySuite/kubernetes/intel/cpu/xeon/manifests/
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" *.yaml
```
@@ -48,7 +48,7 @@ To begin with, ensure that you have following prerequisites in place:
## Deploying ProductivitySuite
You can use yaml files in xeon folder to deploy ProductivitySuite with reactUI.
```
-cd GenAIExamples/ProductivitySuite/kubernetes/manifests/xeon/
+cd GenAIExamples/ProductivitySuite/kubernetes/intel/cpu/xeon/manifests/
kubectl apply -f *.yaml
```
diff --git a/README.md b/README.md
index d89aee874..c44f377d3 100644
--- a/README.md
+++ b/README.md
@@ -37,17 +37,17 @@ Deployment are based on released docker images by default, check [docker image l
#### Deploy Examples
-| Use Case | Docker Compose Deployment on Xeon | Docker Compose Deployment on Gaudi | Kubernetes with GMC | Kubernetes with Manifests | Kubernetes with Helm Charts |
-| ----------- | ------------------------------------------------------------------------ | -------------------------------------------------------------------------- | ------------------------------------------------------------------ | ---------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
-| ChatQnA | [Xeon Instructions](ChatQnA/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](ChatQnA/docker_compose/intel/hpu/gaudi/README.md) | [ChatQnA with GMC](ChatQnA/kubernetes/intel/README_gmc.md) | [ChatQnA with Manifests](ChatQnA/kubernetes/intel/README.md) | [ChatQnA with Helm Charts](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/chatqna/README.md) |
-| CodeGen | [Xeon Instructions](CodeGen/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](CodeGen/docker_compose/intel/hpu/gaudi/README.md) | [CodeGen with GMC](CodeGen/kubernetes/intel/README_gmc.md) | [CodeGen with Manifests](CodeGen/kubernetes/intel/README.md) | [CodeGen with Helm Charts](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codegen/README.md) |
-| CodeTrans | [Xeon Instructions](CodeTrans/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](CodeTrans/docker_compose/intel/hpu/gaudi/README.md) | [CodeTrans with GMC](CodeTrans/kubernetes/intel/README_gmc.md) | [CodeTrans with Manifests](CodeTrans/kubernetes/intel/README.md) | [CodeTrans with Helm Charts](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codetrans/README.md) |
-| DocSum | [Xeon Instructions](DocSum/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](DocSum/docker_compose/intel/hpu/gaudi/README.md) | [DocSum with GMC](DocSum/kubernetes/intel/README_gmc.md) | [DocSum with Manifests](DocSum/kubernetes/intel/README.md) | [DocSum with Helm Charts](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/docsum/README.md) |
-| SearchQnA | [Xeon Instructions](SearchQnA/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](SearchQnA/docker_compose/intel/hpu/gaudi/README.md) | [SearchQnA with GMC](SearchQnA/kubernetes/intel/README_gmc.md) | Not Supported | Not Supported |
-| FaqGen | [Xeon Instructions](FaqGen/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](FaqGen/docker_compose/intel/hpu/gaudi/README.md) | [FaqGen with GMC](FaqGen/kubernetes/intel/README_gmc.md) | [FaqGen with Manifests](FaqGen/kubernetes/intel/README.md) | Not Supported |
-| Translation | [Xeon Instructions](Translation/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](Translation/docker_compose/intel/hpu/gaudi/README.md) | [Translation with GMC](Translation/kubernetes/intel/README_gmc.md) | Not Supported | Not Supported |
-| AudioQnA | [Xeon Instructions](AudioQnA/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](AudioQnA/docker_compose/intel/hpu/gaudi/README.md) | [AudioQnA with GMC](AudioQnA/kubernetes/intel/README_gmc.md) | [AudioQnA with Manifests](AudioQnA/kubernetes/intel/README.md) | Not Supported |
-| VisualQnA | [Xeon Instructions](VisualQnA/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](VisualQnA/docker_compose/intel/hpu/gaudi/README.md) | [VisualQnA with GMC](VisualQnA/kubernetes/intel/README_gmc.md) | [VisualQnA with Manifests](VisualQnA/kubernetes/intel/README.md) | Not Supported |
+| Use Case | Docker Compose Deployment on Xeon | Docker Compose Deployment on Gaudi | Kubernetes with Manifests | Kubernetes with Helm Charts | Kubernetes with GMC |
+| ----------- | ------------------------------------------------------------------------ | -------------------------------------------------------------------------- | ---------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------ |
+| ChatQnA | [Xeon Instructions](ChatQnA/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](ChatQnA/docker_compose/intel/hpu/gaudi/README.md) | [ChatQnA with Manifests](ChatQnA/kubernetes/intel/README.md) | [ChatQnA with Helm Charts](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/chatqna/README.md) | [ChatQnA with GMC](ChatQnA/kubernetes/intel/README_gmc.md) |
+| CodeGen | [Xeon Instructions](CodeGen/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](CodeGen/docker_compose/intel/hpu/gaudi/README.md) | [CodeGen with Manifests](CodeGen/kubernetes/intel/README.md) | [CodeGen with Helm Charts](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codegen/README.md) | [CodeGen with GMC](CodeGen/kubernetes/intel/README_gmc.md) |
+| CodeTrans | [Xeon Instructions](CodeTrans/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](CodeTrans/docker_compose/intel/hpu/gaudi/README.md) | [CodeTrans with Manifests](CodeTrans/kubernetes/intel/README.md) | [CodeTrans with Helm Charts](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codetrans/README.md) | [CodeTrans with GMC](CodeTrans/kubernetes/intel/README_gmc.md) |
+| DocSum | [Xeon Instructions](DocSum/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](DocSum/docker_compose/intel/hpu/gaudi/README.md) | [DocSum with Manifests](DocSum/kubernetes/intel/README.md) | [DocSum with Helm Charts](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/docsum/README.md) | [DocSum with GMC](DocSum/kubernetes/intel/README_gmc.md) |
+| SearchQnA | [Xeon Instructions](SearchQnA/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](SearchQnA/docker_compose/intel/hpu/gaudi/README.md) | Not Supported | Not Supported | [SearchQnA with GMC](SearchQnA/kubernetes/intel/README_gmc.md) |
+| FaqGen | [Xeon Instructions](FaqGen/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](FaqGen/docker_compose/intel/hpu/gaudi/README.md) | [FaqGen with Manifests](FaqGen/kubernetes/intel/README.md) | Not Supported | [FaqGen with GMC](FaqGen/kubernetes/intel/README_gmc.md) |
+| Translation | [Xeon Instructions](Translation/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](Translation/docker_compose/intel/hpu/gaudi/README.md) | Not Supported | Not Supported | [Translation with GMC](Translation/kubernetes/intel/README_gmc.md) |
+| AudioQnA | [Xeon Instructions](AudioQnA/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](AudioQnA/docker_compose/intel/hpu/gaudi/README.md) | [AudioQnA with Manifests](AudioQnA/kubernetes/intel/README.md) | Not Supported | [AudioQnA with GMC](AudioQnA/kubernetes/intel/README_gmc.md) |
+| VisualQnA | [Xeon Instructions](VisualQnA/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](VisualQnA/docker_compose/intel/hpu/gaudi/README.md) | [VisualQnA with Manifests](VisualQnA/kubernetes/intel/README.md) | Not Supported | [VisualQnA with GMC](VisualQnA/kubernetes/intel/README_gmc.md) |
## Supported Examples
diff --git a/RerankFinetuning/README.md b/RerankFinetuning/README.md
index bc89396ca..59b3d2136 100644
--- a/RerankFinetuning/README.md
+++ b/RerankFinetuning/README.md
@@ -6,7 +6,7 @@ Rerank model finetuning is the process of further training rerank model on a dat
### Deploy Rerank Model Finetuning Service on Xeon
-Refer to the [Xeon Guide](./docker/xeon/README.md) for detail.
+Refer to the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) for details.
## Consume Rerank Model Finetuning Service
diff --git a/SearchQnA/README.md b/SearchQnA/README.md
index 18b7ea689..433c46996 100644
--- a/SearchQnA/README.md
+++ b/SearchQnA/README.md
@@ -32,7 +32,7 @@ Currently we support two ways of deploying SearchQnA services with docker compos
docker pull opea/searchqna:latest
```
-2. Start services using the docker images `built from source`: [Guide](./docker)
+2. Start services using the docker images `built from source`: [Guide](./docker_compose)
### Setup Environment Variable
diff --git a/VisualQnA/docker/ui/svelte/src/lib/assets/imageData/images.json b/VisualQnA/docker/ui/svelte/src/lib/assets/imageData/images.json
deleted file mode 100644
index b3097c38d..000000000
--- a/VisualQnA/docker/ui/svelte/src/lib/assets/imageData/images.json
+++ /dev/null
@@ -1,10 +0,0 @@
-[
- {
- "src": "/extreme_ironing.jpg",
- "prompt": "what is unusual about this image?"
- },
- {
- "src": "/waterview.jpg",
- "prompt": "what are the things I should be cautious about when I visit here?"
- }
-]
diff --git a/VisualQnA/kubernetes/intel/README.md b/VisualQnA/kubernetes/intel/README.md
index aa92531f3..a09385bb8 100644
--- a/VisualQnA/kubernetes/intel/README.md
+++ b/VisualQnA/kubernetes/intel/README.md
@@ -8,14 +8,14 @@
## Deploy On Xeon
```
-cd GenAIExamples/visualqna/kubernetes/manifests/xeon
+cd GenAIExamples/VisualQnA/kubernetes/intel/cpu/xeon/manifests
kubectl apply -f visualqna.yaml
```
## Deploy On Gaudi
```
-cd GenAIExamples/visualqna/kubernetes/manifests/gaudi
+cd GenAIExamples/VisualQnA/kubernetes/intel/hpu/gaudi/manifests
kubectl apply -f visualqna.yaml
```
diff --git a/VisualQnA/docker/ui/svelte/src/lib/assets/imageData/extreme_ironing.jpg b/VisualQnA/ui/svelte/src/lib/assets/imageData/extreme_ironing.jpg
similarity index 100%
rename from VisualQnA/docker/ui/svelte/src/lib/assets/imageData/extreme_ironing.jpg
rename to VisualQnA/ui/svelte/src/lib/assets/imageData/extreme_ironing.jpg
diff --git a/VisualQnA/ui/svelte/src/lib/assets/imageData/images.json b/VisualQnA/ui/svelte/src/lib/assets/imageData/images.json
new file mode 100644
index 000000000..c3f383e7a
--- /dev/null
+++ b/VisualQnA/ui/svelte/src/lib/assets/imageData/images.json
@@ -0,0 +1,10 @@
+[
+ {
+ "src": "/extreme_ironing.jpg",
+ "prompt": "what is unusual about this image?"
+ },
+ {
+ "src": "/waterview.jpg",
+ "prompt": "what are the things I should be cautious about when I visit here?"
+ }
+]
diff --git a/VisualQnA/docker/ui/svelte/src/lib/assets/imageData/waterview.jpg b/VisualQnA/ui/svelte/src/lib/assets/imageData/waterview.jpg
similarity index 100%
rename from VisualQnA/docker/ui/svelte/src/lib/assets/imageData/waterview.jpg
rename to VisualQnA/ui/svelte/src/lib/assets/imageData/waterview.jpg