
Commit

remove qdrant url
epicchewy committed Aug 12, 2024
1 parent 1bc0afa commit 75de59c
Showing 7 changed files with 13 additions and 35 deletions.
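To inspect the same change locally (assuming the repository and this commit are fetched), the summary above corresponds to:

```shell
# Show the per-file addition/deletion counts for this commit.
git show --stat 75de59c
```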
charts/llamacloud/Chart.lock (3 changes: 0 additions & 3 deletions)
@@ -11,8 +11,5 @@ dependencies:
- name: redis
repository: oci://registry-1.docker.io/bitnamicharts
version: 19.6.2
-- name: qdrant
-repository: https://qdrant.github.io/qdrant-helm
-version: 0.10.0
digest: sha256:9b42086e6b99c3798bae1face69d1340cb30506583a7b046687dc44a2e793ac1
generated: "2024-07-23T23:45:24.8916-04:00"
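Chart.lock is a generated file; if it is regenerated rather than hand-edited, Helm rewrites it (digest and timestamp included) from Chart.yaml. A minimal sketch of that workflow, assuming the chart lives at `charts/llamacloud` as in this commit:

```shell
# Re-resolve dependencies after removing the qdrant entry from Chart.yaml.
# This regenerates Chart.lock and refreshes the charts/ subdirectory.
helm dependency update charts/llamacloud

# Confirm only the remaining subcharts are listed (postgresql, mongodb, rabbitmq, redis).
helm dependency list charts/llamacloud
```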
charts/llamacloud/Chart.yaml (5 changes: 0 additions & 5 deletions)
@@ -10,7 +10,6 @@ sources:
- https://github.com/bitnami/charts/tree/main/bitnami/mongodb
- https://github.com/bitnami/charts/tree/main/bitnami/rabbitmq
- https://github.com/bitnami/charts/tree/main/bitnami/redis
-- https://github.com/qdrant/qdrant-helm/tree/main/charts/qdrant

dependencies:
- name: postgresql
@@ -29,10 +28,6 @@ dependencies:
version: 19.6.2
repository: oci://registry-1.docker.io/bitnamicharts
condition: redis.enabled
-- name: qdrant
-version: 0.10.0
-repository: https://qdrant.github.io/qdrant-helm
-condition: qdrant.enabled

maintainers:
- name: Jerry Liu
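For context on the block deleted from Chart.yaml above: each optional subchart in this chart follows the same pattern — a `dependencies` entry whose `condition` field points at a boolean in values.yaml — so dropping Qdrant means removing both the entry and its `qdrant.enabled` toggle. A sketch of the pattern, using the redis entry that remains in the diff:

```yaml
# Chart.yaml (excerpt): the `condition` key gates whether Helm installs the subchart.
dependencies:
  - name: redis
    version: 19.6.2
    repository: oci://registry-1.docker.io/bitnamicharts
    condition: redis.enabled   # resolved from values.yaml or --set at install time
```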
charts/llamacloud/README.md (23 changes: 11 additions & 12 deletions)
@@ -123,7 +123,7 @@ For more information about using this chart, feel free to visit the [Official Ll
| ---------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- | ------------------------------- |
| `backend.name` | Name suffix of the Backend related resources | `backend` |
| `backend.config.logLevel` | Log level for the backend | `info` |
-| `backend.config.openAiAPIKey` | (Required) OpenAI API key | `""` |
+| `backend.config.openAiApiKey` | (Required) OpenAI API key | `""` |
| `backend.config.oidc.existingSecretName` | Name of the existing secret to use for OIDC configuration | `""` |
| `backend.config.oidc.discoveryUrl` | OIDC discovery URL | `""` |
| `backend.config.oidc.clientId` | OIDC client ID | `""` |
@@ -292,12 +292,12 @@ For more information about using this chart, feel free to visit the [Official Ll
| ------------------------------------------------------- | --------------------------------------------------------------------------- | ---------------------------------- |
| `llamaParse.name` | Name suffix of the LlamaParse related resources | `llamaparse` |
| `llamaParse.config.maxPdfPages` | Maximum number of pages to parse in a PDF | `1200` |
-| `llamaParse.config.openAiAPIKey` | OpenAI API key | `""` |
+| `llamaParse.config.openAiApiKey` | OpenAI API key | `""` |
| `llamaParse.config.anthropicApiKey` | Anthropic API key | `""` |
| `llamaParse.config.s3UploadBucket` | S3 bucket to upload files to | `llama-platform-file-parsing` |
| `llamaParse.config.s3OutputBucket` | S3 bucket to output files to | `llama-platform-file-parsing` |
| `llamaParse.config.s3OutputBucketTemp` | S3 bucket to output temporary files to | `llama-platform-file-parsing` |
-| `llamaParse.replicas` | Number of replicas of LlamaParse Deployment | `4` |
+| `llamaParse.replicas` | Number of replicas of LlamaParse Deployment | `2` |
| `llamaParse.image.registry` | LlamaParse Image registry | `docker.io` |
| `llamaParse.image.repository` | LlamaParse Image repository | `llamaindex/llamacloud-llamaparse` |
| `llamaParse.image.tag` | LlamaParse Image tag | `1.0.0` |
@@ -319,12 +319,12 @@ For more information about using this chart, feel free to visit the [Official Ll
| `llamaParse.podSecurityContext` | Pod security context | `{}` |
| `llamaParse.securityContext` | Security context for the container | `{}` |
| `llamaParse.resources.requests.memory` | Memory request for the LlamaParse container | `16Gi` |
-| `llamaParse.resources.requests.cpu` | CPU request for the LlamaParse container | `4` |
+| `llamaParse.resources.requests.cpu` | CPU request for the LlamaParse container | `2` |
| `llamaParse.resources.limits.memory` | Memory limit for the LlamaParse container | `20Gi` |
| `llamaParse.resources.limits.cpu` | CPU limit for the LlamaParse container | `10` |
| `llamaParse.autoscaling.enabled` | Enable autoscaling for the LlamaParse Deployment | `true` |
-| `llamaParse.autoscaling.minReplicas` | Minimum number of replicas for the LlamaParse Deployment | `4` |
-| `llamaParse.autoscaling.maxReplicas` | Maximum number of replicas for the LlamaParse Deployment | `20` |
+| `llamaParse.autoscaling.minReplicas` | Minimum number of replicas for the LlamaParse Deployment | `2` |
+| `llamaParse.autoscaling.maxReplicas` | Maximum number of replicas for the LlamaParse Deployment | `10` |
| `llamaParse.autoscaling.targetCPUUtilizationPercentage` | Target CPU utilization percentage for the LlamaParse Deployment | `80` |
| `llamaParse.podDisruptionBudget.enabled` | Enable PodDisruptionBudget for the LlamaParse Deployment | `true` |
| `llamaParse.podDisruptionBudget.maxUnavailable` | Maximum number of unavailable pods | `1` |
@@ -339,7 +339,7 @@ For more information about using this chart, feel free to visit the [Official Ll
| Name | Description | Value |
| ---------------------------------------------------------- | ------------------------------------------------------------------------------------------- | -------------------------------------- |
| `llamaParseOcr.name` | Name suffix of the LlamaParseOcr related resources | `llamaparse-ocr` |
-| `llamaParseOcr.replicas` | Number of replicas of LlamaParseOcr Deployment | `4` |
+| `llamaParseOcr.replicas` | Number of replicas of LlamaParseOcr Deployment | `2` |
| `llamaParseOcr.image.registry` | LlamaParseOcr Image registry | `docker.io` |
| `llamaParseOcr.image.repository` | LlamaParseOcr Image repository | `llamaindex/llamacloud-llamaparse-ocr` |
| `llamaParseOcr.image.tag` | LlamaParseOcr Image tag | `1.0.0` |
@@ -358,9 +358,9 @@ For more information about using this chart, feel free to visit the [Official Ll
| `llamaParseOcr.podSecurityContext` | Pod security context | `{}` |
| `llamaParseOcr.securityContext` | Security context for the container | `{}` |
| `llamaParseOcr.resources.requests.memory` | Memory request for the LlamaParse container | `2Gi` |
-| `llamaParseOcr.resources.requests.cpu` | CPU request for the LlamaParse container | `2` |
+| `llamaParseOcr.resources.requests.cpu` | CPU request for the LlamaParse container | `1` |
| `llamaParseOcr.resources.limits.memory` | Memory limit for the LlamaParse container | `10Gi` |
-| `llamaParseOcr.resources.limits.cpu` | CPU limit for the LlamaParse container | `4` |
+| `llamaParseOcr.resources.limits.cpu` | CPU limit for the LlamaParse container | `2` |
| `llamaParseOcr.livenessProbe.httpGet.path` | Path to hit for the liveness probe | `/health_check` |
| `llamaParseOcr.livenessProbe.httpGet.port` | Port to hit for the liveness probe | `8080` |
| `llamaParseOcr.livenessProbe.httpGet.scheme` | Scheme to use for the liveness probe | `HTTP` |
@@ -378,8 +378,8 @@ For more information about using this chart, feel free to visit the [Official Ll
| `llamaParseOcr.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `3` |
| `llamaParseOcr.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `llamaParseOcr.autoscaling.enabled` | Enable autoscaling for the LlamaParseOcr Deployment | `true` |
-| `llamaParseOcr.autoscaling.minReplicas` | Minimum number of replicas for the LlamaParseOcr Deployment | `4` |
-| `llamaParseOcr.autoscaling.maxReplicas` | Maximum number of replicas for the LlamaParseOcr Deployment | `20` |
+| `llamaParseOcr.autoscaling.minReplicas` | Minimum number of replicas for the LlamaParseOcr Deployment | `2` |
+| `llamaParseOcr.autoscaling.maxReplicas` | Maximum number of replicas for the LlamaParseOcr Deployment | `10` |
| `llamaParseOcr.autoscaling.targetCPUUtilizationPercentage` | Target CPU utilization percentage for the LlamaParseOcr Deployment | `80` |
| `llamaParseOcr.podDisruptionBudget.enabled` | Enable PodDisruptionBudget for the LlamaParseOcr Deployment | `true` |
| `llamaParseOcr.podDisruptionBudget.maxUnavailable` | Maximum number of unavailable pods | `1` |
@@ -493,5 +493,4 @@ For more information about using this chart, feel free to visit the [Official Ll
| `redis.enabled` | Enable Redis | `true` |
| `redis.auth.enabled` | Enable Redis Auth (DO NOT SET TO TRUE) | `false` |
| `rabbitmq.enabled` | Enable RabbitMQ | `true` |
-| `qdrant.enabled` | Enable Qdrant | `true` |
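Since several documented defaults changed in this table (replica counts and autoscaling bounds dropped, and `qdrant.enabled` disappeared), installations that relied on the old sizing should pin it explicitly. A hedged override sketch using only parameters documented above — the file name and values are illustrative:

```yaml
# my-values.yaml (illustrative): pin sizing instead of relying on chart defaults.
backend:
  config:
    openAiApiKey: "<openai-api-key>"   # required; note the corrected camelCase key
llamaParse:
  replicas: 4                          # previous documented default, now 2
  autoscaling:
    enabled: true
    minReplicas: 4                     # previous documented default, now 2
    maxReplicas: 20                    # previous documented default, now 10
    targetCPUUtilizationPercentage: 80
```

Applied with something like `helm upgrade --install <release> charts/llamacloud -f my-values.yaml` (release name assumed).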

charts/llamacloud/templates/backend/secret.yaml (3 changes: 0 additions & 3 deletions)
@@ -13,9 +13,6 @@ data:
AWS_SECRET_ACCESS_KEY: {{ .Values.global.config.awsSecretAccessKey | default "" | b64enc | quote }}
{{- end }}
LC_OPENAI_API_KEY: {{ .Values.backend.config.openAiApiKey | default "" | b64enc | quote }}
-{{- if .Values.qdrant.enabled }}
-QDRANT_CLOUD_URL: {{ printf "http://%s-qdrant:6333" (include "llamacloud.fullname" .) | b64enc | quote }}
-{{- end }}
{{- if .Values.s3proxy.enabled }}
S3_ENDPOINT_URL: {{ printf "http://%s-%s:%d" (include "llamacloud.fullname" .) .Values.s3proxy.name (.Values.s3proxy.service.port | int) | b64enc | quote }}
{{- end }}
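With the conditional block gone, the rendered backend Secret should carry only the remaining keys (`LC_OPENAI_API_KEY`, the optional AWS credentials, and `S3_ENDPOINT_URL` when s3proxy is enabled). One way to sanity-check that locally, with an assumed release name and a dummy key:

```shell
# Render just this Secret and confirm QDRANT_CLOUD_URL is no longer emitted (expect no matches).
helm template llamacloud charts/llamacloud \
  --set backend.config.openAiApiKey=dummy \
  --show-only templates/backend/secret.yaml | grep QDRANT_CLOUD_URL
```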
charts/llamacloud/templates/jobs-service/secret.yaml (3 changes: 0 additions & 3 deletions)
@@ -13,9 +13,6 @@ data:
AWS_SECRET_ACCESS_KEY: {{ .Values.global.config.awsSecretAccessKey | default "" | b64enc | quote }}
{{- end }}
LC_OPENAI_API_KEY: {{ .Values.backend.config.openAiApiKey | default "" | b64enc | quote }}
-{{- if .Values.qdrant.enabled }}
-QDRANT_CLOUD_URL: {{ printf "http://%s-qdrant:6333" (include "llamacloud.fullname" .) | b64enc | quote }}
-{{- end }}
{{- if .Values.s3proxy.enabled }}
S3_ENDPOINT_URL: {{ printf "http://%s-%s:%d" (include "llamacloud.fullname" .) .Values.s3proxy.name (.Values.s3proxy.service.port | int) | b64enc | quote }}
{{- end }}
charts/llamacloud/templates/jobs-worker/secret.yaml (3 changes: 0 additions & 3 deletions)
@@ -13,9 +13,6 @@ data:
AWS_SECRET_ACCESS_KEY: {{ .Values.global.config.awsSecretAccessKey | default "" | b64enc | quote }}
{{- end }}
LC_OPENAI_API_KEY: {{ .Values.backend.config.openAiApiKey | default "" | b64enc | quote }}
-{{- if .Values.qdrant.enabled }}
-QDRANT_CLOUD_URL: {{ printf "http://%s-qdrant:6333" (include "llamacloud.fullname" .) | b64enc | quote }}
-{{- end }}
{{- if .Values.s3proxy.enabled }}
S3_ENDPOINT_URL: {{ printf "http://%s-%s:%d" (include "llamacloud.fullname" .) .Values.s3proxy.name (.Values.s3proxy.service.port | int) | b64enc | quote }}
{{- end }}
charts/llamacloud/values.yaml (8 changes: 2 additions & 6 deletions)
@@ -202,7 +202,7 @@ backend:
## @param backend.config.logLevel Log level for the backend
logLevel: info

-## @param backend.config.openAiAPIKey (Required) OpenAI API key
+## @param backend.config.openAiApiKey (Required) OpenAI API key
openAiApiKey: ""

## Backend OpenID Connect configuration
@@ -741,7 +741,7 @@ llamaParse:

config:
## @param llamaParse.config.maxPdfPages Maximum number of pages to parse in a PDF
-## @param llamaParse.config.openAiAPIKey OpenAI API key
+## @param llamaParse.config.openAiApiKey OpenAI API key
## @param llamaParse.config.anthropicApiKey Anthropic
## @param llamaParse.config.s3UploadBucket S3 bucket to upload files to
## @param llamaParse.config.s3OutputBucket S3 bucket to output files to
@@ -1378,7 +1378,3 @@ redis:
rabbitmq:
## @param rabbitmq.enabled Enable RabbitMQ
enabled: true
-
-qdrant:
-## @param qdrant.enabled Enable Qdrant
-enabled: true
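The `@param` annotations now match the real key name, `openAiApiKey`, which is the name `--set` overrides must use. A hedged example of supplying it at upgrade time (release name and placeholder values are assumptions):

```shell
helm upgrade --install llamacloud charts/llamacloud \
  --set backend.config.openAiApiKey="<openai-api-key>" \
  --set llamaParse.config.openAiApiKey="<openai-api-key>"
```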
