Dag inspector #65

Open · wants to merge 85 commits into base: main

Changes from 77 commits (of 85 commits)

Commits
365a5ee
[scan]update searcher builder after handle
fountainchen Aug 11, 2022
287ae9a
[scan]update searcher builder after handle
fountainchen Aug 12, 2022
6e7825c
Merge pull request #1 from MoveScan/feature/scan_after_handle
ssyuan Aug 12, 2022
3a7ec86
[config]update pg connect size
fountainchen Dec 14, 2022
1223fc3
Merge remote-tracking branch 'origin/main'
fountainchen Dec 14, 2022
268a104
[config]update pg config for migration
fountainchen Dec 14, 2022
1736b2d
[config]update config for info site migration
fountainchen Jan 12, 2023
91303e8
Initial commit
ValentineReese Oct 16, 2023
3517caf
Update README.md
ValentineReese Oct 16, 2023
4ad6d95
commit merged README
welbon Oct 16, 2023
815a633
change db url
ValentineReese Oct 16, 2023
6e347df
[dag inspector api] add index code for dag inspector
welbon Jun 13, 2024
7f5d48c
[dag inspector api] Use the new processing logic. The difference betw…
welbon Jun 15, 2024
b576c44
[dag inspector api] Solve the type reference problem of exception han…
welbon Jun 15, 2024
df0c7e2
[dag inspector api] Added Dag Inspector related data fetching logic t…
welbon Jun 15, 2024
02f43ec
[dag inspector api] add dag query from daa score
welbon Jun 16, 2024
948b209
[dag inspector api] commit all codes for committing to the right repo…
welbon Jun 18, 2024
c1910bc
[dag inspector] Remove useless code
welbon Jun 18, 2024
4d9c892
[dag inspector] revert project name
welbon Jun 18, 2024
d8161eb
[dag inspector] fixed save data to Elasticsearch DB
welbon Jun 18, 2024
9be238c
[dag inspector] fixed write height group
welbon Jun 19, 2024
c54e52d
[dag inspector] rename files
welbon Jun 19, 2024
128ef9c
[dag inspector] fixed indexer handle the parent block in edge save
welbon Jun 19, 2024
4f937ee
[dag inspector] fixed the unittest of dag API in Starcoin Scan service
welbon Jun 20, 2024
1991b4b
[dag inspector] commit the docker-compose.yaml for local enviroments
welbon Jun 20, 2024
2286eaf
[config] fixed db address
welbon Jun 20, 2024
70704aa
[repair] add es repair for halley network
welbon Jun 21, 2024
057b9c0
[db error] upgrade starcoin java sdk to 1.2.2 for halley dag data
welbon Jun 21, 2024
2883212
[db error] fixed metadata load error
welbon Jun 21, 2024
185b3c6
[db error] fixed metadata load error
welbon Jun 21, 2024
dfb03f4
[vega net] add indexer kube config file for vega net
welbon Jun 22, 2024
ca5ef6b
[vega net] add indexer kube config file for vega net
welbon Jun 22, 2024
a455f5f
[debug] add some logs for debug
welbon Jun 22, 2024
50ed2f8
[debug] update txn mapping for txn.info index
welbon Jun 22, 2024
fd67cbb
[env] add kibana for local enviroment
welbon Jun 23, 2024
107ed19
[indxer] Update ES component template for transaction info mapping
welbon Jun 23, 2024
0ccccbe
[indxer] Update ES component template for raw_txn info mapping
welbon Jun 23, 2024
28dea17
[dag inspector] Add ES component template and index template for dag …
welbon Jun 24, 2024
860ab0c
[dag inspector] Upgrade java sdk to 1.2.5
welbon Jun 24, 2024
43ed6bb
[dag inspector] fixed the controller api interface
welbon Jun 25, 2024
33ff998
[dag inspector] Change the indexer block fetching process from sequen…
welbon Jun 25, 2024
cbba8a2
[dag inspector] Added a new return structure to the getBlockByHeight …
welbon Jun 25, 2024
5e40cd5
[dag inspector] Remove useless code
welbon Jun 25, 2024
c66a0fb
[dag inspector] set in vritual selected parent chain
welbon Jun 25, 2024
9533817
[dag inspector] Fixed the error of block field name mismatch
welbon Jun 25, 2024
0cc2200
[dag inspector] Add README.md for local environment
welbon Jun 25, 2024
5f94f5e
use block replace blockInfo for easy understanding, because we have c…
nkysg Jun 25, 2024
b1fcd27
use Math.max replace if expression
nkysg Jun 25, 2024
ce6c784
add fetchParentsBlock
nkysg Jun 25, 2024
d568d08
[dag inspector] Merge from branch `main_fix_db`
welbon Jun 26, 2024
f1a00ee
[dag inspector] rename file
welbon Jun 26, 2024
94e4f3e
[dag inspector] fixed the template
welbon Jun 26, 2024
62b1f1d
[dag block scan-api] fixed if getBlocksByHeight throw exception
welbon Jun 26, 2024
15fdd63
[dag block indexer] fixed getting parents error for DFS algorithm
welbon Jun 26, 2024
9ed9e7c
[bug fix] Solved the problem that when the indexer parses txn info wi…
welbon Jun 27, 2024
99f28a8
[bug fix] upgrade starcoin-java sdk for debug
welbon Jun 27, 2024
102ffaa
[debug] add some configs
welbon Jun 27, 2024
b178572
[debug] fixed local env file
welbon Jun 28, 2024
d8b2cdf
[debug] fixed parameters error
welbon Jun 29, 2024
45cbed2
[bug fix] upgrade starcoin-java sdk for fixed parser error of some RP…
welbon Jul 1, 2024
e5a3fbe
[bug fix] fixed catch exception error of type error
welbon Jul 1, 2024
3ea2504
[dag indexer] Solve the problem of missing blocks
welbon Jul 1, 2024
809d225
[dag indexer] Fixed error of get max height api
welbon Jul 1, 2024
b13b7e2
[dag indexer] Fixed error of get max height api
welbon Jul 1, 2024
6d86d28
[dag indexer] add dag indexer for vega
welbon Jul 2, 2024
ba505fe
[dag scan-api] Solve the problem that when searching for a specific b…
welbon Jul 2, 2024
a96df94
[deployment] add Digitalocean k8s Deploy Config
welbon Jul 5, 2024
57aa506
[deployment] add Digitalocean k8s Deploy Config for postgresql
welbon Jul 5, 2024
16060d1
[Deployment] DigtalOcean, Add network cross namespace access config f…
welbon Jul 6, 2024
fc6373a
[Deployment] DigtalOcean, Add api policy and namespace
welbon Jul 6, 2024
7ef85fb
[Deployment] reformat files
welbon Jul 6, 2024
7ec7a2b
[Deployment] reformat files
welbon Jul 6, 2024
0f18c96
[Deployment] reformat files
welbon Jul 6, 2024
42e7009
[Deployment] Fixed DigitalOcean deploy error
welbon Jul 6, 2024
4fe719d
[Deployment] Fixed elasticsearch k8s access policy
welbon Jul 8, 2024
5c04432
[Deployment] Fixed some configs
welbon Jul 8, 2024
82b0dbd
[Deployment] Set elasticsearch pv class to do-block-storage-retain
welbon Jul 10, 2024
feb9555
[Deployment] Path namespace and network configuration for other Nets
welbon Jul 12, 2024
4ba90d0
[Deployment] for Digital ocean
welbon Aug 14, 2024
d8bbc67
[test] 1. add index test; 2. change index job into Handler class
welbon Aug 28, 2024
b20fb4d
[digital ocean] Add backup pvc
welbon Sep 19, 2024
258f43c
[digital ocean] Add privileged for elasticsearch-deployment.yaml
welbon Sep 19, 2024
9a804b6
[digital ocean] Add some config for elasticsearch-deployment.yaml
welbon Sep 19, 2024
b1633d2
[digital ocean] Add some config for elasticsearch-deployment.yaml
welbon Sep 26, 2024
942bf87
[digital ocean] Fixed swap indexer configuration
welbon Sep 26, 2024
5 changes: 0 additions & 5 deletions .github/workflows/build.yml
@@ -20,11 +20,6 @@ jobs:
        uses: actions/setup-java@v1
        with:
          java-version: 11
-     - name: maven-settings-xml-action
-       uses: whelk-io/maven-settings-xml-action@v18
-       with:
-         repositories: '[{ "id": "github", "url": "https://maven.pkg.github.com/starcoinorg/*" }]'
-         servers: '[{ "id": "github", "username": "${{ github.actor }}", "password": "${{ secrets.GIT_PACKAGE_TOKEN }}" }]'
      - name: Cache Maven packages
        uses: actions/cache@v1
        with:

8 changes: 1 addition & 7 deletions .github/workflows/docker_build_indexer.yml
@@ -22,7 +22,7 @@ jobs:
        id: docker_meta
        uses: crazy-max/ghaction-docker-meta@v1
        with:
-         images: starcoin/starcoin_indexer,ghcr.io/starcoinorg/starcoin_indexer
+         images: starcoin/starcoin_indexer
          tag-sha: true
      - name: Set up Docker Buildx
        uses: docker/[email protected]
@@ -33,12 +33,6 @@ jobs:
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
-     - name: Login to GitHub Container Registry
-       uses: docker/login-action@v1
-       with:
-         registry: ghcr.io
-         username: ${{ github.actor }}
-         password: ${{ secrets.GIT_PACKAGE_TOKEN }}
      - name: maven-settings-xml-action
        uses: whelk-io/maven-settings-xml-action@v18
        with:

2 changes: 1 addition & 1 deletion .github/workflows/docker_build_scanapi.yml
@@ -22,7 +22,7 @@ jobs:
        id: docker_meta
        uses: crazy-max/ghaction-docker-meta@v1
        with:
-         images: starcoin/starcoin_scan,ghcr.io/starcoinorg/starcoin_scan
+         images: starcoin/starcoin_scan
          tag-sha: true
      - name: Set up Docker Buildx
        uses: docker/[email protected]

4 changes: 3 additions & 1 deletion README.md
@@ -1 +1,3 @@
-# starcoin-search
+# stc-scan
+Establishes some independent handlers for indexing data with scan, to customize statistics data for swap; used in conjunction with the [swap-stat-api](https://github.com/Elements-Studio/swap-stat-api) repository.

63 changes: 63 additions & 0 deletions docker-compose/README.md
@@ -0,0 +1,63 @@
# How to build local environment

## 1. Install "Docker" and "Docker Compose"
## 2. Run the command to start the database and components
```bash
docker-compose up
```
## 3. Start starcoin-index project

### Config the startup environment variable
[Review comment] Start sentences with a verb for clarity. The heading "Config the startup environment variable" should start with a verb to improve clarity and readability.

Suggested change:
- ### Config the startup environment variable
+ ### Configure the startup environment variables

(LanguageTool: the sentence should start with a verb instead of the noun 'Config'. Markdownlint MD022: headings should be surrounded by blank lines.)

```dotenv
HOSTS=localhost
NETWORK=halley # select which network to scan
BG_TASK_JOBS=dag_inspector
TXN_OFFSET=0
BULK_SIZE=100
STARCOIN_ES_PWD=
STARCOIN_ES_URL=localhost
STARCOIN_ES_PROTOCOL=http
STARCOIN_ES_PORT=9200
STARCOIN_ES_USER=
SWAP_API_URL=https://swap-api.starswap.xyz
SWAP_CONTRACT_ADDR=0x8c109349c6bd91411d6bc962e080c4a3
DS_URL=jdbc:postgresql://localhost/starcoin
DB_SCHEMA=halley
DB_USER_NAME=starcoin
DB_PWD=starcoin
PROGRAM_ARGS=
# auto_repair 9411700
```
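
For a local run, these variables can simply be exported into the shell before starting the indexer. A minimal sketch, assuming the variables above are saved to a local `.env` file and the project is built with Maven (the module and jar names below are placeholders, not taken from this PR):

```bash
# Export the variables above from a local .env file, then build and run the indexer.
set -a          # auto-export every variable sourced below
source .env     # the dotenv file shown above
set +a
mvn -pl starcoin-indexer -am package -DskipTests
java -jar starcoin-indexer/target/starcoin-indexer.jar   # jar name is a placeholder
```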

### Configuration Elasticsearch template
[IMPORTANT!!] Make sure your template has added to Elastic search service before add data, including component template and index template to ES.
[Review comment] Correct verb form for better readability. The sentence structure in the instruction about adding templates to Elasticsearch is awkward and the verb form is incorrect.

Suggested change:
- [IMPORTANT!!] Make sure your template has added to Elastic search service before add data, including component template and index template to ES.
+ [IMPORTANT!!] Make sure your template is added to the Elasticsearch service before adding data, including the component template and index template.

Following file: [[es_pipeline.scripts](..%2Fkube%2Fmappings%2Fes_pipeline.scripts)]
[Review comment] Fix typographical error in URL. The percent-encoded path separators in the link may render as a broken link.

Suggested change:
- Following file: [[es_pipeline.scripts](..%2Fkube%2Fmappings%2Fes_pipeline.scripts)]
+ Following file: [[es_pipeline.scripts](../kube/mappings/es_pipeline.scripts)]


1. Open the 'Kibana' site started by the docker-compose environment; the URL is usually http://localhost:5601
2. Navigate to 'Dev Tools'
3. Follow the instructions in the file given above to add the templates to ES (a minimal sketch is shown below)
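
For orientation only, registering a component template and an index template through the Elasticsearch REST API looks roughly like the sketch below; the template names, mapping, and index pattern here are placeholders, and the real definitions live in the files under kube/mappings.

```bash
# Placeholder templates only; the real definitions are under kube/mappings/.
ES=http://localhost:9200

# 1) Register a component template holding the shared field mappings
curl -X PUT "$ES/_component_template/dag_inspector_mappings" \
  -H 'Content-Type: application/json' \
  -d '{"template": {"mappings": {"properties": {"block_hash": {"type": "keyword"}}}}}'

# 2) Register an index template that composes the component template
curl -X PUT "$ES/_index_template/dag_inspector_template" \
  -H 'Content-Type: application/json' \
  -d '{"index_patterns": ["halley.dag_inspector*"], "composed_of": ["dag_inspector_mappings"]}'
```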

### Add SQL tables for network
[IMPORTANT!!] Add the [tables](../starcoin-indexer/deploy/create_table.sql) for the network you want to scan, including main, barnard, halley, etc.
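
One possible way to apply that file against the Postgres container from the docker-compose setup above (credentials and port follow docker-compose.yaml; adjust the file path and target schema to your checkout and network):

```bash
# Apply the indexer table definitions to the local Postgres started by docker-compose.
psql "postgresql://starcoin:starcoin@localhost:5432/starcoin" \
  -v ON_ERROR_STOP=1 \
  -f starcoin-indexer/deploy/create_table.sql
```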

## 4. Start starcoin-scan-api project

### Config the startup enviroment variable
[Review comment] Start sentences with a verb for clarity. Similar to a previous comment, the heading should start with a verb to improve clarity.

Suggested change:
- ### Config the startup enviroment variable
+ ### Configure the startup environment variables

(LanguageTool: the sentence should start with a verb. Markdownlint MD022: headings should be surrounded by blank lines.)

```dotenv
STARCOIN_ES_URL=localhost
STARCOIN_ES_PROTOCOL=http
STARCOIN_ES_PORT=9200
STARCOIN_ES_USER=
STARCOIN_ES_INDEX_VERSION=
STARCOIN_ES_PWD=
MAIN_DS_URL=jdbc:postgresql://localhost/starcoin?currentSchema=main
BARNARD_DS_URL=jdbc:postgresql://localhost/starcoin?currentSchema=barnard
HALLEY_DS_URL=jdbc:postgresql://localhost/starcoin?currentSchema=halley
DS_URL=jdbc:postgresql://localhost/starcoin
STARCOIN_USER_DS_URL="jdbc:postgresql://localhost/starcoin?currentSchema=starcoin_user"
DB_USER_NAME=starcoin
DB_PWD=starcoin
```

### Add SQL tables for network
[IMPORTANT!!] Add the [tables](../starcoin-scan-api/deploy/create_table.sql) for the network you want to scan, including main, barnard, halley, etc.

67 changes: 67 additions & 0 deletions docker-compose/docker-compose.yaml
@@ -0,0 +1,67 @@
# This composer file is used to configure the local environment for debugging
version: '3.8'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.2
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name=docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"

  hazelcast:
    image: hazelcast/hazelcast:latest
    container_name: hazelcast
    ports:
      - "5701:5701"
    environment:
      - HZ_CLUSTERNAME=stcscan-hazelcast-cluster
      - HZ_NETWORK_JOIN_MULTICAST_ENABLED=false
[Collaborator] Is this a distributed lock? Where in our project is it needed?

[Collaborator, author] This is used in scan-api.

      - HZ_NETWORK_JOIN_TCPIP_ENABLED=true
      - HZ_NETWORK_JOIN_TCPIP_MEMBERS=hazelcast
      - HZ_CACHE_CODE_CACHE_STATISTICS_ENABLED=true
      - HZ_CACHE_CODE_CACHE_EVICTION_SIZE=10000
      - HZ_CACHE_CODE_CACHE_EVICTION_MAX_SIZE_POLICY=ENTRY_COUNT
      - HZ_CACHE_CODE_CACHE_EVICTION_EVICTION_POLICY=LFU
      - HZ_CACHE_SESSION_STATISTICS_ENABLED=true
      - HZ_CACHE_SESSION_EVICTION_SIZE=50000
      - HZ_CACHE_SESSION_EVICTION_MAX_SIZE_POLICY=ENTRY_COUNT
      - HZ_CACHE_SESSION_EVICTION_EVICTION_POLICY=LRU

  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.2
    container_name: kibana
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch

  postgresql:
    image: postgres:13.2
    container_name: postgres_db
    environment:
      POSTGRES_USER: starcoin
      POSTGRES_PASSWORD: starcoin
      POSTGRES_DB: starcoin
    volumes:
      - db_data:/var/lib/postgresql/data
    ports:
      - 5432:5432

volumes:
  esdata:
    driver: local

92 changes: 92 additions & 0 deletions kube/base-components/allowaccess-network-policy.yaml
@@ -0,0 +1,92 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-access-pg-from-vega
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: postgres-service
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: starcoin-vega
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-access-es-from-vega
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: elasticsearch
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: starcoin-vega


---
# Postgres service for starcoin-api
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-access-pg-from-api
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: postgres-service
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: starcoin-api

---
# Elasticsearch service for starcoin-api
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-access-es-from-api
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: elasticsearch
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: starcoin-api

---
# Default namespace access for elasticsearch
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internal-elasticsearch
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: elasticsearch
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
      ports:
        - protocol: TCP
          port: 9200

100 changes: 100 additions & 0 deletions kube/base-components/elasticsearch-deployment.yaml
@@ -0,0 +1,100 @@
# elasticsearch-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
        - name: init-permissions
          image: busybox
          command: [ "sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data" ]
          volumeMounts:
            - name: es-data
              mountPath: /usr/share/elasticsearch/data
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
          resources:
            requests:
              cpu: "1"
              memory: "4Gi"
            limits:
              cpu: "2"
              memory: "6Gi"
          ports:
            - containerPort: 9200
              name: http
            - containerPort: 9300
              name: transport
          volumeMounts:
            - name: es-data
              mountPath: /usr/share/elasticsearch/data
            - name: elasticsearch-config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
          env:
            - name: discovery.type
              value: single-node
            - name: ELASTIC_USERNAME
              valueFrom:
                secretKeyRef:
                  name: elasticsearch
                  key: username
            - name: ELASTIC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: elasticsearch
                  key: password
[Review comment on lines +51 to +62] 🛠️ Refactor suggestion: Use a dedicated secret for Elasticsearch credentials

While using secrets for storing credentials is good practice, it's better to use a dedicated secret for Elasticsearch credentials to enhance security and maintain a clear separation of concerns.

Create a new secret specifically for Elasticsearch:

apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-credentials
type: Opaque
data:
  username: <base64-encoded-username>
  password: <base64-encoded-password>

Then update the env section to reference this new secret:

valueFrom:
  secretKeyRef:
-   name: elasticsearch
+   name: elasticsearch-credentials
    key: username

Make the same change for the password environment variable.
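
If the suggestion is adopted, the dedicated secret can also be created directly with kubectl instead of hand-encoding base64 values; a minimal sketch, assuming placeholder credentials:

```bash
# Create the dedicated Elasticsearch credentials secret suggested above.
# The username and password values here are placeholders.
kubectl create secret generic elasticsearch-credentials \
  --namespace default \
  --from-literal=username=elastic \
  --from-literal=password='change-me'
```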

      volumes:
        - name: es-data
          persistentVolumeClaim:
            claimName: es-pvc
        - name: elasticsearch-config
          configMap:
            name: elasticsearch-config
---
# Elasticsearch Configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-config
data:
  elasticsearch.yml: |
    xpack.security.enabled: true
    xpack.license.self_generated.type: basic
    network.host: 0.0.0.0

---
# Elasticsearch Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: do-block-storage-retain

---
# Elasticsearch Service
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  ports:
    - port: 9200
  selector:
    app: elasticsearch