# Assemblyline

For installation documentation, please see: https://cybercentrecanada.github.io/assemblyline4_docs/installation/cluster/general/

## Configuration

| Parameter | Description | Default |
| --- | --- | --- |
| APIInstances | Minimum number of API server pods | 1 |
| APIInstancesMax | Maximum number of API server pods | 15 |
| APITargetUsage | Target CPU usage that should trigger HPA scaling | 50 |
| alerterLimCPU | CPU limit for Alerter pods | 1 |
| alerterReqCPU | CPU requested for Alerter pods | 50m |
| apmILM | ILM policy for APM indices | See: Values.yaml |
| apmLimCPU | CPU limit for Elastic APM pods | 1 |
| apmReqCPU | CPU requested for Elastic APM pods | 250m |
| archiverLimCPU | CPU limit for Archiver pods | 1 |
| archiverReqCPU | CPU requested for Archiver pods | 50m |
| assemblylineCoreImage | Container image name for core components | cccs/assemblyline-core |
| assemblylineFrontendImage | Container image name for the frontend component | cccs/assemblyline-ui-frontend |
| assemblylineServiceAPIImage | Container image name for the Service Server component | cccs/assemblyline-service-server |
| assemblylineServiceImagePrefix | Container image name prefix for services. Used for autoInstallServices. | cccs/assemblyline-service- |
| assemblylineServiceVersion | Container image tag for services. Used for autoInstallServices. | 4.4.stable |
| assemblylineSocketIOImage | Container image name for the SocketIO component | cccs/assemblyline-socketio |
| assemblylineUIImage | Container image name for the API server component | cccs/assemblyline-ui |
| autoInstallServices | A list of services to install on helm install. Used in conjunction with assemblylineServiceImagePrefix. | See: Values.yaml |
| configuration | Assemblyline's configuration block | See: Values.yaml |
| nodeAffinity | Affinity applied to all core pods | null |
| coreEnv | Environment variables that are added to all core containers | [] |
| coreMounts | Mounts that are added to all core containers | [] |
| coreVolumes | Volumes that are added to all core containers | [] |
| createAdminAccount | When do we create the admin account? | on-install |
| customLogstashPipeline | Custom-defined Logstash pipeline that logs from Filebeat go through | None |
| datastore | An internal Elasticsearch instance that acts as the primary database (Helm chart) | See: Values.yaml |
| defaultLimCPU | The default CPU limit for all pods (unless specified otherwise) | 1 |
| defaultLimRam | The default RAM limit for all pods (unless specified otherwise) | 1Gi |
| defaultReqCPU | The default CPU requested for all pods (unless specified otherwise) | 50m |
| defaultReqRam | The default RAM requested for all pods (unless specified otherwise) | 128Mi |
| dispatcherFinalizeThreads | The number of threads the Dispatcher assigns for finalizing results | 6 |
| dispatcherInstances | Minimum number of Dispatcher pods | 1 |
| dispatcherInstancesMax | Maximum number of Dispatcher pods | 15 |
| dispatcherLimCPU | CPU limit for Dispatcher pods | 1 |
| dispatcherLimRam | RAM limit for Dispatcher pods | 1Gi |
| dispatcherReqCPU | CPU requested for Dispatcher pods | 500m |
| dispatcherReqRam | RAM requested for Dispatcher pods | 256Mi |
| dispatcherResultThreads | The number of threads the Dispatcher assigns for processing results | 6 |
| dispatcherTargetUsage | Target CPU usage that should trigger HPA scaling | 70 |
| disptacherShutdownGrace | How much time should Dispatcher pods be given to shut down? | 60 |
| elasticAPMImage | Container image name for Elastic's APM server | docker.elastic.co/apm/apm-server |
| elasticAlertShards | How many shards should we assign for the Alert index? | 2 |
| elasticDefaultReplicas | How many replicas should we assign by default for all indices? (unless otherwise specified) | 1 |
| elasticDefaultShards | How many shards should we assign by default for all indices? (unless otherwise specified) | 1 |
| elasticEmptyResultShards | How many shards should we assign for the EmptyResult index? | 4 |
| elasticFileScoreShards | How many shards should we assign for the FileScore index? | 4 |
| elasticFileShards | How many shards should we assign for the File index? | 4 |
| elasticHelperLimCPU | CPU limit for the Elastic Helper pod | 250m |
| elasticHelperLimRam | RAM limit for the Elastic Helper pod | 256Mi |
| elasticHelperReqCPU | CPU requested for the Elastic Helper pod | 25m |
| elasticHelperReqRam | RAM requested for the Elastic Helper pod | 32Mi |
| elasticLogstashImage | Container image name for Elastic's Logstash | docker.elastic.co/logstash/logstash |
| elasticLogstashTag | Container image tag for Elastic's Logstash | 7.17.3 |
| elasticResultShards | How many shards should we assign for the Result index? | 18 |
| elasticSafelistShards | How many shards should we assign for the Safelist index? | 4 |
| elasticSubmissionShards | How many shards should we assign for the Submission index? | 6 |
| enableAPM | Enable APM (Application Performance Monitoring)? | False |
| enableCoreDebugging | Enable core debugging (beta)? | False |
| enableInternalEncryption | Enable internal encryption between Assemblyline pods? | False |
| enableLogging | Enable logging? | True |
| enableMetricbeat | Enable Metricbeat? | True |
| enableMetrics | Enable metric collection? | True |
| enableVacuum | Enable Vacuum? | False |
| esMetricsLimCPU | CPU limit for Elasticsearch metric gathering pods | 1 |
| esMetricsReqCPU | CPU requested for Elasticsearch metric gathering pods | 50m |
| expiryLimCPU | CPU limit for Expiry pods | 1 |
| expiryReqCPU | CPU requested for Expiry pods | 150m |
| filebeat | Filebeat Helm configuration | See: Values.yaml |
| filestore | Internal filestore configuration for MinIO. Used when internalFilestore: true. | See: Values.yaml |
| frontendLimCPU | CPU limit for the Frontend pod | 1 |
| frontendLimRam | RAM limit for the Frontend pod | 1Gi |
| frontendReqCPU | CPU requested for the Frontend pod | 50m |
| frontendReqRam | RAM requested for the Frontend pod | 128Mi |
| heartbeatLimCPU | CPU limit for Heartbeat pods | 1 |
| heartbeatReqCPU | CPU requested for Heartbeat pods | 500m |
| ingestAPIInstances | Minimum number of dedicated Ingest API pods | 1 |
| ingestAPIInstancesMax | Maximum number of dedicated Ingest API pods | 15 |
| ingestAPITargetUsage | Target CPU usage that should trigger HPA scaling | 30 |
| ingestUILimCPU | CPU limit for Ingest API pods | 1 |
| ingestUILimRam | RAM limit for Ingest API pods | 2Gi |
| ingestUIReqCPU | CPU requested for Ingest API pods | 500m |
| ingestUIReqRam | RAM requested for Ingest API pods | 1Gi |
| ingesterInstances | Minimum number of Ingester pods | 1 |
| ingesterInstancesMax | Maximum number of Ingester pods | 10 |
| ingesterLimCPU | CPU limit for Ingester pods | 1 |
| ingesterReqCPU | CPU requested for Ingester pods | 500m |
| ingesterTargetUsage | Target CPU usage that should trigger HPA scaling | 50 |
| ingressAnnotations | Annotations to assign to the Ingress | {} |
| ingressHost | Ingress hostname, if it differs from configuration.ui.fqdn | None |
| installJobLimCPU | CPU limit for initial service registration pods | 1 |
| installJobReqCPU | CPU requested for initial service registration pods | 100m |
| internalAPIInstances | Minimum number of internal API pods | 1 |
| internalAPIInstancesMax | Maximum number of internal API pods | 2 |
| internalAPITargetUsage | Target CPU usage that should trigger HPA scaling | 70 |
| internalELKStack | Are we using an internal ELK stack for logging & metrics? | True |
| internalFilestore | Are we hosting an internal filestore? | True |
| internalUILimCPU | CPU limit for internal UI pods | 1 |
| internalUILimRam | RAM limit for internal UI pods | 2Gi |
| internalUIReqCPU | CPU requested for internal UI pods | 100m |
| internalUIReqRam | RAM requested for internal UI pods | 1Gi |
| kibanaHost | The URL of the Kibana host (used by Elastic APM) | http://kibana/kibana |
| kibana | Kibana Helm configuration | See: Values.yaml |
| log-storage | A separate Elasticsearch instance for retaining logs from Filebeat (Helm chart) | See: Values.yaml |
| loggingHost | The name of the logging host | None |
| loggingTLSVerify | Perform TLS verification against the logging host? | full |
| loggingUsername | Username to authenticate to the logging host | elastic |
| logstashLimCPU | CPU limit for Logstash pods | 1 |
| logstashLimRam | RAM limit for Logstash pods | 1536Mi |
| logstashMounts | Mounts for Logstash instances | None |
| logstashReqCPU | CPU requested for Logstash pods | 100m |
| logstashReqRam | RAM requested for Logstash pods | 1536Mi |
| logstashVolumes | Volumes for Logstash instances | None |
| metricbeatIndexPrefix | Prefix for the name of the Elasticsearch index containing Metricbeat data | metricbeat |
| metricbeat | Metricbeat Helm configuration | See: Values.yaml |
| metricsLimCPU | CPU limit for Assemblyline metric collection pods | 1 |
| metricsReqCPU | CPU requested for Assemblyline metric collection pods | 500m |
| persistantStorageClass | A storage class used to maintain persistence for services like elastic-helper & vacuum | None |
| plumberLimCPU | CPU limit for Plumber pods | 1 |
| plumberReqCPU | CPU requested for Plumber pods | 50m |
| privilegedSafelistedCIDRs | List of CIDRs privileged services are allowed to access | [] |
| redisImage | Container image name for Redis | redis |
| redisPersistentIOThreads | The number of IO threads for persistent Redis | 1 |
| redisPersistentLimCPU | CPU limit for the persistent Redis pod | 1 |
| redisPersistentLimRam | RAM limit for the persistent Redis pod | 8Gi |
| redisPersistentReqCPU | CPU requested for the persistent Redis pod | 500m |
| redisPersistentReqRam | RAM requested for the persistent Redis pod | 1Gi |
| redisStorageClass | A storage class used by redis-persistent | None |
| redisStorageSize | How much storage should be requested for redis-persistent? | None |
| redisVolatileIOThreads | The number of IO threads for volatile Redis | 1 |
| redisVolatileLimCPU | CPU limit for the volatile Redis pod | 1 |
| redisVolatileLimRam | RAM limit for the volatile Redis pod | 8Gi |
| redisVolatileReqCPU | CPU requested for the volatile Redis pod | 750m |
| release | Release of Assemblyline | 4.3.stable |
| replayLimCPU | CPU limit for the Replay orchestrator pod | 1 |
| replayLimRam | RAM limit for the Replay orchestrator pod | 512Mi |
| replayLivenessCommand | A liveness command that's assigned to Replay pods | if [[! find /tmp/heartbeat -newermt '-30 seconds']]; then false; fi |
| replayLoaderUser | Which user to run the Replay Loader as | 1000 |
| replayLoaderVolume | The volume the Replay Loader mounts to import files into Assemblyline | See: Values.yaml |
| replayMode | What mode Replay should run in on this deployment | creator |
| replayReqCPU | CPU requested for the Replay orchestrator pod | 250m |
| replayReqRam | RAM requested for the Replay orchestrator pod | 256Mi |
| replayTargetUsage | Target CPU usage that should trigger HPA scaling | 50 |
| replayWorkerInstances | Minimum number of Replay worker pods | 1 |
| replayWorkerInstancesMax | Maximum number of Replay worker pods | 10 |
| replayWorkerLimCPU | CPU limit for Replay worker pods | 1 |
| replayWorkerLimRam | RAM limit for Replay worker pods | 2Gi |
| replayWorkerReqCPU | CPU requested for Replay worker pods | 1 |
| replayWorkerReqRam | RAM requested for Replay worker pods | 1Gi |
| replay | Assemblyline's Replay configuration | See: Values.yaml |
| revisionCount | Revision history limit for Deployments | 2 |
| scalerLimCPU | CPU limit for the Scaler pod | 1 |
| scalerLimRam | RAM limit for the Scaler pod | 4Gi |
| scalerReqCPU | CPU requested for the Scaler pod | 500m |
| scalerReqRam | RAM requested for the Scaler pod | 512Mi |
| separateIngestAPI | Do we want separate pods for Ingest API calls? | False |
| seperateInternalELKStack | Do we want a separate ELK stack specifically for logging? | True |
| serviceServerInstances | Minimum number of Service Server pods | 1 |
| serviceServerInstancesMax | Maximum number of Service Server pods | 100 |
| serviceServerLimCPU | CPU limit for Service Server pods | 1 |
| serviceServerLimRam | RAM limit for Service Server pods | 1Gi |
| serviceServerReqCPU | CPU requested for Service Server pods | 500m |
| serviceServerReqRam | RAM requested for Service Server pods | 1Gi |
| serviceServerTargetUsage | Target CPU usage that should trigger HPA scaling | 50 |
| socketIOLimCPU | CPU limit for the SocketIO pod | 1 |
| socketIOLimRam | RAM limit for the SocketIO pod | 2Gi |
| socketIOReqCPU | CPU requested for the SocketIO pod | 100m |
| socketIOReqRam | RAM requested for the SocketIO pod | 256Mi |
| statisticsLimCPU | CPU limit for the Assemblyline Stats pod | 1 |
| statisticsReqCPU | CPU requested for the Assemblyline Stats pod | 50m |
| tlsSecretName | The name of the Secret containing TLS details for HTTPS | None |
| uiLimCPU | CPU limit for UI/API server pods | 1 |
| uiLimRam | RAM limit for UI/API server pods | 2Gi |
| uiReqCPU | CPU requested for UI/API server pods | 500m |
| uiReqRam | RAM requested for UI/API server pods | 1Gi |
| updaterLimCPU | CPU limit for the Updater pod | 0.5 |
| updaterReqCPU | CPU requested for the Updater pod | 100m |
| useAutoScaler | Enable use of HPAs for dynamic scaling? | True |
| useLogstash | Use Logstash? | False |
| useReplay | Use Replay? | False |
| vacuumCacheSize | Size of the cache volume for Vacuum | 100Gi |
| vacuumInstances | Minimum number of Vacuum pods | 1 |
| vacuumInstancesMax | Maximum number of Vacuum pods | 10 |
| vacuumMounts | Mounts to add to Vacuum | None |
| vacuumReqCPU | CPU requested for Vacuum pods | 0m |
| vacuumReqRam | RAM requested for Vacuum pods | 5Gi |
| vacuumTargetUsage | Target CPU usage that should trigger HPA scaling | 70 |
| vacuumUser | User to run Vacuum as | 1000 |
| vacuumVolumes | Volumes to add to Vacuum | None |
| vacuumWorkerLimCPU | CPU limit for Vacuum worker pods | 1 |
| vacuumWorkerLimRam | RAM limit for Vacuum worker pods | 5Gi |
| vacuumWorkerReqCPU | CPU requested for Vacuum worker pods | 0m |
| vacuumWorkerReqRam | RAM requested for Vacuum worker pods | 1Gi |
| workflowLimCPU | CPU limit for Workflow pods | 1 |
| workflowReqCPU | CPU requested for Workflow pods | 50m |
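
To illustrate how these parameters are typically consumed, the sketch below overrides a handful of them in a custom values file. The file name, the service list, and the numbers are examples only, not recommended settings; verify the expected formats against the chart's Values.yaml.

```yaml
# assemblyline-values.yaml -- example override file (illustrative values only)
release: 4.4.stable              # Assemblyline release to deploy

# Scale the API server between 2 and 20 pods, targeting 60% CPU
useAutoScaler: true
APIInstances: 2
APIInstancesMax: 20
APITargetUsage: 60

# Give the Dispatcher a bit more headroom than the defaults
dispatcherReqCPU: 1
dispatcherLimCPU: 2
dispatcherLimRam: 2Gi

# Services installed on first deploy (names are placeholders;
# check Values.yaml for the expected naming convention)
assemblylineServiceVersion: 4.4.stable
autoInstallServices:
  - extract
  - yara

# Expose the UI at a dedicated hostname with an existing TLS secret
ingressHost: assemblyline.example.com
tlsSecretName: assemblyline-tls
```

A file like this would typically be passed to Helm with -f (for example, helm upgrade --install assemblyline <chart> -f assemblyline-values.yaml), so every parameter in the table above keeps its default unless explicitly overridden.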

## Node selection

You may want to control which nodes Assemblyline assigns work to; the configuration for this is set in several different places (see the example values sketch after this list):

- Core pods can be controlled via the nodeAffinity key in the values file.
- Service pods can be controlled via the configuration.core.scaler.linux_node_selector field. Note that this uses a syntax slightly different from the typical Kubernetes affinity fields.
- The system Elasticsearch affinity is set via datastore.nodeAffinity.
- If the seperateInternalELKStack condition is true, the logging Elasticsearch instance is controlled via log-storage.nodeAffinity.
- If the enableLogging condition is true, Filebeat pods can be controlled with filebeat.daemonset.affinity and filebeat.deployment.affinity.
- If the enableMetricbeat condition is true, Metricbeat pods can be controlled with metricbeat.daemonset.affinity and metricbeat.deployment.affinity.
- If the internalELKStack condition is true, Kibana pods can be controlled with kibana.affinity.
- If the internalFilestore condition is true, MinIO pods can be controlled with filestore.affinity.
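
As a rough sketch of how a few of these knobs might look in an override values file, the snippet below pins core pods and the system Elasticsearch to labelled nodes. The assemblyline/role label and its values are placeholders, and the assumption that these nodeAffinity keys accept a standard Kubernetes nodeAffinity block should be checked against Values.yaml.

```yaml
# node-placement-values.yaml -- illustrative sketch only.
# Assumes the nodeAffinity keys accept a standard Kubernetes nodeAffinity block;
# verify the expected structure against the chart's Values.yaml.

# Core pods
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
      - matchExpressions:
          - key: assemblyline/role      # example node label, not defined by the chart
            operator: In
            values: ["core"]

# System Elasticsearch
datastore:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: assemblyline/role    # example node label, not defined by the chart
              operator: In
              values: ["datastore"]

# Service pods are placed through Assemblyline's own selector syntax rather
# than a Kubernetes affinity block, so it is left as a comment here; see the
# Assemblyline configuration documentation for the expected fields.
# configuration:
#   core:
#     scaler:
#       linux_node_selector: ...
```

The same pattern applies to the other keys in the list above (log-storage.nodeAffinity, filebeat/metricbeat daemonset and deployment affinity, kibana.affinity, filestore.affinity), each scoped under its own top-level values key.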