diff --git a/docs/EN_US/ContainerizedHPCC/ContainerizedMods/ConfigureValues.xml b/docs/EN_US/ContainerizedHPCC/ContainerizedMods/ConfigureValues.xml index e7349941fa8..c4f60001e04 100644 --- a/docs/EN_US/ContainerizedHPCC/ContainerizedMods/ConfigureValues.xml +++ b/docs/EN_US/ContainerizedHPCC/ContainerizedMods/ConfigureValues.xml @@ -1107,212 +1107,621 @@ thor: components such as Thor have multiple resources. The manager, worker, eclagent components all have different resource requirements. + - - Taints, Tolerations, and Placements + + Environment Values - This is an important consideration for containerized systems. - Taints and Tolerations are types of Kubernetes node constraints also - referred to by Node Affinity. Node - affinity is a way to constrain pods to nodes. Only one "affinity" can - be applied to a pod. If a pod matches multiple placement 'pods' lists, - then only the last "affinity" definition will apply. + You can define environment variables in a YAML file. The + environment values are defined under the global.env + portion of the provided HPCC Systems values.yaml file. These values are + specified as a list of name value pairs as illustrated below. - Taints and tolerations work together to ensure that pods are not - scheduled onto inappropriate nodes. Tolerations are applied to pods, - and allow (but do not require) the pods to schedule onto nodes with - matching taints. Taints are the opposite -- they allow a node to repel - a set of pods. + global: + env: + - name: SMTPserver + value: mysmtpserver - For example, Thor workers should all be on the appropriate type - of VM. If a big Thor job comes along – then the taints level comes - into play. + The global.env section of the supplied values.yaml file adds + default environment variables for all components. You can also specify + environment variables for the individual components. Refer to the schema + for setting this value for individual components. - For more information and examples of our Taints, Tolerations, - and Placements please review our developer documentation: + To add environment values you can insert them into your + customization configuration YAML file when you deploy your containerized + HPCC Systems. - https://github.com/hpcc-systems/HPCC-Platform/blob/master/helm/hpcc/docs/placements.md + + Environment Variables for Containerized Systems + + There are several settings in environment.conf for bare-metal + systems. While many environment.conf settings are not valid for + containers, some can be useful. In a cloud deployment, these settings + are inherited from environment variables. These environment variables + are configurable using the values yaml either globally, or at the + component level. + + Some of those variables are available for container and cloud + deployments and can be set using the Helm chart. The following + bare-metal environment.conf values have these equivalent values which + can be set for containerized instances. + + + + + + Environment.conf + Value + + Helm Environment + Variable + + + + skipPythonCleanup + + SKIP_PYTHON_CLEANUP + + + + jvmlibpath + + JAVA_LIBRARY_PATH + + + + jvmoptions + + JVM_OPTIONS + + + + classpath + + CLASSPATH + + + + + + The following example sets the environment variable to skip + Python cleanup on the Thor component: + + thor: + env: + - name: SKIP_PYTHON_CLEANUP + value: true + + + + + Index Build Plane + + Define the indexBuildPlane value as a helm + chart option to allow index files to be written by default to a + different data plane. 
Unlike flat files, index files have different + requirements. The index files benefit from quick random access storage. + Ordinarily flat files and index files are output to the defined default + data plane(s). Using this option you can define that index files are + built on a separate data plane from other common files. This chart value + can be supplied at a component or global level. + + For example, adding the value to a global level under + globlal.storage : + + global: + storage: + indexBuildPlane: myindexplane - - Placements + Optionally, you could add it at the component level, as + follows: - The Placement is responsible for finding the best node for a - pod. Most often placement is handled automatically by Kubernetes. - You can constrain a Pod so that it can only run on particular set of - Nodes. Using placements you can configure the Kubernetes scheduler - to use a "pods" list to apply settings to pods. For example: + thor: +- name: thor + prefix: thor + numWorkers: 2 + maxJobs: 4 + maxGraphs: 2 + indexBuildPlane: myindexplane - placements: + When this value is set at the component level it would override + the value set at the global level. + + + + + Pods and Nodes + + One of the key features of Kubernetes is its ability to schedule + pods on to nodes in the cluster. A pod is the smallest and simplest unit + in the Kubernetes environment that you can create or deploy. A node is + either a physical or virtual "worker" machine in Kubernetes. + + The task of scheduling pods to specific nodes in the cluster is + handled by the kube-scheduler. The default behavior of this component is + to filter nodes based on the resource requests and limits of each + container in the created pod. Feasible nodes are then scored to find the + best candidate for the pod placement. The scheduler also takes into + account other factors such as pod affinity and anti-affinity, taints and + tolerations, pod topology spread constraints, and the node selector + labels. The scheduler can be configured to use these different algorithms + and policies to optimize the pod placement according to your cluster’s + needs. + + You can deploy these values either using the values.yaml file or you + can place into a file and have Kubernetes instead read the values from the + supplied file. See the above section Customization + Techniques for more information about customizing your + deployment. + + + Placements + + Placements is a term used by HPCC Systems, which Kubernetes refers + to as the scheduler or scheduling/assigning. In order to avoid confusion + within the HPCC Systems and ECL specific scheduler terms, refer to + Kubernetes scheduling as placements. Placements are a value in an HPCC + Systems configuration which is at a level above items, such as the + nodeSelector, Toleration, Affinity and Anti-Affinity, and + TopologySpreadConstraints. + + The placement is responsible for finding the best node for a pod. + Most often placement is handled automatically by Kubernetes. You can + constrain a Pod so that it can only run on particular set of + Nodes. + + Placements would then be used to ensure that pods or jobs that + want nodes with specific characteristics are placed on those + nodes. + + For instance a Thor cluster could be targeted for machine learning + using nodes with a GPU. Another job may want nodes with a good amount + more memory or another for more CPU. + + Using placements you can configure the Kubernetes scheduler to use + a "pods" list to apply settings to pods. 
+ + For example: + + placements: - pods: [list] placement: <supported configurations> - The pods: [list] can contain a variety of items. + + Placement Scope - - - HPCC Systems component types, using the prefix - type: this can be: dali, esp, eclagent, eclccserver, - roxie, thor. For example "type:esp" - + Use pod patterns to apply the scope for the placements. - - Target; the name of an array item from the above types - using prefix "target:" For example "target:roxie" or - "target:thor". - + The pods: [list] item can contain one of the following: - - Pod, "Deployment" metadata name from the name of the array - item of a type. For example, "eclwatch", "mydali", - "thor-thoragent" - + + + - - Job name regular expression: For example "compile-" or - "compile-." or exact match "^compile-.$" - + - - All: to apply for all HPCC Systems components. The default - placements for pods we deliver is [all] - - - - Placements – in Kubernetes - the Placement concept allows you to spread your pods across types of - nodes with particular characteristics. Placements would be used to - ensure that pods or jobs that want nodes with specific - characteristics are placed on them. - - For instance a Thor cluster could be targeted for machine - learning using nodes with a GPU. Another job may want nodes with a - good amount more memory or another for more CPU. You can use - placements to ensure that pods with specific requirements are placed - on appropriate nodes. + + + Type: <component> + + Covers all pods/jobs under this type of component. This + is commonly used for HPCC Systems components. For example, the + type:thor which will apply to any of the + Thor type components; thoragent, thormanager, thoragent and + thorworker, etc. + + + + Target: <name> + + The "name" field of each component, typical usage for + HPCC Systems components referrs to the cluster name. For + example Roxie: -name: roxie which will be + the "Roxie" target (cluster). You can also define multiple + targets with each having a unique name such as "roxie", + "roxie2", "roxie-web" etc. + + + + Pod: <name> + + This is the "Deployment" metadata name from the name of + the array item of a type. For example, "eclwatch-", "mydali-", + "thor-thoragent" using a regular expression is preferred since + Kubernetes will use the metadata name as a prefix and + dynamically generate the pod name such as, + eclwatch-7f4dd4dd44cb-c0w3x. + + + + Job name: + + The job name is typically a regular expression as well, + since the job name is generated dynamically. For example, a + compile job compile-54eB67e567e, could use "compile-" or + "compile-.*" or "^compile-.*$" + + + + All: + + Applies for all HPCC Systems components. The default + placements for pods delivered is [all] + + + + + + Regardless of the order the placements appear in the + configuration, they will be processed in the following order: "all", + "type", "target", and then "pod"/"job". + + + Mixed combinations + + NodeSelector, taints and tolerations, and other values can all + be placed on the same pods: [list] both per zone and per node on + Azure placements: +- pods: ["eclwatch","roxie-workunit","^compile-.*$","mydali"] + placement: + nodeSelector: + name: npone + - - Environment Values + + Node Selection - You can define environment variables in a YAML file. The - environment values are defined under the - global.env portion of the provided HPCC Systems - values.yaml file. These values are specified as a list of name value - pairs as illustrated below. 
+ In a Kubernetes container environment, there are several ways to + schedule your nodes. The recommended approaches all use label selectors + to facilitate the selection. Generally, you may not need to set such + constraints; as the scheduler usually does reasonably acceptable + placement automatically. However, with some deployments you may want + more control over specific pods. - global: - env: - - name: SMTPserver - value: mysmtpserver + Kubernetes uses the following methods to choose where to schedule + pods: - The global.env section of the supplied values.yaml file adds - default environment variables for all components. You can also specify - environment variables for the individual components. Refer to the - schema for setting this value for individual components. + + + nodeSelector field matching against node labels + - To add environment values you can insert them into your - customization configuration YAML file when you deploy your - containerized HPCC Systems. + + Affinity and anti-affinity + - - Environment Variables for Containerized Systems + + Taints and Tolerations + - There are several settings in environment.conf for bare-metal - systems. While many environment.conf settings are not valid for - containers, some can be useful. In a cloud deployment, these - settings are inherited from environment variables. These environment - variables are configurable using the values yaml either globally, or - at the component level. + + nodeName field + - Some of those variables are available for container and cloud - deployments and can be set using the Helm chart. The following - bare-metal environment.conf values have these equivalent values - which can be set for containerized instances. + + Pod topology spread constraints + + - - - - - Environment.conf - Value + + Node Labels - Helm Environment - Variable - + Kubernetes nodes have labels. Kubernetes has a standard set of + labels used for nodes in a cluster. You can also manually attach + labels which is recommended as the value of these labels is + cloud-provider specific and not guaranteed to be reliable. + + Adding labels to nodes allows you to schedule pods to nodes or + groups of nodes. You can then use this functionality to ensure that + specific pods only run on nodes with certain properties. + - - skipPythonCleanup + + The nodeSelector + + The nodeSelector is a field in the Pod specification that allows + you to specify a set of node labels that must be present on the target + node for the Pod to be scheduled there. It is the simplest form of + node selection constraint. It selects nodes based on the labels, but + it has some limitations. It only supports one label key and one label + value. If you wanted to match multiple labels or use more complex + expressions, you need to use node Affinity. + + Add the nodeSelector field to your pod specification and specify + the node labels you want the target node to have. You must have the + node labels defined in the job and pod. Then you need to specify each + node group the node label to use. Kubernetes only schedules the pod + onto nodes that have the labels you specify. + + The following example shows the nodeSelector placed in the pods + list. This example schedules "all" HPCC components to use the node + pool with the label group: "hpcc". + + placements: + - pods: ["all"] + placement: + nodeSelector: + group: "hpcc" + + Note: The label group:hpcc + matches the node pool label:hpcc. 
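        A nodeSelector can also be scoped to a single named cluster by
        using the target: prefix in the pods list. The following sketch
        assumes a node pool that you have labelled workload: "thor" (an
        illustrative label, not one created by the platform) and applies
        only to the Thor cluster named "thor":

        placements:
  - pods: ["target:thor"]
    placement:
      nodeSelector:
        workload: "thor"

        Because the target scope matches only that named cluster, other
        Thor clusters defined in the same values file are not affected.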
+ + This next example shows how to configure a node pool to prevent + scheduling a Dali component onto this node pool labelled with the key + spot: via the value false. As this kind of node is not always + available and could get revoked therefore you would not want to use + the spot node pool for Dali components. This is an example for how to + configure a specific type (Dali) of HPCC Systems component not to use + a particular node pool. + + placements: + - pods: ["type:dali"] + placement: + nodeSelector: + spot: "false" - SKIP_PYTHON_CLEANUP - + When using nodeSelector, multiple nodeSelectors can be applied. + If duplicate keys are defined, only the last one prevails. + - - jvmlibpath + + Taints and Tolerations - JAVA_LIBRARY_PATH - + Taints and Tolerations are types of Kubernetes node constraints + also referred to by node Affinity. Only one affinity can be applied to + a pod. If a pod matches multiple placement 'pods' lists, then only the + last affinity definition will apply. - - jvmoptions + Taints and tolerations work together to ensure that pods are not + scheduled onto inappropriate nodes. Tolerations are applied to pods, + and allow (but do not require) the pods to schedule onto nodes with + matching taints. Taints are the opposite -- they allow a node to repel + a set of pods. One way to deploy using taints, is to set to repel all + but a specific node. Then that pod can be scheduled onto another node + that is tolerate. - JVM_OPTIONS - + For example, Thor workers should all be on the appropriate type + of VM. If a big Thor job comes along – then the taints level repels + any pods that attempt to be scheduled onto a node that does not meet + the requirements. - - classpath + For more information and examples of our Taints, Tolerations, + and Placements please review our developer documentation: - CLASSPATH - - - - + https://github.com/hpcc-systems/HPCC-Platform/blob/master/helm/hpcc/docs/placements.md - The following example sets the environment variable to skip - Python cleanup on the Thor component: + + Taints and Tolerations Examples + + The following examples illustrate how some taints and + tolerations can be applied. + + Kubernetes can schedule a pod on to any node pool without a + taint. In the following examples Kubernetes can only schedule the + two components to the node pools with these exact labels, group and + gpu. + + placements: + - pods: ["all"] + tolerations: + key: "group" + operator: "Equal" + value: "hpcc" + effect: "NoSchedule" + + placements: + - pods: ["type:thor"] + tolerations: + key: "gpu" + operator: "Equal" + value: "true" + effect: "NoSchedule" + + Multiple tolerations can also be used. The following example + has two tolerations, group and gpu. + + #The settings will be applied to all thor pods/jobs and myeclccserver pod and job +- pods: ["thorworker-", "thor-thoragent", "thormanager-","thor-eclagent","hthor-"] + placement: + nodeSelector: + app: tf-gpu + tolerations: + - key: "group" + operator: "Equal" + value: "hpcc" + effect: "NoSchedule" + - key: "gpu" + operator: "Equal" + value: "true" + effect: "NoSchedule" + - thor: - env: - - name: SKIP_PYTHON_CLEANUP - value: true + In this example the nodeSelector is preventing the Kubernetes + scheduler from deploying any/all to this node pool. Without taints + the scheduler could deploy to any pods onto the node pool. By + utilizing the nodeSelector, the taint will force the pod to deploy + only to the pods who match that node label. 
There are two constraints in this
          example: one from the node pool and the other from the pod.
        
      

      
        Topology Spread Constraints

        You can use topology spread constraints to control how pods are
        spread across your cluster among failure-domains such as regions,
        zones, nodes, and other user-defined topology domains. This can help
        to achieve high availability as well as efficient resource
        utilization. You can set cluster-level constraints as a default, or
        configure topology spread constraints for individual workloads.
        Topology spread constraints (topologySpreadConstraints) require
        Kubernetes v1.19 or later.

        For more information see:

        https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
        and

        https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/

        In the following "topologySpreadConstraints" example, there are two
        node pools which are labelled "hpcc=nodepool1" and "hpcc=nodepool2"
        respectively. The Roxie pods will be evenly scheduled on the two node
        pools.

        After deployment you can verify the spread by issuing the following
        command:

        kubectl get pod -o wide | grep roxie

        The placements code:

        - pods: ["type:roxie"]
  placement:
    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: hpcc
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          roxie-cluster: "roxie"
      

      
        Affinity and Anti-Affinity

        Affinity and anti-affinity expand the types of constraints that
        you can define. The affinity and anti-affinity rules are still based
        on labels. In addition to the labels, they provide rules that
        guide the Kubernetes scheduler where to place pods based on specific
        criteria. The affinity/anti-affinity language is more expressive than
        simple labels and gives you more control over the selection
        logic.

        There are two main kinds of affinity, Node Affinity and Pod
        Affinity.

        
          Node Affinity

          Node affinity is similar to the nodeSelector concept in that it
          allows you to constrain which nodes your pod can be scheduled onto
          based on the node labels. It is used to constrain the nodes that
          can receive a pod by matching labels of those nodes. Node affinity
          and anti-affinity can only be used to set positive affinities that
          attract pods to the node.

          There is no schema check for the content of affinity. Only one
          affinity can be applied to a pod or job. If a pod/job matches
          multiple placement pods lists, then only the last affinity
          definition applies.

          For more information, see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

          There are two types of node affinity:

          requiredDuringSchedulingIgnoredDuringExecution:
          The scheduler can't schedule the pod unless this rule is met. This
          functions similarly to the nodeSelector, but with a more expressive
          syntax.

          preferredDuringSchedulingIgnoredDuringExecution:
          The scheduler tries to find a node that meets the rule. If a
          matching node is not available, the scheduler still schedules the
          pod.

          You can specify node affinities using the
          .spec.affinity.nodeAffinity field in your pod
          spec.
        

        
          Pod Affinity

          Pod affinity, or Inter-Pod Affinity, is used to constrain the
          nodes that can receive a pod by matching the labels of the existing
          pods already running on those nodes. Pod affinity and
          anti-affinity can be either an attracting affinity or a repelling
          anti-affinity.

          Inter-Pod Affinity works very similarly to Node Affinity but
          has some important differences. The "hard" and "soft" modes are
          indicated using the same
          requiredDuringSchedulingIgnoredDuringExecution
          and
          preferredDuringSchedulingIgnoredDuringExecution
          fields. However, these should be nested under the
          spec.affinity.podAffinity or
          spec.affinity.podAntiAffinity fields depending
          on whether you want to increase or reduce the Pod's affinity.
        

        
          Affinity Example

          The following code illustrates an example of affinity:

          - pods: ["thorworker-.*"]
  placement:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/e2e-az-name
              operator: In
              values:
              - e2e-az1
              - e2e-az2

          The "affinity" settings can also be combined with the example in
          the schedulerName section that follows.

          Note: The "affinity" value in
          the "schedulerName" field is only supported in Kubernetes 1.20.0
          beta and later versions.
        
      

      
        schedulerName

        The schedulerName field
        specifies the name of the scheduler that is responsible for scheduling
        a pod or a task. In Kubernetes, you can configure multiple schedulers
        with different names and profiles to run simultaneously in the
        cluster.

        Only one "schedulerName" can be applied to any pod/job. 
+ + A schedulerName example: + + - pods: ["target:roxie"] + placement: + schedulerName: "my-scheduler" +#The settings will be applied to all thor pods/jobs and myeclccserver pod and job +- pods: ["target:myeclccserver", "type:thor"] + placement: + nodeSelector: + app: "tf-gpu" + tolerations: + - key: "gpu" + operator: "Equal" + value: "true" + effect: "NoSchedule" + diff --git a/docs/EN_US/ECLLanguageReference/ECLR_mods/BltInFunc-SOAPCALL.xml b/docs/EN_US/ECLLanguageReference/ECLR_mods/BltInFunc-SOAPCALL.xml index 8eaf63b23af..8d3d648cb03 100644 --- a/docs/EN_US/ECLLanguageReference/ECLR_mods/BltInFunc-SOAPCALL.xml +++ b/docs/EN_US/ECLLanguageReference/ECLR_mods/BltInFunc-SOAPCALL.xml @@ -6,23 +6,26 @@ SOAPCALL - result := - SOAPCALL - SOAPCALL Function - ( [ recset, ] url, service, instructure, - [ transform, - ] - DATASET - DATASET - (outstructure) | outstructure [, options ] [, UNORDERED | + result := SOAPCALL( [ recset, + ] url, service, + instructure, [ + transform, ] + DATASET(outstructure) + | outstrucuture [,options [, UNORDERED | ORDERED( bool ) ] [, STABLE | UNSTABLE ] [, PARALLEL [ ( numthreads ) ] ] [, ALGORITHM( - name ) ] ); + name )] [, PERSIST [ ( + option ) ] ] ) ; + + PERSIST + + SOAPCALL Function + + DATASET + SOAPCALL( [ recset, ] url, service, @@ -34,7 +37,9 @@ ORDERED( bool ) ] [, STABLE | UNSTABLE ] [, PARALLEL [ ( numthreads ) ] ] [, ALGORITHM( - name ) ] ); + name )] [, PERSIST [ ( + option ) ] ] ) ; + @@ -202,6 +207,22 @@ UNSTABLE options. + + PERSIST + + Optional. Use persistent connections. + + + + option + + Optional. If omitted, it uses the default number of + connections. If TRUE, it enables persistent connections. If FALSE + or 0, it disables persistent connections. If set to an integer, it + enables persistent connections and sets the number of active + connections. + + Return: @@ -509,6 +530,9 @@ DATASET(outRecord), ONFAIL(genDefault1()))); OUTPUT(SOAPCALL(ds, ip, svc, inRecord, t(LEFT),DATASET(outRecord), ONFAIL(genDefault2(LEFT)))); OUTPUT(SOAPCALL(ds, ip, svc, inRecord, t(LEFT),DATASET(outRecord), ONFAIL(SKIP))); +OUTPUT(SOAPCALL(ds, ip, svc, inRecord, t(LEFT),DATASET(outRecord), ONFAIL(SKIP),PERSIST(12))); + //use 12 persistent connections + //Using HTTPHEADER to pass Authorization info OUTPUT(SOAPCALL(ds, ip, svc, inRecord, t(LEFT),DATASET(outRecord), ONFAIL(SKIP), HTTPHEADER('Authorization','Basic dXNlcm5hbWU6cGFzc3dvcmQ='), diff --git a/docs/EN_US/ECLStandardLibraryReference/SLR-Mods/Copy.xml b/docs/EN_US/ECLStandardLibraryReference/SLR-Mods/Copy.xml index 13fc1afdd2b..bca266d1967 100644 --- a/docs/EN_US/ECLStandardLibraryReference/SLR-Mods/Copy.xml +++ b/docs/EN_US/ECLStandardLibraryReference/SLR-Mods/Copy.xml @@ -28,6 +28,8 @@ [ ,preserveCompression ] [ ,noSplit ] [ ,expireDays ] [ + ,ensure ]); dfuwuid := @@ -55,7 +57,9 @@ [ ,preserveCompression ] [ ,noSplit ] [ ,expireDays ]); + role="bold">] [ + ,ensure]); + @@ -204,6 +208,13 @@ role="bold">expiryDefault setting. + + ensure + + Optional. Copies logical file, but does not copy file parts + if they already exist. Default is FALSE. + + dfuwuid diff --git a/docs/EN_US/HPCCClientTools/CT_Mods/CT_Comm_Line_DFU.xml b/docs/EN_US/HPCCClientTools/CT_Mods/CT_Comm_Line_DFU.xml index 07dc6469d5b..d919f00466b 100644 --- a/docs/EN_US/HPCCClientTools/CT_Mods/CT_Comm_Line_DFU.xml +++ b/docs/EN_US/HPCCClientTools/CT_Mods/CT_Comm_Line_DFU.xml @@ -645,7 +645,7 @@ dfuplus action=spray srcplane=lzstorageplane If omitted, the information must be supplied by the dstxml or dstplane parameter. 
Deprecated, you should use - dstplane instead. + dstplane instead. @@ -868,6 +868,13 @@ dfuplus action=despray srcname=mytest::test:spraytest to preserve the compression of the source file. If omitted, the default is 1. + + + ensure + + Optional. Copies logical file, but does not copy file + parts if they already exist. Default is FALSE. + diff --git a/ecl/hql/hqlgram.y b/ecl/hql/hqlgram.y index 6956abeb212..9e3e190beb7 100644 --- a/ecl/hql/hqlgram.y +++ b/ecl/hql/hqlgram.y @@ -3822,6 +3822,19 @@ soapFlag $3.unwindCommaList(args); $$.setExpr(createExprAttribute(jsonAtom, args), $1); } + | PERSIST + { + $$.setExpr(createAttribute(persistAtom), $1); + } + | PERSIST '(' expression ')' + { + //Allow either bool or an integer as the parameter + if ($3.queryExprType()->isBoolean()) + parser->normalizeExpression($3, type_boolean, true); + else + parser->normalizeExpression($3, type_int, true); + $$.setExpr(createExprAttribute(persistAtom, $3.getExpr()), $1); + } ; onFailAction diff --git a/ecl/hqlcpp/hqlhtcpp.cpp b/ecl/hqlcpp/hqlhtcpp.cpp index 383f0c57005..09c2813ddfc 100644 --- a/ecl/hqlcpp/hqlhtcpp.cpp +++ b/ecl/hqlcpp/hqlhtcpp.cpp @@ -18231,6 +18231,20 @@ ABoundActivity * HqlCppTranslator::doBuildActivitySOAP(BuildCtx & ctx, IHqlExpre OwnedHqlExpr xpathHintsExpr = createConstant(createStringValue(xpathHints.str(), xpathHints.length())); doBuildVarStringFunction(instance->startctx, "getXpathHintsXml", xpathHintsExpr); } + + IHqlExpression * persistArg = queryAttributeChild(expr, persistAtom, 0); + if (persistArg) + { + // PERSIST(false) maps to PERSIST(0). + if (matchesBoolean(persistArg, false)) + persistArg = queryZero(); // Do not generate the function, but set the pool flag so the default base function is called. + // PERSIST(true) maps to PERSIST with no argument, + else if (matchesBoolean(persistArg, true)) + persistArg = nullptr; + else if (!matchesConstantValue(persistArg, 0)) // Avoid generating 0 since that is the default implementation + doBuildUnsignedFunction(instance->createctx, "getPersistPoolSize", persistArg); + } + //virtual unsigned getFlags() { StringBuffer flags; @@ -18280,6 +18294,12 @@ ABoundActivity * HqlCppTranslator::doBuildActivitySOAP(BuildCtx & ctx, IHqlExpre flags.append("|SOAPFformEncoded"); break; } + if (expr->hasAttribute(persistAtom)) + { + flags.append("|SOAPFpersist"); + if (persistArg) + flags.append("|SOAPFpersistPool"); + } if (flags.length()) doBuildUnsignedFunction(instance->classctx, "getFlags", flags.str()+1); } diff --git a/ecl/hthor/hthorkey.cpp b/ecl/hthor/hthorkey.cpp index 16ea4aee6ab..da82a9ef7cf 100644 --- a/ecl/hthor/hthorkey.cpp +++ b/ecl/hthor/hthorkey.cpp @@ -3985,7 +3985,6 @@ class CHThorKeyedJoinActivity : public CHThorThreadedActivityBase, implements I helper.createSegmentMonitors(manager, row); manager->finishSegmentMonitors(); manager->reset(); - manager->resetCounts(); } virtual void doneManager(IKeyManager * manager) diff --git a/ecl/regress/soapcall9.ecl b/ecl/regress/soapcall9.ecl new file mode 100644 index 00000000000..d773fdb7994 --- /dev/null +++ b/ecl/regress/soapcall9.ecl @@ -0,0 +1,40 @@ +/*############################################################################## + + HPCC SYSTEMS software Copyright (C) 2023 HPCC Systems®. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. 
+ You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +############################################################################## */ + +outRecord := + RECORD + string name{xpath('Name')}; + unsigned6 id{xpath('ADL')}; + real8 score; + END; + +string myUserName := '' : stored('myUserName'); +string myPassword := '' : stored('myPassword'); + +outRecord genDefault1() := TRANSFORM + SELF.name := FAILMESSAGE; + SELF.id := FAILCODE; + SELF.score := (real8)FAILMESSAGE('ip'); + END; + + +//Test all the different PERSIST variants. +output(SOAPCALL('ip', 'service', { string20 surname := 'Hawthorn', string20 forename := 'Gavin';}, DATASET(outRecord), PERSIST)); +output(SOAPCALL('ip', 'service', { string20 surname := 'Hawthorn', string20 forename := 'Gavin';}, DATASET(outRecord), PERSIST(true))); +output(SOAPCALL('ip', 'service', { string20 surname := 'Hawthorn', string20 forename := 'Gavin';}, DATASET(outRecord), PERSIST(false))); +output(SOAPCALL('ip', 'service', { string20 surname := 'Hawthorn', string20 forename := 'Gavin';}, DATASET(outRecord), PERSIST(0))); +output(SOAPCALL('ip', 'service', { string20 surname := 'Hawthorn', string20 forename := 'Gavin';}, DATASET(outRecord), PERSIST(99))); diff --git a/esp/services/ws_fs/ws_fsService.cpp b/esp/services/ws_fs/ws_fsService.cpp index b7fe8b6bcbe..691bd2db6a1 100644 --- a/esp/services/ws_fs/ws_fsService.cpp +++ b/esp/services/ws_fs/ws_fsService.cpp @@ -2691,11 +2691,24 @@ bool CFileSprayEx::onCopy(IEspContext &context, IEspCopy &req, IEspCopyResponse StringBuffer destFolder, destTitle, defaultFolder, defaultReplicateFolder; StringBuffer srcNodeGroup, destNodeGroup; bool bRoxie = false; + const char* srcDali = req.getSourceDali(); const char* destNodeGroupReq = req.getDestGroup(); - if(!destNodeGroupReq || !*destNodeGroupReq) + if (isEmptyString(destNodeGroupReq)) { - getNodeGroupFromLFN(context, srcname, destNodeGroup); - DBGLOG("Destination node group not specified, using source node group %s", destNodeGroup.str()); + CDfsLogicalFileName lfn; + lfn.set(srcname); + if (!isEmptyString(srcDali) || lfn.isForeign() || lfn.isRemote()) + { + //makes no sense to get the srcname's current group, if a logical file is from a different environment. + if (!getDefaultStoragePlane(destNodeGroup)) + throw makeStringException(ECLWATCH_INVALID_INPUT, "Destination node group not specified. 
And no default node group found."); + DBGLOG("Destination node group not specified, using default node group %s", destNodeGroup.str()); + } + else + { + getNodeGroupFromLFN(context, srcname, destNodeGroup); + DBGLOG("Destination node group not specified, using source node group %s", destNodeGroup.str()); + } } else { @@ -2721,7 +2734,6 @@ bool CFileSprayEx::onCopy(IEspContext &context, IEspCopy &req, IEspCopyResponse constructFileMask(destTitle.str(), fileMask); Owned udesc=createUserDescriptor(); - const char* srcDali = req.getSourceDali(); const char* srcu = req.getSrcusername(); if (!isEmptyString(srcDali) && !isEmptyString(srcu)) { diff --git a/esp/src/eclwatch/ClusterProcessesQueryWidget.js b/esp/src/eclwatch/ClusterProcessesQueryWidget.js index dff1e020ede..87ac7d5b130 100644 --- a/esp/src/eclwatch/ClusterProcessesQueryWidget.js +++ b/esp/src/eclwatch/ClusterProcessesQueryWidget.js @@ -17,7 +17,9 @@ define([ "hpcc/DelayLoadWidget", "hpcc/PreflightDetailsWidget", "hpcc/MachineInformationWidget", - "hpcc/IFrameWidget" + "hpcc/IFrameWidget", + + "dijit/Dialog", ], function (declare, nlsHPCCMod, topic, registry, tree, selector, diff --git a/esp/src/eclwatch/DFUQueryWidget.js b/esp/src/eclwatch/DFUQueryWidget.js index 2e000b9300d..6f781fe76c0 100644 --- a/esp/src/eclwatch/DFUQueryWidget.js +++ b/esp/src/eclwatch/DFUQueryWidget.js @@ -759,7 +759,8 @@ define([ return ""; } }, - Modified: { label: this.i18n.ModifiedUTCGMT, width: 162 }, + Modified: { label: this.i18n.ModifiedUTCGMT, width: 160 }, + Accessed: { label: this.i18n.LastAccessed, width: 160 }, AtRestCost: { label: nlsHPCC.FileCostAtRest, width: 100, formatter: function (cost, row) { diff --git a/esp/src/eclwatch/DiskUsageWidget.js b/esp/src/eclwatch/DiskUsageWidget.js index 30ffdb68914..a48bc1a1e50 100644 --- a/esp/src/eclwatch/DiskUsageWidget.js +++ b/esp/src/eclwatch/DiskUsageWidget.js @@ -16,6 +16,7 @@ define([ "dijit/layout/BorderContainer", "dijit/layout/TabContainer", "dijit/layout/ContentPane", + "dijit/Dialog", "dijit/Toolbar", "dijit/ToolbarSeparator", "dijit/form/Button", diff --git a/esp/src/eclwatch/TargetClustersQueryWidget.js b/esp/src/eclwatch/TargetClustersQueryWidget.js index 5487b39ad5c..c58d4b175ee 100644 --- a/esp/src/eclwatch/TargetClustersQueryWidget.js +++ b/esp/src/eclwatch/TargetClustersQueryWidget.js @@ -17,7 +17,9 @@ define([ "hpcc/DelayLoadWidget", "src/ESPUtil", "hpcc/MachineInformationWidget", - "hpcc/IFrameWidget" + "hpcc/IFrameWidget", + + "dijit/Dialog", ], function (declare, nlsHPCCMod, topic, registry, tree, selector, diff --git a/esp/src/eclwatch/css/hpcc.css b/esp/src/eclwatch/css/hpcc.css index e11e8c1978c..c13d45e88a2 100644 --- a/esp/src/eclwatch/css/hpcc.css +++ b/esp/src/eclwatch/css/hpcc.css @@ -1322,7 +1322,7 @@ table.miniSelect span { left: 0; right: 0; height: auto; - border-color: rgb(158, 158, 158); + border-style: none; } .flat .dgrid-page-size { @@ -1928,22 +1928,28 @@ span.dijitReset.dijitInline.dijitIcon.fa.disabled { border-top: 4px solid white; } -.flat .phosphor_WidgetAdapter { - background: inherit; +.flat .phosphor_WidgetAdapter.p-DockPanel-widget { + border-top: none; + box-shadow: 0 -1px var(--colorNeutralStroke1), var(--shadow2); } -.flat .p-TabBar-tab { - background: rgb(229, 229, 229); +.flat .phosphor_WidgetAdapter { + background-color: var(--colorNeutralBackground1); + border-color: var(--colorNeutralStroke1) } -.flat-dark .p-TabBar-tab { - background: rgb(48, 48, 48); +.flat .p-TabBar-tab { + background-color: var(--colorNeutralBackground3); + border-bottom-style: 
solid; + border-color: var(--colorNeutralStroke1); + border-width: 1px; } -.flat-dark .p-TabBar-tab:hover:not(.p-mod-current) { - background: rgb(72, 72, 72); +.flat .p-TabBar-tab:hover:not(.p-mod-current) { + background-color: var(--colorNeutralBackground3Hover); } .flat .p-TabBar-tab.p-mod-current { - background: inherit; + background-color: var(--colorNeutralBackground1); + border-bottom-color: var(--colorNeutralStroke1); } \ No newline at end of file diff --git a/esp/src/eclwatch/stub.js b/esp/src/eclwatch/stub.js index f56fe35a010..e5bca2b0447 100644 --- a/esp/src/eclwatch/stub.js +++ b/esp/src/eclwatch/stub.js @@ -10,8 +10,6 @@ define([ "src/Utility", "src/Session", - "src/KeyValStore", - "src/BuildInfo", "hpcc/LockDialogWidget", "dojox/html/entities", @@ -22,7 +20,7 @@ define([ "css!hpcc/css/hpcc.css" ], function (fx, dom, domStyle, ioQuery, ready, lang, arrayUtil, topic, - Utility, Session, KeyValStore, BuildInfo, LockDialogWidget, + Utility, Session, LockDialogWidget, entities, Toaster) { Session.initSession(); @@ -30,38 +28,15 @@ define([ const params = ioQuery.queryToObject(dojo.doc.location.search.substr((dojo.doc.location.search.substr(0, 1) === "?" ? 1 : 0))); const hpccWidget = params.Widget ? params.Widget : "HPCCPlatformWidget"; - Session.fetchModernMode().then(modernMode => { - if (modernMode === String(true) && hpccWidget !== "IFrameWidget") { - switch (hpccWidget) { - case "WUDetailsWidget": - window.location.replace(`/esp/files/index.html#/workunits/${params.Wuid}`); - break; - case "GraphsWUWidget": - window.location.replace(`/esp/files/index.html#/workunits/${params.Wuid}/metrics`); - break; - case "TopologyWidget": - case "DiskUsageWidget": - case "TargetClustersQueryWidget": - case "ClusterProcessesQueryWidget": - case "SystemServersQueryWidget": - case "LogWidget": - loadUI(); - break; - default: - window.location.replace("/esp/files/index.html"); - } - } else { - loadUI(); + Session.needsRedirectV5().then(redirected => { + if (!redirected) { + ready(function () { + parseUrl(); + initUI(); + }); } }); - function loadUI() { - ready(function () { - parseUrl(); - initUI(); - }); - } - function startLoading(targetNode) { domStyle.set(dom.byId("loadingOverlay"), "display", "block"); domStyle.set(dom.byId("loadingOverlay"), "opacity", "255"); diff --git a/esp/src/src-react/components/DiskUsage.tsx b/esp/src/src-react/components/DiskUsage.tsx index cee84daa9e8..f98e4cc90e7 100644 --- a/esp/src/src-react/components/DiskUsage.tsx +++ b/esp/src/src-react/components/DiskUsage.tsx @@ -1,8 +1,18 @@ import * as React from "react"; -import { ComponentDetails as ComponentDetailsWidget, Details as DetailsWidget, Summary as SummaryWidget } from "src/DiskUsage"; +import { Link } from "@fluentui/react"; +import { MachineService } from "@hpcc-js/comms"; +import { scopedLogger } from "@hpcc-js/util"; +import { ComponentDetails as ComponentDetailsWidget, Summary as SummaryWidget } from "src/DiskUsage"; +import nlsHPCC from "src/nlsHPCC"; +import * as Utility from "src/Utility"; import { AutosizeHpccJSComponent } from "../layouts/HpccJSAdapter"; -import { pushUrl } from "../util/history"; import { ReflexContainer, ReflexElement, ReflexSplitter, classNames, styles } from "../layouts/react-reflex"; +import { pushUrl } from "../util/history"; +import { FluentGrid, useFluentStoreState } from "./controls/Grid"; + +const logger = scopedLogger("src-react/components/DiskUsage.tsx"); + +const machineService = new MachineService({ baseUrl: "" }); interface SummaryProps { cluster?: string; @@ 
-31,17 +41,82 @@ interface DetailsProps { export const Details: React.FunctionComponent = ({ cluster }) => { - const summary = React.useMemo(() => { - const retVal = new DetailsWidget(cluster) - .refresh() - .on("componentClick", component => { - pushUrl(`/machines/${component}/usage`); + + const { refreshTable } = useFluentStoreState({}); + + // Grid --- + const columns = React.useMemo(() => { + return { + PercentUsed: { + label: nlsHPCC.PercentUsed, width: 50, formatter: (percent) => { + let className = ""; + + if (percent <= 70) { className = "bgFilled bgGreen"; } + else if (percent > 70 && percent < 80) { className = "bgFilled bgOrange"; } + else { className = "bgFilled bgRed"; } + + return {percent}; + } + }, + Component: { label: nlsHPCC.Component, width: 90 }, + Type: { label: nlsHPCC.Type, width: 40 }, + IPAddress: { + label: nlsHPCC.IPAddress, width: 140, + formatter: (ip) => {ip} + }, + Path: { label: nlsHPCC.Path, width: 220 }, + InUse: { label: nlsHPCC.InUse, width: 50 }, + Total: { label: nlsHPCC.Total, width: 50 }, + }; + }, []); + + type Columns = typeof columns; + type Row = { __hpcc_id: string } & { [K in keyof Columns]: string | number }; + const [data, setData] = React.useState([]); + + const refreshData = React.useCallback(() => { + machineService.GetTargetClusterUsageEx([cluster]) + .then(response => { + const _data: Row[] = []; + if (response) { + response.forEach(component => { + component.ComponentUsages.forEach(cu => { + cu.MachineUsages.forEach(mu => { + mu.DiskUsages.forEach((du, i) => { + _data.push({ + __hpcc_id: `__usage_${i}`, + PercentUsed: Math.round((du.InUse / du.Total) * 100), + Component: cu.Name, + IPAddress: mu.Name, + Type: du.Name, + Path: du.Path, + InUse: Utility.convertedSize(du.InUse), + Total: Utility.convertedSize(du.Total) + }); + }); + }); + }); + }); + } + setData(_data); }) + .catch(err => logger.error(err)) ; - return retVal; }, [cluster]); - return ; + React.useEffect(() => { + refreshData(); + }, [refreshData]); + + return null} + setTotal={() => null} + refresh={refreshTable} + >; }; interface MachineUsageProps { diff --git a/esp/src/src-react/components/Files.tsx b/esp/src/src-react/components/Files.tsx index 177d0910b05..6d8de3172cc 100644 --- a/esp/src/src-react/components/Files.tsx +++ b/esp/src/src-react/components/Files.tsx @@ -200,6 +200,7 @@ export const Files: React.FunctionComponent = ({ label: nlsHPCC.MaxSkew, width: 60, formatter: (value, row) => value ? 
`${Utility.formatDecimal(value / 100)}%` : "" }, Modified: { label: nlsHPCC.ModifiedUTCGMT }, + Accessed: { label: nlsHPCC.LastAccessed }, AtRestCost: { label: nlsHPCC.FileCostAtRest, formatter: (cost, row) => { diff --git a/esp/src/src-react/components/controls/Grid.tsx b/esp/src/src-react/components/controls/Grid.tsx index 73b979e7b3c..53e222ec745 100644 --- a/esp/src/src-react/components/controls/Grid.tsx +++ b/esp/src/src-react/components/controls/Grid.tsx @@ -101,7 +101,13 @@ const gridStyles = (height: string): Partial => { height, minHeight: height, maxHeight: height, - selectors: { ".ms-DetailsHeader-cellName": { fontSize: "13.5px" } } + selectors: { + ".ms-DetailsHeader-cellName": { fontSize: "13.5px" }, + ".ms-DetailsRow-cell:has(.bgFilled)": { color: "white", boxShadow: "inset 1px 0 var(--colorNeutralBackground1), inset -1px 1px var(--colorNeutralBackground1)" }, + ".ms-DetailsRow-cell:has(.bgGreen)": { background: "green" }, + ".ms-DetailsRow-cell:has(.bgOrange)": { background: "orange" }, + ".ms-DetailsRow-cell:has(.bgRed)": { background: "red" } + } }, headerWrapper: { position: "sticky", diff --git a/esp/src/src-react/components/forms/landing-zone/DelimitedImportForm.tsx b/esp/src/src-react/components/forms/landing-zone/DelimitedImportForm.tsx index 3017c45c7d7..13ffd9b8ccf 100644 --- a/esp/src/src-react/components/forms/landing-zone/DelimitedImportForm.tsx +++ b/esp/src/src-react/components/forms/landing-zone/DelimitedImportForm.tsx @@ -385,7 +385,7 @@ export const DelimitedImportForm: React.FunctionComponent} /> diff --git a/esp/src/src-react/index.tsx b/esp/src/src-react/index.tsx index d9ffe2cb05b..b1ecffcc918 100644 --- a/esp/src/src-react/index.tsx +++ b/esp/src/src-react/index.tsx @@ -3,7 +3,7 @@ import * as ReactDOM from "react-dom"; import { initializeIcons } from "@fluentui/react"; import { scopedLogger } from "@hpcc-js/util"; import { cookieKeyValStore } from "src/KeyValStore"; -import { fetchModernMode } from "src/Session"; +import { needsRedirectV9 } from "src/Session"; import { ECLWatchLogger } from "./hooks/logging"; import { replaceUrl } from "./util/history"; @@ -32,41 +32,43 @@ dojoConfig.urlInfo = { }; dojoConfig.disableLegacyHashing = true; -fetchModernMode().then(async modernMode => { - if (modernMode === String(false)) { - window.location.replace("/esp/files/stub.htm"); +needsRedirectV9().then(async redirected => { + if (!redirected) { + loadUI(); + } +}); + +async function loadUI() { + const authTypeResp = await fetch("/esp/getauthtype"); + const authType = await authTypeResp?.text() ?? "None"; + const userStore = cookieKeyValStore(); + const userSession = await userStore.getAll(); + if (authType.indexOf("None") < 0 && (userSession["ESPSessionState"] === "false" || userSession["ECLWatchUser"] === "false" || (!userSession["Status"] || userSession["Status"] === "Locked"))) { + if (window.location.hash.indexOf("login") < 0) { + replaceUrl("/login"); + } + import("./components/forms/Login").then(_ => { + try { + ReactDOM.render( + <_.Login />, + document.getElementById("placeholder") + ); + document.getElementById("loadingOverlay").remove(); + } catch (e) { + logger.error(e); + } + }); } else { - const authTypeResp = await fetch("/esp/getauthtype"); - const authType = await authTypeResp?.text() ?? 
"None"; - const userStore = cookieKeyValStore(); - const userSession = await userStore.getAll(); - if (authType.indexOf("None") < 0 && (userSession["ESPSessionState"] === "false" || userSession["ECLWatchUser"] === "false" || (!userSession["Status"] || userSession["Status"] === "Locked"))) { - if (window.location.hash.indexOf("login") < 0) { - replaceUrl("/login"); + import("./components/Frame").then(_ => { + try { + ReactDOM.render( + <_.Frame />, + document.getElementById("placeholder") + ); + document.getElementById("loadingOverlay").remove(); + } catch (e) { + logger.error(e); } - import("./components/forms/Login").then(_ => { - try { - ReactDOM.render( - <_.Login />, - document.getElementById("placeholder") - ); - document.getElementById("loadingOverlay").remove(); - } catch (e) { - logger.error(e); - } - }); - } else { - import("./components/Frame").then(_ => { - try { - ReactDOM.render( - <_.Frame />, - document.getElementById("placeholder") - ); - document.getElementById("loadingOverlay").remove(); - } catch (e) { - logger.error(e); - } - }); - } + }); } -}); +} diff --git a/esp/src/src-react/routes.tsx b/esp/src/src-react/routes.tsx index f8c86624c4f..5344fdfcb2b 100644 --- a/esp/src/src-react/routes.tsx +++ b/esp/src/src-react/routes.tsx @@ -437,9 +437,12 @@ export const routes: RoutesEx = [ }) }, { - path: "/:Machine/usage", action: (ctx, params) => import("./components/DiskUsage").then(_ => { - return <_.MachineUsage machine={params.Machine as string} />; - }) + path: "/machines", + children: [{ + path: "/:Machine/usage", action: (ctx, params) => import("./components/DiskUsage").then(_ => { + return <_.MachineUsage machine={params.Machine as string} />; + }) + }] }, { diff --git a/esp/src/src/Session.ts b/esp/src/src/Session.ts index 26e48278a93..94a30108e11 100644 --- a/esp/src/src/Session.ts +++ b/esp/src/src/Session.ts @@ -5,6 +5,7 @@ import { format as d3Format } from "@hpcc-js/common"; import { SMCService } from "@hpcc-js/comms"; import { cookieKeyValStore, sessionKeyValStore, userKeyValStore } from "src/KeyValStore"; import { singletonDebounce } from "../src-react/util/throttle"; +import { parseSearch } from "../src-react/util/history"; import { ModernMode } from "./BuildInfo"; import * as ESPUtil from "./ESPUtil"; import { scopedLogger } from "@hpcc-js/util"; @@ -34,6 +35,62 @@ export async function fetchModernMode(): Promise { }); } +const isV5DirectURL = () => !!parseSearch(window.location.search)?.["Widget"]; +const isV9DirectURL = () => window.location.hash && window.location.hash.indexOf("#/stub/") !== 0; + +export async function needsRedirectV5(): Promise { + if (isV9DirectURL()) { + window.location.replace(`/esp/files/index.html${window.location.hash}`); + return true; + } + if (isV5DirectURL()) { + return false; + } + + const v9Mode = await fetchModernMode() === String(true); + if (v9Mode) { + const params = parseSearch(window.location.search); + if (params?.["hpccWidget"] !== "IFrameWidget") { + switch (params?.["hpccWidget"]) { + case "WUDetailsWidget": + window.location.replace(`/esp/files/index.html#/workunits/${params.Wuid}`); + break; + case "GraphsWUWidget": + window.location.replace(`/esp/files/index.html#/workunits/${params.Wuid}/metrics`); + break; + case "TopologyWidget": + case "DiskUsageWidget": + case "TargetClustersQueryWidget": + case "ClusterProcessesQueryWidget": + case "SystemServersQueryWidget": + case "LogWidget": + return false; + default: + window.location.replace("/esp/files/index.html"); + } + return true; + } + } + return false; +} + 
+export async function needsRedirectV9(): Promise { + if (isV5DirectURL()) { + window.location.replace(`/esp/files/stub.htm${window.location.search}`); + return true; + } + if (isV9DirectURL()) { + return false; + } + + const v5Mode = await fetchModernMode() === String(false); + if (v5Mode) { + window.location.replace(`/esp/files/stub.htm${window.location.search}`); + return true; + } + return false; +} + const smc = new SMCService({ baseUrl: "" }); export type BuildInfo = { [key: string]: string }; diff --git a/esp/src/src/nls/hpcc.ts b/esp/src/src/nls/hpcc.ts index e026a6d7e7f..13bcce618df 100644 --- a/esp/src/src/nls/hpcc.ts +++ b/esp/src/src/nls/hpcc.ts @@ -441,6 +441,7 @@ export = { Largest: "Largest", LargestFile: "Largest File", LargestSize: "Largest Size", + LastAccessed: "Last Accessed", LastEdit: "Last Edit", LastEditedBy: "Last Edited By", LastEditTime: "Last Edit Time", diff --git a/helm/examples/tracing/README.md b/helm/examples/tracing/README.md index 5caba096c10..bf5cbcd1003 100644 --- a/helm/examples/tracing/README.md +++ b/helm/examples/tracing/README.md @@ -11,21 +11,26 @@ All configuration options detailed here are part of the HPCC Systems Helm chart, - disabled - (default: false) disables tracking and reporting of internal traces and spans - alwaysCreateGlobalIds - If true, assign newly created global ID to any requests that do not supply one. - optAlwaysCreateTraceIds - If true components generate trace/span ids if none are provided by the remote caller. -- logSpanStart - If true, generate a log entry whenever a span is started (default: false) -- logSpanFinish - If true, generate a log entry whenever a span is finished (default: true) - exporter - Defines The type of exporter in charge of forwarding span data to target back-end - - type - (defalt: NONE) "OTLP-HTTP" | "OTLP-GRCP" | "OS" | "NONE" + - type - (default: JLOG) "OTLP-HTTP" | "OTLP-GRCP" | "OS" | "JLOG" | "NONE" + - JLOG + - logSpanDetails - Log span details such as description, status, kind + - logParentInfo - Log the span's parent info such as ParentSpanId, and TraceState + - logAttributes - Log the span's attributes + - logEvents - Log the span's events + - logLinks - Log the span's links + - logResources - Log the span's resources such as telemetry.sdk version, name, language - OTLP-HTTP - - endpoint - (default localhost:4318) Specifies the target OTLP-HTTP backend - - timeOutSecs - (default 10secs) - - consoleDebug - (default false) + - endpoint - (default localhost:4318) Specifies the target OTLP-HTTP backend + - timeOutSecs - (default 10secs) + - consoleDebug - (default false) - OTLP-GRCP - - endpoint: (default localhost:4317) The endpoint to export to. By default the OpenTelemetry Collector's default endpoint. - - useSslCredentials - By default when false, uses grpc::InsecureChannelCredentials; If true uses sslCredentialsCACertPath - - sslCredentialsCACertPath - Path to .pem file to be used for SSL encryption. - - timeOutSeconds - (default 10secs) Timeout for grpc deadline + - endpoint: (default localhost:4317) The endpoint to export to. By default the OpenTelemetry Collector's default endpoint. + - useSslCredentials - By default when false, uses grpc::InsecureChannelCredentials; If true uses sslCredentialsCACertPath + - sslCredentialsCACertPath - Path to .pem file to be used for SSL encryption. + - timeOutSeconds - (default 10secs) Timeout for grpc deadline - processor - Controls span processing style. One by one as available, or in batches. 
- - type - (default: simple) "simple" | "batch" + - type - (default: simple) "simple" | "batch" ### Sample configuration Below is a sample helm values block directing the HPCC tracing framework to process span information serially, and export the data over OTLP/HTTP protocol to localhost:4318 and output export debug information to console: @@ -49,51 +54,65 @@ helm install myTracedHPCC hpcc/hpcc -f otlp-http-collector-default.yaml ## Tracing information HPCC tracing information includes data needed to trace requests as they traverse over distributed components, and detailed information pertaining to important request subtasks in the form of span information. Each trace and all its related spans are assigned unique IDs which follow the Open Telemetry standard. -The start and end of spans are reported to HPCC component logs regardless of any exporter related configuration. +Tracing information can be exported to various Open Telemetry compatible endpoints including HPCC component logs, or OTLP collectors, etc. By default, tracing information is configured to be exported to HPCC component logs. Sample span reported as log event: ```console -000000A3 MON EVT 2023-10-10 22:12:23.827 24212 25115 Span start: {"Type":"Server","Name":"propagatedServerSpan","GlobalID":"IncomingUGID","CallerID":"IncomingCID","LocalID":"JDbF4xnv7LSWDV4Eug1SpJ","TraceID":"beca49ca8f3138a2842e5cf21402bfff","SpanID":"4b960b3e4647da3f"} - -000000FF MON EVT 2023-10-10 22:12:24.927 24212 25115 Span end: {"Type":"Server","Name":"propagatedServerSpan","GlobalID":"IncomingUGID","CallerID":"IncomingCID","LocalID":"JDbF4xnv7LSWDV4Eug1SpJ","TraceID":"beca49ca8f3138a2842e5cf21402bfff","SpanID":"4b960b3e4647da3f"} +00000165 MON EVT 2023-12-01 17:19:07.270 8 688 UNK "{ "name": "HTTPRequest", "trace_id": "891070fc4a9ef5a3751c19c555d7d4a8", "span_id": "23a47b5bb486ce58", "start": 1701451147269962337, "duration": 652093, "Attributes": {"http.request.method": "GET","hpcc.localid": "JJmSnTeFWTQL8ft9DcbYDK","hpcc.globalid": "JJmSnTedcRZ99RtnwWGwPN" } }"" ``` -Each log statement denotes the time of the tracing event (start or stop), the span type, name, trace and span id, and any HPCC specific attribute such as legacy GlobalID (if any), HPCC CallerID (if any), LocalID (if any). +Each log statement includes a timestamp denoting the span start time, and a duration along with the span name, trace and span id, and any HPCC specific attribute such as legacy GlobalID (if any), HPCC CallerID (if any), LocalID (if any). +The span info logged can be expanded to include span resources, events, and other details (see configuration details). Spans exported via exporters will contain more detailed information such as explicit start time, duration, and any other attribute assigned to the span by the component instrumentation. 
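+As an illustration, a values override along the following lines (a sketch: the endpoint is an assumption and must match your own collector; the option names mirror the list above) would direct spans to an OTLP/GRCP trace collector instead of the component logs:
+
+```yaml
+global:
+  tracing:
+    exporter:
+      type: OTLP-GRCP
+      endpoint: "localhost:4317"
+      useSslCredentials: false
+```
+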
Sample exported span data: ```json { - "Name":"propagatedServerSpan", - "TraceId":"beca49ca8f3138a2842e5cf21402bfff", - "SpanId":"6225221529c24252", - "kind":"Server", - "ParentSpanId":"4b960b3e4647da3f", - "Start":1696983526105561763, - "Duration":1056403, - "Description":"", - "Status":"Unset", - "TraceState":"hpcc=4b960b3e4647da3f", - "Attributes":{ - "hpcc.callerid":"IncomingCID", - "hpcc.globalid":"IncomingUGID" - }, - "Events":{ + "name": "HTTPRequest", + "trace_id": "53f47047517e9dd9f5ad8c318b4b4fe0", + "span_id": "b9489283b66c1073", + "start": 1701456073994968800, + "duration": 1002426, + "attributes": { + "http.request.method": "GET", + "hpcc.localid": "JJmvRRBJ1QYU8o4xe1sgxJ", + "hpcc.globalid": "JJmvRRBjnJGY6vgkjkAjJc" + }, + "events": [ + { + "name": "Acquiring lock", + "time_stamp": 1701456073995175400, + "attributes": { + "lock_name": "resourcelock" + } }, - "Links":{ + { + "name": "Got lock, doing work...", + "time_stamp": 1701456073995269400, + "attributes": { + "lock_name": "resourcelock" + } }, - "Resources":{ - "service.name":"unknown_service", - "telemetry.sdk.version":"1.9.1", - "telemetry.sdk.name":"opentelemetry", - "telemetry.sdk.language":"cpp" - }, - "InstrumentedLibrary":"esp" - + { + "name": "Release lock", + "time_stamp": 1701456073996269400, + "attributes": { + "lock_name": "resourcelock" + } + } + ], + "resources": { + "service.name": "unknown_service", + "telemetry.sdk.version": "1.9.1", + "telemetry.sdk.name": "opentelemetry", + "telemetry.sdk.language": "cpp" + } } ``` ## Directory Contents - 'otlp-http-collector-default.yaml' - Sample tracing configuration targeting OTLP/HTTP trace collector +- 'otlp-grcp-collector-default.yaml' - Sample tracing configuration targeting OTLP/GRCP trace collector +- 'jlog-collector-fulloutput.yaml' - Sample tracing configuration targeting HPCC component logs diff --git a/helm/examples/tracing/jlog-collector-fulloutput.yaml b/helm/examples/tracing/jlog-collector-fulloutput.yaml new file mode 100644 index 00000000000..c3ce5704e49 --- /dev/null +++ b/helm/examples/tracing/jlog-collector-fulloutput.yaml @@ -0,0 +1,10 @@ +global: + tracing: + exporter: + type: JLog + logSpanDetails: true + logParentInfo: true + logAttributes: true + logEvents: true + logLinks: true + logResources: true \ No newline at end of file diff --git a/helm/hpcc/values.schema.json b/helm/hpcc/values.schema.json index 4bd72fa1754..0b07509ec32 100644 --- a/helm/hpcc/values.schema.json +++ b/helm/hpcc/values.schema.json @@ -1121,7 +1121,7 @@ "properties": { "type": { "type": "string", - "enum": ["OTLP-HTTP", "OTLP-GRCP", "OS", "NONE"], + "enum": ["OTLP-HTTP", "OTLP-GRCP", "OS", "JLOG", "NONE"], "description": "The type of exporter in charge of forwarding span data to target back-end" } } @@ -1135,16 +1135,6 @@ "description": "Defines the manner in which trace data is processed - in batches, or simple as available" } } - }, - "logSpanStart": { - "type": "boolean", - "description": "If true, generate a log entry whenever a span is started", - "default": false - }, - "logSpanFinish": { - "type": "boolean", - "description": "If true, generate a log entry whenever a span is finished", - "default": true } }, "additionalProperties": { "type": ["integer", "string", "boolean"] } diff --git a/initfiles/bin/check_executes b/initfiles/bin/check_executes index b4980e8339b..e6ffd1c318a 100755 --- a/initfiles/bin/check_executes +++ b/initfiles/bin/check_executes @@ -109,6 +109,9 @@ if [ $PMD_ALWAYS = true ] || [ $retVal -ne 0 ]; then fi dmesg -xT > 
$POST_MORTEM_DIR/dmesg.log if [[ -n "${PMD_DALISERVER}" ]] && [[ -n "${PMD_WORKUNIT}" ]]; then + if [[ -s wuid ]]; then # takes precedence over command line option + PMD_WORKUNIT=$(cat wuid) + fi wutool postmortem ${PMD_WORKUNIT} DALISERVER=${PMD_DALISERVER} PMD=${POST_MORTEM_DIR} echo Updated workunit ${PMD_WORKUNIT} fi diff --git a/roxie/topo/toposerver.cpp b/roxie/topo/toposerver.cpp index 79a6c425d0a..01bd6975198 100644 --- a/roxie/topo/toposerver.cpp +++ b/roxie/topo/toposerver.cpp @@ -78,9 +78,9 @@ unsigned lastTimeoutCheck = 0; unsigned lastTopologyReport = 0; unsigned timeoutCheckInterval = 1000; // How often we check to see what has expired -unsigned heartbeatInterval = 5000; // How often nodes send heartbeats +unsigned heartbeatInterval = 10000; // How often nodes send heartbeats unsigned timeoutHeartbeatServer = 60000; // How long before a server is marked as down -unsigned timeoutHeartbeatAgent = 10000; // How long before an agent is marked as down +unsigned timeoutHeartbeatAgent = 20000; // How long before an agent is marked as down unsigned removeHeartbeatInterval = 120000; // How long before a node is removed from list unsigned topologyReportInterval = 60000; // How often topology is reported to logging (if traceLevel >= 2) bool aborted = false; diff --git a/roxie/udplib/udptopo.cpp b/roxie/udplib/udptopo.cpp index bccd20f4f7a..6f8c775f4e3 100644 --- a/roxie/udplib/udptopo.cpp +++ b/roxie/udplib/udptopo.cpp @@ -596,7 +596,7 @@ extern UDPLIB_API void createStaticTopology(const std::vector static std::thread topoThread; static Semaphore abortTopo; -unsigned heartbeatInterval = 5000; // How often roxie servers update topo server +unsigned heartbeatInterval = 10000; // How often roxie servers update topo server extern UDPLIB_API void initializeTopology(const StringArray &topoValues, const std::vector &myRoles) { diff --git a/roxie/udplib/udptrs.cpp b/roxie/udplib/udptrs.cpp index e9aad21ce78..a37aef7963e 100644 --- a/roxie/udplib/udptrs.cpp +++ b/roxie/udplib/udptrs.cpp @@ -275,11 +275,20 @@ class UdpReceiverEntry : public IUdpReceiverEntry activeFlowSequence = seq; return seq; } + bool hasDataToSend() const { return (packetsQueued.load(std::memory_order_relaxed) || (resendList && resendList->numActive())); } + void setRequestExpiryTime(unsigned newExpiryTime) + { + //requestExpiryTime 0 should only be used if there is no data to send. Ensure it is non zero otherwise. 
+ if (newExpiryTime == 0) + newExpiryTime == 1; + requestExpiryTime.store(newExpiryTime); + } + void sendStart(unsigned packets) { UdpRequestToSendMsg msg; @@ -325,7 +334,7 @@ class UdpReceiverEntry : public IUdpReceiverEntry msg.cmd = flowType::request_to_send; msg.packets = 0; msg.flowSeq = nextFlowSequence(); - requestExpiryTime = msTick() + udpFlowAckTimeout; + setRequestExpiryTime(msTick() + udpFlowAckTimeout); block.leave(); sendRequest(msg, false); } @@ -338,7 +347,7 @@ class UdpReceiverEntry : public IUdpReceiverEntry //The flow event is sent on the data socket, so it needs to wait for all the data to be sent before being received //therefore use the updDataSendTimeout instead of udpFlowAckTimeout - requestExpiryTime = msTick() + updDataSendTimeout; + setRequestExpiryTime(msTick() + updDataSendTimeout); block.leave(); sendRequest(msg, true); } @@ -366,7 +375,7 @@ class UdpReceiverEntry : public IUdpReceiverEntry msg.sendSeq = nextSendSequence; msg.flowSeq = nextFlowSequence(); msg.sourceNode = sourceIP; - requestExpiryTime = msTick() + udpFlowAckTimeout; + setRequestExpiryTime(msTick() + udpFlowAckTimeout); block.leave(); sendRequest(msg, false); } @@ -393,6 +402,11 @@ class UdpReceiverEntry : public IUdpReceiverEntry CLeavableCriticalBlock block(activeCrit); if (maxRequestDeadTimeouts && (timeouts >= maxRequestDeadTimeouts)) { + int timeExpired = msTick()-requestExpiryTime; + StringBuffer s; + EXCLOG(MCoperatorError,"ERROR: UdpSender: too many timeouts - aborting sends. Timed out %i times (flow=%u, max=%i, timeout=%u, expiryTime=%u[%u] ack(%u)) waiting ok_to_send for %u packets from node=%s", + timeouts.load(), activeFlowSequence.load(), maxRequestDeadTimeouts, udpFlowAckTimeout, requestExpiryTime.load(), timeExpired, (int)hadAcknowledgement, packetsQueued.load(), ip.getIpText(s).str()); + abort(); return; } @@ -404,7 +418,7 @@ class UdpReceiverEntry : public IUdpReceiverEntry msg.sendSeq = nextSendSequence; msg.flowSeq = activeFlowSequence; msg.sourceNode = sourceIP; - requestExpiryTime = msTick() + udpFlowAckTimeout; + setRequestExpiryTime(msTick() + udpFlowAckTimeout); block.leave(); sendRequest(msg, false); } @@ -429,7 +443,7 @@ class UdpReceiverEntry : public IUdpReceiverEntry hadAcknowledgement = true; CriticalBlock b(activeCrit); if (requestExpiryTime) - requestExpiryTime = msTick() + udpRequestTimeout; // set a timeout in case an ok_to_send message goes missing + setRequestExpiryTime(msTick() + udpRequestTimeout); // set a timeout in case an ok_to_send message goes missing } #ifdef TEST_DROPPED_PACKETS @@ -652,8 +666,11 @@ class UdpReceiverEntry : public IUdpReceiverEntry DBGLOG("UdpSender: abort sending queued data to node=%s", ip.getHostText(s).str()); } timeouts = 0; - requestExpiryTime = 0; removeData(nullptr, nullptr); + + CriticalBlock block(activeCrit); + if (packetsQueued == 0) + requestExpiryTime = 0; } inline void pushData(unsigned queue, DataBuffer *buffer) diff --git a/rtl/eclrtl/eclhelper_base.cpp b/rtl/eclrtl/eclhelper_base.cpp index e84728ce58a..d9007b54a28 100644 --- a/rtl/eclrtl/eclhelper_base.cpp +++ b/rtl/eclrtl/eclhelper_base.cpp @@ -619,6 +619,7 @@ void CThorSoapActionArg::getLogTailText(size32_t & lenText, char * & text, const const char * CThorSoapActionArg::getXpathHintsXml() { return nullptr;} const char * CThorSoapActionArg::getRequestHeader() { return nullptr; } const char * CThorSoapActionArg::getRequestFooter() { return nullptr; } +unsigned CThorSoapActionArg::getPersistPoolSize() { return 0; } //CThorSoapCallArg @@ -644,6 +645,7 @@ const char 
* CThorSoapCallArg::getInputIteratorPath() { return NULL; } const char * CThorSoapCallArg::getXpathHintsXml() { return nullptr; } const char * CThorSoapCallArg::getRequestHeader() { return nullptr; } const char * CThorSoapCallArg::getRequestFooter() { return nullptr; } +unsigned CThorSoapCallArg::getPersistPoolSize() { return 0; } size32_t CThorSoapCallArg::onFailTransform(ARowBuilder & rowBuilder, const void * left, IException * e) { return 0; } void CThorSoapCallArg::getLogText(size32_t & lenText, char * & text, const void * left) { lenText =0; text = NULL; } diff --git a/rtl/include/eclhelper.hpp b/rtl/include/eclhelper.hpp index a7c2019a42a..621d55d0a92 100644 --- a/rtl/include/eclhelper.hpp +++ b/rtl/include/eclhelper.hpp @@ -2232,7 +2232,9 @@ enum SOAPFjson = 0x008000, SOAPFxml = 0x010000, SOAPFlogusertail = 0x020000, - SOAPFformEncoded = 0x040000 + SOAPFformEncoded = 0x040000, + SOAPFpersist = 0x080000, + SOAPFpersistPool = 0x100000, }; struct IHThorWebServiceCallActionArg : public IHThorArg @@ -2267,6 +2269,7 @@ struct IHThorWebServiceCallActionArg : public IHThorArg virtual const char * getRequestHeader() = 0; virtual const char * getRequestFooter() = 0; virtual void getLogTailText(size32_t & lenText, char * & text, const void * left) = 0; // iff SOAPFlogusertail set + virtual unsigned getPersistPoolSize() = 0; // only available iff SOAPFpersistPool }; typedef IHThorWebServiceCallActionArg IHThorSoapActionArg ; typedef IHThorWebServiceCallActionArg IHThorHttpActionArg ; diff --git a/rtl/include/eclhelper_base.hpp b/rtl/include/eclhelper_base.hpp index 677d5eec8f1..d34c664d013 100644 --- a/rtl/include/eclhelper_base.hpp +++ b/rtl/include/eclhelper_base.hpp @@ -822,6 +822,7 @@ class ECLRTL_API CThorSoapActionArg : public CThorSinkArgOf virtual const char * getXpathHintsXml() override; virtual const char * getRequestHeader() override; virtual const char * getRequestFooter() override; + virtual unsigned getPersistPoolSize() override; }; class ECLRTL_API CThorSoapCallArg : public CThorArgOf @@ -852,6 +853,7 @@ class ECLRTL_API CThorSoapCallArg : public CThorArgOf virtual const char * getXpathHintsXml() override; virtual const char * getRequestHeader() override; virtual const char * getRequestFooter() override; + virtual unsigned getPersistPoolSize() override; }; typedef CThorSoapCallArg CThorHttpCallArg; diff --git a/system/jhtree/jhtree.cpp b/system/jhtree/jhtree.cpp index 53265d82167..e845c23fb9c 100644 --- a/system/jhtree/jhtree.cpp +++ b/system/jhtree/jhtree.cpp @@ -350,7 +350,8 @@ static UnexpectedVirtualFieldCallback unexpectedFieldCallback; class jhtree_decl CKeyLevelManager : implements IKeyManager, public CInterface { protected: - KeyStatsCollector stats; + IContextLogger *ctx = nullptr; + IContextLogger *activeCtx = nullptr; Owned filter; IKeyCursor *keyCursor; ConstPointerArray activeBlobs; @@ -368,7 +369,7 @@ class jhtree_decl CKeyLevelManager : implements IKeyManager, public CInterface IMPLEMENT_IINTERFACE; CKeyLevelManager(const RtlRecord &_recInfo, IKeyIndex * _key, IContextLogger *_ctx, bool _newFilters, bool _logExcessiveSeeks) - : stats(_ctx), newFilters(_newFilters), logExcessiveSeeks(_logExcessiveSeeks) + : ctx(_ctx), newFilters(_newFilters), logExcessiveSeeks(_logExcessiveSeeks) { if (newFilters) filter.setown(new IndexRowFilter(_recInfo)); @@ -390,11 +391,6 @@ class jhtree_decl CKeyLevelManager : implements IKeyManager, public CInterface return keyCursor ? 
1 : 0; } - virtual void resetCounts() - { - stats.reset(); - } - void setKey(IKeyIndexBase * _key) { ::Release(keyCursor); @@ -410,6 +406,9 @@ class jhtree_decl CKeyLevelManager : implements IKeyManager, public CInterface keyedSize = ki->keyedSize(); partitionFieldMask = ki->getPartitionFieldMask(); indexParts = ki->numPartitions(); + + // If TLK don't collect context information + activeCtx = ki->isTopLevelKey() ? nullptr : ctx; } } @@ -498,26 +497,26 @@ class jhtree_decl CKeyLevelManager : implements IKeyManager, public CInterface virtual bool lookup(bool exact) { if (keyCursor) - return keyCursor->lookup(exact, stats); + return keyCursor->lookup(exact, activeCtx); else return false; } virtual bool lookupSkip(const void *seek, size32_t seekOffset, size32_t seeklen) { - return keyCursor ? keyCursor->lookupSkip(seek, seekOffset, seeklen, stats) : false; + return keyCursor ? keyCursor->lookupSkip(seek, seekOffset, seeklen, activeCtx) : false; } unsigned __int64 getCount() { assertex(keyCursor); - return keyCursor->getCount(stats); + return keyCursor->getCount(activeCtx); } unsigned __int64 getCurrentRangeCount(unsigned groupSegCount) { assertex(keyCursor); - return keyCursor->getCurrentRangeCount(groupSegCount, stats); + return keyCursor->getCurrentRangeCount(groupSegCount, activeCtx); } bool nextRange(unsigned groupSegCount) @@ -529,7 +528,7 @@ class jhtree_decl CKeyLevelManager : implements IKeyManager, public CInterface unsigned __int64 checkCount(unsigned __int64 max) { assertex(keyCursor); - return keyCursor->checkCount(max, stats); + return keyCursor->checkCount(max, activeCtx); } virtual void serializeCursorPos(MemoryBuffer &mb) @@ -539,7 +538,7 @@ class jhtree_decl CKeyLevelManager : implements IKeyManager, public CInterface virtual void deserializeCursorPos(MemoryBuffer &mb) { - keyCursor->deserializeCursorPos(mb, stats); + keyCursor->deserializeCursorPos(mb, activeCtx); } virtual const byte *loadBlob(unsigned __int64 blobid, size32_t &blobsize, IContextLogger *ctx) @@ -1037,6 +1036,28 @@ void CKeyStore::clearCacheEntry(const IFileIO *io) } } +// + +static void noteSeeks(IContextLogger *ctx, unsigned lseeks, unsigned lscans, unsigned lwildseeks) +{ + if (ctx) + { + if (lseeks) ctx->noteStatistic(StNumIndexSeeks, lseeks); + if (lscans) ctx->noteStatistic(StNumIndexScans, lscans); + if (lwildseeks) ctx->noteStatistic(StNumIndexWildSeeks, lwildseeks); + } +} + +static void noteSkips(IContextLogger *ctx, unsigned lskips, unsigned lnullSkips) +{ + if (ctx) + { + if (lskips) ctx->noteStatistic(StNumIndexSkips, lskips); + if (lnullSkips) ctx->noteStatistic(StNumIndexNullSkips, lnullSkips); + } +} + + // CKeyIndex impl. CKeyIndex::CKeyIndex(unsigned _iD, const char *_name) : name(_name) @@ -1502,7 +1523,7 @@ bool CKeyIndex::prewarmPage(offset_t offset, NodeType type) return false; } -const CJHSearchNode *CKeyIndex::locateFirstLeafNode(KeyStatsCollector &stats) const +const CJHSearchNode *CKeyIndex::locateFirstLeafNode(IContextLogger *ctx) const { keySeeks++; @@ -1511,7 +1532,7 @@ const CJHSearchNode *CKeyIndex::locateFirstLeafNode(KeyStatsCollector &stats) co { if (leafOffset == 0) return nullptr; - return getNode(leafOffset, NodeLeaf, stats.ctx); + return getNode(leafOffset, NodeLeaf, ctx); } //Unusual - an index with no elements @@ -1525,17 +1546,17 @@ const CJHSearchNode *CKeyIndex::locateFirstLeafNode(KeyStatsCollector &stats) co const CJHTreeNode * prev = cur; depth++; NodeType type = (depth < getBranchDepth()) ? 
NodeBranch : NodeLeaf; - cur = getNode(cur->getFPosAt(0), type, stats.ctx); + cur = getNode(cur->getFPosAt(0), type, ctx); assertex(cur); prev->Release(); } return cur; } -const CJHSearchNode *CKeyIndex::locateLastLeafNode(KeyStatsCollector &stats) const +const CJHSearchNode *CKeyIndex::locateLastLeafNode(IContextLogger *ctx) const { keySeeks++; - stats.noteSeeks(1, 0, 0); + noteSeeks(ctx, 1, 0, 0); //Unusual - an index with no elements if (keyHdr->getNumRecords() == 0) @@ -1549,7 +1570,7 @@ const CJHSearchNode *CKeyIndex::locateLastLeafNode(KeyStatsCollector &stats) con const CJHSearchNode * prev = cur; depth++; NodeType type = (depth < getBranchDepth()) ? NodeBranch : NodeLeaf; - cur = getNode(cur->nextNodeFpos(), type, stats.ctx); + cur = getNode(cur->nextNodeFpos(), type, ctx); assertex(cur); prev->Release(); } @@ -1558,36 +1579,13 @@ const CJHSearchNode *CKeyIndex::locateLastLeafNode(KeyStatsCollector &stats) con for (;;) { const CJHSearchNode * last = cur; - cur = getNode(cur->nextNodeFpos(), NodeLeaf, stats.ctx); + cur = getNode(cur->nextNodeFpos(), NodeLeaf, ctx); if (!cur) return last; ::Release(last); } } -void KeyStatsCollector::noteSeeks(unsigned lseeks, unsigned lscans, unsigned lwildseeks) -{ - if (ctx) - { - if (lseeks) ctx->noteStatistic(StNumIndexSeeks, lseeks); - if (lscans) ctx->noteStatistic(StNumIndexScans, lscans); - if (lwildseeks) ctx->noteStatistic(StNumIndexWildSeeks, lwildseeks); - } -} - -void KeyStatsCollector::noteSkips(unsigned lskips, unsigned lnullSkips) -{ - if (ctx) - { - if (lskips) ctx->noteStatistic(StNumIndexSkips, lskips); - if (lnullSkips) ctx->noteStatistic(StNumIndexNullSkips, lnullSkips); - } -} - -void KeyStatsCollector::reset() -{ -} - CKeyCursor::CKeyCursor(CKeyIndex &_key, const IIndexFilterList *_filter, bool _logExcessiveSeeks) : key(OLINK(_key)), filter(_filter), logExcessiveSeeks(_logExcessiveSeeks) { @@ -1624,17 +1622,17 @@ void CKeyCursor::reset() setLow(0); } -bool CKeyCursor::next(KeyStatsCollector &stats) +bool CKeyCursor::next(IContextLogger *ctx) { - return _next(stats) && node && node->getKeyAt(nodeKey, recordBuffer); + return _next(ctx) && node && node->getKeyAt(nodeKey, recordBuffer); } -bool CKeyCursor::_next(KeyStatsCollector &stats) +bool CKeyCursor::_next(IContextLogger *ctx) { fullBufferValid = false; if (!node) { - node.setown(key.locateFirstLeafNode(stats)); + node.setown(key.locateFirstLeafNode(ctx)); nodeKey = 0; return node && node->isKeyAt(nodeKey); } @@ -1648,7 +1646,7 @@ bool CKeyCursor::_next(KeyStatsCollector &stats) node.clear(); if (rsib != 0) { - node.setown(key.getNode(rsib, type, stats.ctx)); + node.setown(key.getNode(rsib, type, ctx)); if (node != NULL) { nodeKey = 0; @@ -1706,10 +1704,10 @@ unsigned __int64 CKeyCursor::getSequence() return node->getSequence(nodeKey); } -bool CKeyCursor::_last(KeyStatsCollector &stats) +bool CKeyCursor::_last(IContextLogger *ctx) { fullBufferValid = false; - node.setown(key.locateLastLeafNode(stats)); + node.setown(key.locateLastLeafNode(ctx)); if (node) { nodeKey = node->getNumKeys()-1; @@ -1718,7 +1716,7 @@ bool CKeyCursor::_last(KeyStatsCollector &stats) return false; } -bool CKeyCursor::_gtEqual(KeyStatsCollector &stats) +bool CKeyCursor::_gtEqual(IContextLogger *ctx) { fullBufferValid = false; key.keySeeks++; @@ -1760,7 +1758,7 @@ bool CKeyCursor::_gtEqual(KeyStatsCollector &stats) else { offset_t nextPos = node->nextNodeFpos(); // This can happen at eof because of key peculiarity where level above reports ffff as last - node.setown(key.getNode(nextPos, NodeLeaf, 
stats.ctx)); + node.setown(key.getNode(nextPos, NodeLeaf, ctx)); nodeKey = 0; } if (node) @@ -1775,7 +1773,7 @@ bool CKeyCursor::_gtEqual(KeyStatsCollector &stats) offset_t npos = node->getFPosAt(a); depth++; NodeType type = (depth < branchDepth) ? NodeBranch : NodeLeaf; - node.setown(key.getNode(npos, type, stats.ctx)); + node.setown(key.getNode(npos, type, ctx)); } else return false; @@ -1783,7 +1781,7 @@ bool CKeyCursor::_gtEqual(KeyStatsCollector &stats) } } -bool CKeyCursor::_ltEqual(KeyStatsCollector &stats) +bool CKeyCursor::_ltEqual(IContextLogger *ctx) { fullBufferValid = false; key.keySeeks++; @@ -1830,7 +1828,7 @@ bool CKeyCursor::_ltEqual(KeyStatsCollector &stats) else { offset_t prevPos = node->prevNodeFpos(); - node.setown(key.getNode(prevPos, NodeLeaf, stats.ctx)); + node.setown(key.getNode(prevPos, NodeLeaf, ctx)); if (node) nodeKey = node->getNumKeys()-1; } @@ -1849,7 +1847,7 @@ bool CKeyCursor::_ltEqual(KeyStatsCollector &stats) offset_t npos = node->getFPosAt(a); depth++; NodeType type = (depth < branchDepth) ? NodeBranch : NodeLeaf; - node.setown(key.getNode(npos, type, stats.ctx)); + node.setown(key.getNode(npos, type, ctx)); if (!node) throw MakeStringException(0, "Invalid key %s: child node pointer should never be NULL", key.name.get()); } @@ -1877,7 +1875,7 @@ void CKeyCursor::serializeCursorPos(MemoryBuffer &mb) } } -void CKeyCursor::deserializeCursorPos(MemoryBuffer &mb, KeyStatsCollector &stats) +void CKeyCursor::deserializeCursorPos(MemoryBuffer &mb, IContextLogger *ctx) { mb.read(eof); node.clear(); @@ -1890,7 +1888,7 @@ void CKeyCursor::deserializeCursorPos(MemoryBuffer &mb, KeyStatsCollector &stats fullBufferValid = false; if (nodeAddress) { - node.setown(key.getNode(nodeAddress, NodeLeaf, stats.ctx)); + node.setown(key.getNode(nodeAddress, NodeLeaf, ctx)); if (node && recordBuffer) node->getKeyAt(nodeKey, recordBuffer); } @@ -1902,12 +1900,12 @@ const byte *CKeyCursor::loadBlob(unsigned __int64 blobid, size32_t &blobsize, IC return key.loadBlob(blobid, blobsize, ctx); } -bool CKeyCursor::lookup(bool exact, KeyStatsCollector &stats) +bool CKeyCursor::lookup(bool exact, IContextLogger *ctx) { - return _lookup(exact, filter->lastRealSeg(), filter->isUnfiltered(), stats); + return _lookup(exact, filter->lastRealSeg(), filter->isUnfiltered(), ctx); } -bool CKeyCursor::_lookup(bool exact, unsigned lastSeg, bool unfiltered, KeyStatsCollector &stats) +bool CKeyCursor::_lookup(bool exact, unsigned lastSeg, bool unfiltered, IContextLogger *ctx) { if (unfiltered && !matched) { @@ -1922,13 +1920,13 @@ bool CKeyCursor::_lookup(bool exact, unsigned lastSeg, bool unfiltered, KeyStats { if (matched) { - if (!_next(stats)) + if (!_next(ctx)) eof = true; lscans++; } else { - if (!_gtEqual(stats)) + if (!_gtEqual(ctx)) eof = true; lseeks++; } @@ -1961,18 +1959,18 @@ bool CKeyCursor::_lookup(bool exact, unsigned lastSeg, bool unfiltered, KeyStats eof = true; } if (logExcessiveSeeks && lwildseeks > 1000 && ret) - reportExcessiveSeeks(lwildseeks, lastSeg, stats); - stats.noteSeeks(lseeks, lscans, lwildseeks); + reportExcessiveSeeks(lwildseeks, lastSeg, ctx); + noteSeeks(ctx, lseeks, lscans, lwildseeks); return ret; } -bool CKeyCursor::lookupSkip(const void *seek, size32_t seekOffset, size32_t seeklen, KeyStatsCollector &stats) +bool CKeyCursor::lookupSkip(const void *seek, size32_t seekOffset, size32_t seeklen, IContextLogger *ctx) { if (skipTo(seek, seekOffset, seeklen)) - stats.noteSkips(1, 0); + noteSkips(ctx, 1, 0); else - stats.noteSkips(0, 1); - bool ret = lookup(true, 
stats); + noteSkips(ctx, 0, 1); + bool ret = lookup(true, ctx); #ifdef _DEBUG if (doTrace(traceSmartStepping, TraceFlags::Max)) { @@ -1988,9 +1986,9 @@ bool CKeyCursor::lookupSkip(const void *seek, size32_t seekOffset, size32_t seek { recstr.appendf("%02x ", ((unsigned char *) recordBuffer)[i]); } - if (stats.ctx) + if (ctx) { - const CRuntimeStatisticCollection &statsCollection = stats.ctx->queryStats(); + const CRuntimeStatisticCollection &statsCollection = ctx->queryStats(); unsigned __int64 seeks = statsCollection.getStatisticValue(StNumIndexSeeks); unsigned __int64 scans = statsCollection.getStatisticValue(StNumIndexScans); unsigned __int64 skips = statsCollection.getStatisticValue(StNumIndexSkips); @@ -2006,7 +2004,7 @@ bool CKeyCursor::lookupSkip(const void *seek, size32_t seekOffset, size32_t seek } -unsigned __int64 CKeyCursor::getCount(KeyStatsCollector &stats) +unsigned __int64 CKeyCursor::getCount(IContextLogger *ctx) { reset(); unsigned __int64 result = 0; @@ -2014,11 +2012,11 @@ unsigned __int64 CKeyCursor::getCount(KeyStatsCollector &stats) bool unfiltered = filter->isUnfiltered(); for (;;) { - if (_lookup(true, lastRealSeg, unfiltered, stats)) + if (_lookup(true, lastRealSeg, unfiltered, ctx)) { unsigned __int64 locount = getSequence(); endRange(lastRealSeg); - _ltEqual(stats); + _ltEqual(ctx); result += getSequence()-locount+1; if (!incrementKey(lastRealSeg)) break; @@ -2029,7 +2027,7 @@ unsigned __int64 CKeyCursor::getCount(KeyStatsCollector &stats) return result; } -unsigned __int64 CKeyCursor::checkCount(unsigned __int64 max, KeyStatsCollector &stats) +unsigned __int64 CKeyCursor::checkCount(unsigned __int64 max, IContextLogger *ctx) { reset(); unsigned __int64 result = 0; @@ -2037,19 +2035,19 @@ unsigned __int64 CKeyCursor::checkCount(unsigned __int64 max, KeyStatsCollector bool unfiltered = filter->isUnfiltered(); if (lastFullSeg == (unsigned) -1) { - stats.noteSeeks(1, 0, 0); - if (_last(stats)) + noteSeeks(ctx, 1, 0, 0); + if (_last(ctx)) return getSequence()+1; else return 0; } for (;;) { - if (_lookup(true, lastFullSeg, unfiltered, stats)) + if (_lookup(true, lastFullSeg, unfiltered, ctx)) { unsigned __int64 locount = getSequence(); endRange(lastFullSeg); - _ltEqual(stats); + _ltEqual(ctx); result += getSequence()-locount+1; if (max && (result > max)) break; @@ -2062,12 +2060,12 @@ unsigned __int64 CKeyCursor::checkCount(unsigned __int64 max, KeyStatsCollector return result; } -unsigned __int64 CKeyCursor::getCurrentRangeCount(unsigned groupSegCount, KeyStatsCollector &stats) +unsigned __int64 CKeyCursor::getCurrentRangeCount(unsigned groupSegCount, IContextLogger *ctx) { unsigned __int64 locount = getSequence(); endRange(groupSegCount); - _ltEqual(stats); - stats.noteSeeks(1, 0, 0); + _ltEqual(ctx); + noteSeeks(ctx, 1, 0, 0); return getSequence()-locount+1; } @@ -2079,7 +2077,7 @@ bool CKeyCursor::nextRange(unsigned groupSegCount) return true; } -void CKeyCursor::reportExcessiveSeeks(unsigned numSeeks, unsigned lastSeg, KeyStatsCollector &stats) +void CKeyCursor::reportExcessiveSeeks(unsigned numSeeks, unsigned lastSeg, IContextLogger *ctx) { StringBuffer recstr; unsigned i; @@ -2105,8 +2103,8 @@ void CKeyCursor::reportExcessiveSeeks(unsigned numSeeks, unsigned lastSeg, KeySt } recstr.append ("\nusing filter:\n"); filter->describe(recstr); - if (stats.ctx) - stats.ctx->CTXLOG("%d seeks to lookup record \n%s\n in key %s", numSeeks, recstr.str(), key.queryFileName()); + if (ctx) + ctx->CTXLOG("%d seeks to lookup record \n%s\n in key %s", numSeeks, recstr.str(), 
key.queryFileName()); else DBGLOG("%d seeks to lookup record \n%s\n in key %s", numSeeks, recstr.str(), key.queryFileName()); } @@ -2898,8 +2896,8 @@ class CKeyMerger : public CKeyLevelManager activekeys--; if (!activekeys) { - if (stats.ctx) - stats.ctx->noteStatistic(StNumIndexMergeCompares, compares); + if (ctx) + ctx->noteStatistic(StNumIndexMergeCompares, compares); return false; } mergeheap[0] = mergeheap[activekeys]; @@ -2950,9 +2948,9 @@ class CKeyMerger : public CKeyLevelManager { recstr.appendf("%02x ", ((unsigned char *) keyBuffer)[i]); } - if (stats.ctx) + if (ctx) { - const CRuntimeStatisticCollection &statsCollection = stats.ctx->queryStats(); + const CRuntimeStatisticCollection &statsCollection = ctx->queryStats(); unsigned __int64 seeks = statsCollection.getStatisticValue(StNumIndexSeeks); unsigned __int64 scans = statsCollection.getStatisticValue(StNumIndexScans); unsigned __int64 skips = statsCollection.getStatisticValue(StNumIndexSkips); @@ -2964,16 +2962,16 @@ class CKeyMerger : public CKeyLevelManager } } #endif - if (stats.ctx) - stats.ctx->noteStatistic(StNumIndexMergeCompares, compares); + if (ctx) + ctx->noteStatistic(StNumIndexMergeCompares, compares); return true; } else { compares++; - if (stats.ctx && (compares == 100)) + if (ctx && (compares == 100)) { - stats.ctx->noteStatistic(StNumIndexMergeCompares, compares); // also checks for abort... + ctx->noteStatistic(StNumIndexMergeCompares, compares); // also checks for abort... compares = 0; } } @@ -3031,11 +3029,11 @@ class CKeyMerger : public CKeyLevelManager else lnullSkips++; } - found = cursor->lookup(true, stats); + found = cursor->lookup(true, ctx); if (!found || !seek || memcmp(cursor->queryKeyedBuffer() + seekOffset, seek, seeklen) >= 0) break; } - stats.noteSkips(lskips, lnullSkips); + noteSkips(ctx, lskips, lnullSkips); if (found) { IKeyCursor *mergeCursor; @@ -3058,8 +3056,8 @@ class CKeyMerger : public CKeyLevelManager } if (activekeys>0) { - if (stats.ctx) - stats.ctx->noteStatistic(StNumIndexMerges, activekeys); + if (ctx) + ctx->noteStatistic(StNumIndexMerges, activekeys); cursors = cursorArray.getArray(); mergeheap = mergeHeapArray.getArray(); /* Permute mergeheap to establish the heap property @@ -3130,7 +3128,7 @@ class CKeyMerger : public CKeyLevelManager if (!activekeys) return false; unsigned key = mergeheap[0]; - if (!keyCursor->lookup(exact, stats)) + if (!keyCursor->lookup(exact, ctx)) { activekeys--; if (!activekeys) @@ -3223,7 +3221,7 @@ class CKeyMerger : public CKeyLevelManager mb.read(keyno); keyNoArray.append(keyno); keyCursor = keyset->queryPart(keyno)->getCursor(filter, logExcessiveSeeks); - keyCursor->deserializeCursorPos(mb, stats); + keyCursor->deserializeCursorPos(mb, ctx); cursorArray.append(*keyCursor); mergeHeapArray.append(i); } diff --git a/system/jhtree/jhtree.hpp b/system/jhtree/jhtree.hpp index 32aca365da0..d22531c3b70 100644 --- a/system/jhtree/jhtree.hpp +++ b/system/jhtree/jhtree.hpp @@ -41,38 +41,26 @@ interface jhtree_decl IDelayedFile : public IInterface virtual IFileIO *getFileIO() = 0; }; -class KeyStatsCollector -{ -public: - IContextLogger *ctx; - - KeyStatsCollector(IContextLogger *_ctx) : ctx(_ctx) {} - void reset(); - void noteSeeks(unsigned lseeks, unsigned lscans, unsigned lwildseeks); - void noteSkips(unsigned lskips, unsigned lnullSkips); - -}; - interface jhtree_decl IKeyCursor : public IInterface { virtual const char *queryName() const = 0; virtual size32_t getSize() = 0; // Size of current row virtual size32_t getKeyedSize() const = 0; // Size of 
keyed fields virtual void serializeCursorPos(MemoryBuffer &mb) = 0; - virtual void deserializeCursorPos(MemoryBuffer &mb, KeyStatsCollector &stats) = 0; + virtual void deserializeCursorPos(MemoryBuffer &mb, IContextLogger *ctx) = 0; virtual unsigned __int64 getSequence() = 0; virtual offset_t getFPos() const = 0; virtual const byte *loadBlob(unsigned __int64 blobid, size32_t &blobsize, IContextLogger *ctx) = 0; virtual void reset() = 0; - virtual bool lookup(bool exact, KeyStatsCollector &stats) = 0; - virtual bool next(KeyStatsCollector &stats) = 0; - virtual bool lookupSkip(const void *seek, size32_t seekOffset, size32_t seeklen, KeyStatsCollector &stats) = 0; + virtual bool lookup(bool exact, IContextLogger *ctx) = 0; + virtual bool next(IContextLogger *ctx) = 0; + virtual bool lookupSkip(const void *seek, size32_t seekOffset, size32_t seeklen, IContextLogger *ctx) = 0; virtual bool skipTo(const void *_seek, size32_t seekOffset, size32_t seeklen) = 0; virtual IKeyCursor *fixSortSegs(unsigned sortFieldOffset) = 0; - virtual unsigned __int64 getCount(KeyStatsCollector &stats) = 0; - virtual unsigned __int64 checkCount(unsigned __int64 max, KeyStatsCollector &stats) = 0; - virtual unsigned __int64 getCurrentRangeCount(unsigned groupSegCount, KeyStatsCollector &stats) = 0; + virtual unsigned __int64 getCount(IContextLogger *ctx) = 0; + virtual unsigned __int64 checkCount(unsigned __int64 max, IContextLogger *ctx) = 0; + virtual unsigned __int64 getCurrentRangeCount(unsigned groupSegCount, IContextLogger *ctx) = 0; virtual bool nextRange(unsigned groupSegCount) = 0; virtual const byte *queryRecordBuffer() const = 0; virtual const byte *queryKeyedBuffer() const = 0; @@ -276,7 +264,6 @@ interface IKeyManager : public IInterface, extends IIndexReadContext virtual void deserializeCursorPos(MemoryBuffer &mb) = 0; virtual const byte *loadBlob(unsigned __int64 blobid, size32_t &blobsize, IContextLogger *ctx) = 0; virtual void releaseBlobs() = 0; - virtual void resetCounts() = 0; virtual void setLayoutTranslator(const IDynamicTransform * trans) = 0; virtual void finishSegmentMonitors() = 0; diff --git a/system/jhtree/jhtree.ipp b/system/jhtree/jhtree.ipp index 83cf744fd0a..9280e967f40 100644 --- a/system/jhtree/jhtree.ipp +++ b/system/jhtree/jhtree.ipp @@ -76,8 +76,8 @@ enum request { LTE, GTE }; interface INodeLoader { virtual const CJHTreeNode *loadNode(cycle_t * fetchCycles, offset_t offset) const = 0; - virtual const CJHSearchNode *locateFirstLeafNode(KeyStatsCollector &stats) const = 0; - virtual const CJHSearchNode *locateLastLeafNode(KeyStatsCollector &stats) const = 0; + virtual const CJHSearchNode *locateFirstLeafNode(IContextLogger *ctx) const = 0; + virtual const CJHSearchNode *locateLastLeafNode(IContextLogger *ctx) const = 0; }; class jhtree_decl CKeyIndex : implements IKeyIndex, implements INodeLoader, public CInterface @@ -157,8 +157,8 @@ public: // INodeLoader impl. 
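// The locate*LeafNode helpers record index seek/node statistics against the supplied IContextLogger; a null logger (used, for example, for top-level keys) disables collection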
virtual const CJHTreeNode *loadNode(cycle_t * fetchCycles, offset_t offset) const override = 0; // Must be implemented in derived classes - virtual const CJHSearchNode *locateFirstLeafNode(KeyStatsCollector &stats) const override; - virtual const CJHSearchNode *locateLastLeafNode(KeyStatsCollector &stats) const override; + virtual const CJHSearchNode *locateFirstLeafNode(IContextLogger *ctx) const override; + virtual const CJHSearchNode *locateLastLeafNode(IContextLogger *ctx) const override; virtual void mergeStats(CRuntimeStatisticCollection & stats) const override {} }; @@ -215,19 +215,19 @@ public: virtual size32_t getKeyedSize() const; virtual offset_t getFPos() const; virtual void serializeCursorPos(MemoryBuffer &mb); - virtual void deserializeCursorPos(MemoryBuffer &mb, KeyStatsCollector &stats); + virtual void deserializeCursorPos(MemoryBuffer &mb, IContextLogger *ctx); virtual unsigned __int64 getSequence(); virtual const byte *loadBlob(unsigned __int64 blobid, size32_t &blobsize, IContextLogger *ctx); virtual void reset(); - virtual bool lookup(bool exact, KeyStatsCollector &stats) override; - virtual bool next(KeyStatsCollector &stats) override; - virtual bool lookupSkip(const void *seek, size32_t seekOffset, size32_t seeklen, KeyStatsCollector &stats) override; + virtual bool lookup(bool exact, IContextLogger *ctx) override; + virtual bool next(IContextLogger *ctx) override; + virtual bool lookupSkip(const void *seek, size32_t seekOffset, size32_t seeklen, IContextLogger *ctx) override; virtual bool skipTo(const void *_seek, size32_t seekOffset, size32_t seeklen) override; virtual IKeyCursor *fixSortSegs(unsigned sortFieldOffset) override; - virtual unsigned __int64 getCount(KeyStatsCollector &stats) override; - virtual unsigned __int64 checkCount(unsigned __int64 max, KeyStatsCollector &stats) override; - virtual unsigned __int64 getCurrentRangeCount(unsigned groupSegCount, KeyStatsCollector &stats) override; + virtual unsigned __int64 getCount(IContextLogger *ctx) override; + virtual unsigned __int64 checkCount(unsigned __int64 max, IContextLogger *ctx) override; + virtual unsigned __int64 getCurrentRangeCount(unsigned groupSegCount, IContextLogger *ctx) override; virtual bool nextRange(unsigned groupSegCount) override; virtual const byte *queryRecordBuffer() const override; virtual const byte *queryKeyedBuffer() const override; @@ -235,15 +235,15 @@ protected: CKeyCursor(const CKeyCursor &from); // Internal searching functions - set current node/nodekey/matched values - bool _last(KeyStatsCollector &stats); // Updates node/nodekey - bool _gtEqual(KeyStatsCollector &stats); // Reads recordBuffer, updates node/nodekey - bool _ltEqual(KeyStatsCollector &stats); // Reads recordBuffer, updates node/nodekey - bool _next(KeyStatsCollector &stats); // Updates node/nodekey + bool _last(IContextLogger *ctx); // Updates node/nodekey + bool _gtEqual(IContextLogger *ctx); // Reads recordBuffer, updates node/nodekey + bool _ltEqual(IContextLogger *ctx); // Reads recordBuffer, updates node/nodekey + bool _next(IContextLogger *ctx); // Updates node/nodekey // if _lookup returns true, recordBuffer will contain keyed portion of result - bool _lookup(bool exact, unsigned lastSeg, bool unfiltered, KeyStatsCollector &stats); + bool _lookup(bool exact, unsigned lastSeg, bool unfiltered, IContextLogger *ctx); - void reportExcessiveSeeks(unsigned numSeeks, unsigned lastSeg, KeyStatsCollector &stats); + void reportExcessiveSeeks(unsigned numSeeks, unsigned lastSeg, IContextLogger *ctx); inline void 
setLow(unsigned segNo) { diff --git a/system/jhtree/keydiff.cpp b/system/jhtree/keydiff.cpp index 02fb14fda5e..2e9ac69d91d 100644 --- a/system/jhtree/keydiff.cpp +++ b/system/jhtree/keydiff.cpp @@ -55,9 +55,9 @@ class RowBuffer *fpos = 0; } - bool getCursorNext(IKeyCursor * keyCursor, KeyStatsCollector &stats) + bool getCursorNext(IKeyCursor * keyCursor, IContextLogger *ctx) { - if(keyCursor->next(stats)) + if(keyCursor->next(ctx)) { memcpy(row, keyCursor->queryRecordBuffer(), keyCursor->getSize()); thisrowsize = keyCursor->getSize() - sizeof(offset_t); @@ -216,7 +216,7 @@ class RowBuffer class CKeyReader: public CInterface { public: - CKeyReader(char const * filename) : count(0), stats(nullptr) + CKeyReader(char const * filename) : count(0) { keyFile.setown(createIFile(filename)); keyFileIO.setown(keyFile->open(IFOread)); @@ -260,7 +260,7 @@ class CKeyReader: public CInterface { if(eof) return false; - if(buffer.getCursorNext(keyCursor, stats)) + if(buffer.getCursorNext(keyCursor, ctx)) { buffer.tally(crc); count++; @@ -282,7 +282,7 @@ class CKeyReader: public CInterface { while(!eof) { - if(keyCursor->next(stats)) + if(keyCursor->next(ctx)) { const byte *buff = keyCursor->queryRecordBuffer(); offset_t fpos = keyCursor->getFPos(); @@ -330,7 +330,7 @@ class CKeyReader: public CInterface Owned keyFileIO; Owned keyIndex; Owned keyCursor; - KeyStatsCollector stats; + IContextLogger *ctx = nullptr; CRC32 crc; size32_t keyedsize; size32_t rowsize; diff --git a/system/jlib/jlog.hpp b/system/jlib/jlog.hpp index 3d9abc26eb1..ca142787684 100644 --- a/system/jlib/jlog.hpp +++ b/system/jlib/jlog.hpp @@ -835,7 +835,7 @@ constexpr LogMsgCategory MCuserInfo(MSGAUD_user, MSGCLS_information, InfoMsgThre constexpr LogMsgCategory MCdebugInfo(MSGAUD_programmer, MSGCLS_information, DebugMsgThreshold); constexpr LogMsgCategory MCauditInfo(MSGAUD_audit, MSGCLS_information, AudMsgThreshold); constexpr LogMsgCategory MCoperatorInfo(MSGAUD_operator, MSGCLS_information, InfoMsgThreshold); -constexpr LogMsgCategory MCoperatorMetric(MSGAUD_operator, MSGCLS_metric, ErrMsgThreshold); +constexpr LogMsgCategory MCmonitorMetric(MSGAUD_monitor, MSGCLS_metric, ErrMsgThreshold); constexpr LogMsgCategory MCmonitorEvent(MSGAUD_monitor, MSGCLS_event, ProgressMsgThreshold); /* diff --git a/system/jlib/jtrace.cpp b/system/jlib/jtrace.cpp index 807612337b0..b187b6f8aec 100644 --- a/system/jlib/jtrace.cpp +++ b/system/jlib/jtrace.cpp @@ -23,6 +23,7 @@ #include "opentelemetry/sdk/trace/simple_processor_factory.h" #include "opentelemetry/sdk/trace/batch_span_processor_factory.h" #include "opentelemetry/exporters/ostream/span_exporter_factory.h"// auto exporter = opentelemetry::exporter::trace::OStreamSpanExporterFactory::Create(); +#include "opentelemetry/exporters/ostream/common_utils.h" //#define oldForEach ForEach // error: ‘ForEach’ was not declared in this scope #undef ForEach //opentelemetry defines ForEach #include "opentelemetry/exporters/memory/in_memory_span_exporter_factory.h" @@ -65,7 +66,7 @@ class NoopSpanExporter final : public opentelemetry::sdk::trace::SpanExporter /** * @return Returns a unique pointer to an empty recordable object */ - std::unique_ptr MakeRecordable() noexcept override + virtual std::unique_ptr MakeRecordable() noexcept override { return std::unique_ptr(new opentelemetry::sdk::trace::SpanData()); } @@ -107,6 +108,294 @@ class NoopSpanExporterFactory } }; +/** + * Converts an OpenTelemetry span status to its string representation. 
+ * These are OTel span status defined in include/opentelemetry/trace/span_metadata.h + * + * @param StatusCode The OpenTelemetry span status code to translate. + * @return The string representation of the OpenTelemetry status code. + */ +static const char * spanStatusToString(opentelemetry::trace::StatusCode spanStatus) +{ + switch(spanStatus) + { + case opentelemetry::trace::StatusCode::kUnset: + return "Unset"; + case opentelemetry::trace::StatusCode::kOk: + return "Ok"; + case opentelemetry::trace::StatusCode::kError: + return "Error"; + default: + return "Unknown"; + } +} + +/** + * Converts an OpenTelemetry span kind to its string representation. + * These are OTel span kinds defined in include/opentelemetry/trace/span_metadata.h, + * not HPCC JLib CSpan kinds + * + * @param spanKind The OpenTelemetry span kind to convert. + * @return The string representation of the OpenTelemetry span kind. + */ +static const char * spanKindToString(opentelemetry::trace::SpanKind spanKind) +{ + switch (spanKind) + { + case opentelemetry::trace::SpanKind::kClient: + return "Client"; + case opentelemetry::trace::SpanKind::kServer: + return "Server"; + case opentelemetry::trace::SpanKind::kProducer: + return "Producer"; + case opentelemetry::trace::SpanKind::kConsumer: + return "Consumer"; + case opentelemetry::trace::SpanKind::kInternal: + return "Internal"; + default: + return "Unknown"; + } +} + +class JLogSpanExporter final : public opentelemetry::sdk::trace::SpanExporter +{ +public: + JLogSpanExporter(SpanLogFlags spanLogFlags) : logFlags(spanLogFlags), shutDown(false) {} + + /** + * @return Returns a unique pointer to an empty recordable object + */ + virtual std::unique_ptr MakeRecordable() noexcept override + { + return std::unique_ptr(new opentelemetry::sdk::trace::SpanData()); + } + + /** + * Export - Formats recordable spans in HPCC Jlog format and reports to JLog + * + * @param recordables + * @return Always returns success + */ + opentelemetry::sdk::common::ExportResult Export( + const nostd::span> &recordables) noexcept override + { + if (isShutDown()) + return opentelemetry::sdk::common::ExportResult::kFailure; + + for (auto &recordable : recordables) + { + //Casting the recordable object to the type of the object that was previously created by + //JLogSpanExporter::MakeRecordable() - + auto span = std::unique_ptr( + static_cast(recordable.release())); + + if (span != nullptr) + { + char traceID[32] = {0}; + char spanID[16] = {0}; + + span->GetTraceId().ToLowerBase16(traceID); + span->GetSpanId().ToLowerBase16(spanID); + + StringBuffer out("{ \"type\": \"span\""); //for simple identification in log scraping + out.appendf(", \"name\": \"%s\"", span->GetName().data()); + out.append(", \"trace_id\": \"").append(32, traceID).append("\""); + out.append(", \"span_id\": \"").append(16, spanID).append("\""); + out.appendf(", \"start\": %lld", (long long)(span->GetStartTime().time_since_epoch()).count()); + out.appendf(", \"duration\": %lld", (long long)span->GetDuration().count()); + + if (hasMask(logFlags, SpanLogFlags::LogParentInfo)) + { + if (span->GetParentSpanId().IsValid()) + { + char parentSpanID[16] = {0}; + span->GetParentSpanId().ToLowerBase16(parentSpanID); + out.append(", \"parent_span_id\": \"").append(16, parentSpanID).append("\""); + } + + std::string traceStatestr = span->GetSpanContext().trace_state()->ToHeader(); + if (!traceStatestr.empty()) + out.appendf(", \"trace_state\": \"%s\"", traceStatestr.c_str()); + } + + if (hasMask(logFlags, SpanLogFlags::LogSpanDetails)) + { + 
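+                //LogSpanDetails adds the OTel span status, span kind, optional description, and the instrumentation scope to the logged span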
out.appendf(", \"status\": \"%s\"", spanStatusToString(span->GetStatus())); + out.appendf(", \"kind\": \"%s\"", spanKindToString(span->GetSpanKind())); + const char * description = span->GetDescription().data(); + if (!isEmptyString(description)) + { + StringBuffer encoded; + encodeJSON(encoded, description); + out.appendf(", \"description\": \"%s\"", encoded.str()); + } + printInstrumentationScope(out, span->GetInstrumentationScope()); + } + + if (hasMask(logFlags, SpanLogFlags::LogAttributes)) + printAttributes(out, span->GetAttributes()); + + if (hasMask(logFlags, SpanLogFlags::LogEvents)) + printEvents(out, span->GetEvents()); + + if (hasMask(logFlags, SpanLogFlags::LogLinks)) + printLinks(out, span->GetLinks()); + + if (hasMask(logFlags, SpanLogFlags::LogResources)) + printResources(out, span->GetResource()); + + out.append(" }"); + LOG(MCmonitorEvent, "%s",out.str()); + } + } + return opentelemetry::sdk::common::ExportResult::kSuccess; + } + + /** + * Shut down the exporter. + * @param timeout an optional timeout. + * @return return the status of the operation. + */ + virtual bool Shutdown( + std::chrono::microseconds timeout = std::chrono::microseconds::max()) noexcept override + { + shutDown = true; + return true; + } + +private: + bool isShutDown() const noexcept + { + return shutDown; + } + +public: + static void printAttributes(StringBuffer & out, const std::unordered_map &map, const char * attsContainerName = "attributes") + { + if (map.size() == 0) + return; + + out.appendf(", \"%s\": {", attsContainerName); + + bool first = true; + for (const auto &kv : map) + { + if (!first) + out.append(","); + else + first = false; + + std::ostringstream attsOS; //used to exploit OTel convenience functions for printing attribute values + opentelemetry::exporter::ostream_common::print_value(kv.second, attsOS); + std::string val = attsOS.str(); + if (val.size() > 0) + { + StringBuffer encoded; + encodeJSON(encoded, val.c_str()); + out.appendf("\"%s\": \"%s\"", kv.first.c_str(), encoded.str()); + } + } + out.append(" }"); + } + + static void printEvents(StringBuffer & out, const std::vector &events) + { + if (events.size() == 0) + return; + + out.append(", \"events\":[ "); + bool first = true; + for (const auto &event : events) + { + if (!first) + out.append(","); + else + first = false; + + out.append("{ \"name\": \"").append(event.GetName().data()).append("\""); + out.appendf(", \"time_stamp\": %lld", (long long)event.GetTimestamp().time_since_epoch().count()); + + printAttributes(out, event.GetAttributes()); + out.append(" }"); + } + + out.append(" ]"); + } + + static void printLinks(StringBuffer & out, const std::vector &links) + { + if (links.size() == 0) + return; + + bool first = true; + + out.append(", \"links\": ["); + for (const auto &link : links) + { + if (!first) + out.append(","); + else + first = false; + + char traceID[32] = {0}; + char spanID[16] = {0}; + link.GetSpanContext().trace_id().ToLowerBase16(traceID); + link.GetSpanContext().span_id().ToLowerBase16(spanID); + + out.append(" { \"trace_id\": \"").append(32, traceID).append("\","); + out.append(" \"span_id\": \"").append(16, spanID).append("\","); + out.append(" \"trace_state\": \"").append(link.GetSpanContext().trace_state()->ToHeader().c_str()).append("\""); + printAttributes(out, link.GetAttributes()); + } + out.append(" ]"); + } + + static void printResources(StringBuffer & out, const opentelemetry::sdk::resource::Resource &resources) + { + auto attributes = resources.GetAttributes(); + if (attributes.size()) + 
printAttributes(out, attributes, "resources"); + } + + static void printInstrumentationScope(StringBuffer & out, + const opentelemetry::sdk::instrumentationscope::InstrumentationScope &instrumentation_scope) + { + out.appendf(", \"instrumented_library\": \"%s\"", instrumentation_scope.GetName().c_str()); + auto version = instrumentation_scope.GetVersion(); + if (version.size()) + out.appendf("-").append(version.c_str()); + } + +private: + SpanLogFlags logFlags = SpanLogFlags::LogNone; + std::atomic_bool shutDown; +}; + +/*#ifdef _USE_CPPUNIT +void testJLogExporterPrintAttributes(StringBuffer & out, const std::unordered_map & map, const char * attsContainerName) +{ + JLogSpanExporter::printAttributes(out, map, attsContainerName); +} + +void testJLogExporterPrintResources(StringBuffer & out, const opentelemetry::sdk::resource::Resource &resources) +{ + JLogSpanExporter::printResources(out, resources); +} +#endif +*/ +class JLogSpanExporterFactory +{ +public: + /** + * Create a JLogSpanExporter. + */ + static std::unique_ptr Create(SpanLogFlags logFlags) + { + return std::unique_ptr( + new JLogSpanExporter(logFlags)); + } +}; + class CHPCCHttpTextMapCarrier : public opentelemetry::context::propagation::TextMapCarrier { public: @@ -157,8 +446,6 @@ class CTraceManager : implements ITraceManager, public CInterface bool enabled = true; bool optAlwaysCreateGlobalIds = false; bool optAlwaysCreateTraceIds = true; - bool optLogSpanStart = false; - bool optLogSpanFinish = true; StringAttr moduleName; nostd::shared_ptr tracer; @@ -190,16 +477,6 @@ class CTraceManager : implements ITraceManager, public CInterface return optAlwaysCreateTraceIds; } - bool logSpanStart() const - { - return optLogSpanStart; - } - - bool logSpanFinish() const - { - return optLogSpanFinish; - } - nostd::shared_ptr queryTracer() const { return tracer; @@ -224,13 +501,6 @@ class CSpan : public CInterfaceOf //Record the span as complete before we output the logging for the end of the span if (span != nullptr) span->End(); - - if (queryInternalTraceManager().logSpanFinish()) - { - StringBuffer out; - toLog(out); - LOG(MCmonitorEvent, "SpanFinish: {%s}", out.str()); - } } const char * getSpanID() const @@ -241,17 +511,6 @@ class CSpan : public CInterfaceOf ISpan * createClientSpan(const char * name) override; ISpan * createInternalSpan(const char * name) override; - virtual void toLog(StringBuffer & out) const override - { - out.append(",\"Name\":\"").append(name.get()).append("\""); - - if (span != nullptr) - { - out.append(",\"TraceID\":\"").append(traceID.get()).append("\""); - out.append(",\"SpanID\":\"").append(spanID.get()).append("\""); - } - } - virtual void toString(StringBuffer & out) const { toString(out, true); @@ -289,6 +548,22 @@ class CSpan : public CInterfaceOf span->SetAttribute(key, val); } + void addSpanEvent(const char * eventName, IProperties * attributes) override + { + if (span && !isEmptyString(eventName)) + { + std::map attributesMap; + Owned iter = attributes->getIterator(); + ForEach(*iter) + { + const char * key = iter->getPropKey(); + attributesMap.insert(std::pair(key, iter->queryPropValue())); + } + + span->AddEvent(eventName, attributesMap); + } + } + void addSpanEvent(const char * eventName) override { if (span && !isEmptyString(eventName)) @@ -428,13 +703,6 @@ class CSpan : public CInterfaceOf if (span != nullptr) { storeSpanContext(); - - if (queryInternalTraceManager().logSpanStart()) - { - StringBuffer out; - toLog(out); - LOG(MCmonitorEvent, "SpanStart: {%s}", out.str()); - } } } } @@ -513,10 
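+    //DEFAULT_SPAN_LOG_FLAGS currently expands to LogAttributes | LogParentInfo (declared in jtrace.hpp)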
+781,10 @@ class CNullSpan : public CInterfaceOf virtual void setSpanAttribute(const char * key, const char * val) override {} virtual void setSpanAttributes(const IProperties * attributes) override {} virtual void addSpanEvent(const char * eventName) override {} + virtual void addSpanEvent(const char * eventName, IProperties * attributes) override {}; virtual bool getSpanContext(IProperties * ctxProps, bool otelFormatted) const override { return false; } virtual void toString(StringBuffer & out) const override {} - virtual void toLog(StringBuffer & out) const override {} virtual void getLogPrefix(StringBuffer & out) const override {} virtual const char* queryGlobalId() const override { return nullptr; } @@ -579,15 +847,6 @@ class CChildSpan : public CSpan return localParentSpan->queryLocalId(); } - virtual void toLog(StringBuffer & out) const override - { - CSpan::toLog(out); - - out.append(",\"ParentSpanID\": \""); - out.append(localParentSpan->getSpanID()); - out.append("\""); - } - virtual void toString(StringBuffer & out, bool isLeaf) const { CSpan::toString(out, isLeaf); @@ -611,12 +870,6 @@ class CInternalSpan : public CChildSpan init(SpanFlags::None); } - void toLog(StringBuffer & out) const override - { - out.append("\"Type\":\"Internal\""); - CChildSpan::toLog(out); - } - void toString(StringBuffer & out, bool isLeaf) const override { out.append("\"Type\":\"Internal\""); @@ -634,12 +887,6 @@ class CClientSpan : public CChildSpan init(SpanFlags::None); } - void toLog(StringBuffer & out) const override - { - out.append("\"Type\":\"Client\""); - CChildSpan::toLog(out); - } - void toString(StringBuffer & out, bool isLeaf) const override { out.append("\"Type\":\"Client\""); @@ -734,6 +981,9 @@ class CServerSpan : public CSpan if (!isEmptyString(hpccCallerId)) setSpanAttribute(kCallerIdOtelAttributeName, hpccCallerId.get()); + + if (!isEmptyString(hpccLocalId)) + setSpanAttribute(kLocalIdIdOtelAttributeName, hpccLocalId.get()); } public: @@ -761,28 +1011,6 @@ class CServerSpan : public CSpan return hpccLocalId.get(); } - virtual void toLog(StringBuffer & out) const override - { - out.append("\"Type\":\"Server\""); - CSpan::toLog(out); - - if (!isEmptyString(hpccGlobalId.get())) - out.append(",\"GlobalID\":\"").append(hpccGlobalId.get()).append("\""); - if (!isEmptyString(hpccCallerId.get())) - out.append(",\"CallerID\":\"").append(hpccCallerId.get()).append("\""); - if (!isEmptyString(hpccLocalId.get())) - out.append(",\"LocalID\":\"").append(hpccLocalId.get()).append("\""); - - if (remoteParentSpanCtx.IsValid()) - { - out.append(",\"ParentSpanID\":\""); - char spanId[16] = {0}; - remoteParentSpanCtx.span_id().ToLowerBase16(spanId); - out.append(16, spanId) - .append("\""); - } - } - virtual void toString(StringBuffer & out, bool isLeaf) const override { out.append("\"Type\":\"Server\""); @@ -826,7 +1054,8 @@ IProperties * getSpanContext(const ISpan * span) void CTraceManager::initTracerProviderAndGlobalInternals(const IPropertyTree * traceConfig) { - std::unique_ptr exporter = NoopSpanExporterFactory::Create(); + //Trace to JLog by default. 
+ std::unique_ptr exporter = JLogSpanExporterFactory::Create(DEFAULT_SPAN_LOG_FLAGS); //Administrators can choose to export trace data to a different backend by specifying the exporter type if (traceConfig && traceConfig->hasProp("exporter")) @@ -836,14 +1065,14 @@ void CTraceManager::initTracerProviderAndGlobalInternals(const IPropertyTree * t { StringBuffer exportType; exportConfig->getProp("@type", exportType); - DBGLOG("Exporter type: %s", exportType.str()); + LOG(MCoperatorInfo, "Exporter type: %s", exportType.str()); if (!exportType.isEmpty()) { if (stricmp(exportType.str(), "OS")==0) //To stdout/err { exporter = opentelemetry::exporter::trace::OStreamSpanExporterFactory::Create(); - DBGLOG("Tracing to stdout/err..."); + LOG(MCoperatorInfo, "Tracing exporter set OS"); } else if (stricmp(exportType.str(), "OTLP")==0 || stricmp(exportType.str(), "OTLP-HTTP")==0) { @@ -859,7 +1088,7 @@ void CTraceManager::initTracerProviderAndGlobalInternals(const IPropertyTree * t trace_opts.console_debug = exportConfig->getPropBool("@consoleDebug", false); exporter = opentelemetry::exporter::otlp::OtlpHttpExporterFactory::Create(trace_opts); - DBGLOG("Exporting traces via OTLP/HTTP to: (%s)", trace_opts.url.c_str()); + LOG(MCoperatorInfo,"Tracing exporter set to OTLP/HTTP to: (%s)", trace_opts.url.c_str()); } else if (stricmp(exportType.str(), "OTLP-GRPC")==0) { @@ -884,17 +1113,64 @@ void CTraceManager::initTracerProviderAndGlobalInternals(const IPropertyTree * t opts.timeout = std::chrono::seconds(exportConfig->getPropInt("@timeOutSecs")); exporter = otlp::OtlpGrpcExporterFactory::Create(opts); - DBGLOG("Exporting traces via OTLP/GRPC to: (%s)", opts.endpoint.c_str()); + LOG(MCoperatorInfo, "Tracing exporter set to OTLP/GRPC to: (%s)", opts.endpoint.c_str()); + } + else if (stricmp(exportType.str(), "JLOG")==0) + { + StringBuffer logFlagsStr; + SpanLogFlags logFlags = SpanLogFlags::LogNone; + + if (exportConfig->getPropBool("@logSpanDetails", false)) + { + logFlags |= SpanLogFlags::LogSpanDetails; + logFlagsStr.append(" LogDetails "); + } + if (exportConfig->getPropBool("@logParentInfo", false)) + { + logFlags |= SpanLogFlags::LogParentInfo; + logFlagsStr.append(" LogParentInfo "); + } + if (exportConfig->getPropBool("@logAttributes", false)) + { + logFlags |= SpanLogFlags::LogAttributes; + logFlagsStr.append(" LogAttributes "); + } + if (exportConfig->getPropBool("@logEvents", false)) + { + logFlags |= SpanLogFlags::LogEvents; + logFlagsStr.append(" LogEvents "); + } + if (exportConfig->getPropBool("@logLinks", false)) + { + logFlags |= SpanLogFlags::LogLinks; + logFlagsStr.append(" LogLinks "); + } + if (exportConfig->getPropBool("@logResources", false)) + { + logFlags |= SpanLogFlags::LogResources; + logFlagsStr.append(" LogLinks "); + } + + //if no log feature flags provided, use default + if (logFlags == SpanLogFlags::LogNone) + logFlags = DEFAULT_SPAN_LOG_FLAGS; + + exporter = JLogSpanExporterFactory::Create(logFlags); + + LOG(MCoperatorInfo, "Tracing exporter set to JLog: logFlags( LogAttributes LogParentInfo %s)", logFlagsStr.str()); } else if (stricmp(exportType.str(), "Prometheus")==0) - DBGLOG("Tracing to Prometheus currently not supported"); + LOG(MCoperatorInfo, "Tracing to Prometheus currently not supported"); else if (stricmp(exportType.str(), "NONE")==0) - DBGLOG("Tracing exporter set to 'NONE', no trace exporting will be performed"); + { + exporter = NoopSpanExporterFactory::Create(); + LOG(MCoperatorInfo, "Tracing exporter set to 'NONE', no trace exporting will be performed"); + } 
else - DBGLOG("Tracing exporter type not supported: '%s', no trace exporting will be performed", exportType.str()); + LOG(MCoperatorInfo, "Tracing exporter type not supported: '%s', JLog trace exporting will be performed", exportType.str()); } else - DBGLOG("Tracing exporter type not specified"); + LOG(MCoperatorInfo, "Tracing exporter type not specified"); } } @@ -917,15 +1193,15 @@ void CTraceManager::initTracerProviderAndGlobalInternals(const IPropertyTree * t //equal to max_queue_size. //size_t max_export_batch_size = 512 processor = opentelemetry::sdk::trace::BatchSpanProcessorFactory::Create(std::move(exporter), options); - DBGLOG("OpenTel tracing using batch Span Processor"); + LOG(MCoperatorInfo, "OpenTel tracing using batch Span Processor"); } else if (foundProcessorType && strcmp("simple", processorType.str())==0) { - DBGLOG("OpenTel tracing using batch simple Processor"); + LOG(MCoperatorInfo, "OpenTel tracing using batch simple Processor"); } else { - DBGLOG("OpenTel tracing detected invalid processor type: '%s'", processorType.str()); + LOG(MCoperatorInfo, "OpenTel tracing detected invalid processor type: '%s'", processorType.str()); } } @@ -950,7 +1226,7 @@ Expected Configuration format: alwaysCreateGlobalIds : false #optional - should global ids always be created? alwaysCreateTraceIds #optional - should trace ids always be created? exporter: #optional - Controls how trace data is exported/reported - type: OTLP #OS|OTLP|Prometheus|HPCC (default: no export, jlog entry) + type: OTLP #OS|OTLP|Prometheus|JLOG (default: JLOG) endpoint: "localhost:4317" #exporter specific key/value pairs useSslCredentials: true sslCredentialsCACcert: "ssl-certificate" @@ -966,14 +1242,18 @@ void CTraceManager::initTracer(const IPropertyTree * traceConfig) if (!traceConfig) { const char * simulatedGlobalYaml = R"!!(global: -tracing: + tracing: disabled: false - exporter: - type: OTLP-HTTP - timeOutSecs: 15 - consoleDebug: true processor: - type: simple + type: simple + exporter: + type: JLOG + logSpanDetails: true + logParentInfo: true + logAttributes: true + logEvents: true + logLinks: true + logResources: true )!!"; testTree.setown(createPTreeFromYAMLString(simulatedGlobalYaml, ipt_none, ptr_ignoreWhiteSpace, nullptr)); traceConfig = testTree->queryPropTree("global/tracing"); @@ -984,7 +1264,6 @@ void CTraceManager::initTracer(const IPropertyTree * traceConfig) toXML(traceConfig, xml); DBGLOG("traceConfig tree: %s", xml.str()); } - #endif bool disableTracing = traceConfig && traceConfig->getPropBool("@disabled", false); @@ -1004,8 +1283,6 @@ void CTraceManager::initTracer(const IPropertyTree * traceConfig) //Non open-telemetry tracing configuration if (traceConfig) { - optLogSpanStart = traceConfig->getPropBool("@logSpanStart", optLogSpanStart); - optLogSpanFinish = traceConfig->getPropBool("@logSpanFinish", optLogSpanFinish); optAlwaysCreateGlobalIds = traceConfig->getPropBool("@alwaysCreateGlobalIds", optAlwaysCreateGlobalIds); optAlwaysCreateTraceIds = traceConfig->getPropBool("@alwaysCreateTraceIds", optAlwaysCreateTraceIds); } diff --git a/system/jlib/jtrace.hpp b/system/jlib/jtrace.hpp index 7998ffd5c3c..aa2a46a077f 100644 --- a/system/jlib/jtrace.hpp +++ b/system/jlib/jtrace.hpp @@ -17,7 +17,6 @@ #ifndef JTRACE_HPP #define JTRACE_HPP - /** * @brief This follows open telemetry's span attribute naming conventions * Known HPCC span Keys could be added here @@ -29,6 +28,22 @@ static constexpr const char *kLegacyGlobalIdHttpHeaderName = "HPCC-Global-Id"; static constexpr const char 
*kLegacyCallerIdHttpHeaderName = "HPCC-Caller-Id"; static constexpr const char *kGlobalIdOtelAttributeName = "hpcc.globalid"; static constexpr const char *kCallerIdOtelAttributeName = "hpcc.callerid"; +static constexpr const char *kLocalIdIdOtelAttributeName = "hpcc.localid"; + +enum class SpanLogFlags : unsigned +{ + LogNone = 0x00000000, + LogSpanDetails = 0x00000001, + LogParentInfo = 0x00000002, + LogAttributes = 0x00000004, + LogEvents = 0x00000008, + LogLinks = 0x00000010, + LogResources = 0x00000020, +}; +BITMASK_ENUM(SpanLogFlags); + +static constexpr SpanLogFlags DEFAULT_SPAN_LOG_FLAGS = SpanLogFlags::LogAttributes | SpanLogFlags::LogParentInfo; + enum class SpanFlags : unsigned { @@ -43,9 +58,9 @@ interface ISpan : extends IInterface virtual void setSpanAttribute(const char * key, const char * val) = 0; virtual void setSpanAttributes(const IProperties * attributes) = 0; virtual void addSpanEvent(const char * eventName) = 0; + virtual void addSpanEvent(const char * eventName, IProperties * attributes) = 0; virtual bool getSpanContext(IProperties * ctxProps, bool otelFormatted) const = 0; virtual void toString(StringBuffer & out) const = 0; - virtual void toLog(StringBuffer & out) const = 0; virtual void getLogPrefix(StringBuffer & out) const = 0; virtual ISpan * createClientSpan(const char * name) = 0; @@ -70,6 +85,17 @@ interface ITraceManager : extends IInterface extern jlib_decl ISpan * getNullSpan(); extern jlib_decl void initTraceManager(const char * componentName, const IPropertyTree * componentConfig, const IPropertyTree * globalConfig); extern jlib_decl ITraceManager & queryTraceManager(); +/* +Temporarily disabled due to build issues in certain environments +#ifdef _USE_CPPUNIT +#include "opentelemetry/sdk/common/attribute_utils.h" +#include "opentelemetry/sdk/resource/resource.h" + +extern jlib_decl void testJLogExporterPrintResources(StringBuffer & out, const opentelemetry::sdk::resource::Resource &resources); +extern jlib_decl void testJLogExporterPrintAttributes(StringBuffer & out, const std::unordered_map & map, const char * attsContainerName); +#endif +*/ + //The following class is responsible for ensuring that the active span is restored in a context when the scope is exited //Use a template class so it can be reused for IContextLogger and IEspContext diff --git a/system/metrics/sinks/log/logSink.cpp b/system/metrics/sinks/log/logSink.cpp index d9ce1b4fc99..30c51eaf922 100644 --- a/system/metrics/sinks/log/logSink.cpp +++ b/system/metrics/sinks/log/logSink.cpp @@ -65,7 +65,7 @@ void LogMetricSink::writeLogEntry(const std::shared_ptr &pMetric) { name.append(".").append(unitsStr); } - LOG(MCoperatorMetric, "name=%s,value=%" I64F "d", name.c_str(), metricValue); + LOG(MCmonitorMetric, "name=%s,value=%" I64F "d", name.c_str(), metricValue); } } else @@ -84,20 +84,20 @@ void LogMetricSink::writeLogEntry(const std::shared_ptr &pMetric) cumulative += values[i]; if (!ignoreZeroMetrics || values[i]) { - LOG(MCoperatorMetric, "name=%s, bucket le %" I64F "d=%" I64F "d", name.c_str(), limits[i], cumulative); + LOG(MCmonitorMetric, "name=%s, bucket le %" I64F "d=%" I64F "d", name.c_str(), limits[i], cumulative); } } // The inf bucket count is the last element in the array of values returned. 
// Add it to the cumulative count and print the value cumulative += values[countBucketValues - 1]; - LOG(MCoperatorMetric, "name=%s, bucket inf=%" I64F "d", name.c_str(), cumulative); + LOG(MCmonitorMetric, "name=%s, bucket inf=%" I64F "d", name.c_str(), cumulative); // sum - total of all observations - LOG(MCoperatorMetric, "name=%s, sum=%" I64F "d", name.c_str(), sum); + LOG(MCmonitorMetric, "name=%s, sum=%" I64F "d", name.c_str(), sum); // count - total of all bucket counts (same as inf) - LOG(MCoperatorMetric, "name=%s, count=%" I64F "d", name.c_str(), cumulative); + LOG(MCmonitorMetric, "name=%s, count=%" I64F "d", name.c_str(), cumulative); } } } diff --git a/testing/unittests/jlibtests.cpp b/testing/unittests/jlibtests.cpp index 6aeb3e11f36..4bbf6e80cee 100644 --- a/testing/unittests/jlibtests.cpp +++ b/testing/unittests/jlibtests.cpp @@ -36,6 +36,9 @@ #include "jutil.hpp" #include "junicode.hpp" +#include "opentelemetry/sdk/common/attribute_utils.h" +#include "opentelemetry/sdk/resource/resource.h" + #include "unittests.hpp" #define CPPUNIT_ASSERT_EQUAL_STR(x, y) CPPUNIT_ASSERT_EQUAL(std::string(x ? x : ""),std::string(y ? y : "")) @@ -60,6 +63,9 @@ class JlibTraceTest : public CppUnit::TestFixture CPPUNIT_TEST(testNullSpan); CPPUNIT_TEST(testClientSpanGlobalID); CPPUNIT_TEST(testEnsureTraceID); + + //CPPUNIT_TEST(testJTraceJLOGExporterprintResources); + //CPPUNIT_TEST(testJTraceJLOGExporterprintAttributes); CPPUNIT_TEST_SUITE_END(); const char * simulatedGlobalYaml = R"!!(global: @@ -104,6 +110,84 @@ class JlibTraceTest : public CppUnit::TestFixture protected: + /*void testJTraceJLOGExporterprintAttributes() + { + StringBuffer out; + testJLogExporterPrintAttributes(out, {}, "attributes"); + CPPUNIT_ASSERT_EQUAL_MESSAGE("Unexpected non-empty printattributes", true, out.length() == 0); + + + testJLogExporterPrintAttributes(out, {{"url", "https://localhost"}, {"content-length", 562}, {"content-type", "html/text"}}, "attributes"); + CPPUNIT_ASSERT_EQUAL_MESSAGE("Unexpected empty printattributes", false, out.length() == 0); + + Owned jtraceAsTree; + try + { + CPPUNIT_ASSERT_EQUAL_MESSAGE("Unexpected leading non-comma char in printattributes", true, out.charAt(0) == ','); + + out.setCharAt(0, '{'); + out.append("}"); + + jtraceAsTree.setown(createPTreeFromJSONString(out.str())); + } + catch (IException *e) + { + StringBuffer msg; + msg.append("Unexpected printAttributes format failure detected: "); + e->errorMessage(msg); + e->Release(); + CPPUNIT_ASSERT_MESSAGE(msg.str(), false); + } + + CPPUNIT_ASSERT_EQUAL_MESSAGE("Unexpected printresources format failure detected", true, jtraceAsTree != nullptr); + CPPUNIT_ASSERT_EQUAL_MESSAGE("Missing resource attribute detected", true, jtraceAsTree->hasProp("attributes")); + CPPUNIT_ASSERT_EQUAL_MESSAGE("Missing resource attribute detected", true, jtraceAsTree->hasProp("attributes/url")); + CPPUNIT_ASSERT_EQUAL_MESSAGE("Missing resource attribute detected", true, jtraceAsTree->hasProp("attributes/content-length")); + CPPUNIT_ASSERT_EQUAL_MESSAGE("Missing resource attribute detected", true, jtraceAsTree->hasProp("attributes/content-type")); + }*/ + + /*void testJTraceJLOGExporterprintResources() + { + StringBuffer out; + auto dummyAttributes = opentelemetry::sdk::resource::ResourceAttributes + { + {"service.name", "shoppingcart"}, + {"service.instance.id", "instance-12"} + }; + auto dummyResources = opentelemetry::sdk::resource::Resource::Create(dummyAttributes); + + testJLogExporterPrintResources(out, dummyResources); + + Owned jtraceAsTree; + try 
+ { + CPPUNIT_ASSERT_EQUAL_MESSAGE("Unexpected empty printresources return", false, out.length() == 0); + + CPPUNIT_ASSERT_EQUAL_MESSAGE("Unexpected leading non-comma char in printresources return", true, out.charAt(0) == ','); + + out.setCharAt(0, '{'); + out.append("}"); + + jtraceAsTree.setown(createPTreeFromJSONString(out.str())); + } + catch (IException *e) + { + StringBuffer msg; + msg.append("Unexpected printresources format failure detected: "); + e->errorMessage(msg); + e->Release(); + CPPUNIT_ASSERT_MESSAGE(msg.str(), false); + } + + CPPUNIT_ASSERT_EQUAL_MESSAGE("Unexpected printresources format failure detected", true, jtraceAsTree != nullptr); + CPPUNIT_ASSERT_EQUAL_MESSAGE("Missing resource attribute detected", true, jtraceAsTree->hasProp("resources")); + CPPUNIT_ASSERT_EQUAL_MESSAGE("Missing resource attribute detected", true, jtraceAsTree->hasProp("resources/service.name")); + CPPUNIT_ASSERT_EQUAL_MESSAGE("Missing resource attribute detected", true, jtraceAsTree->hasProp("resources/service.instance.id")); + CPPUNIT_ASSERT_EQUAL_MESSAGE("Missing resource attribute detected", true, jtraceAsTree->hasProp("resources/telemetry.sdk.language")); + CPPUNIT_ASSERT_EQUAL_MESSAGE("Missing resource attribute detected", true, jtraceAsTree->hasProp("resources/telemetry.sdk.version")); + CPPUNIT_ASSERT_EQUAL_MESSAGE("Missing resource attribute detected", true, jtraceAsTree->hasProp("resources/telemetry.sdk.name")); + }*/ + void testTraceDisableConfig() { Owned testTree = createPTreeFromYAMLString(disableTracingYaml, ipt_none, ptr_ignoreWhiteSpace, nullptr); @@ -383,32 +467,6 @@ class JlibTraceTest : public CppUnit::TestFixture Owned internalSpan2 = internalSpan->createInternalSpan("internalSpan2"); StringBuffer out; - out.set("{"); - internalSpan2->toLog(out); - out.append("}"); - { - Owned jtraceAsTree; - try - { - jtraceAsTree.setown(createPTreeFromJSONString(out.str())); - } - catch (IException *e) - { - StringBuffer msg; - msg.append("Unexpected toLog format failure detected: "); - e->errorMessage(msg); - e->Release(); - CPPUNIT_ASSERT_MESSAGE(msg.str(), false); - } - - CPPUNIT_ASSERT_EQUAL_MESSAGE("Unexpected toLog format failure detected", true, jtraceAsTree != nullptr); - CPPUNIT_ASSERT_EQUAL_MESSAGE("Unexpected missing 'TraceID' entry in toLog output", true, jtraceAsTree->hasProp("TraceID")); - CPPUNIT_ASSERT_EQUAL_MESSAGE("Unexpected missing 'SpanID' entry in toLog output", true, jtraceAsTree->hasProp("SpanID")); - CPPUNIT_ASSERT_EQUAL_MESSAGE("Unexpected missing 'Name' entry in toLog output", true, jtraceAsTree->hasProp("Name")); - CPPUNIT_ASSERT_EQUAL_MESSAGE("Unexpected missing 'Type' entry in toLog output", true, jtraceAsTree->hasProp("Type")); - CPPUNIT_ASSERT_EQUAL_MESSAGE("Unexpected missing 'ParentSpanID' entry in toLog output", true, jtraceAsTree->hasProp("ParentSpanID")); - } - out.set("{"); internalSpan2->toString(out); out.append("}"); diff --git a/thorlcr/graph/thgraphmaster.cpp b/thorlcr/graph/thgraphmaster.cpp index 8a313fc4090..bcdc68b656a 100644 --- a/thorlcr/graph/thgraphmaster.cpp +++ b/thorlcr/graph/thgraphmaster.cpp @@ -2352,7 +2352,8 @@ void CMasterGraph::reset() void CMasterGraph::abort(IException *e) { if (aborted) return; - getFinalProgress(true); + if (initialized) // if aborted before initialized, there will be no slave activity to collect + getFinalProgress(true); bool _graphDone = graphDone; // aborting master activities can trigger master graphDone, but want to fire GraphAbort to slaves if graphDone=false at start. 
bool dumpInfo = TE_WorkUnitAbortingDumpInfo == e->errorCode() || job.getOptBool("dumpInfoOnAbort"); if (dumpInfo) @@ -2793,9 +2794,12 @@ void CMasterGraph::getFinalProgress(bool aborting) { size32_t progressLen; msg.read(progressLen); - MemoryBuffer progressData; - progressData.setBuffer(progressLen, (void *)msg.readDirect(progressLen)); - queryJobManager().queryDeMonServer()->takeHeartBeat(progressData); + if (progressLen) + { + MemoryBuffer progressData; + progressData.setBuffer(progressLen, (void *)msg.readDirect(progressLen)); + queryJobManager().queryDeMonServer()->takeHeartBeat(progressData); + } } catch (IException *e) { diff --git a/thorlcr/master/thgraphmanager.cpp b/thorlcr/master/thgraphmanager.cpp index ae1d3ded4c4..cbc6b00db96 100644 --- a/thorlcr/master/thgraphmanager.cpp +++ b/thorlcr/master/thgraphmanager.cpp @@ -1460,6 +1460,10 @@ void thorMain(ILogMsgHandler *logHandler, const char *wuid, const char *graphNam { if (!streq(currentWuid, wuid)) { + queryLogMsgManager()->removeJobId(thorJob.queryJobID()); + LogMsgJobId thorJobId = queryLogMsgManager()->addJobId(wuid); + thorJob.setJobID(thorJobId); + setDefaultJobId(thorJobId); // perhaps slightly overkill, but avoid checking/locking wuid to add pod info. // if this instance has already done so. auto it = publishedPodWuids.find(wuid.str()); @@ -1477,6 +1481,7 @@ void thorMain(ILogMsgHandler *logHandler, const char *wuid, const char *graphNam // NB: this set of pods could still already be published, if so, publishPodNames will not re-add. } currentWuid.set(wuid); // NB: will always be same if !multiJobLinger + saveWuidToFile(currentWuid); break; // success } else if (ret < 0) diff --git a/thorlcr/master/thmastermain.cpp b/thorlcr/master/thmastermain.cpp index a575a6d362a..1dc2bdde01d 100644 --- a/thorlcr/master/thmastermain.cpp +++ b/thorlcr/master/thmastermain.cpp @@ -1002,6 +1002,7 @@ int main( int argc, const char *argv[] ) bool doWorkerRegistration = false; if (isContainerized()) { + saveWuidToFile(workunit); LogMsgJobId thorJobId = queryLogMsgManager()->addJobId(workunit); thorJob.setJobID(thorJobId); setDefaultJobId(thorJobId); diff --git a/thorlcr/slave/slavmain.cpp b/thorlcr/slave/slavmain.cpp index 80ab3d47800..b4621aebb6b 100644 --- a/thorlcr/slave/slavmain.cpp +++ b/thorlcr/slave/slavmain.cpp @@ -1788,6 +1788,7 @@ class CJobListener : public CSimpleInterface StringAttr wuid, graphName; StringBuffer soPath; msg.read(wuid); + saveWuidToFile(wuid); msg.read(graphName); Owned querySo; @@ -1885,6 +1886,7 @@ class CJobListener : public CSimpleInterface ILogMsgFilter *existingLogHandler = queryLogMsgManager()->queryMonitorFilter(logHandler); dbgassertex(existingLogHandler); verifyex(queryLogMsgManager()->changeMonitorFilterOwn(logHandler, getCategoryLogMsgFilter(existingLogHandler->queryAudienceMask(), existingLogHandler->queryClassMask(), maxLogDetail))); + queryLogMsgManager()->removeJobId(thorJob.queryJobID()); LogMsgJobId thorJobId = queryLogMsgManager()->addJobId(wuid); thorJob.setJobID(thorJobId); setDefaultJobId(thorJobId); diff --git a/thorlcr/slave/slwatchdog.cpp b/thorlcr/slave/slwatchdog.cpp index c5569fd2dd1..7860eaeecde 100644 --- a/thorlcr/slave/slwatchdog.cpp +++ b/thorlcr/slave/slwatchdog.cpp @@ -111,7 +111,12 @@ class CGraphProgressHandlerBase : public CInterfaceOf, implement virtual void stopGraph(CGraphBase &graph, MemoryBuffer *mb) override { CriticalBlock b(crit); - if (NotFound != activeGraphs.find(graph)) + if (NotFound == activeGraphs.find(graph)) + { + if (mb) + mb->append((size32_t)0); + } + else 
{ StringBuffer str("Watchdog: Stop Job "); LOG(MCthorDetailedDebugInfo, thorJob, "%s", str.append(graph.queryGraphId()).str()); diff --git a/thorlcr/slave/thslavemain.cpp b/thorlcr/slave/thslavemain.cpp index 23cfdbbbc5d..1620955ed99 100644 --- a/thorlcr/slave/thslavemain.cpp +++ b/thorlcr/slave/thslavemain.cpp @@ -316,24 +316,35 @@ class CReleaseMutex : public CSimpleInterface, public Mutex ILogMsgHandler *startSlaveLog() { ILogMsgHandler *logHandler = nullptr; -#ifndef _CONTAINERIZED - StringBuffer fileName("thorslave"); - Owned lf = createComponentLogFileCreator(globals->queryProp("@logDir"), "thor"); - StringBuffer slaveNumStr; - lf->setPostfix(slaveNumStr.append(mySlaveNum).str()); - lf->setCreateAliasFile(false); - lf->setName(fileName.str());//override default filename - logHandler = lf->beginLogging(); + if (!isContainerized()) + { + StringBuffer fileName("thorslave"); + Owned lf = createComponentLogFileCreator(globals->queryProp("@logDir"), "thor"); + StringBuffer slaveNumStr; + lf->setPostfix(slaveNumStr.append(mySlaveNum).str()); + lf->setCreateAliasFile(false); + lf->setName(fileName.str());//override default filename + logHandler = lf->beginLogging(); #ifndef _DEBUG - // keep duplicate logging output to stderr to aide debugging - queryLogMsgManager()->removeMonitor(queryStderrLogMsgHandler()); + // keep duplicate logging output to stderr to aide debugging + queryLogMsgManager()->removeMonitor(queryStderrLogMsgHandler()); #endif - LOG(MCdebugProgress, thorJob, "Opened log file %s", lf->queryLogFileSpec()); -#else - setupContainerizedLogMsgHandler(); - logHandler = queryStderrLogMsgHandler(); -#endif + LOG(MCdebugProgress, thorJob, "Opened log file %s", lf->queryLogFileSpec()); + } + else + { + setupContainerizedLogMsgHandler(); + logHandler = queryStderrLogMsgHandler(); + StringBuffer wuid; + if (getComponentConfigSP()->getProp("@workunit", wuid)) + { + LogMsgJobId thorJobId = queryLogMsgManager()->addJobId(wuid); + thorJob.setJobID(thorJobId); + setDefaultJobId(thorJobId); + } + } + //setupContainerizedStorageLocations(); LOG(MCdebugProgress, thorJob, "Build %s", hpccBuildInfo.buildTag); return logHandler; @@ -391,12 +402,9 @@ int main( int argc, const char *argv[] ) usage(); mySlaveNum = globals->getPropInt("@slavenum", NotFound); - /* NB: in cloud/non-local storage mode, slave number is not known until after registration with the master - * For the time being log file names are based on their slave number, so can only start when known. 
- */ - ILogMsgHandler *slaveLogHandler = nullptr; - if (NotFound != mySlaveNum) - slaveLogHandler = startSlaveLog(); + if (!isContainerized() && (NotFound == mySlaveNum)) + throw makeStringException(0, "Slave number not specified (@slavenum)"); + ILogMsgHandler *slaveLogHandler = startSlaveLog(); // In container world, SLAVE= will not be used const char *slave = globals->queryProp("@slave"); @@ -429,9 +437,6 @@ int main( int argc, const char *argv[] ) if (RegisterSelf(masterEp)) { - if (!slaveLogHandler) - slaveLogHandler = startSlaveLog(); - if (globals->getPropBool("@MPChannelReconnect")) getMPServer()->setOpt(mpsopt_channelreopen, "true"); diff --git a/thorlcr/thorutil/thormisc.cpp b/thorlcr/thorutil/thormisc.cpp index 9eb7d43f8bf..518491d145a 100644 --- a/thorlcr/thorutil/thormisc.cpp +++ b/thorlcr/thorutil/thormisc.cpp @@ -1681,4 +1681,14 @@ void CThorPerfTracer::stop() EXCLOG(E); ::Release(E); } +} + +void saveWuidToFile(const char *wuid) +{ + // Store current wuid to a local file, so the post-mortem script can find it (and if necessary publish files to it) + Owned<IFile> wuidFile = createIFile("wuid"); // NB: each pod is in its own private working directory + Owned<IFileIO> wuidFileIO = wuidFile->open(IFOcreate); + if (!wuidFileIO) + throw makeStringException(0, "Failed to create file 'wuid' to store current workunit for post mortem script"); + wuidFileIO->write(0, strlen(wuid), wuid); } \ No newline at end of file diff --git a/thorlcr/thorutil/thormisc.hpp b/thorlcr/thorutil/thormisc.hpp index ee667d4246c..39270729a6c 100644 --- a/thorlcr/thorutil/thormisc.hpp +++ b/thorlcr/thorutil/thormisc.hpp @@ -615,5 +615,7 @@ class graph_decl CThorPerfTracer : protected PerfTracer void stop(); }; +extern graph_decl void saveWuidToFile(const char *wuid); + #endif
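
For reference, a minimal sketch of the tracing configuration consumed by the exporter/processor code in this change. The key names (the exporter type, the logSpanDetails/logParentInfo/logAttributes/logEvents/logLinks/logResources flags, and the processor type) are the attributes read by CTraceManager::initTracerProviderAndGlobalInternals, and the global/tracing placement mirrors the simulated YAML in CTraceManager::initTracer; the particular values shown are illustrative assumptions, not a definitive values.yaml fragment. When none of the log* flags are set, the JLOG exporter falls back to DEFAULT_SPAN_LOG_FLAGS (LogAttributes | LogParentInfo).

global:
  tracing:
    disabled: false
    processor:
      type: simple            # "simple" is matched explicitly; the batch span processor is the other supported option
    exporter:
      type: JLOG              # alternatives handled above: OS, OTLP-HTTP, OTLP-GRPC, NONE
      logSpanDetails: true    # maps to SpanLogFlags::LogSpanDetails
      logEvents: true         # maps to SpanLogFlags::LogEvents
      # leaving every log* flag unset selects DEFAULT_SPAN_LOG_FLAGS (LogAttributes | LogParentInfo)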