Merge remote-tracking branch 'origin/candidate-9.4.x'
Signed-off-by: Gavin Halliday <[email protected]>

# Conflicts:
#	helm/hpcc/Chart.yaml
#	helm/hpcc/templates/_helpers.tpl
#	helm/hpcc/templates/dafilesrv.yaml
#	helm/hpcc/templates/dali.yaml
#	helm/hpcc/templates/dfuserver.yaml
#	helm/hpcc/templates/eclagent.yaml
#	helm/hpcc/templates/eclccserver.yaml
#	helm/hpcc/templates/eclscheduler.yaml
#	helm/hpcc/templates/esp.yaml
#	helm/hpcc/templates/localroxie.yaml
#	helm/hpcc/templates/roxie.yaml
#	helm/hpcc/templates/sasha.yaml
#	helm/hpcc/templates/thor.yaml
#	version.cmake
ghalliday committed Jan 19, 2024
2 parents caa7c5c + 1fc701a commit d480684
Showing 32 changed files with 869 additions and 237 deletions.
44 changes: 37 additions & 7 deletions .github/workflows/build-assets.yml
@@ -112,6 +112,9 @@ jobs:
- os: ubuntu-22.04
name: LN
ln: true
- os: ubuntu-22.04
name: Enterprise
ee: true
- os: ubuntu-20.04
name: LN
ln: true
@@ -138,7 +141,7 @@ jobs:
path: ${{ needs.preamble.outputs.folder_platform }}

- name: Checkout LN
if: ${{ matrix.ln }}
if: ${{ matrix.ln || matrix.ee }}
uses: actions/checkout@v3
with:
repository: ${{ github.repository_owner }}/LN
@@ -192,7 +195,7 @@ jobs:

# Community Build
- name: CMake Packages (community)
if: ${{ !matrix.ln && !matrix.container && !matrix.documentation }}
if: ${{ !matrix.ln && !matrix.ee && !matrix.container && !matrix.documentation }}
run: |
mkdir -p ${{ needs.preamble.outputs.folder_build }}
echo "${{ secrets.SIGNING_SECRET }}" > ${{ needs.preamble.outputs.folder_build }}/private.key
@@ -209,7 +212,7 @@
done
- name: CMake Containerized Packages (community)
if: ${{ !matrix.ln && matrix.container && !matrix.documentation }}
if: ${{ !matrix.ln && !matrix.ee && matrix.container && !matrix.documentation }}
run: |
mkdir -p ${{ needs.preamble.outputs.folder_build }}
echo "${{ secrets.SIGNING_SECRET }}" > ${{ needs.preamble.outputs.folder_build }}/private.key
@@ -223,7 +226,7 @@
cmake --build /hpcc-dev/build --parallel $(nproc) --target package"
- name: CMake documentation (community)
if: ${{ !matrix.ln && !matrix.container && matrix.documentation }}
if: ${{ !matrix.ln && !matrix.ee && !matrix.container && matrix.documentation }}
run: |
mkdir -p {${{needs.preamble.outputs.folder_build }},EN_US,PT_BR}
sudo rm -f ${{ needs.preamble.outputs.folder_build }}/CMakeCache.txt
@@ -235,7 +238,7 @@
docker run --rm --mount ${{ needs.preamble.outputs.mount_platform }} --mount ${{ needs.preamble.outputs.mount_build }} ${{ steps.vars.outputs.docker_tag_candidate_base }} "cd /hpcc-dev/build/Release/docs/PT_BR && zip ALL_HPCC_DOCS_PT_BR-${{ needs.preamble.outputs.community_tag }}.zip *.pdf"
- name: Upload Assets (community)
if: ${{ !matrix.ln }}
if: ${{ !matrix.ln && !matrix.ee }}
uses: ncipollo/[email protected]
with:
allowUpdates: true
@@ -244,7 +247,7 @@
artifacts: "${{ needs.preamble.outputs.folder_build }}/*.deb,${{ needs.preamble.outputs.folder_build }}/*.rpm,${{ needs.preamble.outputs.folder_build }}/Release/docs/*.zip,${{ needs.preamble.outputs.folder_build }}/Release/docs/EN_US/*.zip,${{ needs.preamble.outputs.folder_build }}/Release/docs/PT_BR/*.zip,${{ needs.preamble.outputs.folder_build }}/docs/EN_US/EclipseHelp/*.zip,${{ needs.preamble.outputs.folder_build }}/docs/EN_US/HTMLHelp/*.zip,${{ needs.preamble.outputs.folder_build }}/docs/PT_BR/HTMLHelp/*.zip"

- name: Locate k8s deb file (community)
if: ${{ !matrix.ln && matrix.container && !matrix.documentation }}
if: ${{ !matrix.ln && !matrix.ee && matrix.container && !matrix.documentation }}
id: container
run: |
k8s_pkg_path=$(ls -t ${{ needs.preamble.outputs.folder_build }}/*64_k8s.deb 2>/dev/null | head -1)
@@ -254,7 +257,7 @@
- name: Create Docker Image (community)
uses: docker/build-push-action@v4
if: ${{ !matrix.ln && matrix.container && !matrix.documentation }}
if: ${{ !matrix.ln && !matrix.ee && matrix.container && !matrix.documentation }}
with:
builder: ${{ steps.buildx.outputs.name }}
file: ${{ needs.preamble.outputs.folder_platform }}/dockerfiles/vcpkg/platform-core-${{ matrix.os }}/Dockerfile
@@ -360,6 +363,33 @@ jobs:
jf docker push ${{ secrets.JFROG_REGISTRY || 'dummy.io' }}/hpccpl-docker-local/platform-core-ln:${{ needs.preamble.outputs.hpcc_version }} --build-name=platform-core-ln --build-number=${{ needs.preamble.outputs.hpcc_version }} --project=hpccpl
jf rt bp platform-core-ln ${{ needs.preamble.outputs.hpcc_version }} --project=hpccpl
# Enterprise Build ---
- name: CMake Packages (enterprise)
if: ${{ matrix.ee }}
run: |
mkdir -p ${{ needs.preamble.outputs.folder_build }}
echo "${{ secrets.SIGNING_SECRET }}" > ${{ needs.preamble.outputs.folder_build }}/private.key
sudo rm -f ${{ needs.preamble.outputs.folder_build }}/CMakeCache.txt
sudo rm -rf ${{ needs.preamble.outputs.folder_build }}/CMakeFiles
docker run --rm --mount ${{ needs.preamble.outputs.mount_platform }} --mount ${{ needs.preamble.outputs.mount_ln }} --mount ${{ needs.preamble.outputs.mount_build }} ${{ steps.vars.outputs.docker_tag_candidate_base }} "${{ needs.preamble.outputs.gpg_import }} && \
cmake -S /hpcc-dev/LN -B /hpcc-dev/build -DHPCC_SOURCE_DIR=/hpcc-dev/HPCC-Platform ${{ needs.preamble.outputs.cmake_docker_config }} -DBUILD_LEVEL=ENTERPRISE -DSIGN_MODULES_PASSPHRASE=${{ secrets.SIGN_MODULES_PASSPHRASE }} -DSIGN_MODULES_KEYID=${{ secrets.SIGN_MODULES_KEYID }} -DPLATFORM=ON -DINCLUDE_PLUGINS=ON -DCONTAINERIZED=OFF -DSUPPRESS_REMBED=ON -DSUPPRESS_V8EMBED=ON -DSUPPRESS_SPARK=ON -DCPACK_STRIP_FILES=OFF && \
cmake --build /hpcc-dev/build --parallel $(nproc) --target package"
- name: Upload Assets (enterprise)
if: ${{ matrix.ee }}
uses: ncipollo/[email protected]
with:
allowUpdates: true
generateReleaseNotes: false
prerelease: ${{ contains(github.ref, '-rc') }}
owner: ${{ secrets.LNB_ACTOR }}
repo: LN
token: ${{ secrets.LNB_TOKEN }}
tag: ${{ needs.preamble.outputs.internal_tag }}
artifacts: "${{ needs.preamble.outputs.folder_build }}/hpccsystems-platform-enterprise*.deb,${{ needs.preamble.outputs.folder_build }}/hpccsystems-platform-enterprise*.rpm"


# Common ---
- name: Cleanup Environment
if: always()
7 changes: 5 additions & 2 deletions common/thorhelper/roxierow.cpp
@@ -488,9 +488,12 @@ class CAllocatorCache : public CSimpleInterfaceOf<IRowAllocatorMetaActIdCache>
CAllocatorCacheItem *container = _lookup(meta, activityId, flags);
if (container)
{
if (0 == (roxiemem::RHFunique & flags))
if (0 == ((roxiemem::RHFunique|roxiemem::RHFblocked) & flags))
return LINK(&container->queryElement());
// if in cache but unique, reuse allocatorId
// If in cache but unique, reuse allocatorId, but create a unique allocator (and heap)
// If blocked the allocator must not be commoned up! (The underlying heap will be within roxiemem.)
// This is very unusual, but can happen if a library is used more than once within the same query
// since you will have multiple activity instances with the same activityId.
SpinUnblock b(allAllocatorsLock);
return callback->createAllocator(this, meta, activityId, container->queryAllocatorId(), flags);
}
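The comment added above describes the caching rule: a cached allocator is only shared when neither RHFunique nor RHFblocked is requested; otherwise the cached allocatorId is reused but a fresh allocator (and heap) is created. A simplified, self-contained C++ sketch of that rule follows; the types and cache here are illustrative stand-ins, not the platform's CAllocatorCache:

#include <cstdio>
#include <map>
#include <memory>

// Simplified stand-ins for the roxiemem flags referenced in the diff.
enum RowHeapFlags : unsigned { RHFnone = 0, RHFunique = 0x1, RHFblocked = 0x2 };

struct Allocator
{
    unsigned allocatorId;
    unsigned flags;
};

class AllocatorCache
{
public:
    // Return the shared allocator when sharing is allowed; otherwise build a private
    // allocator that still reuses the cached allocatorId (mirroring the diff's behaviour).
    std::shared_ptr<Allocator> lookup(unsigned activityId, unsigned flags)
    {
        auto it = cache.find(activityId);
        if (it != cache.end())
        {
            // Shareable only if neither RHFunique nor RHFblocked is requested.
            if (0 == ((RHFunique | RHFblocked) & flags))
                return it->second;                 // common up: hand out the cached instance
            // Unique/blocked: keep the id stable but never share the allocator itself.
            return std::make_shared<Allocator>(Allocator{it->second->allocatorId, flags});
        }
        auto created = std::make_shared<Allocator>(Allocator{nextId++, flags});
        if (0 == ((RHFunique | RHFblocked) & flags))
            cache[activityId] = created;           // only shareable allocators are cached
        return created;
    }

private:
    std::map<unsigned, std::shared_ptr<Allocator>> cache;
    unsigned nextId = 1;
};

int main()
{
    AllocatorCache cache;
    auto a = cache.lookup(42, RHFnone);
    auto b = cache.lookup(42, RHFnone);      // same object: commoned up
    auto c = cache.lookup(42, RHFblocked);   // same allocatorId, but a distinct allocator
    printf("shared=%d sameId=%d distinct=%d\n", a == b, a->allocatorId == c->allocatorId, a != c);
    return 0;
}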
7 changes: 4 additions & 3 deletions dali/base/dafdesc.cpp
@@ -3856,9 +3856,10 @@ class CStoragePlaneInfo : public CInterfaceOf<IStoragePlane>
Owned<IPropertyTreeIterator> srcAliases = xml->getElements("aliases");
ForEach(*srcAliases)
aliases.push_back(new CStoragePlaneAlias(&srcAliases->query()));
Owned<IPropertyTreeIterator> srcHosts = xml->getElements("hosts");
ForEach(*srcHosts)
hosts.emplace_back(srcHosts->query().queryProp(nullptr));
StringArray planeHosts;
getPlaneHosts(planeHosts, xml);
ForEachItemIn(h, planeHosts)
hosts.emplace_back(planeHosts.item(h));
}

virtual const char * queryPrefix() const override { return xml->queryProp("@prefix"); }
8 changes: 4 additions & 4 deletions dali/base/dautils.cpp
@@ -2299,7 +2299,7 @@ void setMaxPageCacheItems(unsigned _maxPageCacheItems)
class CTimedCacheItem: public CInterface
{
protected: friend class CTimedCache;
unsigned due = 0;
unsigned timestamp = 0;
StringAttr owner;
public:
DALI_UID hint;
@@ -2342,9 +2342,9 @@ class CTimedCache
ForEachItemIn(i, items)
{
CTimedCacheItem &item = items.item(i);
if (item.due > now)
if (now - item.timestamp < pageCacheTimeoutMilliSeconds)
{
res = item.due - now;
res = pageCacheTimeoutMilliSeconds - (now - item.timestamp);
break;
}
expired++;
@@ -2392,7 +2392,7 @@ class CTimedCache
CriticalBlock block(sect);
if ((maxPageCacheItems > 0) && (maxPageCacheItems == items.length()))
items.remove(0);
item->due = msTick() + pageCacheTimeoutMilliSeconds;
item->timestamp = msTick();
items.append(*item);
DALI_UID ret = item->hint;
sem.signal();
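The dautils.cpp hunks above store the time an item was added (timestamp) and compare elapsed time against pageCacheTimeoutMilliSeconds, instead of precomputing an absolute due time. One observable property of the elapsed-time form, shown in this standalone sketch (not the Dali code itself), is that it stays correct when an unsigned millisecond counter such as msTick() wraps, whereas the deadline form can report an entry as expired early:

#include <cstdio>

using ticks_t = unsigned;            // msTick()-style unsigned millisecond counter
constexpr ticks_t timeoutMs = 10000; // stand-in for pageCacheTimeoutMilliSeconds

// Deadline form (before the change): the precomputed due time may wrap past zero.
bool stillValidDeadline(ticks_t start, ticks_t now)
{
    ticks_t due = start + timeoutMs;
    return due > now;
}

// Elapsed-time form (after the change): unsigned subtraction wraps consistently,
// so the comparison remains correct across a counter wrap.
bool stillValidElapsed(ticks_t start, ticks_t now)
{
    return now - start < timeoutMs;
}

int main()
{
    ticks_t start = 0xFFFFF000u;     // close to the 32-bit wrap point
    ticks_t now = start + 2000;      // only 2 seconds later, counter not yet wrapped
    // due = start + 10000 wraps to a small value, so the deadline form sees the
    // entry as already expired; the elapsed form computes 2000 < 10000 and does not.
    printf("deadline form says valid: %d\n", stillValidDeadline(start, now));
    printf("elapsed  form says valid: %d\n", stillValidElapsed(start, now));
    return 0;
}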
94 changes: 69 additions & 25 deletions dali/ft/filecopy.cpp
@@ -3384,25 +3384,9 @@ void FileSprayer::spray()
afterTransfer();

//If got here then we have succeeded
//Note: On failure, costs will not be updated. Future: would be useful to have a way to update costs on failure.
updateTargetProperties();

//Calculate and store file access cost
cost_type fileAccessCost = 0;
if (distributedTarget)
{
StringBuffer cluster;
distributedTarget->getClusterName(0, cluster);
if (!cluster.isEmpty())
fileAccessCost += calcFileAccessCost(cluster, totalNumWrites, 0);
}
if (distributedSource && distributedSource->querySuperFile()==nullptr)
{
StringBuffer cluster;
distributedSource->getClusterName(0, cluster);
if (!cluster.isEmpty())
fileAccessCost += calcFileAccessCost(cluster, 0, totalNumReads);
}
progressReport->setFileAccessCost(fileAccessCost);
StringBuffer copyEventText; // [logical-source] > [logical-target]
if (distributedSource)
copyEventText.append(distributedSource->queryLogicalName());
@@ -3455,6 +3439,7 @@ void FileSprayer::updateTargetProperties()
{
TimeSection timer("FileSprayer::updateTargetProperties() time");
Owned<IException> error;
cost_type totalWriteCost = 0;
if (distributedTarget)
{
StringBuffer failedParts;
Expand All @@ -3467,6 +3452,7 @@ void FileSprayer::updateTargetProperties()
bool sameSizeHeaderFooter = isSameSizeHeaderFooter();
bool sameSizeSourceTarget = (sources.ordinality() == distributedTarget->numParts());
offset_t partCompressedLength = 0;
IDistributedSuperFile * superTgt = distributedTarget->querySuperFile();

ForEachItemIn(idx, partition)
{
@@ -3585,15 +3571,37 @@ void FileSprayer::updateTargetProperties()
partLength = 0;
partCompressedLength = 0;
}

// Update @writeCost and @numWrites in subfile properties and update totalWriteCost
if (superTgt)
{
if (cur.whichOutput != (unsigned)-1)
{
unsigned targetPartNum = targets.item(cur.whichOutput).partNum;
IDistributedFile &subfile = superTgt->querySubFile(targetPartNum, true);
DistributedFilePropertyLock lock(&subfile);
IPropertyTree &subFileProps = lock.queryAttributes();
cost_type prevNumWrites = subFileProps.getPropInt64(getDFUQResultFieldName(DFUQRFnumDiskWrites));
cost_type prevWriteCost = subFileProps.getPropInt64(getDFUQResultFieldName(DFUQRFwriteCost));
cost_type curWriteCost = calcFileAccessCost(&subfile, curProgress.numWrites, 0);
subFileProps.setPropInt64(getDFUQResultFieldName(DFUQRFwriteCost), prevWriteCost + curWriteCost);
subFileProps.setPropInt64(getDFUQResultFieldName(DFUQRFnumDiskWrites), prevNumWrites + curProgress.numWrites);
totalWriteCost += curWriteCost;
}
else // not sure if tgt superfile can have whichOutput==-1 (but if so, the following cost calc works)
totalWriteCost += calcFileAccessCost(distributedTarget, curProgress.numWrites, 0);
}
}

if (failedParts.length())
error.setown(MakeStringException(DFTERR_InputOutputCrcMismatch, "%s", failedParts.str()));

DistributedFilePropertyLock lock(distributedTarget);
IPropertyTree &curProps = lock.queryAttributes();
cost_type writeCost = calcFileAccessCost(distributedTarget, totalNumWrites, 0);
curProps.setPropInt64(getDFUQResultFieldName(DFUQRFwriteCost), writeCost);

if (!superTgt)
totalWriteCost = calcFileAccessCost(distributedTarget, totalNumWrites, 0);
curProps.setPropInt64(getDFUQResultFieldName(DFUQRFwriteCost), totalWriteCost);
curProps.setPropInt64(getDFUQResultFieldName(DFUQRFnumDiskWrites), totalNumWrites);

if (calcCRC())
@@ -3772,17 +3780,53 @@ void FileSprayer::updateTargetProperties()
if (expireDays != -1)
curProps.setPropInt("@expireDays", expireDays);
}
// Update file readCost and numReads in file properties and do the same for subfiles
// Update totalReadCost
cost_type totalReadCost = 0;
if (distributedSource)
{
if (distributedSource->querySuperFile()==nullptr)
IDistributedSuperFile * superSrc = distributedSource->querySuperFile();
if (superSrc)
{
IPropertyTree & fileAttr = distributedSource->queryAttributes();
cost_type legacyReadCost = getLegacyReadCost(fileAttr, distributedSource);
cost_type curReadCost = calcFileAccessCost(distributedSource, 0, totalNumReads);
distributedSource->addAttrValue(getDFUQResultFieldName(DFUQRFreadCost), legacyReadCost+curReadCost);
distributedSource->addAttrValue(getDFUQResultFieldName(DFUQRFnumDiskReads), totalNumReads);
ForEachItemIn(idx, partition)
{
PartitionPoint & cur = partition.item(idx);
OutputProgress & curProgress = progress.item(idx);

if (cur.whichInput != (unsigned)-1)
{
unsigned sourcePartNum = sources.item(cur.whichInput).partNum;
IDistributedFile &subfile = superSrc->querySubFile(sourcePartNum, true);
DistributedFilePropertyLock lock(&subfile);
IPropertyTree &subFileProps = lock.queryAttributes();
stat_type prevNumReads = subFileProps.getPropInt64(getDFUQResultFieldName(DFUQRFnumDiskReads), 0);
cost_type legacyReadCost = getLegacyReadCost(subfile.queryAttributes(), &subfile);
cost_type prevReadCost = subFileProps.getPropInt64(getDFUQResultFieldName(DFUQRFreadCost), 0);
cost_type curReadCost = calcFileAccessCost(&subfile, 0, curProgress.numReads);
subFileProps.setPropInt64(getDFUQResultFieldName(DFUQRFnumDiskReads), prevNumReads + curProgress.numReads);
subFileProps.setPropInt64(getDFUQResultFieldName(DFUQRFreadCost), legacyReadCost + prevReadCost + curReadCost);
totalReadCost += curReadCost;
}
else
{
// not sure if src superfile can have whichInput==-1 (but if so, this is best effort to calc cost)
totalReadCost += calcFileAccessCost(distributedSource, 0, curProgress.numReads);
}
}
}
else
{
totalReadCost = calcFileAccessCost(distributedSource, 0, totalNumReads);
}
DistributedFilePropertyLock lock(distributedSource);
IPropertyTree &curProps = lock.queryAttributes();
stat_type prevNumReads = curProps.getPropInt64(getDFUQResultFieldName(DFUQRFnumDiskReads), 0);
cost_type legacyReadCost = getLegacyReadCost(curProps, distributedSource);
cost_type prevReadCost = curProps.getPropInt64(getDFUQResultFieldName(DFUQRFreadCost), 0);
curProps.setPropInt64(getDFUQResultFieldName(DFUQRFnumDiskReads), prevNumReads + totalNumReads);
curProps.setPropInt64(getDFUQResultFieldName(DFUQRFreadCost), legacyReadCost + prevReadCost + totalReadCost);
}
progressReport->setFileAccessCost(cost_type2money(totalReadCost+totalWriteCost));
if (error)
throw error.getClear();
}
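The updateTargetProperties() changes above attribute read and write cost to the individual subfiles when the source or target is a superfile, adding each partition's incremental cost to the subfile's stored totals, and fall back to a single whole-file calculation otherwise. A much-simplified sketch of that accumulation pattern; the cost function, property layout, and types below are assumptions for illustration, not the platform API:

#include <cstdint>
#include <cstdio>
#include <vector>

using cost_type = std::uint64_t;
using stat_type = std::uint64_t;

// Hypothetical stand-in for calcFileAccessCost(): prices disk operations; the real
// function derives the price from the storage plane/cluster the file lives on.
cost_type calcAccessCost(stat_type numWrites, stat_type numReads)
{
    return numWrites * 2 + numReads;
}

struct SubFileProps { stat_type numDiskWrites = 0; cost_type writeCost = 0; };
struct PartitionProgress { int whichOutput; stat_type numWrites; };

// For a superfile target, add each partition's cost to its subfile's stored totals
// (previous + current, as in the diff) and return the grand total; for a plain file,
// compute the whole-file cost in one call.
cost_type updateWriteCosts(std::vector<SubFileProps> *superFileSubs,
                           const std::vector<PartitionProgress> &progress,
                           stat_type totalNumWrites)
{
    if (!superFileSubs)
        return calcAccessCost(totalNumWrites, 0);

    cost_type totalWriteCost = 0;
    for (const auto &cur : progress)
    {
        cost_type curCost = calcAccessCost(cur.numWrites, 0);
        if (cur.whichOutput < 0)
        {
            // No specific output part: just add this partition's cost to the total.
            totalWriteCost += curCost;
            continue;
        }
        SubFileProps &sub = (*superFileSubs)[cur.whichOutput];
        sub.writeCost += curCost;
        sub.numDiskWrites += cur.numWrites;
        totalWriteCost += curCost;
    }
    return totalWriteCost;
}

int main()
{
    std::vector<SubFileProps> subs(2);
    std::vector<PartitionProgress> progress{{0, 100}, {1, 250}};
    cost_type total = updateWriteCosts(&subs, progress, 350);
    printf("total write cost %llu (sub0 %llu, sub1 %llu)\n",
           (unsigned long long)total,
           (unsigned long long)subs[0].writeCost,
           (unsigned long long)subs[1].writeCost);
    return 0;
}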
56 changes: 53 additions & 3 deletions docs/EN_US/ConfiguringHPCC/ConfiguringHPCC.xml
@@ -1155,8 +1155,10 @@ sudo -u hpcc cp /etc/HPCCSystems/source/NewEnvironment.xml /etc/HPCCSystems/envi
<entry>Directory</entry>

<entry>Description</entry>
</row></thead>
<tbody>
</row>
</thead>

<tbody>
<row>
<entry>
<emphasis role="bold">log</emphasis>
@@ -1449,6 +1451,30 @@ sudo -u hpcc cp /etc/HPCCSystems/source/NewEnvironment.xml /etc/HPCCSystems/envi
</para>
</sect3>

<sect3 id="AllowedPipePrograms_hThor">
<title>Allowed Pipe Programs</title>

<para>In version 9.2.0 and greater, commands used in a PIPE action
are restricted by default. However, for legacy reasons, the default
stock behavior is different in bare-metal and containerized
deployments. In both types of systems, if allowedPipePrograms is
unset, then all but "built-in" programs are restricted (The only
built-in program currently is 'roxiepipe').</para>

<para>In bare-metal, the default environment.xml includes a setting
value of "*" for <emphasis
role="bold">allowedPipePrograms</emphasis>. This means (by default)
any PIPE program can be used.</para>

<para>
<emphasis role="bold">In a secure system, this should be removed
or edited to prevent arbitrary programs, including system programs,
from being executed.</emphasis>
</para>

<para />
</sect3>

<sect3>
<title>EclAgentProcessNotes</title>

@@ -1702,7 +1728,7 @@ sudo -u hpcc cp /etc/HPCCSystems/source/NewEnvironment.xml /etc/HPCCSystems/envi
<para>If you update the platform, but are using a preexisting
configuration, you could encounter a situation where Feature level
access flags are not automatically created. Missing flags can deny
access to users trying to access features in the system. </para>
access to users trying to access features in the system.</para>

<orderedlist>
<listitem>
@@ -2653,6 +2679,30 @@ sudo -u hpcc cp /etc/HPCCSystems/source/NewEnvironment.xml /etc/HPCCSystems/envi
</sect4>
</sect3>

<sect3 id="AllowedPipeProgramsThor">
<title>Allowed Pipe Programs</title>

<para>In version 9.2.0 and greater, commands used in a PIPE action
are restricted by default. However, for legacy reasons, the default
stock behavior is different in bare-metal and containerized
deployments. In both types of systems, if allowedPipePrograms is
unset, then all but "built-in" programs are restricted (The only
built-in program currently is 'roxiepipe'). </para>

<para>In bare-metal, the default environment.xml includes a setting
value of "*" for <emphasis
role="bold">allowedPipePrograms</emphasis>. This means (by default)
any PIPE program can be used. </para>

<para>
<emphasis role="bold">In a secure system, this should be removed
or edited to prevent arbitrary programs, including system programs,
from being executed.</emphasis>
</para>

<para />
</sect3>

<xi:include href="ECLWatch/TheECLWatchMan.xml"
xpointer="xpointer(//*[@id='ECLWatchXREFMultiThor'])"
xmlns:xi="http://www.w3.org/2001/XInclude" />