WIP adapt DW PR#2568 to use accumulo PR#4898 #2582

Draft
wants to merge 12 commits into base: integration

Conversation

keith-turner (Collaborator) commented Sep 30, 2024

These draft changes build on #2568 with the following differences.

  • Compute bulkv2 load plans using new unreleased APIs in accumulo PR
    4898 (see the usage sketch after this list).
  • The table splits are loaded at the beginning of writing to rfiles
    instead of at the end. Not sure about the overall implications of
    this change on memory use in reducers. The load plan could be
    computed after the rfile is closed using a new API in 4898 if
    deferring the loading of tablet splits is desired.
  • Switches to using accumulo public APIs for writing rfiles instead of
    internal accumulo methods. Well, public once they are actually
    released.
  • The algorithm to compute the load plan does less work per key/value.
    It should be roughly constant time vs log(N).
  • Adds a simple SortedList class (a minimal sketch appears below). The
    reason it was added is that this code does binary searches on lists,
    but it was not certain those lists were actually sorted. An unsorted
    list would not cause exceptions in binary search, but it could lead to
    incorrect load plans and lost data. The new SortedList class ensures
    lists are sorted and allows that assurance to travel around in the
    code. Maybe this change should be its own PR.
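Roughly, the intended usage looks like the sketch below. The names tied to accumulo PR 4898 (withSplitResolver, LoadPlan.SplitResolver.from, getLoadPlan) reflect the unreleased API as I understand it and could change before release, so treat this as an illustration rather than the actual DataWave change.

```java
import java.util.Map;
import java.util.SortedSet;

import org.apache.accumulo.core.client.rfile.RFile;
import org.apache.accumulo.core.client.rfile.RFileWriter;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.LoadPlan;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;

public class BulkV2LoadPlanSketch {

    /**
     * Writes already-sorted key/values to an rfile and returns the load plan the
     * writer tracked as data was appended. The split resolver is supplied before
     * writing starts, matching the "splits loaded at the beginning" approach above.
     */
    public static LoadPlan writeWithLoadPlan(FileSystem fs, String file, SortedSet<Text> tableSplits,
                    Iterable<Map.Entry<Key,Value>> sortedData) throws Exception {

        // Unreleased 4898 API (assumed shape): resolves a row to its tablet using
        // the table splits fetched up front.
        LoadPlan.SplitResolver resolver = LoadPlan.SplitResolver.from(tableSplits);

        RFileWriter writer = RFile.newWriter().to(file).withFileSystem(fs)
                        .withSplitResolver(resolver).build();
        try {
            for (Map.Entry<Key,Value> e : sortedData) {
                writer.append(e.getKey(), e.getValue());
            }
        } finally {
            writer.close();
        }

        // Unreleased 4898 API (assumed shape): the writer already knows which tablets
        // the data maps to, so no per-key binary search over the splits is needed here.
        return writer.getLoadPlan(new Path(file).getName());
    }
}
```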

These changes are not yet tested.
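For reference, a minimal sketch of the SortedList idea is below. Names and details are illustrative, not the exact implementation in this PR: the point is that the input is copied and sorted once, so any code handed a SortedList can binary search it safely.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

/**
 * Illustrative sketch only: wraps a sorted, read-only copy of a list so the
 * "this list is sorted" assurance travels with the object.
 */
public class SortedList<T> {

    private final List<T> list;
    private final Comparator<? super T> comparator;

    private SortedList(List<T> list, Comparator<? super T> comparator) {
        this.list = list;
        this.comparator = comparator;
    }

    /** Copies and sorts the input, so the sorted invariant cannot be broken later. */
    public static <T> SortedList<T> fromUnsorted(List<T> input, Comparator<? super T> comparator) {
        List<T> copy = new ArrayList<>(input);
        copy.sort(comparator);
        return new SortedList<>(Collections.unmodifiableList(copy), comparator);
    }

    /** Safe because the backing list is sorted by construction. */
    public int binarySearch(T key) {
        return Collections.binarySearch(list, key, comparator);
    }

    /** Read-only view of the sorted contents. */
    public List<T> get() {
        return list;
    }
}
```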

keith-ratcliffe and others added 4 commits September 20, 2024 15:32
@@ -9,7 +9,7 @@ DW_DATAWAVE_INGEST_HOME="${DW_CLOUD_HOME}/${DW_DATAWAVE_INGEST_SYMLINK}"
# ingest reducers. Set to 1 for standalone instance, but typically set to the first prime number that is less than the
# number of available Accumulo tablet servers...

-DW_DATAWAVE_INGEST_NUM_SHARDS=${DW_DATAWAVE_INGEST_NUM_SHARDS:-1}
+DW_DATAWAVE_INGEST_NUM_SHARDS=${DW_DATAWAVE_INGEST_NUM_SHARDS:-10}
This seems to go against the comment

@@ -72,7 +72,7 @@ DW_DATAWAVE_INGEST_LIVE_DATA_TYPES=${DW_DATAWAVE_INGEST_LIVE_DATA_TYPES:-"wikipe

# Comma-delimited data type identifiers to be ingested via "bulk" ingest, ie via bulk import of RFiles into Accumulo tables

-DW_DATAWAVE_INGEST_BULK_DATA_TYPES=${DW_DATAWAVE_INGEST_BULK_DATA_TYPES:-"shardStats"}
+DW_DATAWAVE_INGEST_BULK_DATA_TYPES=${DW_DATAWAVE_INGEST_BULK_DATA_TYPES:-"shardStats,wikipedia,mycsv,myjson"}

We shouldn't have the same datatypes in both bulk and live.

keith-ratcliffe (Collaborator) commented:

@ddanielr, @keith-turner, it seems like the BulkV2-accumulo-4898 branch on this repo might be the most recent source for this effort, but I'm not sure.

If so, should we close this PR and open a new one on that branch instead? Or, alternatively, force push it back to the keith-turner fork, leaving this PR intact?
