WIP adapt DW PR#2568 to use accumulo PR#4898 #2582
base: integration
Conversation
These draft changes build on NationalSecurityAgency#2568 with the following differences.

* Computes bulk v2 load plans using new, unreleased APIs in Accumulo PR 4898.
* The table splits are loaded at the beginning of writing to rfiles instead of at the end. Not sure about the overall implications of this change on memory use in reducers. The load plan could be computed after the rfile is closed, using a new API in 4898, if deferring the loading of tablet splits is desired.
* Switches to using Accumulo public APIs for writing rfiles instead of internal Accumulo methods (well, public once they are actually released).
* The algorithm to compute the load plan does less work per key/value: roughly constant time instead of log(N).
* Adds a simple SortedList class. This was added because this code does binary searches on lists, but it was not certain those lists were actually sorted. An unsorted list would not cause exceptions in a binary search, but could lead to incorrect load plans and lost data. The new SortedList class ensures lists are sorted and allows that assurance to travel around in the code. Maybe this change should be its own PR.

These changes are not yet tested.
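To illustrate the SortedList idea described above, here is a minimal, hypothetical sketch (the class name matches the PR description, but the shape of the real class may differ): the wrapper sorts a defensive copy once at construction, so the type itself carries the guarantee that later binary searches run against ordered data.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/**
 * Hypothetical sketch of a SortedList: wraps a list, sorts a defensive copy
 * once, and lets the type guarantee that binary searches are performed on
 * data that is actually ordered.
 */
public final class SortedList<T extends Comparable<T>> {
    private final List<T> list;

    private SortedList(List<T> sorted) {
        this.list = sorted;
    }

    /** Copies and sorts the input, so callers cannot hand over unsorted data. */
    public static <T extends Comparable<T>> SortedList<T> fromUnsorted(List<T> input) {
        List<T> copy = new ArrayList<>(input);
        Collections.sort(copy);
        return new SortedList<>(Collections.unmodifiableList(copy));
    }

    /** Safe to call because the factory established the ordering invariant. */
    public int search(T key) {
        return Collections.binarySearch(list, key);
    }

    public List<T> get() {
        return list;
    }
}
```

The point of routing all construction through `fromUnsorted` is exactly the failure mode the description warns about: `Collections.binarySearch` on an unsorted list does not throw, it just returns wrong answers.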
...house/ingest-core/src/main/java/datawave/ingest/mapreduce/job/MultiRFileOutputFormatter.java
…job/MultiRFileOutputFormatter.java Co-authored-by: Daniel Roberts <[email protected]>
```diff
@@ -9,7 +9,7 @@ DW_DATAWAVE_INGEST_HOME="${DW_CLOUD_HOME}/${DW_DATAWAVE_INGEST_SYMLINK}"
 # ingest reducers. Set to 1 for standalone instance, but typically set to the first prime number that is less than the
 # number of available Accumulo tablet servers...
-DW_DATAWAVE_INGEST_NUM_SHARDS=${DW_DATAWAVE_INGEST_NUM_SHARDS:-1}
+DW_DATAWAVE_INGEST_NUM_SHARDS=${DW_DATAWAVE_INGEST_NUM_SHARDS:-10}
```
This seems to go against the comment
```diff
@@ -72,7 +72,7 @@ DW_DATAWAVE_INGEST_LIVE_DATA_TYPES=${DW_DATAWAVE_INGEST_LIVE_DATA_TYPES:-"wikipe
 
 # Comma-delimited data type identifiers to be ingested via "bulk" ingest, ie via bulk import of RFiles into Accumulo tables
 
-DW_DATAWAVE_INGEST_BULK_DATA_TYPES=${DW_DATAWAVE_INGEST_BULK_DATA_TYPES:-"shardStats"}
+DW_DATAWAVE_INGEST_BULK_DATA_TYPES=${DW_DATAWAVE_INGEST_BULK_DATA_TYPES:-"shardStats,wikipedia,mycsv,myjson"}
```
We shouldn't have the same datatypes in both bulk and live.
@ddanielr, @keith-turner, it seems like the BulkV2-accumulo-4898 branch on this repo might be the most recent source for this effort, but I'm not sure. If so, should we close this PR and open a new one on that branch instead? Or, alternatively, force push it back to the