add bulk_init_chunk_size in torchrec #2638

Closed · wants to merge 1 commit
3 changes: 3 additions & 0 deletions torchrec/distributed/types.py
@@ -633,6 +633,7 @@ class KeyValueParams:
gather_ssd_cache_stats: bool: whether to enable ssd stats collection, std reporter and ods reporter
report_interval: int: report interval in train iteration if gather_ssd_cache_stats is enabled
ods_prefix: str: ods prefix for ods reporting
bulk_init_chunk_size: int: number of rows to insert into rocksdb in each chunk

# Parameter Server (PS) Attributes
ps_hosts (Optional[Tuple[Tuple[str, int]]]): List of PS host ip addresses
@@ -652,6 +653,7 @@ class KeyValueParams:
l2_cache_size: Optional[int] = None # size in GB
max_l1_cache_size: Optional[int] = None # size in MB
enable_async_update: Optional[bool] = None
bulk_init_chunk_size: Optional[int] = None # number of rows

# Parameter Server (PS) Attributes
ps_hosts: Optional[Tuple[Tuple[str, int], ...]] = None
@@ -676,6 +678,7 @@ def __hash__(self) -> int:
self.l2_cache_size,
self.max_l1_cache_size,
self.enable_async_update,
self.bulk_init_chunk_size,
)
)

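For context, below is a minimal usage sketch (not part of this PR's diff): it constructs KeyValueParams with the new bulk_init_chunk_size field. The field names and unit comments come from the diff above; the placeholder values and the downstream use of the object are illustrative assumptions, not prescribed by the PR.

# Sketch only: field names taken from torchrec/distributed/types.py as shown
# in the diff above; values are placeholders.
from torchrec.distributed.types import KeyValueParams

kv_params = KeyValueParams(
    l2_cache_size=16,               # size in GB, per the inline comment
    enable_async_update=True,
    bulk_init_chunk_size=100_000,   # rows inserted into rocksdb per chunk
)

# KeyValueParams defines __hash__; this PR adds bulk_init_chunk_size to the
# hashed tuple, so two configs differing only in chunk size hash differently.
print(hash(kv_params))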