Remove GC in harness_begin. full_heap_system_gc defaults to true #1141

Open — wants to merge 6 commits into base: master
24 changes: 23 additions & 1 deletion docs/userguide/src/migration/prefix.md
@@ -32,6 +32,29 @@ Notes for the mmtk-core developers:

## 0.26.0

### Remove GC in `harness_begin`

```admonish tldr
`harness_begin` no longer triggers a GC before collecting statistics. The user application
is responsible for triggering a GC before calling `harness_begin`.
```

API changes:
* module `memory_manager`
  * `handle_user_collection_request`
    * It now takes an argument `exhaustive` to indicate whether the user-triggered GC
      should be exhaustive (full heap).
* type `Options`
  * The runtime option `full_heap_system_gc` is removed.

Not an API change, but worth noting:

* module `mmtk::memory_manager`
  * `harness_begin`
    * This function used to trigger a forced full-heap GC before it started collecting
      statistics. It no longer triggers a GC.
    * User applications and benchmarks are responsible for triggering a GC before calling `harness_begin`.

### Rename "edge" to "slot"

```admonish tldr
@@ -75,7 +98,6 @@ See also:
- PR: <https://github.com/mmtk/mmtk-core/pull/1134>
- Example: <https://github.com/mmtk/mmtk-openjdk/pull/274>


## 0.25.0

### `ObjectReference` is no longer nullable
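As a sketch of what the migration note above means for a binding or benchmark harness, the snippet below mocks the two entry points to show the new calling order. `MockMmtk`, its fields, and `start_benchmark` are invented for illustration; the real functions are `memory_manager::handle_user_collection_request(mmtk, tls, exhaustive)` and `memory_manager::harness_begin(mmtk, tls)` in mmtk-core.

```rust
// Standalone sketch with mock types, not the real mmtk crate.
#[derive(Default)]
struct MockMmtk {
    gcs_triggered: u32,
    inside_harness: bool,
}

impl MockMmtk {
    /// Stands in for `handle_user_collection_request`. The `exhaustive`
    /// argument replaces the removed `full_heap_system_gc` runtime option.
    fn handle_user_collection_request(&mut self, _exhaustive: bool) {
        self.gcs_triggered += 1;
    }

    /// Stands in for `harness_begin`: from 0.26.0 it only starts
    /// statistics collection and no longer triggers a GC itself.
    fn harness_begin(&mut self) {
        self.inside_harness = true;
    }
}

/// What a benchmark harness now does as its last step before the timed
/// section: clear residual garbage explicitly, then start recording stats.
fn start_benchmark(mmtk: &mut MockMmtk) {
    mmtk.handle_user_collection_request(/* exhaustive */ true);
    mmtk.harness_begin();
}
```

The point of the order is that any residual garbage from benchmark setup is collected before, not after, statistics recording begins.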
7 changes: 6 additions & 1 deletion src/global_state.rs
@@ -25,6 +25,8 @@ pub struct GlobalState {
pub(crate) emergency_collection: AtomicBool,
/// Is the current GC triggered by the user?
pub(crate) user_triggered_collection: AtomicBool,
/// Is the current user-triggered GC an exhaustive GC (e.g. full heap)?
pub(crate) user_triggered_exhaustive_gc: AtomicBool,
/// Is the current GC triggered internally by MMTK? This is unused for now. We may have internally triggered GC
/// for a concurrent plan.
pub(crate) internal_triggered_collection: AtomicBool,
@@ -123,7 +125,9 @@ impl GlobalState {
self.internal_triggered_collection
.store(false, Ordering::SeqCst);
self.user_triggered_collection
.store(false, Ordering::Relaxed);
.store(false, Ordering::SeqCst);
self.user_triggered_exhaustive_gc
.store(false, Ordering::SeqCst);
}

/// Are the stacks scanned?
@@ -204,6 +208,7 @@ impl Default for GlobalState {
stacks_prepared: AtomicBool::new(false),
emergency_collection: AtomicBool::new(false),
user_triggered_collection: AtomicBool::new(false),
user_triggered_exhaustive_gc: AtomicBool::new(false),
internal_triggered_collection: AtomicBool::new(false),
last_internal_triggered_collection: AtomicBool::new(false),
allocation_success: AtomicBool::new(false),
14 changes: 10 additions & 4 deletions src/memory_manager.rs
@@ -571,8 +571,13 @@ pub fn total_bytes<VM: VMBinding>(mmtk: &MMTK<VM>) -> usize {
/// Arguments:
/// * `mmtk`: A reference to an MMTk instance.
/// * `tls`: The thread that triggers this collection request.
pub fn handle_user_collection_request<VM: VMBinding>(mmtk: &MMTK<VM>, tls: VMMutatorThread) {
mmtk.handle_user_collection_request(tls, false, false);
/// * `exhaustive`: The GC should be exhaustive (e.g. full heap GC, compaction GCs, etc.)
pub fn handle_user_collection_request<VM: VMBinding>(
mmtk: &MMTK<VM>,
tls: VMMutatorThread,
exhaustive: bool,
) {
mmtk.handle_user_collection_request(tls, exhaustive);
}

/// Is the object alive?
@@ -710,8 +715,9 @@ pub fn add_phantom_candidate<VM: VMBinding>(mmtk: &MMTK<VM>, reff: ObjectReference
mmtk.reference_processors.add_phantom_candidate(reff);
}

/// Generic hook to allow benchmarks to be harnessed. We do a full heap
/// GC, and then start recording statistics for MMTk.
/// Generic hook to allow benchmarks to be harnessed. It is recommended that
/// the application/benchmark trigger a GC through [`crate::memory_manager::handle_user_collection_request`]
/// before calling `harness_begin`.
///
/// Arguments:
/// * `mmtk`: A reference to an MMTk instance.
31 changes: 10 additions & 21 deletions src/mmtk.rs
@@ -318,12 +318,10 @@ impl<VM: VMBinding> MMTK<VM> {
self.scheduler.respawn_gc_threads_after_forking(tls);
}

/// Generic hook to allow benchmarks to be harnessed. MMTk will trigger a GC
/// to clear any residual garbage and start collecting statistics for the benchmark.
/// Generic hook to allow benchmarks to be harnessed.
/// This is usually called by the benchmark harness as its last step before the actual benchmark.
pub fn harness_begin(&self, tls: VMMutatorThread) {
pub fn harness_begin(&self, _tls: VMMutatorThread) {
[Review comment — Collaborator]
Since we decided that this is going to be an API-breaking change, we can remove this unused `_tls` argument.
Suggested change:
-    pub fn harness_begin(&self, _tls: VMMutatorThread) {
+    pub fn harness_begin(&self) {
probe!(mmtk, harness_begin);
self.handle_user_collection_request(tls, true, true);
self.inside_harness.store(true, Ordering::SeqCst);
self.stats.start_all();
self.scheduler.enable_stat();
@@ -403,35 +401,26 @@
}

/// The application code has requested a collection. This is just a GC hint, and
/// we may ignore it.
/// we may ignore it (See `ignore_system_gc` and `full_heap_system_gc` in [`crate::util::options::Options`]).
[Review comment — Collaborator]
We removed `full_heap_system_gc`.
Suggested change:
-    /// we may ignore it (See `ignore_system_gc` and `full_heap_system_gc` in [`crate::util::options::Options`]).
+    /// we may ignore it (See `ignore_system_gc` in [`crate::util::options::Options`]).
///
/// # Arguments
/// * `tls`: The mutator thread that requests the GC
/// * `force`: The request cannot be ignored (except for NoGC)
/// * `exhaustive`: The requested GC should be exhaustive. This is also a hint.
pub fn handle_user_collection_request(
&self,
tls: VMMutatorThread,
force: bool,
exhaustive: bool,
) {
/// * `exhaustive`: The GC should be exhaustive (e.g. full heap GC, compaction GCs, etc.)
pub fn handle_user_collection_request(&self, tls: VMMutatorThread, exhaustive: bool) {
use crate::vm::Collection;
if !self.get_plan().constraints().collects_garbage {
warn!("User attempted a collection request, but the plan can not do GC. The request is ignored.");
return;
}

if force || !*self.options.ignore_system_gc && VM::VMCollection::is_collection_enabled() {
if !*self.options.ignore_system_gc && VM::VMCollection::is_collection_enabled() {
info!("User triggering collection");
if exhaustive {
if let Some(gen) = self.get_plan().generational() {
gen.force_full_heap_collection();
}
}

self.state
.user_triggered_collection
.store(true, Ordering::Relaxed);
.store(true, Ordering::SeqCst);
self.state
.user_triggered_exhaustive_gc
.store(exhaustive, Ordering::SeqCst);
self.gc_requester.request();
VM::VMCollection::block_for_gc(tls);
}
7 changes: 6 additions & 1 deletion src/plan/generational/global.rs
@@ -152,7 +152,12 @@ impl<VM: VMBinding> CommonGenPlan<VM> {
.global_state
.user_triggered_collection
.load(Ordering::SeqCst)
&& *self.common.base.options.full_heap_system_gc
&& self
.common
.base
.global_state
.user_triggered_exhaustive_gc
.load(Ordering::SeqCst)
{
trace!("full heap: user triggered");
// User triggered collection, and we force full heap for user triggered collection
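The predicate in the hunk above can be condensed into a standalone sketch. The struct below mocks the two `GlobalState` flags (field names mirror mmtk-core, but this is an illustration, not the real crate) to show how the per-request `exhaustive` flag now drives the full-heap decision that the removed `full_heap_system_gc` option used to control:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Mock of the two GlobalState flags involved in this PR.
#[derive(Default)]
struct GlobalState {
    user_triggered_collection: AtomicBool,
    user_triggered_exhaustive_gc: AtomicBool,
}

impl GlobalState {
    /// Mirrors `handle_user_collection_request`: record that a user GC was
    /// requested, and whether it was requested as exhaustive.
    fn request_user_gc(&self, exhaustive: bool) {
        self.user_triggered_collection.store(true, Ordering::SeqCst);
        self.user_triggered_exhaustive_gc
            .store(exhaustive, Ordering::SeqCst);
    }

    /// Mirrors the generational (and sticky-immix) check above: force a
    /// full-heap GC only when the user GC was requested as exhaustive.
    fn force_full_heap(&self) -> bool {
        self.user_triggered_collection.load(Ordering::SeqCst)
            && self.user_triggered_exhaustive_gc.load(Ordering::SeqCst)
    }

    /// Mirrors the reset at the end of a GC: both flags clear together.
    fn reset(&self) {
        self.user_triggered_collection.store(false, Ordering::SeqCst);
        self.user_triggered_exhaustive_gc.store(false, Ordering::SeqCst);
    }
}
```

Keeping the exhaustive bit per request (rather than as a global option) is what lets one process mix exhaustive and non-exhaustive user GCs.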
5 changes: 4 additions & 1 deletion src/plan/immix/global.rs
@@ -180,7 +180,10 @@ impl<VM: VMBinding> Immix<VM> {
.cur_collection_attempts
.load(Ordering::SeqCst),
plan.base().global_state.is_user_triggered_collection(),
*plan.base().options.full_heap_system_gc,
plan.base()
.global_state
.user_triggered_exhaustive_gc
.load(Ordering::SeqCst),
);

if in_defrag {
8 changes: 7 additions & 1 deletion src/plan/sticky/immix/global.rs
@@ -354,7 +354,13 @@ impl<VM: VMBinding> StickyImmix<VM> {
.global_state
.user_triggered_collection
.load(Ordering::SeqCst)
&& *self.immix.common.base.options.full_heap_system_gc
&& self
.immix
.common
.base
.global_state
.user_triggered_exhaustive_gc
.load(Ordering::SeqCst)
{
// User triggered collection, and we force full heap for user triggered collection
true
2 changes: 0 additions & 2 deletions src/util/options.rs
@@ -806,8 +806,6 @@ options! {
/// to 10% of the heap size while using the default value for max nursery.
nursery: NurserySize [env_var: true, command_line: true] [|v: &NurserySize| v.validate()]
= NurserySize::ProportionalBounded { min: DEFAULT_PROPORTIONAL_MIN_NURSERY, max: DEFAULT_PROPORTIONAL_MAX_NURSERY },
/// Should a major GC be performed when a system GC is required?
full_heap_system_gc: bool [env_var: true, command_line: true] [always_valid] = false,
/// Should finalization be disabled?
no_finalizer: bool [env_var: true, command_line: true] [always_valid] = false,
/// Should reference type processing be disabled?
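Since the option declaration above is removed, deployments that set it through the environment need to switch to the API argument. A sketch, assuming mmtk-core's usual `MMTK_`-prefixed uppercase naming for option environment variables (`./bench` is a placeholder binary name):

```console
# Before this change: request full-heap behavior for user GCs globally.
MMTK_FULL_HEAP_SYSTEM_GC=true ./bench

# After: the option no longer exists. The binding passes the hint per
# request instead:
#   memory_manager::handle_user_collection_request(mmtk, tls, /* exhaustive */ true);
./bench
```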