Fixing broken links #218

Open · wants to merge 1 commit into base: main
2 changes: 1 addition & 1 deletion src/doc/reference/glossary/distributed_memory.md
@@ -3,7 +3,7 @@ title: Distributed memory
 tags: distributed memory
 ---
 Computer storage that is partitioned
-among several {% defn "processors" %}. A distributed-memory {% defn "multiprocessor" %} is a computer in
+among several {% defn "processor", "processors" %}. A distributed-memory {% defn "multiprocessor" %} is a computer in
 which processors must send messages
 to remote processors to access data in
 remote processor memory. Contrast with
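The fix pattern is the same throughout this PR: the `defn` shortcode apparently resolves its first argument as the glossary entry's key, so a one-argument call with an inflected word ("processors") points at a glossary page that does not exist. The two-argument form (assuming the first argument is the lookup key and the second is the display text, as inferred from these hunks) keeps the inflected word in the prose while linking the singular entry:

```liquid
{# Broken: looks up a glossary entry named "processors", which doesn't exist #}
{% defn "processors" %}

{# Fixed: links the "processor" entry, displays the text "processors" #}
{% defn "processor", "processors" %}
```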
2 changes: 1 addition & 1 deletion src/doc/reference/glossary/scale_up.md
@@ -3,5 +3,5 @@ title: Scale up
 tags: scale up
 ---
 The ability of a parallel application to run efficiently
-on a large number of {% defn "processors" %}.
+on a large number of {% defn "processor", "processors" %}.
 See also {% defn "linear speedup" %}.
3 changes: 1 addition & 2 deletions src/doc/reference/reducers.md
@@ -22,8 +22,7 @@ accumulator or appending an item to a list. As long as the operation
 is _associative_ (`A ⊕ (B ⊕ C) = (A ⊕ B) ⊕ C`) the final result will
 be correct.
 
-Formally, a reducer is a mathematical object called a _{% defn
-"monoid" %}_, meaning it has the following components:
+Formally, a reducer is a mathematical object called a _monoid_, meaning it has the following components:
 * a type (e.g `double`),
 * an _identity_ value (`0.0`), and
 * an associative binary operation (`+`).
15 changes: 7 additions & 8 deletions src/doc/tutorials/introduction-to-cilk-programming.md
@@ -28,10 +28,9 @@ error-prone than serial programming. {% defn "OpenCilk" %} aims to bridge this
 gap. OpenCilk supports the {% defn "Cilk" %} language extensions to C and C++,
 which make it easy to write parallel programs that are both correct and fast.
 
-OpenCilk is a {% defn "task-parallel-platforms-programming-and-algorithms",
-"task-parallel platform" %} that provides language abstractions for {% defn
-"shared-memory" %} parallel computations on {% defn "multicores", "multicore"
-%} systems. As a Cilk programmer, you are only responsible for expressing the
+OpenCilk is a {% defn "task-parallel" %} platform that provides language abstractions for
+{% defn "shared-memory" %} parallel computations on {% defn "multicore" %}
+systems. As a Cilk programmer, you are only responsible for expressing the
 {% defn "logical parallelism" %} in your application, that is, which tasks
 *may* run in parallel. (With Cilk, there are no tasks which *must* run in
 parallel.) The OpenCilk compiler produces optimized parallel code, and the
@@ -44,15 +43,15 @@ When using the OpenCilk platform, you write code in the Cilk language, which
 extends C and C++ with a just few keywords to support task-parallel
 programming. Specifically, Cilk supports {% defn "fork-join parallelism" %}, a
 simple and effective form of task parallelism. Cilk provides linguistic
-mechanisms for {% defn "spawning" %} and {% defn "parallel loops" %}.
+mechanisms for {% defn "spawn", "spawning" %} and {% defn "parallel loop", "parallel loops" %}.
 
 In this tutorial, we'll introduce spawning parallel tasks. Upcoming tutorials
 will also cover the following:
 
 - How to use parallel loops.
 - How to ensure your program is free of {% defn "determinacy-race", "race bugs"
   %} using the {% defn "Cilksan" %} tool.
-- How to determine the {% defn "scalability" %} of your program on multiple
+- How to determine the {% defn "scale down", "scalability" %} of your program on multiple
   processors using the {% defn "Cilkscale" %} tool.
 - How OpenCilk runs your program to achieve good performance.
 
@@ -63,10 +62,10 @@ and parallel regions. A serial region contains no parallelism and its tasks
 are executed in sequence, as usual. A parallel region has two distinguishing
 characteristics:
 
-1. Within the parallel region, functions may be {% defn "spawning", "spawned"
+1. Within the parallel region, functions may be {% defn "spawn", "spawned"
    %}, i.e., they may be run in parallel with the caller.
 2. At the end of the parallel region, all functions that were spawned within it
-   are {% defn "syncing", "synced" %}, i.e., they have finished executing.
+   are {% defn "sync", "synced" %}, i.e., they have finished executing.
 
 Functions within either a serial or parallel region may themselves contain
 serial and parallel regions, allowing for {% defn "nested parallelism" %}.
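The spawn/sync structure described in this tutorial's hunks can be sketched with the classic recursive Fibonacci example. This is a sketch, not part of the diff; it requires the OpenCilk compiler (e.g. `clang -fopencilk`), so it will not build with a stock toolchain:

```c
#include <cilk/cilk.h>
#include <stdint.h>

/* A parallel region: cilk_spawn lets fib(n-1) run in parallel with
 * the caller, and leaving the cilk_scope syncs every spawn made
 * inside it, so x and y are ready before the addition. */
int64_t fib(int n) {
    if (n < 2) return n;
    int64_t x, y;
    cilk_scope {
        x = cilk_spawn fib(n - 1);  /* *may* run in parallel */
        y = fib(n - 2);             /* runs in the caller meanwhile */
    }                               /* implicit sync of spawned calls */
    return x + y;
}
```

Note that `cilk_spawn` only expresses logical parallelism; the OpenCilk runtime decides whether the spawned call actually executes on another processor.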
2 changes: 1 addition & 1 deletion src/doc/users-guide/cilkscale.md
@@ -436,7 +436,7 @@ exhibits sufficient slackness for only 2–3 cores.
 
 An additional issue is that the memory bandwidth of the laptop that was used in
 these experiments becomes insufficient as more processing cores are used. This
-is often the case for computations with low {% defn "arithmetic intensity" %}
+is often the case for computations with low arithmetic intensity
 when the observed parallel speedup falls below the burdened-dag speedup bound.
 (Another possible cause for speedup below the burdened-dag bound is {% defn
 "contention" %} of parallel resources.) The memory bandwidth ceiling was