
Synchronization Algebra #59

Open
eupp opened this issue Jan 22, 2021 · 4 comments

@eupp (Collaborator)

eupp commented Jan 22, 2021

A synchronization algebra is an algebraic structure defined on a set of labels.
It determines which pairs of events from parallel processes should synchronize under parallel composition.

In [2] the S.A. is defined as a tuple <L, \star, \bot, *>, where:

  • L is a set of labels;
  • \star is a "fictional" asynchronous label (more about its purpose below);
  • \bot is the undefined label; it is used to indicate that two labels cannot synchronize;
  • * is a binary synchronization operation.

The operation satisfies the following axioms:

  • * is associative
  • * is commutative
  • \star * \star = \star
  • l * l' = \star ==> l = l' = \star for all l, l'
  • l * \bot = \bot for all l

The paper [2] gives the following example of an S.A.:

  *   | \star  a     !a    t     \bot
------+------------------------------
\star | \star  a     !a    t     \bot
a     | a      \bot  t     \bot  \bot
!a    | !a     t     \bot  \bot  \bot
t     | t      \bot  \bot  \bot  \bot
\bot  | \bot   \bot  \bot  \bot  \bot

That is, the algebra consists of two complementary labels a and !a, which can synchronize into label t.
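This table can be encoded directly. Below is a minimal Python sketch (the representation and names are my own, not from [1,2]) that encodes the operation and checks the axioms over the finite carrier:

```python
# Sketch of the example synchronization algebra from [2]:
# labels {star, a, !a, t} plus bot encoding failed synchronization.
STAR, A, NOT_A, T, BOT = "star", "a", "!a", "t", "bot"
LABELS = [STAR, A, NOT_A, T, BOT]

def sync(l1, l2):
    """The binary synchronization operation given by the table above."""
    if BOT in (l1, l2):
        return BOT                      # l * bot = bot
    if l1 == STAR:
        return l2                       # star acts as an idle unit
    if l2 == STAR:
        return l1
    if {l1, l2} == {A, NOT_A}:
        return T                        # complementary labels synchronize
    return BOT                          # everything else fails

# Exhaustively check the axioms on this finite carrier.
for x in LABELS:
    assert sync(x, BOT) == BOT                          # l * bot = bot
    for y in LABELS:
        assert sync(x, y) == sync(y, x)                 # commutativity
        if sync(x, y) == STAR:
            assert x == STAR and y == STAR              # star-reflection
        for z in LABELS:
            assert sync(sync(x, y), z) == sync(x, sync(y, z))  # associativity
```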

Consider two singleton event structures: E1 ::= { e1:a } and E2 ::= { e2:!a }
(with no conflicts and trivial causality, i.e. \emptyset |- e1 and \emptyset |- e2).
Then their parallel composition (according to [1]) will be
E1 || E2 ::= { (e1, \star):a, (\star, e2):!a, (e1, e2):t }
with trivial causality, a conflict between the events (e1, \star) and (e1, e2), and a conflict between the events (\star, e2) and (e1, e2).
In other words, the event (e1, e2) depends neither on (e1, \star) nor on (\star, e2), and it is in conflict (cannot occur together) with both of them.
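The product construction above can be sketched as follows; this is an illustrative reconstruction, not the formal definition from [1] (in particular it tracks only labels and the projection conflicts):

```python
# Sketch of the parallel product of two labeled event sets under a
# synchronization algebra. Names here are illustrative.
def sync(l1, l2, star="star", bot="bot"):
    """Toy operation: a and !a synchronize into t; star is idle."""
    if bot in (l1, l2):
        return bot
    if l1 == star:
        return l2
    if l2 == star:
        return l1
    return "t" if {l1, l2} == {"a", "!a"} else bot

def parallel(events1, events2, sync, star="star", bot="bot"):
    """events*: dict event-name -> label. Returns (events, conflicts)."""
    prod = {}
    for e1, l1 in events1.items():
        prod[(e1, star)] = l1                   # e1 proceeds alone
    for e2, l2 in events2.items():
        prod[(star, e2)] = l2                   # e2 proceeds alone
    for e1, l1 in events1.items():
        for e2, l2 in events2.items():
            if sync(l1, l2) != bot:
                prod[(e1, e2)] = sync(l1, l2)   # synchronized event
    # A synchronized pair conflicts with each of its projections.
    conflicts = {frozenset([(e1, e2), p])
                 for (e1, e2) in prod if star not in (e1, e2)
                 for p in [(e1, star), (star, e2)]}
    return prod, conflicts

events, conflicts = parallel({"e1": "a"}, {"e2": "!a"}, sync)
assert events == {("e1", "star"): "a",
                  ("star", "e2"): "!a",
                  ("e1", "e2"): "t"}
assert frozenset([("e1", "e2"), ("e1", "star")]) in conflicts
```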

While the label \bot is used to indicate that two labels cannot synchronize (i.e. it encodes the partiality of the synchronization operation), the label \star serves as a dummy "idle" synchronization.
That is, since we have e1:a * e2:!a = (e1,e2):t, it is convenient to assume that e1:a * \star = (e1, \star):a, i.e. e1 "synchronizes" with the distinguished fictional event \star.

Next, the paper [2] defines the "divides" relation (let's denote it as >>).

l1 >> l2 ::= \exists l3. l1 * l3 = l2.

This relation is proven to be transitive.
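Transitivity can also be checked exhaustively on the finite example algebra; a small illustrative sketch (names are mine, the operation is the one from the table above):

```python
# Check that "divides" (>>) is transitive on the finite example algebra.
LABELS = ["star", "a", "!a", "t", "bot"]

def sync(l1, l2):
    """The table's operation: a and !a synchronize into t; star is idle."""
    if "bot" in (l1, l2):
        return "bot"
    if l1 == "star":
        return l2
    if l2 == "star":
        return l1
    return "t" if {l1, l2} == {"a", "!a"} else "bot"

def divides(l1, l2):
    """l1 >> l2 iff there exists l3 with l1 * l3 = l2."""
    return any(sync(l1, l3) == l2 for l3 in LABELS)

# Exhaustive transitivity check (follows from associativity).
for x in LABELS:
    for y in LABELS:
        for z in LABELS:
            if divides(x, y) and divides(y, z):
                assert divides(x, z)
```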

Now, given this classical construction [1,2], we will have to adjust it to our needs.

The most significant change that I want to propose is to drop commutativity, i.e. to make the synchronization operation asymmetric. This will let us capture the asymmetry between writes and reads to shared memory or, more generally, between producers and consumers.

Let's consider the labels that we have in the relaxed shared memory concurrency semantics.

Label ::= W(x,v) | R(x,v)

That is, we have writes and reads, annotated by the memory location and value written/read.

It would be convenient to assume that there is a distinguished "undefined" value \bot.
We will use it to give the denotations to individual read instructions.

For example, consider two programs P1 ::= write(x, 1) and P2 ::= read(x).
Their denotations would be equal to the following event structures:
E1 ::= { e1:W(x,1) } and E2 ::= { e2:R(x,\bot) }.
Next, we want the denotation of their parallel composition P1 || P2 to be
E1 || E2 ::= { (e1,\star):W(x,1), (\star,e2):R(x,\bot), (e1,e2):R(x,1) }.
In order to achieve that, we need to define the synchronization algebra as follows:

  1. W(x,a) * R(x,\bot) = R(x,a);
  2. l1 * l2 = \bot otherwise.

Then we'll have that W(x,a) >> R(x,a) as expected.
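A minimal sketch of this algebra (my own encoding: labels as tuples, with Python's None standing in for both the undefined value and the \bot result):

```python
# Sketch of the proposed non-commutative memory synchronization algebra.
def sync(l1, l2):
    """W(x,a) * R(x,bot) = R(x,a); everything else is undefined (bot)."""
    k1, x1, v1 = l1
    k2, x2, v2 = l2
    if k1 == "W" and k2 == "R" and x1 == x2 and v2 is None:
        return ("R", x1, v1)        # the write feeds its value to the read
    return None                     # bot: no synchronization

def divides(l1, l2, labels):
    """l1 >> l2 iff some l3 in labels gives l1 * l3 = l2."""
    return any(sync(l1, l3) == l2 for l3 in labels)

w, r = ("W", "x", 1), ("R", "x", None)
assert sync(w, r) == ("R", "x", 1)          # W(x,1) * R(x,bot) = R(x,1)
assert sync(r, w) is None                   # reversed order fails
assert divides(w, ("R", "x", 1), [w, r])    # W(x,1) >> R(x,1), as expected
```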

However, an important departure from the classical theory is that in E1 || E2 we also want to add a causality dependency between the write and the synchronized read.
That is, we want (e1,e2):R(x,1) to causally depend on (e1,\star):W(x,1),
and at the same time the event (e1,e2):R(x,1) should be in conflict with (\star,e2):R(x,\bot).

That is why I propose to get rid of commutativity in synchronization algebra.
In other words, W(x,a) * R(x,\bot) = R(x,a), but R(x,\bot) * W(x,a) = \bot.
My assumption is that this asymmetry will help us to capture the causality
between the write event (producer) and the read event (consumer).

Unresolved Questions

  1. How to better encode the partial synchronization operation?
  • Assume that the type L of labels has a distinguished \bot inhabitant.
    Then we can encode * as a total operation.
    The drawback of this approach is that the \bot label cannot be used
    as the "actual" label of any event in the event structure.
    That is, we will have to encode this as an additional axiom of the event structure,
    i.e. something like forall e, lab e != \bot.
    It might be quite inconvenient to work under this constraint,
    as we will have to do a redundant case analysis every time we take the label of an event.

  • Use the option type, i.e. * : L -> L -> option L.
    All the usual drawbacks of this approach follow immediately.

  • Use an encoding similar to the one from the FCSL-PCM project: https://github.com/imdea-software/fcsl-pcm.
    Here we would need @anton-trunov's help to explain how it works
    and what the pros and cons of this approach are.
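For illustration, the first two encodings from the list above can be contrasted in a small sketch (hypothetical names; a toy operation instead of the full algebra):

```python
# Contrast: total operation with an explicit Bot inhabitant vs. a
# partial operation returning Optional. Names are illustrative.
from typing import Optional

BOT = "bot"   # encoding 1: a distinguished inhabitant of L

def sync_total(l1: str, l2: str) -> str:
    """Total: failure is the special label BOT, mixed into L itself."""
    return "t" if {l1, l2} == {"a", "!a"} else BOT

def sync_partial(l1: str, l2: str) -> Optional[str]:
    """Partial: failure is None, which can never label an event."""
    return "t" if {l1, l2} == {"a", "!a"} else None

# With encoding 1, every consumer must remember that BOT is not a real
# label (hence the extra axiom "lab e != bot"); with encoding 2, the
# type forces the case analysis at each call site instead.
assert sync_total("a", "b") == BOT
assert sync_partial("a", "!a") == "t"
assert sync_partial("a", "b") is None
```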

[1] Winskel G. Event structures. In: Advanced Course on Petri Nets. Springer, Berlin, Heidelberg, 1986, pp. 325-392.

[2] Winskel G. Event structure semantics for CCS and related languages. In: International Colloquium on Automata, Languages, and Programming. Springer, Berlin, Heidelberg, 1982, pp. 561-576.

@eupp eupp added enhancement New feature or request RFC Request For Comment theory labels Jan 22, 2021
@clayrat (Contributor)

clayrat commented Feb 26, 2021

It seems to me SAs are pretty much conical PCMs, e.g. see the presentation in https://core.ac.uk/download/pdf/82434444.pdf

@eupp (Collaborator, Author)

eupp commented Feb 26, 2021

> It seems to me SAs are pretty much conical PCMs, e.g. see the presentation in https://core.ac.uk/download/pdf/82434444.pdf

Well, except the commutativity part : )
I've given some intuition on why we try to get rid of commutativity in the original post.
I can try to give further explanations here after the weekend, or we can discuss it in person.

@clayrat (Contributor)

clayrat commented Feb 28, 2021

It sounds like you might want to keep some commutativity, arriving at some sort of a hybrid between PCM and a trace monoid (partial partially commutative monoid?)

@eupp (Collaborator, Author)

eupp commented Mar 2, 2021

Currently, I'm not sure that commutativity applies to our setting. So let me try to explain why : )

In relaxed memory theory, a common approach is to define the semantics as a set of so-called execution graphs.
These generalize traces: a trace can be viewed as a total order, while an execution graph is a partial order. Related concepts/keywords: pomsets, Mazurkiewicz traces. I believe these three (execution graphs, pomsets, Mazurkiewicz traces) are basically the same thing, with minor differences.

Each execution graph consists of a set of events E, a labeling function E -> L, and two binary relations: program order po and reads-from rf. Labels are usually either reads or writes, i.e. R(x,v) and W(x,v), where x is a memory location and v is the value read/written. The program order determines the intra-thread ordering of events, while reads-from captures the "dataflow" from write events to read events.
More details on this can be found, e.g., here, or in basically any paper from Viktor Vafeiadis's group, e.g. RC11 :)

The transitive closure of po and rf can be seen as a "causality" relation ca := (po + rf)^+. There are a couple of subtle points here. First, this causality order is different from the stronger "happens-before" order. Second, this definition does not work for some weaker models (e.g. ARMv8, Promising, etc.). But for a wide class of models (e.g. SC, TSO, RA, RC11), it matches our intuition (to some extent).
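As a small illustration of ca, the closure can be computed naively over explicit pairs (the events and edges below are made up for the example):

```python
# Sketch: ca := (po + rf)^+ as a naive transitive closure over pairs.
def transitive_closure(pairs):
    """Smallest transitive relation containing the given set of pairs."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))   # chain a -> b and b -> d
                    changed = True
    return closure

po = {("e1", "e2")}            # hypothetical intra-thread order
rf = {("w", "e1")}             # hypothetical write feeding e1
ca = transitive_closure(po | rf)
assert ("w", "e2") in ca       # causality flows through rf, then po
```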

Now to the business!
The problem is that the reads-from relation is inherently asymmetric. It captures the flow of data from write events (producers) to read events (consumers). Our idea is to derive an "axiomatization" of the reads-from relation from the synchronization algebra defined on labels. This is why we do not want a commutative monoid.

Back to the example in the beginning.
Suppose we have two processes P1 ::= write(x, 1) and P2 ::= read(x).
An axiomatic semantics (e.g. RC11) will assign two execution graphs to the program P1 || P2:
< { E:L := e1:W(x,1), e2:R(x,\bot) } , po := \empty, rf := \empty >
< { E:L := e1:W(x,1), e2:R(x,1) } , po := \empty, rf := { (e1, e2) } >

How can we reconstruct this following the compositional semantics?
We start with two denotations.
E1 := < { E:L := e1:W(x,1) } , po := \empty, rf := \empty >
E2 := < { E:L := e2:R(x,\bot) } , po := \empty, rf := \empty >.

Next, we take their composition E1 || E2. It is defined w.r.t. the synchronization algebra.
In our S.A. we have W(x,1) * R(x,\bot) = R(x,1).
Now, it is important that these two do not commute, i.e. R(x,\bot) * W(x,1) = \bot != R(x,1).
Essentially, this non-commutativity defines the "direction" of the reads-from relation in the parallel product.
As a result, we will obtain

E := < { E:L := (e1,\star):W(x,1), (\star,e2):R(x,\bot), (e1, e2):R(x,1) }  , 
          po := \empty, 
          rf := { ((e1, \star), (e1, e2)) } >
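This product can be sketched as follows (my own reconstruction of the informal description, not a formal definition; po handling is omitted since it is empty here):

```python
# Sketch: parallel product of execution graphs, with rf edges pointing
# from the idle write projection to the synchronized read, in the
# direction fixed by the non-commutative sync.
STAR, BOT = "star", None

def sync(l1, l2):
    """W(x,a) * R(x,bot) = R(x,a); otherwise bot (encoded as None)."""
    if l1[0] == "W" and l2[0] == "R" and l1[1] == l2[1] and l2[2] is None:
        return ("R", l1[1], l1[2])
    return BOT

def parallel(g1, g2):
    """g1, g2: dict event-name -> label. Returns (events, rf)."""
    ev, rf = {}, set()
    for e1, l1 in g1.items():
        ev[(e1, STAR)] = l1                      # left event idles
    for e2, l2 in g2.items():
        ev[(STAR, e2)] = l2                      # right event idles
    for e1, l1 in g1.items():
        for e2, l2 in g2.items():
            l = sync(l1, l2)
            if l is not BOT:
                ev[(e1, e2)] = l
                rf.add(((e1, STAR), (e1, e2)))   # producer -> consumer
    return ev, rf

E1 = {"e1": ("W", "x", 1)}
E2 = {"e2": ("R", "x", None)}
ev, rf = parallel(E1, E2)
assert ev[("e1", "e2")] == ("R", "x", 1)
assert (("e1", "star"), ("e1", "e2")) in rf
```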
