Lockfree Ripser #31

Open · wants to merge 42 commits into base: master
Changes from all commits (42 commits)
eaf5900
Switch compressed_sparse_matrix to vector<vector<ValueType>>
mrzv Feb 26, 2020
7d2babc
Add USE_DENSE_PIVOTS
mrzv Feb 26, 2020
35ee0dd
Fix compilation with coefficients
mrzv Feb 26, 2020
10a06fa
Make cofacet_entries static thread_local
mrzv Mar 9, 2020
adeb456
Add trivial_concurrent_hash_map; for now doesn't work with coefficients
mrzv Mar 12, 2020
1eca2b0
Make entry_t = index_t, when using coefficients, to enable atomic_ref
mrzv Mar 12, 2020
44f6581
Undo over-reserving pivot_column_index
mrzv Mar 15, 2020
d923928
Switch compressed_sparse_matrix to a vector of atomic pointers to vector
mrzv Mar 15, 2020
caec39e
Read column and check correctness of the read
mrzv Mar 15, 2020
cbfcf29
Store coefficient in the value of the hash map, not the key
mrzv Mar 15, 2020
0882d65
Restructure the code for parallelism
mrzv Mar 15, 2020
1047cb2
Add i/atomic_ref_serial.hpp, included via a flag
mrzv Mar 15, 2020
7e21cf8
Add USE_PARALLEL_STL for-loop parallelization
mrzv Mar 15, 2020
7ddbf8c
Add logic to move reduction to the right
mrzv Mar 16, 2020
8c5764d
Add shuffled serial and hand-rolled chunking in foreach
mrzv Mar 16, 2020
f04a19b
Don't add diagonal element to working_reduction_column, when continui…
mrzv Mar 16, 2020
7711c40
Make simplex_coboundary_enumerator::neighbor_{it,end} cache thread_lo…
mrzv Mar 16, 2020
eb20913
Disable emergent pair check in parallel
mrzv Mar 16, 2020
665ff4f
trivial_concurrent_hash_map::find() guarantees that a value has been …
mrzv Mar 16, 2020
f62afd4
Don't output progress and pairs in parallel
mrzv Mar 16, 2020
7ff922f
Add --threads argument + indicate progress in parallel
mrzv Mar 16, 2020
de75624
Enable emergent pairs optimization in parallel
mrzv Mar 16, 2020
780ff3b
Add memory reclamation (for hand-rolled parallel foreach)
mrzv Mar 16, 2020
5eea68e
Add pair output in the parallel case
mrzv Mar 16, 2020
d9e0120
Add MemoryManager to serial versions of foreach
mrzv Mar 16, 2020
d391d5c
Add MemoryManager to parallel STL version of foreach
mrzv Mar 16, 2020
4a1ec6a
Pass memory manager by reference
mrzv Mar 17, 2020
2de8ca1
Add parallel assembly of columns
mrzv Mar 17, 2020
a1ece3d
Add parallel_sort from Boost
mrzv Mar 17, 2020
fdb9a41
Fix compilation with parallel STL
mrzv Mar 17, 2020
fbdc42e
Add missing INDICATE_PROGRESS guards
mrzv Mar 17, 2020
a793274
Indicate empty columns by nullptr, rather than empty vector
mrzv Apr 7, 2020
0c7e27a
Fix resize bug in trivial_concurrent_hash_map + bump multiplier to 10
mrzv Apr 7, 2020
0ae052b
Add necessary function to hash_map based on unordered_map
mrzv Apr 7, 2020
fa93b2b
Add USE_TBB_HASHMAP option
mrzv Apr 8, 2020
cb34f33
Don't allocate second hash map, if not printing pairs
mrzv Apr 8, 2020
f2bcbbd
Use the next power of 2 for size in trivial_concurrent_hash_map + bum…
mrzv Apr 8, 2020
b283b1f
Add USE_TBB option
mrzv Apr 11, 2020
4bf9361
Tweak the Makefile: add more targets + automatic dependencies
mrzv Apr 11, 2020
ba9be8f
USE_CONCURRENT_PIVOTS -> USE_TRIVIAL_CONCURRENT_HASHMAP + reorder #el…
mrzv Apr 11, 2020
1264577
Fix memory reclamation by extending the wait by an extra epoch
mrzv Apr 19, 2020
625be2c
Add a link to the paper that describes the algorithm
mrzv Jun 24, 2020
25 changes: 20 additions & 5 deletions Makefile
@@ -1,18 +1,33 @@
 build: ripser
 
 
-all: ripser ripser-coeff ripser-debug
+all: ripser ripser-coeff ripser-debug ripser-serial ripser-serial-debug
+
+FLAGS=-Iinclude -pthread
+FLAGS+=-MMD -MP
+
+-include *.d
 
 ripser: ripser.cpp
-	c++ -std=c++11 -Wall ripser.cpp -o ripser -Ofast -D NDEBUG
+	c++ -std=c++11 -Wall ripser.cpp -o $@ -Ofast -D NDEBUG ${FLAGS}
+
+ripser-serial: ripser.cpp
+	c++ -std=c++11 -Wall ripser.cpp -o $@ -Ofast -D NDEBUG ${FLAGS} -DUSE_SERIAL
+
+ripser-serial-debug: ripser.cpp
+	c++ -std=c++11 -Wall ripser.cpp -o $@ -g ${FLAGS} -DUSE_SERIAL
 
 ripser-coeff: ripser.cpp
-	c++ -std=c++11 -Wall ripser.cpp -o ripser-coeff -Ofast -D NDEBUG -D USE_COEFFICIENTS
+	c++ -std=c++11 -Wall ripser.cpp -o $@ -Ofast -D NDEBUG -D USE_COEFFICIENTS ${FLAGS}
 
 ripser-debug: ripser.cpp
-	c++ -std=c++11 -Wall ripser.cpp -o ripser-debug -g
+	c++ -std=c++11 -Wall ripser.cpp -o $@ -g ${FLAGS} # -fsanitize=thread -fsanitize=undefined
+
+ripser-tbb: ripser.cpp
+	c++ -std=c++11 -Wall ripser.cpp -o $@ -Ofast -D NDEBUG ${FLAGS} -DUSE_TBB -DUSE_TBB_HASHMAP -ltbb
+
+ripser-pstl: ripser.cpp
+	c++ -std=c++11 -Wall ripser.cpp -o $@ -Ofast -D NDEBUG ${FLAGS} -DUSE_PARALLEL_STL -DUSE_TBB_HASHMAP -ltbb -std=c++17
 
 clean:
-	rm -f ripser ripser-coeff ripser-debug
+	rm -f ripser ripser-*
4 changes: 3 additions & 1 deletion README.md
@@ -2,6 +2,8 @@
 
 Copyright © 2015–2019 [Ulrich Bauer].
 
+This branch implements a version of the algorithm described in
+[Towards Lockfree Persistent Homology](https://mrzv.org/publications/lockfree-persistence/).
 
 ### Description
 
@@ -128,4 +130,4 @@ Ripser is licensed under the [MIT] license (`COPYING.txt`), with an extra clause
 [Perseus]: <http://www.sas.upenn.edu/~vnanda/perseus/>
 [GUDHI]: <http://gudhi.gforge.inria.fr>
 [sparsehash]: <https://github.com/sparsehash/sparsehash>
-[MIT]: <https://opensource.org/licenses/mit-license.php>
+[MIT]: <https://opensource.org/licenses/mit-license.php>
172 changes: 172 additions & 0 deletions include/atomic_ref.hpp
@@ -0,0 +1,172 @@
#pragma once

// Author: Dmitriy Morozov
// Version 2020-03-15
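// Minimal stand-in for C++20 std::atomic_ref, implemented with the GCC/Clang
// __atomic builtins; define USE_SERIAL_ATOMIC_REF to substitute a serial version.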

#if defined(USE_SERIAL_ATOMIC_REF)
#include "atomic_ref_serial.hpp"
#else

#include <atomic>
#include <type_traits>
#include <cassert>

namespace mrzv
{

template<class T, class Enable = void>
class atomic_ref;

template<class T>
class atomic_ref_base
{
public:
using value_type = T;
using difference_type = value_type;

public:

explicit atomic_ref_base(T& obj): obj_(&obj) {}
atomic_ref_base(const atomic_ref_base& ref) noexcept =default;

T operator=(T desired) const noexcept { store(desired); return desired; }
atomic_ref_base&
operator=(const atomic_ref_base&) =delete;

bool is_lock_free() const noexcept { return __atomic_is_lock_free(sizeof(T), obj_); }

void store(T desired, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ __atomic_store_n(obj_, desired, atomic_memory_order(order)); }
T load(std::memory_order order = std::memory_order_seq_cst) const noexcept
{ return __atomic_load_n(obj_, atomic_memory_order(order)); }

operator T() const noexcept { return load(); }

T exchange(T desired, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ return __atomic_exchange_n(obj_, desired, atomic_memory_order(order)); }

bool compare_exchange_weak(T& expected, T desired,
std::memory_order success,
std::memory_order failure) const noexcept
{ return __atomic_compare_exchange_n(obj_, &expected, desired, true, atomic_memory_order(success), atomic_memory_order(failure)); }

bool compare_exchange_weak(T& expected, T desired,
std::memory_order order = std::memory_order_seq_cst ) const noexcept
{ return compare_exchange_weak(expected, desired, order, order); } // TODO: not quite this simple

bool compare_exchange_strong(T& expected, T desired,
std::memory_order success,
std::memory_order failure) const noexcept
{ return __atomic_compare_exchange_n(obj_, &expected, desired, false, atomic_memory_order(success), atomic_memory_order(failure)); }


bool compare_exchange_strong(T& expected, T desired,
std::memory_order order = std::memory_order_seq_cst) const noexcept
{ return compare_exchange_strong(expected, desired, order, order); } // TODO: not quite this simple

// would be great to have wait and notify, but unclear how to implement them efficiently with __atomic
//void wait(T old, std::memory_order order = std::memory_order::seq_cst) const noexcept;
//void wait(T old, std::memory_order order = std::memory_order::seq_cst) const volatile noexcept;
//void notify_one() const noexcept;
//void notify_one() const volatile noexcept;
//void notify_all() const noexcept;
//void notify_all() const volatile noexcept;

protected:
int atomic_memory_order(std::memory_order order) const
{
if (order == std::memory_order_relaxed)
return __ATOMIC_RELAXED;
else if (order == std::memory_order_consume)
return __ATOMIC_CONSUME;
else if (order == std::memory_order_acquire)
return __ATOMIC_ACQUIRE;
else if (order == std::memory_order_release)
return __ATOMIC_RELEASE;
else if (order == std::memory_order_acq_rel)
return __ATOMIC_ACQ_REL;
else if (order == std::memory_order_seq_cst)
return __ATOMIC_SEQ_CST;
assert(0);
return __ATOMIC_RELAXED;
}

protected:
T* obj_;
};

template<class T>
class atomic_ref<T, typename std::enable_if<std::is_integral<T>::value>::type>: public atomic_ref_base<T>
{
public:
using Parent = atomic_ref_base<T>;
using value_type = typename Parent::value_type;
using difference_type = typename Parent::difference_type;

public:
using Parent::Parent;

value_type operator++() const noexcept { return fetch_add(1) + 1; }
value_type operator++(int) const noexcept { return fetch_add(1); }
value_type operator--() const noexcept { return fetch_sub(1) - 1; }
value_type operator--(int) const noexcept { return fetch_sub(1); }

T operator+=( T arg ) const noexcept { return fetch_add(arg) + arg; }
T operator-=( T arg ) const noexcept { return fetch_sub(arg) - arg; }
T operator&=( T arg ) const noexcept { return fetch_and(arg) & arg; }
T operator|=( T arg ) const noexcept { return fetch_or(arg) | arg; }
T operator^=( T arg ) const noexcept { return fetch_xor(arg) ^ arg; }

T fetch_add(T arg, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ return __atomic_fetch_add(obj_, arg, atomic_memory_order(order)); }

T fetch_sub(T arg, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ return __atomic_fetch_sub(obj_, arg, atomic_memory_order(order)); }

T fetch_and(T arg, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ return __atomic_fetch_and(obj_, arg, atomic_memory_order(order)); }

T fetch_or(T arg, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ return __atomic_fetch_or(obj_, arg, atomic_memory_order(order)); }

T fetch_xor(T arg, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ return __atomic_fetch_xor(obj_, arg, atomic_memory_order(order)); }

protected:
using Parent::obj_;
using Parent::atomic_memory_order;
};

template<class T>
class atomic_ref<T*>: public atomic_ref_base<T*>
{
public:
using Parent = atomic_ref_base<T*>;
using value_type = typename Parent::value_type;
using difference_type = typename Parent::difference_type;

public:
using Parent::Parent;

value_type operator++() const noexcept { return fetch_add(1) + 1; }
value_type operator++(int) const noexcept { return fetch_add(1); }
value_type operator--() const noexcept { return fetch_sub(1) - 1; }
value_type operator--(int) const noexcept { return fetch_sub(1); }

T* operator+=(std::ptrdiff_t arg) const noexcept { return fetch_add(arg) + arg; }
T* operator-=(std::ptrdiff_t arg) const noexcept { return fetch_sub(arg) - arg; }

T* fetch_add(std::ptrdiff_t arg, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ return __atomic_fetch_add(obj_, arg * sizeof(T), atomic_memory_order(order)); }

T* fetch_sub(std::ptrdiff_t arg, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ return __atomic_fetch_sub(obj_, arg * sizeof(T), atomic_memory_order(order)); }

protected:
using Parent::obj_;
using Parent::atomic_memory_order;
};

}

#endif // USE_SERIAL_ATOMIC_REF
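
For orientation, a minimal usage sketch of the class above (illustrative only, not part of the diff; the thread and iteration counts are arbitrary). mrzv::atomic_ref wraps a plain object and performs atomic operations on it in place, mirroring C++20's std::atomic_ref:

// Usage sketch (illustrative, not from this PR): four threads increment a
// plain int through mrzv::atomic_ref without a data race.
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

#include "atomic_ref.hpp"

int main() {
    int counter = 0; // ordinary, non-atomic object
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t)
        workers.emplace_back([&counter] {
            for (int i = 0; i < 1000; ++i)
                mrzv::atomic_ref<int>(counter).fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& w : workers) w.join();
    std::cout << counter << '\n'; // prints 4000
}

Note that fetch_add here compiles down to __atomic_fetch_add on the referenced int, so the wrapped object itself never needs to be a std::atomic<int>.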
148 changes: 148 additions & 0 deletions include/atomic_ref_serial.hpp
@@ -0,0 +1,148 @@
#pragma once

#pragma message "Using serial atomic_ref"

// Author: Dmitriy Morozov
// Version 2020-03-15
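// Serial drop-in for atomic_ref.hpp: identical interface, but all operations
// are plain (non-atomic), for single-threaded builds.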

#include <atomic>
#include <type_traits>
#include <cassert>
#include <utility>      // std::swap, used by exchange()

namespace mrzv
{

template<class T, class Enable = void>
class atomic_ref;

template<class T>
class atomic_ref_base
{
public:
using value_type = T;
using difference_type = value_type;

public:

explicit atomic_ref_base(T& obj): obj_(&obj) {}
atomic_ref_base(const atomic_ref_base& ref) noexcept =default;

T operator=(T desired) const noexcept { store(desired); return desired; }
atomic_ref_base&
operator=(const atomic_ref_base&) =delete;

bool is_lock_free() const noexcept { return true; }

void store(T desired, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ *obj_ = desired; }
T load(std::memory_order order = std::memory_order_seq_cst) const noexcept
{ return *obj_; }

operator T() const noexcept { return load(); }

T exchange(T desired, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ std::swap(*obj_, desired); return desired; }

// Technically, not quite CAS, but simplified for the only meaningful scenario I can think of
bool compare_exchange_weak(T& expected, T desired,
std::memory_order,
std::memory_order) const noexcept
{ assert(*obj_ == expected); *obj_ = desired; return true; }

bool compare_exchange_weak(T& expected, T desired,
std::memory_order order = std::memory_order_seq_cst ) const noexcept
{ assert(*obj_ == expected); *obj_ = desired; return true; }

bool compare_exchange_strong(T& expected, T desired,
std::memory_order,
std::memory_order) const noexcept
{ assert(*obj_ == expected); *obj_ = desired; return true; }


bool compare_exchange_strong(T& expected, T desired,
std::memory_order order = std::memory_order_seq_cst) const noexcept
{ assert(*obj_ == expected); *obj_ = desired; return true; }

// would be great to have wait and notify, but unclear how to implement them efficiently with __atomic
//void wait(T old, std::memory_order order = std::memory_order::seq_cst) const noexcept;
//void wait(T old, std::memory_order order = std::memory_order::seq_cst) const volatile noexcept;
//void notify_one() const noexcept;
//void notify_one() const volatile noexcept;
//void notify_all() const noexcept;
//void notify_all() const volatile noexcept;

protected:
T* obj_;
};

template<class T>
class atomic_ref<T, typename std::enable_if<std::is_integral<T>::value>::type>: public atomic_ref_base<T>
{
public:
using Parent = atomic_ref_base<T>;
using value_type = typename Parent::value_type;
using difference_type = typename Parent::difference_type;

public:
using Parent::Parent;

value_type operator++() const noexcept { return fetch_add(1) + 1; }
value_type operator++(int) const noexcept { return fetch_add(1); }
value_type operator--() const noexcept { return fetch_sub(1) - 1; }
value_type operator--(int) const noexcept { return fetch_sub(1); }

T operator+=( T arg ) const noexcept { return fetch_add(arg) + arg; }
T operator-=( T arg ) const noexcept { return fetch_sub(arg) - arg; }
T operator&=( T arg ) const noexcept { return fetch_and(arg) & arg; }
T operator|=( T arg ) const noexcept { return fetch_or(arg) | arg; }
T operator^=( T arg ) const noexcept { return fetch_xor(arg) ^ arg; }

T fetch_add(T arg, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ T result = *obj_; *obj_ += arg; return result; }

T fetch_sub(T arg, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ T result = *obj_; *obj_ -= arg; return result; }

T fetch_and(T arg, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ T result = *obj_; *obj_ &= arg; return result; }

T fetch_or(T arg, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ T result = *obj_; *obj_ |= arg; return result; }

T fetch_xor(T arg, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ T result = *obj_; *obj_ ^= arg; return result; }

protected:
using Parent::obj_;
};

template<class T>
class atomic_ref<T*>: public atomic_ref_base<T*>
{
public:
using Parent = atomic_ref_base<T*>;
using value_type = typename Parent::value_type;
using difference_type = typename Parent::difference_type;

public:
using Parent::Parent;

value_type operator++() const noexcept { return fetch_add(1) + 1; }
value_type operator++(int) const noexcept { return fetch_add(1); }
value_type operator--() const noexcept { return fetch_sub(1) - 1; }
value_type operator--(int) const noexcept { return fetch_sub(1); }

T* operator+=(std::ptrdiff_t arg) const noexcept { return fetch_add(arg) + arg; }
T* operator-=(std::ptrdiff_t arg) const noexcept { return fetch_sub(arg) - arg; }

T* fetch_add(std::ptrdiff_t arg, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ T* result = *obj_; *obj_ += arg; return result; }

T* fetch_sub(std::ptrdiff_t arg, std::memory_order order = std::memory_order_seq_cst) const noexcept
{ T* result = *obj_; *obj_ -= arg; return result; }

protected:
using Parent::obj_;
};

}
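
One caveat worth flagging about this serial fallback (my reading of the code above, not a documented guarantee): compare_exchange_weak/strong always report success and merely assert that expected matches, so a binary built with -DUSE_SERIAL_ATOMIC_REF is only correct single-threaded. A sketch of the behavior, with illustrative values:

// Sketch (illustrative): the serial fallback's CAS is an assert plus a store.
#include <cassert>

#define USE_SERIAL_ATOMIC_REF
#include "atomic_ref.hpp" // dispatches to atomic_ref_serial.hpp

int main() {
    int x = 5;
    mrzv::atomic_ref<int> r(x);
    int expected = 5;
    bool ok = r.compare_exchange_strong(expected, 7);
    assert(ok && x == 7); // always "succeeds"; with expected != x a real CAS
                          // would fail, but here the internal assert fires instead
}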