
Fix #580 by using a fixed-point implementation for unit conversions using integer representations #615

Draft
burnpanck wants to merge 39 commits into master from feature/fixed-point-multiplication-for-integral-conversion

Conversation

burnpanck
Contributor

This PR provides an implementation of what I played with in this comment to issue 580. Basically, it completely separates the floating-point and integral paths. For floating-point paths, the conversion implementation remains the same (a single multiplication with a floating-point representation of the conversion factor). For the integral paths, it also uses a single multiplication, but with a fixed-point representation (mixed conversions take the floating-point path). Fixed-point multiplications have the advantage that they are comparatively cheap (much cheaper than a division) and can accurately describe all reasonable conversion factors between two n-bit numbers using an n.n (2n bits total) fixed-point representation. Unreasonable conversion factors are those larger than 2^n or smaller than 2^-n, i.e. those which either overflow for all input values or underflow for all input values. The cost of such a fixed-point multiplication is at most that of four n-bit integer multiplications plus some bookkeeping if their output is restricted to n bits (as in the C++ standard; unlike typical hardware); if the output of an n-bit multiplication can be 2n bits wide (most instruction sets provide this), it takes just two of them.
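
To make the fixed-point idea concrete, here is a minimal, self-contained sketch (not code from this PR; the names make_q32_32 and scale_q32_32 are purely illustrative) of scaling a 32-bit integer by a rational conversion factor through a Q32.32 multiplier, using only two widening multiplications:

#include <cstdint>

// round((num / den) * 2^32); assumes num and den are small enough that the
// shift and the rounding product below do not overflow 64 bits
constexpr std::uint64_t make_q32_32(std::uint64_t num, std::uint64_t den)
{
  const std::uint64_t scaled = num << 32;
  return scaled / den + ((scaled % den) * 2 >= den ? 1 : 0);
}

constexpr std::uint32_t scale_q32_32(std::uint32_t value, std::uint64_t factor)
{
  // split the factor into integral and fractional 32-bit halves so that each
  // partial product fits into 64 bits (the "two widening multiplications" case)
  const std::uint64_t hi = factor >> 32;
  const std::uint64_t lo = factor & 0xFFFFFFFFull;
  return static_cast<std::uint32_t>(hi * value + ((lo * value) >> 32));
}

static_assert(scale_q32_32(3, make_q32_32(1000, 1)) == 3000);  // e.g. m -> mm
static_assert(scale_q32_32(1500, make_q32_32(1, 1000)) == 1);  // e.g. mm -> m (truncates 1.5)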

With the implementation change here, the computation of conversions between two quantity types of the same dimension, both with integer representations, stays tightly within the types mandated by the source and destination types, without expanding to intmax unless necessary. In general, this will never cause overflows unless the result type actually cannot carry the result (the fixed-point multiplication is guaranteed to be sufficient). However, this change triggered the weakly related #614, where it turned a silent overflow into a compilation failure. For now, this PR just disables those two tests. Once we have a fix for that one, we should re-enable those tests and rebase this PR.

src/core/include/mp-units/bits/fixed_point.h (resolved)
Comment on lines 136 to 137
Th hi;
Tl lo;
Owner

Can we make those private? In case we do, can we name them with the _ postfix to be consistent with all other places in the library?

Contributor Author

I don't have strong feelings here. I was thinking it could be relevant for this type to be a literal type, given that we intend to use it mostly at compile time. Perhaps it becomes useful in some numeric manipulation needed for magnitudes? Perhaps we want to store an offset between two quantity points/origins with 128 bits of precision? However, I'm aware that literal types are only needed if we want instances as NTTPs, and neither of these use-cases definitely requires a literal type. I'll make them private; we can still revert that later.

Owner

I think you meant structural type? Literal types are types that can be used in constexpr functions, and those do not need public members.

Contributor Author

Ah yes, of course.

};


constexpr double_width_int operator+(Tl rhs) const
Owner

Should it be a non-member two-parameter function to make sure that conversions work for lhs and rhs? Should there be another overload with reversed arguments?
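
For illustration, a minimal self-contained sketch of the two non-member overloads suggested above (the wide type, its simplified carry handling, and all names are stand-ins, not the PR's double_width_int code):

#include <cstdint>

struct wide {
  std::uint64_t hi;
  std::uint64_t lo;
};

[[nodiscard]] constexpr wide operator+(const wide& lhs, std::uint64_t rhs)
{
  const std::uint64_t lo = lhs.lo + rhs;
  return {lhs.hi + (lo < lhs.lo ? 1u : 0u), lo};  // propagate the carry into the high half
}

[[nodiscard]] constexpr wide operator+(std::uint64_t lhs, const wide& rhs)
{
  return rhs + lhs;  // addition commutes, so forward to the first overload
}

static_assert((wide{0, ~std::uint64_t{0}} + std::uint64_t{1}).hi == 1);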

Tl lo;
};

#if false && defined(__SIZEOF_INT128__)
Owner

Always false?

Contributor Author

Yeah, CI complains about the true path because __int128_t is non-standard. Given that this is an implementation detail, I should probably look for a way to disable that warning instead.
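
For reference, one common way to silence that pedantic diagnostic on GCC and Clang is the __extension__ keyword; this is only a sketch of a possible approach, not necessarily what this PR will end up doing, and the alias names are illustrative:

#if defined(__SIZEOF_INT128__)
// __extension__ suppresses -Wpedantic diagnostics about the non-standard type
__extension__ typedef __int128 int128_t;
__extension__ typedef unsigned __int128 uint128_t;
#endif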

#endif

template<typename T>
inline constexpr std::size_t integer_rep_width_v = std::numeric_limits<std::make_unsigned_t<T>>::digits;
Owner

inline is not needed for variable templates.


template<std::integral U>
requires(integer_rep_width_v<U> <= integer_rep_width_v<T>)
auto scale(U v) const
Owner

constexpr?

fractional_bits);
}

repr_t int_repr_is_an_implementation_detail_;
Owner

private?

Contributor Author

Again, literal type, but again, I don't actually need that right now. I'll make it private (... and remove the is_an_implementation_detail_). We can still change it if a need comes up.

Comment on lines 57 to 63
/* using multiplier_type = conditional<
treat_as_floating_point<c_rep_type>,
// ensure that the multiplier is also floating-point
conditional<std::is_arithmetic_v<value_type_t<c_rep_type>>,
// reuse user's type if possible
std::common_type_t<c_mag_type, value_type_t<c_rep_type>>, std::common_type_t<c_mag_type, double>>,
c_mag_type>;*/
Owner

Why comment out this code instead of removing it?

src/core/include/mp-units/bits/sudo_cast.h (outdated, resolved)
@burnpanck
Contributor Author

Most of the review concerns should be fixed now. Also, I "fixed" the CI failure in the trigonometry tests by allowing the affected test cases to deviate by two epsilon instead of one. I believe this appears because, for double quantities, the conversion factor is now a double instead of a long double - an intentional change to make the conversion accuracy just "good enough" for the values involved. In fact, that test previously failed on my Mac M1 anyway, because on my platform long double and double are the same thing. In general, guaranteeing one-epsilon accuracy is probably unrealistic for reasonable computations anyway (my previous discussions on the topic used the term "of the order of the epsilon" on purpose).

What I did not yet address are the discussions about making the implementation details private. The aim was to keep the types "literal types", so we could use them in NTTPs if needed (maybe for large point-offsets?). That said, I'm not sure we really need it, and it's easy to change, so whatever you prefer. Also, if we start using such wide integers as repr inside quantities, we may need even wider ones to achieve our guarantees in conversions. The design of the double_width_int here could be relatively easily extended into a compile-time arbitrary-width integer if that need arose. But probably, that's a task for another PR.

@burnpanck
Contributor Author

That CI failure for GCC 12 and GCC 13 (error: extra ‘;’ [-Werror=pedantic]) after a class template partial specialisation is obviously wrong. Should I rewrite this as three separate classes and use something like std::conditional_t to get those compilers to accept it?

@mpusz
Owner

mpusz commented Sep 16, 2024

The aim was to keep the types "literal types", so we could use them in NTTPs if needed (maybe for large point-offsets?)

I was not aware of this. This makes sense... But in such a case, I would obfuscate the names much more (similar to what we have in quantity) so users will not depend on them, as we might hide them if the language rules for literal types change in the future.

@mpusz
Owner

mpusz commented Sep 16, 2024

That CI failure for GCC 12 and GCC 13 (error: extra ‘;’ [-Werror=pedantic]) after a class template partial specialisation is obviously wrong. Should I rewrite this as three separate classes and use something like std::conditional_t to get those compilers to accept it?

I think you did not check it properly. GCC complains about a ; put after a function definition (not a class).

test/static/CMakeLists.txt (outdated, resolved)
@burnpanck
Contributor Author

I think you did not check it properly. GCC complains about a ; put after a function definition (not a class).

You are right of course. It just happens that sudo_cast.h had a template class specialisation end on the same line number, and I never properly checked the file name (it's fixed_point.h).

@burnpanck
Contributor Author

I should probably increase the test coverage for that double_width_int quite a bit more. The value_cast use-case actually only uses the multiplication operator and constructor from long double now (though I should still change that for purely rational conversion factors), but once it's in the code-base, people might get tempted to use it for something else. Also, testing is easier if a more complete set of arithmetic is available.

@mpusz
Owner

mpusz commented Oct 27, 2024

Hi @burnpanck, do you plan to work on this in the near future? I would love to see fixed-point scaling in action. Also, @JohelEGP is writing the API Reference for the ISO papers now, and we do not know what to describe.

Also, talking about ISO papers, it would be great if you could contribute and add the small chapter about scaling that we talked about some time ago.

@burnpanck
Contributor Author

Hey @mpusz, sorry for the long pause; as usual, paid work interfered. Could you point me to a good place to start drafting the design rationale behind this change?

burnpanck force-pushed the feature/fixed-point-multiplication-for-integral-conversion branch from 35a5d23 to 8415998 on November 5, 2024 06:49
@mpusz
Owner

mpusz commented Nov 5, 2024

@burnpanck, it's great to hear from you again 😃

The best way to contribute to the ISO proposal is by doing a PR to a 3045R4 file at https://github.com/mpusz/wg21-papers. Please note that this repo has git submodules. To generate the paper, go to the src directory and do make update followed by make <name of the file>.html.

@mpusz
Owner

mpusz commented Nov 5, 2024

I would recommend putting the chapter somewhere inside of https://mpusz.github.io/wg21-papers/papers/3045R4_quantities_and_units_library.html#quantities.

burnpanck force-pushed the feature/fixed-point-multiplication-for-integral-conversion branch from 8415998 to 99d4315 on November 5, 2024 07:46
@burnpanck
Contributor Author

I just merged the latest master; I believe it should pass the tests again now. Given that it does pass all the tests, I think it is potentially mergeable. Perhaps you want to review again, or do you want to wait for a first draft of the rationale doc?

Before I remove the "draft" state, there are a few areas where I'd like your input:

  • Currently, we never use any implementation-provided __int128, because it is non-standard. The corresponding #if defined(...) is currently set as #if false && defined(...). An implementation may, however, provide a "hand-optimised" 128-bit integer type that outperforms what the compiler synthesises from our portable C++ implementation. What is our stance here?
  • Similarly, min_width_uint_t can either be implemented elegantly in terms of the C++20 std::bit_width and std::has_single_bit, or somewhat less elegantly with older tools (see the sketch after this list). Should we use feature macros to select between the two implementations, or just stick with one? (According to cppreference, recent releases of the four major compilers should support the feature.)
  • (Found this while fixing a merge bug) The current tests in custom_rep_test_min_impl.cpp do not contain any units conversion that is not either a pure multiplication or division. If they did, the tests would fail, because currently the fixed-point implementation doesn't play well with the custom integer min_impl<T> (with T being an integral type). We may want to refine the interface specification towards custom representation types:
    • I believe we already have something like treat_as_floating_point, which should probably be explicitly checked in order to select between fixed-point and floating-point implementations.
    • If fixed-point is selected, we will need a way to access the range/bit-width and potentially the "epsilon" (= 1 for actual integers, but there could be a custom fixed-point representation as well) of the custom representation type; also, we'll either want to cast to our internal integer/fixed-point representation, do the computation, then cast back, or we'd require further access to at least bit-shift operators.
    • Alternatively, we could instead just create a customisation point for "scaling by a magnitude", and plug the fixed-point and floating-point implementation as default implementations; then we ask any custom integral types which would not work with our simplest fixed-point implementation (we should formalise what that means) to provide a custom implementation of that customisation point.
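
As referenced in the second bullet above, a rough sketch (illustrative only; the PR's actual min_width_uint_t may look different) of the <bit>-based variant, which rounds the requested width up to a power of two and picks the matching standard unsigned type:

#include <algorithm>
#include <bit>
#include <cstddef>
#include <cstdint>
#include <tuple>
#include <type_traits>

// round the requested width up to the next power of two (at least 8),
// then select the matching standard unsigned type by index
template<std::size_t Bits>
  requires(Bits <= 64)
using min_width_uint_t = std::tuple_element_t<
  std::bit_width(std::bit_ceil(std::max<std::size_t>(Bits, 8)) / 8u) - 1,
  std::tuple<std::uint8_t, std::uint16_t, std::uint32_t, std::uint64_t>>;

static_assert(std::is_same_v<min_width_uint_t<7>, std::uint8_t>);
static_assert(std::is_same_v<min_width_uint_t<33>, std::uint64_t>);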

@mpusz
Owner

mpusz commented Nov 5, 2024

or do you want to wait for a first draft of the rationale doc?

No, I would like to merge it ASAP. Having a rationale discussion in the ISO paper would be great, but it is a separate task.

@mpusz
Owner

mpusz commented Nov 5, 2024

Currently, we never use any implementation-provided __int128

This is not exposed in the public API, so it does not affect our users' code. This is why I believe that if it improves performance (compile-time or runtime), we should use such types wherever we can and are sure they exist. A generic and portable implementation should be used only in the remaining cases.

@mpusz
Owner

mpusz commented Nov 5, 2024

min_width_uint_t can either be implemented...

If those features are available on all the supported compilers, then there is no need for alternatives. We always compile with at least C++20.

@mpusz
Owner

mpusz commented Nov 5, 2024

The current tests in custom_rep_test_min_impl.cpp do not contain any units conversion

Those tests may be cleaned up and refactored if needed.

We may want to refine the interface specification towards custom representation types

Sure, please note that I am also working on refactoring the Representation concepts. For scalar, the basic idea is to have something like:

template<typename T>
concept WeaklyRegular = std::copyable<T> && std::equality_comparable<T>;

template<typename T>
concept Scalar = is_scalar<T>;

template<typename T>
concept ScalarRepresentation = Scalar<T> && WeaklyRegular<T> && requires(T a, T b, std::intmax_t i, long double f) {
  // scaling
  { a * i } -> Scalar;
  { i * a } -> Scalar;
  { a / i } -> Scalar;
  { a * f } -> Scalar;  // TODO How this affects freestanding?
  { f * a } -> Scalar;
  { a / f } -> Scalar;

  // scalar operations
  { a + b } -> Scalar;
  { a - b } -> Scalar;
  { a * b } -> Scalar;
  { a / b } -> Scalar;
};

template<typename T>
concept Representation =
  detail::ScalarRepresentation<T> || detail::ComplexRepresentation<T> || detail::VectorRepresentation<T> ||
  detail::Tensor2Representation<T> || detail::Tensor4Representation<T>;

The scaling part probably needs modifications after your change?

@burnpanck
Contributor Author

The scaling part probably needs modifications after your change?

Probably. I see two options. Option one is to distinguish two types of representations: "floating point" and "fixed point" (integers fall under the latter). The former are easy, because they have a large enough range that we don't care about over- and underflow, and scaling them usually does not risk losing precision, because the "epsilon" scales with the number. The latter require us to carefully reason about range and resolution, because every scaling operation "shifts out" bits either above or below. I should really get a draft of that rationale in, to clarify why these are essential considerations.

The more I think about this, however, the more I believe that what we really want from a representation is that it can be scaled by a magnitude. Thus, I think we should actually use that as the "defining property": we define an API such as scale(magnitude, value [, out_representation_specification]) which acts as a customisation point with suitable default implementations. Then we just require that representations are "scalable", which shall imply that they are suitable for the value argument of scale (with the default out_representation_specification requesting the same type as value). Note that this PR almost contains an implementation of that API in mp_units::detail::conversion_value_traits::scale, but without provisions for being a customisation point.

What would be the preferred mechanism for customisation points? I recall a discussion in the "std::execution" proposal for C++26 where they considered several possible choices for customisation points; they deemed the free-function approach of "std::ranges" not suitable for their use-case, but I believe that was because they essentially need something like "multiple-dispatch"; multiple unrelated entities each should have a shot at customising operations involving each other. In this case, the magnitude is not a customisable object, so free functions should work fine.
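
For reference, a self-contained sketch of the free-function CPO (niebloid) pattern mentioned above; all names here (lib::scale, scale_impl, user::meters) are made up for illustration and are not mp-units interfaces:

namespace lib {
namespace detail_scale {
  struct scale_fn {
    template<typename M, typename T>
      requires requires(const M& m, const T& v) { scale_impl(m, v); }  // customisation found via ADL
    constexpr auto operator()(const M& m, const T& v) const
    {
      return scale_impl(m, v);  // dispatch to the user-provided implementation
    }
  };
}  // namespace detail_scale
inline constexpr detail_scale::scale_fn scale{};
}  // namespace lib

namespace user {
  struct meters { double value; };
  // the customisation lives next to the user's type and is found by ADL
  constexpr meters scale_impl(double factor, const meters& m) { return {m.value * factor}; }
}  // namespace user

static_assert(lib::scale(2.0, user::meters{3.0}).value == 6.0);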

@mpusz
Owner

mpusz commented Nov 9, 2024

I tried as well and couldn't make it work for gcc-12 either. So I cheated 😜

@burnpanck
Contributor Author

I just pushed another take at the Representation concept, in particular with respect to the "scaling" behaviour of representations. The reasoning behind it is the following:

  1. Units of measurement form a (mathematical) vector space over the real numbers (i.e. any scaling of a unit by a real number is still a reasonable unit for the same dimension; so is the sum of any two units of the same dimension)
  2. quantity models the semantics of physical quantities, through their representation by the product of a "number" and a unit. Because every physical quantity has exactly one such representation for each unit of that dimension (which are all equivalent), the mathematical space from which the "numbers" are drawn also necessarily needs to embed a (mathematical) vector space over the real numbers. (The real numbers obviously do, but so do the complex numbers, as well as the "cartesian vectors" in R^N, and many other mathematical spaces).
  3. Any change of representation of a quantity to a different unit requires the ratio between the original unit and the new unit (in mp_units represented as a Magnitude) to be multiplied into the "number". Thus, the fundamental requirement on the C++ representation of the "number" is some way to at least approximately represent the result of rescaling an input number by a real number represented by a Magnitude. Therefore, this latest commit explicitly encodes that operation as mp_units::scale(Magnitude auto M, typename Value), along with a number of customisation points for that implementation (more on that below).
  4. We probably would like to require that the "numbers" are dimensionless (i.e. neither quantity nor std::chrono::duration should be allowed for the "number" representation).

Both 3 & 4 may be difficult to fully express as C++ concepts; the problem with 3 is that we would ideally like to verify scaling against all possible magnitudes, and with 4 it may not even be clear what exactly should be allowed and what not. Thus, for now, I simply check 3 by testing for mag<1>, and 4 continues to be handled through the explicit opt-in that you @mpusz recently implemented for the Representation concept via the character opt-ins.

Note that I included two slightly different spellings of scale; one takes an explicit specification of the out-representation (through an instance of std::type_identity<R> as a boost::hana::type<R>-like type tag), while the other leaves the out-representation to the scaling implementation. The intent is for the former to be used when the user explicitly asks for a representation, whereas the latter should be used when only a change of units is requested (e.g. in a sum of two compatible quantities expressed in different units). The goal is to enable fancy number representations such as those provided by CNL to intelligently limit the precision of the scaling factor when the output is narrow anyway (first case), or to tailor the precision of the output representation to the precision of the input representation and the scaling factor (second case).

Finally, you may want to review the customisation rules of the scaling operation. The implementation of the two mp_units::scale overloads currently proceeds as follows:

  1. Check if the source representation type has a suitable member .scale; if it does, use this
  2. Check if mp_units::scaling_traits<R> is defined; if it is, use nested static member functions (if no suitable one is found, fail to compile)
  3. Check if mp_units::treat_as_floating_point is true for either the input or the output representation (if given); if it is, approximate the scaling factor using a suitable standard floating point (long double unless we know that the representation's precision is comparable to a smaller floating point type), and use operator* (or operator/ for pure divisions).
  4. Check if both types are standard integral types (or at least their value_type_ts are and the representation type is implicitly convertible from and to their value_type_t); then approximate the scaling factor using a fixed-point representation with as many integral and fractional bits (each) as the larger of the two types, and perform a fixed-point multiplication.
  5. Otherwise, the program is ill-formed.

In general, I believe that is more or less all that I would put into this PR, and I will therefore remove the "draft" mark as soon as all tests pass.
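
For concreteness, a rough, self-contained sketch of what a user-provided specialisation could look like under rule 2 above. Everything here is illustrative: the toy ticks type, the undefined primary template, and the use of plain Num/Den integers where the real interface takes a Magnitude.

#include <cstdint>

struct ticks {            // a toy custom representation with a known range and resolution
  std::int32_t count;
};

template<typename From, typename To = From>
struct scaling_traits;    // primary template intentionally left undefined

template<>
struct scaling_traits<ticks> {
  // Num/Den stand in for the Magnitude parameter of the real interface
  template<std::int64_t Num, std::int64_t Den>
  [[nodiscard]] static constexpr ticks scale(const ticks& value)
  {
    // the customiser sees every scaling factor in one place and can reason
    // about the range of its own type before multiplying and dividing
    return ticks{static_cast<std::int32_t>(value.count * Num / Den)};
  }
};

static_assert(scaling_traits<ticks>::scale<1000, 1>(ticks{3}).count == 3000);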

@burnpanck
Contributor Author

While merging the latest changes, I ran into the following (now) failing test: static_assert(quantity<s>{1ms} == 1 * ms); (in static/chrono_test.cpp). I should investigate more, but shouldn't this be a test for almost equal instead of exactly equal, given the floating-point types?

@burnpanck
Contributor Author

Looks like I failed to correctly export mp_units::scale from mp-units/bits/scale.h and now the module-enabled tests fail. Unfortunately, I don't have any experience with modules so far; what is needed to properly export a name? That file is definitely indirectly included from mp-units-core.cpp, through mp-units/framework/representation_concepts.h...

Also, I presume mp-units/bits/scale.h is anyway the wrong place to export it from, and it should go somewhere into mp-units/framework/ instead, correct?

@@ -127,15 +149,23 @@ class measurement {
private:
value_type value_{};
value_type uncertainty_{};

// NOLINTNEXTLINE(bugprone-easily-swappable-parameters)
constexpr measurement(std::in_place_t, value_type val, value_type err) :
Owner

This is not the right way of using in_place. in_place means that we pass a list of arguments for construction of one underlying type.

This is not the case here. We initialize two members with one argument each.

Owner

If you would like to have in-place-like construction for the type, then probably something like piecewise_construct is needed.

Contributor Author

Yeah, I just wanted a tag to select the private constructor, indicating "I know what I'm doing - I guarantee that err is positive". It's mostly a performance optimisation though, so, for example, we could just leave it out completely.

Owner

We can either remove it or use a new dedicated tag. Maybe something similar to:

inline constexpr struct validated_tag {
} validated;

}

template<typename To, mp_units::Magnitude M>
[[nodiscard]] constexpr measurement<To> scale(std::type_identity<measurement<To>>, M scaling_factor) const
Owner

This type_identity usage here is quite novel. I understand the rationale, but maybe we can find a more traditional way of doing it? @JohelEGP, do you have any ideas here?

Contributor Author

The idea comes from boost::hana::type. It would be great if the standard had its own dedicated "type tag".
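
A minimal illustration (not mp-units code) of the type-tag idiom being discussed, with std::type_identity carrying the requested result type as an ordinary function argument:

#include <type_traits>

template<typename To, typename From>
constexpr To convert(std::type_identity<To>, const From& value)
{
  return static_cast<To>(value);  // To comes from the tag argument, not from `value`
}

static_assert(convert(std::type_identity<double>{}, 1) == 1.0);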

if constexpr (std::is_integral_v<T>) {
return static_cast<To>(value);
} else {
return value;
Owner

If this is meant to be a floating point value, then !treat_as_floating_point should be used.

Contributor Author

Not really. The specification of cast_integral<To>(const From&) (an implementation detail) is "cast to To, if From is integral". The point is that the default implementation of scale basically has two separate cases:

  • "use operator*(float)": Applied whenever at least one of From and To is treat_as_floating_point.
  • "use a fixed-point multiplication": Can only be directly be applied to standard integral types, so is used just for those, plus custom types which are implicitly convertible and have an integral value_type.

So the first case is much broader. For standard types, this in particular includes the case where From is an integer and To is floating-point. There, just multiplying the integer with some floating-point type may cause precision-loss warnings to be emitted. Those are "bogus", because whenever To is explicitly specified, we're in fact on an "explicit cast" path. So we need to do an explicit cast here. But on the other hand, we cannot unilaterally do an explicit cast, because many types that should go through that path are not at all convertible to that floating-point type (e.g. std::complex).

There are other ways to handle this. We could also just silence the warning locally, or branch-out the mixed int/float paths further above.

Comment on lines 56 to 61
template<typename T>
struct floating_point_scaling_factor_type {
// fallback implementation for types with a `value_type` nested type
using type =
std::enable_if_t<!std::is_same_v<value_type_t<T>, T>, floating_point_scaling_factor_type<value_type_t<T>>>::type;
};
Owner

Probably, it is better to leave the primary template undefined and handle value_type_t in a separate partial specialization. SFINAE is also much slower to compile than constraints and provides worse diagnostics.

Comment on lines 63 to 64
template<std::floating_point T>
struct floating_point_scaling_factor_type<T> : std::type_identity<T> {};
Owner

I know the type_identity pattern, but it is actually more to type and requires one more "dynamic allocation" (class template instantiation) at compile time for each case. This is why I typically prefer to just type the using alias.

Comment on lines 193 to 194
// placeholder to report failure
return std::false_type{};
Owner

This should be equivalent to the lack of definition for scaling_traits.

Contributor Author

Not sure I understand correctly: you suggest select_scaling_traits should not be defined for those types? Indeed, that is what I wanted, but somehow, I was unable to make it work. Let me try again, maybe it was due to another bug that I have fixed in the meantime...

Owner

Yes, I think that having a scaling_traits specialization for treat_as_floating_point and !treat_as_floating_point would be a better solution here. It would use the same interfaces that the user has to use to make things like this work. We do not need any additional dispatch here.

In case of problems with constraints subsumption, we could expose proper concepts to make it work.

If there is no way to do it as above, then we could implement this in a similar way to CPOs (Niebloids).

Contributor Author

The latest revision here now indeed works with a scaling_traits<From, To>, where To is optional. I provided an implementation for treat_as_floating_point (assuming that multiplication with a standard floating-point value is correct for all of those) and another one for "integer-likes" (standard integers, plus types which have a nested value_type that is a standard integer and which interconvert with it). Note that this does not cover all representation types (those which are neither treat_as_floating_point nor integer). That is on purpose. Any non-floating-point representation needs precise knowledge about range and precision, which we don't have. So requiring the user to provide a specialisation is the only right path here IMHO.

Comment on lines 204 to 216
template<typename To, Magnitude M, typename From>
  requires detail::HasScalingTraits<To, From> ||
           requires(const From& value) { value.scale(std::type_identity<To>{}, M{}); }
constexpr To scale(std::type_identity<To> to_type, M scaling_factor, const From& value)
{
  if constexpr (requires {
                  { value.scale(to_type, scaling_factor) } -> std::convertible_to<To>;
                }) {
    return value.scale(to_type, scaling_factor);
  } else {
    return detail::select_scaling_traits<To, From>.scale_from(scaling_factor, value);
  }
}
Owner

With the above suggestions, this will simplify to:

Suggested change
template<typename To, Magnitude M, typename From>
  requires requires { typename scaling_traits<To, From>; } ||
           requires(const From& value) { value.scale(std::type_identity<To>{}, M{}); }
constexpr To scale(std::type_identity<To> to_type, M scaling_factor, const From& value)
{
  if constexpr (requires {
                  { value.scale(to_type, scaling_factor) } -> std::convertible_to<To>;
                }) {
    return value.scale(to_type, scaling_factor);
  } else {
    return scaling_traits<To, From>.scale_from(scaling_factor, value);
  }
}

Comment on lines 222 to 223
if constexpr (requires { value.scale(scaling_factor); }) {
return value.scale(scaling_factor);
Owner

I do not think that T::scale is a good idea. I do not know of any representation type that provides such a member function today. Also, requiring representation types to add it tightly couples them with mp-units, which is not good. I would strongly prefer to have some non-member or type_traits-based customization only.

Contributor Author

Fine with me. Explicit specialisations are sometimes a bit painful to do because of the namespaces. That said, there are plenty of precedents in the standard library. ADL is a bit more convenient, and modern approaches would go through a CPO. On the other hand, I would actually prefer not to dispatch on the magnitude type at all, because if a different implementation were chosen for different magnitudes by accident, that would be very confusing. Instead, all paths for different magnitudes should remain close together and under the control of the same "customiser". Thus, I think I'll give it another shot with just an explicit specialisation of mp_units::scaling_traits<From, To=UnspecifiedRepresentation>.

Owner

ADL is not a good choice here, as someone may want to provide a customization for an existing third-party type, and this would require injecting some stuff into the third-party namespace, which may not be allowed, as in the case of std.

* return an instance of @c Rep that approximates `scaling_factor * value`, another element of $\mathcal{V}$.
* This needs to be defined at least for `From = Rep`, as well as any other representation
* types for which interoperability is desired.
* - `template <typename To, Magnitude M> static constexpr auto scale(M scaling_factor, const Rep &value)`:
Owner

I do not clearly understand what the difference is here. In both examples in scaling.h you have exactly the same implementation. Do we need two functions here? Maybe one function would be enough with proper template parameters?

If we need both, then shouldn't those be named scale and scale_to instead?

Contributor Author

We just use one of them so far, but I think we should later be using both. See the comment above: #615 (comment)

As for naming: naming is hard. Within the previous arrangement, where Rep in the scaling_traits<Rep> could take either the From or the To role, I concluded that when there is no To specified, scale is a good name, but when Rep is the To, the From type comes from the argument, so scale_from is correct because the Rep "scales from" the argument. I agree though, it's confusing. With scaling_traits<From, To=Unspecified>, scale is the only reasonable name.


template<typename T>
concept MagnitudeScalable = detail::WeaklyRegular<T> && requires(T a, T b, std::type_identity<T> to_type) {
{ mp_units::scale(mag<1>, a) };
Owner

We could check the return type here as well. For example, WeaklyRegular could be better than nothing here.

@mpusz
Owner

mpusz commented Nov 11, 2024

@mpusz
Owner

mpusz commented Nov 11, 2024

Also, I presume mp-units/bits/scale.h is anyway the wrong place to export it from, and it should go somewhere into mp-units/framework/ instead, correct?

Yes, bits/... are implementation details and should not contain anything that is the library's public API. This is why all of them start with namespace mp_units::detail {.

@@ -201,6 +199,8 @@ concept HasScalingTraits = !std::convertible_to<decltype(select_scaling_traits<T

} // namespace detail

MP_UNITS_EXPORT_BEGIN
Owner

As mentioned before, we should not provide public API in .../bits/....

Comment on lines 38 to 40
namespace mp_units {

namespace detail {
Owner

Suggested change
namespace mp_units::detail {

The above is shorter and makes it harder to export public APIs from .../bits/....

JohelEGP self-requested a review on November 26, 2024 16:53