mp-units talks #451
-
Thanks for sharing these slides! Without speaker notes, I'm guessing at the point of a couple of slides, but I'm familiar enough with the background context that I think I can fill in most of the blanks. I took some running notes as I read through them. The end result reads like a peer review for a paper (minus any kind of "accept/reject" decision 😉). So, consider me "Reviewer 1". 😁

One comment I'll make ahead of time is that a lot of my comments boil down to, essentially, "wouldn't this problem vanish if the units could just compose?" And a huge selling point of V2 is that... the units can just compose. So I think many of my comments end up in the "already solved" bucket.

**Slide 5**

What does this example have to do with downcasting? Wouldn't you expect to get...

**Slide 6**

It looks like there's more highlighted text underneath the footer.

**Slide 12**

Still, I think this ties into my comments on slide 5: I would expect this to simply be... In the typename from the compiler error, it looks like we get an "unknown coherent unit" of an "unknown dimension". Is this because downcasting is automatically looking up the "preferred" name of the unit with this dimension-and-magnitude, and calling it "unknown unit" if there isn't one?

The way I would state the problem with downcasting is this: it's a global registry for unit names. I think we should be suspicious of global registries both in general and in this specific case. The alternative in a no-preferred-units framework is to simply form new "compound" units from constituent pieces, using named prefixes and arithmetic combinations (there's a toy sketch of what I mean at the end of these notes). This gives us local reasoning, which is nice.

Side note: I was extremely puzzled by all of the...

**Slide 16**

It's unfortunately true that magnitudes made types harder to understand in mp-units. However, no-preferred-units would provide a solution here in a similar way as for the cases above: the resulting unit would be... I think this philosophy is why, even though Au uses essentially the same magnitude implementation, I can't recall the last time I've seen one show up in a compiler error.

**Slide 20**

This reminds me that actually understanding #427 is on my TODO list. 😁 I still suspect that trying to handle quantity "kinds" is scope creep, and that the effect of making the library more difficult to use will outweigh the prevention of bugs. However, I used to be much more confident of this viewpoint before I (superficially) followed the discussion in #427 --- it's moved my priors towards being more optimistic about kinds working out.

**Slide 44**

Over time, solution 3 (quantity references) just looks better and better to me. Also, why did we jump from slide 22 to 44? 😅

**Slide 30**

Is this actual code?

```cpp
speed avg_speed(length l, time t)
{
  return l / t;
}

auto res = avg_speed(120 * km, 2 * h);
```

Wouldn't we need to say...

**Slide 36**

Is "character" in scope for a units library? That makes me a little nervous.

**Slide 42**

Here's one example of why I worry about kinds. The word "length" is overloaded between the "length, width, height" triad and the "generic physical distance" meaning. And if we support kinds, we'll necessarily have both uses! This leads to a width being implicitly convertible to a length, but not vice versa. I find that confusing even as a "units library expert", which makes me worried about how the masses of end users will experience it.

**Slide 54**

I'm confused about recipes: how they work, and what value they provide. Does this mean that we couldn't form a kinetic energy as...

**Slide 82**

I'm confused about this: I thought the whole point of making...

**Slide 90**

Neat! I hope this becomes the standard someday. I look forward to jettisoning the patent absurdity of...

**Slides 109--110**

This is really the main point: V2 is a dramatic leap forward along a wide variety of dimensions --- really impressive stuff! I do have lingering doubts about whether quantity kinds will do more good than harm. Perhaps only time will tell. I think it's quite clear that they're in the regime of diminishing returns (compared to plain vanilla dimensions-and-magnitudes), but how deep exactly is an open question.
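To make the "units just compose" point from my slide 12 and slide 16 comments concrete, here's a minimal toy sketch of the no-preferred-units idea. None of these names (`dim`, `unit`, and so on) are the actual API of mp-units or Au; it's just an illustration of how a compound unit like km/h can be described entirely by its constituent pieces, with no global registry of preferred names:

```cpp
// Toy illustration only -- not the mp-units or Au API. A unit is described by
// its dimension exponents, its magnitude (a std::ratio relative to the
// coherent SI unit), and a symbol spelled from its constituent pieces.
#include <iostream>
#include <ratio>
#include <string>

// Dimension as exponents of (length, time); a real library tracks every base dimension.
struct dim { int length, time; };

template<typename Mag>  // Mag: scale factor relative to the coherent unit
struct unit {
  dim d;
  std::string symbol;   // built compositionally, e.g. "km/h"
};

// Dividing two units subtracts dimension exponents, divides magnitudes, and
// concatenates symbols -- no lookup in a registry of "preferred" named units.
template<typename M1, typename M2>
auto operator/(const unit<M1>& lhs, const unit<M2>& rhs) {
  return unit<std::ratio_divide<M1, M2>>{
      {lhs.d.length - rhs.d.length, lhs.d.time - rhs.d.time},
      lhs.symbol + "/" + rhs.symbol};
}

int main() {
  const unit<std::kilo> km{{1, 0}, "km"};
  const unit<std::ratio<3600>> h{{0, 1}, "h"};

  const auto km_per_h = km / h;  // dimension L/T, magnitude 1000/3600 = 5/18, symbol "km/h"
  using mag = std::ratio_divide<std::kilo, std::ratio<3600>>;
  std::cout << km_per_h.symbol << " = " << mag::num << "/" << mag::den
            << " of the coherent unit (m/s)\n";  // prints: km/h = 5/18 of the coherent unit (m/s)
}
```

The point is only that the result's identity and printable name fall out of the operands themselves, so an error message can say "km/h" instead of naming some anonymous downcast result.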
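And on the slide 54 question about recipes: for whatever it's worth, plain dimensional analysis is indifferent to how an energy is formed; any expression with dimension mass · length² / time² lands on joules:

$$\left[\tfrac{1}{2}\, m v^2\right] = \mathrm{kg} \cdot \left(\frac{\mathrm{m}}{\mathrm{s}}\right)^{2} = \mathrm{kg\,m^2\,s^{-2}} = \mathrm{J}$$

So what I'm really asking is whether a recipe-based system accepts every such formation, or only the ones that were spelled out ahead of time.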
-
I have just regenerated the slides, so hopefully they will now look correct.
-
Let's make this the "kinds discussion" thread! 😄 I think it'll be more coherent to split this off from my generic feedback.
ISO 80000 is a standard for units of measure, and this project aims to add units support to C++ --- another ISO standard. Thus, it's only natural that we should strive to follow ISO 80000.

I'm not convinced this reasoning is correct, though. If a project wants to adhere to ISO 80000, it's an open question how much of that should be handled by the software, and how much should be handled "manually" or by other means. Of course, this question also applies to the very idea of a units library itself! A program using raw numeric types is perfectly valid (it's just a lot more effort to get right).

I think the value proposition for "layer 1" of a units library --- that is, "pure" quantity calculus, dimensions-and-magnitudes --- is a slam dunk for most use cases. It robustly prevents the most common and pernicious category of errors (namely, "right-dimension-wrong-magnitude"). What's more, the system is easy to learn and reason about, and as far as I know it never produces a wrong answer or a need to "fight" the system.

"Layer 2" --- that is, subdivision into "kinds" --- I'm less convinced about. It should be completely uncontroversial that we've entered the regime of diminishing returns: a hierarchy of kinds adds complexity for end users, and I expect that in practice, errors-of-kind are far less common (and still less damaging) than "vanilla" unit errors. So it's mainly a question of how deep into that regime we are, and whether the costs have begun to outweigh the benefits.

Why am I worried about the costs and complexity? Take unlib as one example: support for "kinds" (called "tags" in that library) was an essential feature, and its absence in other libraries was a dealbreaker. And yet, we have this passage:

What this means to me is: "the system readily produces nonsense results, and you will routinely have to fight it". I think the root cause is a system of tags that gives useful results in certain cases --- e.g., it's satisfying that reactive power multiplied by time gives reactive energy! --- but which generalizes poorly outside of those very special cases. Again, this is compared to "bare" quantity calculus, which is mature, easily learned and understood, and never produces nonsense.

Now, tags are unlib's approach to kinds, not mp-units' approach. Maybe mp-units fares better. In fact, from reading the slides, I suspect it probably does! It seems like this is exactly the kind of use case that "recipes" are intended to address --- is that right? If so, I still think there's the broader question of whether this novel system produces nonsense results in other cases. I think the only way to answer that is with the kind of accumulated experience that can only come from broad deployment, at scale, in diverse use cases.

And this creates significant schedule risk, which I don't think has been called out before. V2's system of kinds is impressive and exciting, but it also didn't exist before 2023. I'm worried that we won't be able to accumulate the experience we need to answer that question in time for the committee to feel comfortable including it in C++26.

My goal in saying all this is not to argue that kinds are wrong and should be removed. I don't actually know the answer to that. Rather, my goals are to point out that:

Or, put more simply: there's an alternative design target for mp-units which does dimensions-and-magnitudes only, sacrificing strong types for widths, radii, etc. in order to reduce a) the learning curve, and b) the risk of standardizing a system that has never been tried in software at scale.

If this alternative gets rejected because we found that the "kinds" layer does pass a cost/benefit analysis, great! I could never have designed such a solution, and my hat's off to @mpusz for doing so (it's really impressive work no matter the ultimate outcome). I just want to make sure that we explicitly realize that there is a choice to be made.
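To illustrate the learning-curve cost I keep coming back to (the width/length example from slide 42 of the ACCU deck), here's a self-contained toy sketch. These types are hypothetical and deliberately not mp-units' actual API; they only show the asymmetry that any kind hierarchy implies: every width is a length, so width-to-length converts implicitly, while the reverse requires an explicit opt-in.

```cpp
// Toy illustration only (hypothetical types, not mp-units' API) of the
// asymmetric convertibility implied by a hierarchy of kinds.
#include <iostream>

struct length_q {
  double value_in_m;  // a generic "physical distance" in metres
};

struct width_q {
  double value_in_m;
  // Every width is a length, so the "up" conversion is implicit.
  operator length_q() const { return {value_in_m}; }
};

// Not every length is a width, so the "down" direction needs an explicit, named cast.
width_q kind_cast_to_width(length_q l) { return {l.value_in_m}; }

// Accepts any length-kind quantity.
double perimeter(length_q a, length_q b) { return 2 * (a.value_in_m + b.value_in_m); }

int main() {
  width_q w{2.0};
  length_q l{5.0};

  std::cout << perimeter(w, l) << "\n";  // fine: a width is usable as a length
  // width_q w2 = l;                     // does not compile: a length is not implicitly a width
  width_q w2 = kind_cast_to_width(l);    // explicit opt-in for the narrowing direction
  std::cout << w2.value_in_m << "\n";
}
```

The rule itself is defensible; my worry is that it's one more asymmetry end users have to internalize on top of ordinary dimensional analysis.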
-
ACCU recordings are still not available, but my shorter talk on the same subject from the "using std::cpp 2023" conference in Madrid appeared today. Here is the link: https://www.youtube.com/watch?v=3XSVCmWQklI.
-
Here's the talk's link: https://www.youtube.com/watch?v=l0rXdJfXLZc.
-
Slide 15:
-
Slide 47: https://youtu.be/l0rXdJfXLZc?t=2402.
-
Slide 67:
-
My latest and greatest talk from C++ on Sea 2023 is now online: https://www.youtube.com/watch?v=eUdz0WvOMm0.
-
The latest mp-units video, from C++ Online 2024, is now online.
-
Hi,
I just gave a talk about V2 at the ACCU 2023 conference. You can find the slides here: https://github.com/train-it-eu/conf-slides/blob/master/2023.04%20-%20ACCU%202023/mp-units%20-%20Lessons%20learned%20and%20a%20new%20library%20design.pdf. In case you have any feedback, thoughts, or questions, please do not hesitate to raise them here.