From 50fc6197e59a5e6b0e49fb6594225203604925ee Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Nicol=C3=B2=20Ribaudo?= Date: Thu, 7 Nov 2024 17:50:08 +0100 Subject: [PATCH] Fix my comments in 2024.10 notes --- meetings/2024-10/october-08.md | 12 ++++++------ meetings/2024-10/october-09.md | 12 ++++++------ meetings/2024-10/october-10.md | 27 +++++++++++++-------------- 3 files changed, 25 insertions(+), 26 deletions(-) diff --git a/meetings/2024-10/october-08.md b/meetings/2024-10/october-08.md index 2892b9d..ecd29cc 100644 --- a/meetings/2024-10/october-08.md +++ b/meetings/2024-10/october-08.md @@ -238,7 +238,7 @@ JKP: I am not 100% sure I understand and maybe if NRO is on the call could respo MM: So specifically, the approval is not asking that any technical content to be approved to become normative today? It’s that the process you have outlined is a good process to go forward, which I am very happy. -NRO: Okay. So the list has had normative changes. The last primitive change was many months ago. And the rest of work we have been doing is just clarifying previously unclear parts of the spec. This is similar to the approval we usually seek around—for ECMA262, we needed to say this is done. Let’s now, as a committee, take to ECMA GA for publishing. So we need now to understand saying TC39 is happy with this document being published as a spec. +NRO: Okay. So the spec has had normative changes. The last normative change was many months ago. And the rest of work we have been doing is just clarifying previously unclear parts of the spec. This is similar to the approval we usually seek around—for ECMA262, we needed to say this is done. Let’s now, as a committee, take to ECMA GA for publishing. So we need now to communicate that TC39 is happy with this document being published as a spec. RPR: So to be clear here, Mark, John is proposing the proposal, but this is also TC39’s confirmation point that this specification is ready to proceed. @@ -256,7 +256,7 @@ DE: Yeah, that sounds great. MM: Okay. -NRO: Yes. Specifically the last day we have to send the notice to the GA to a six-day period I think Thursday evening, because the GA will be on December 11th and we want to do it two months in advance. +NRO: Yes. Specifically the last day we have to send the notice to the GA to start a sixty-day opt-out period, I think Thursday evening, because the GA will be on December 11th and we want to do it two months in advance. MM: Okay, okay, thank you. I’ll have something to say on Thursday, thank you. @@ -270,7 +270,7 @@ MM: Okay, good. That was actually my main worry. Okay, great. Okay. So without t CM: So I think I also missed some stuff here. And this is more of a question about process, I don’t have any particular concerns or objections about these specific standards efforts. But it’s just interestingly different from the way the TG2 process works, where essentially all of the internationalization stuff ends up getting run in excruciating detail through the plenary, and I don’t think we’ve seen much, if any, of the detail of the source maps stuff running through plenary. I don’t have any objection to the way you’ve done it, I actually prefer it. But I’m curious why the process is different and if this different process is acceptable to everyone where the people are actually interested in the source map stuff just engage directly with the TG4 process, why the same logic is not applied equally to TG2? 
-NRO: We discussed this when TG4 was created and the reason for deferred process and it’s not something you can do within code. It’s just something that is implemented in, like, in develop tools that live in JavaScript execution, and there was—the committee was happy with giving more freedom for this reason to the TG4, also because most delegates were not as, I’d say, interested in source map as what TG2 does, so this has the result of not taking too much committee time.
+NRO: We discussed this when TG4 was created, and the reason for the different process is that it’s not something you can observe within JavaScript code. It’s just something that is implemented in, like, in developer tools, and there was—the committee was happy with giving more freedom for this reason to TG4, also because most delegates were not as, I’d say, interested in source maps as much as in what TG2 does, so this has the result of not taking too much committee time.

CDP: Yeah, I just wanted to mention, these are all fair comments, but they did specifically bring this to committee and get consensus for this method of working.

@@ -288,7 +288,7 @@ NRO: License problems is not about the test, it’s about the spec itself.

DE: I see. Yeah, sorry.

-NRO: Because it’s the comments and we are trying to re-license it to BSD, which Ecma uses. I—given that there is the 60-day time period in which company cans say we are not okay with content being contributed by us, I don’t think we need to block on that, but I’m not super sure on how these rules work.
+NRO: Because it’s the comments and we are trying to re-license it to BSD, which Ecma uses. I—given that there is the 60-day time period in which companies can say "we are not okay with content being contributed by us", I don’t think we need to block on that, but I’m not super sure on how these rules work.

DE: I think Ecma’s BSD license is usually used just for software, and not for the spec text. So let’s be intentional about this. Yeah, I’m hopeful that it’s, as you say, because the contributing organizations are all present, then it should be okay with respect to the spec text.

@@ -415,7 +415,7 @@ WH: I agree that this thing should throw on non-iterables. On the question of ea

RPR: Thank you, WH. It’s a weak preference of 2 over 1. NRO prefers throwing.

-NRO: Yeah, I—I think we should try here, because, like, it’s very likely—well, it’s definitely possible the user will make mistake by passing a non-traveler, and it’s based on what they want if we wrap it in array brackets. I prefer throwing eagerly, just because that thing will throw eventually, and it’s better to throw consistently rather than just depending on how exactly to use iterator. So that maybe at different times us you have different values and maybe it throws different times in the test.
+NRO: Yeah, I think we should throw here, because, like, it’s very likely—well, it’s definitely possible the user will make a mistake by passing a non-iterable, and it’s a guess at what they want if we wrap it in array brackets. I prefer throwing eagerly, just because that thing will throw eventually, and it’s better to throw consistently rather than just depending on how exactly you use the iterator. So that maybe at different times you have different values and maybe it throws at different times in the test.

RPR: Thank you. YSV.

@@ -1160,7 +1160,7 @@ CDA: Where I have heavy tooling is the least preferred developer experience/envi

YSV: JSSugar is looking at one facet of the problem, which is the syntax side.
But more broadly, how are we introducing new things into the language? How we benefit things in the I think beiges, I call it a waiting room for features. Developers can start using the feature. And we can get data about how the feature is being used, we have the rigor and dedication to backwards capability. And basically, the same way we do things. I still have more time to bring this proposal to committee and once it’s ready, this is something we can also discuss. But just to make it really clear. JSSugar is not intended to solve the entire scope of the problem statement we presented. There’s room for others to bring their suggestions. This isn’t the only way to approach this problem -NRO: Yeah. You mentioned that JSSugar has to be designed, keeping in mind lack of experience and making sure that work with the maps. There’s very far from being something that can be transpiled. Like we don’t know yet—like in which to improve because there’s so many. We need a little bit of time. I don’t know what your ideal timeline here would be, but I don’t think we can answer any time soon, if you want this JSSugar, JS 0 to be from developers +NRO: Yeah. You mentioned that JSSugar has to be designed, keeping in mind lack of experience and making sure that work with the source maps. There’s very far from being something that can be transpiled. Like we don’t know yet—like in which to improve because there’s so many. We need a little bit of time. I don’t know what your ideal timeline here would be, but I don’t think we can answer any time soon, if you want this JSSugar, JS 0 to be from developers SYG: I would say Yulia’s topic may be directly related to this. I will let her go diff --git a/meetings/2024-10/october-09.md b/meetings/2024-10/october-09.md index 61feba0..6df7476 100644 --- a/meetings/2024-10/october-09.md +++ b/meetings/2024-10/october-09.md @@ -639,7 +639,7 @@ JMN: We just wanted to leave some room for discussion. JHD already kicked off so SFC: Great, yeah. Thanks for the presentation, JMN. I had another item that disappeared, so I’ll cover that one here too. But I guess my main—I guess my first comment here is I feel like we’re getting ahead of ourselves by trying to figure out, like, the constraints of a primitive type and operator overloading and that type of thing, like, in a world that is at some point in the future, and we should focus on we have now, right? We should focus on building a really good, you know, object that does decimal 128, right, and sort of decouple these things, and the number with precision proposal, like, sort of is a step in that direction, but, like, I think we should really be thinking a lot about, you know, sort of focusing on that now, and leaving open the door for a completely standalone decimal proposal in the future from—by not forcing those invariants, because the things that, you know, MM and JHD and others have said that, like are requirements for primitive are absolutely not requirements for the object, and then it’s it sort of makes the object less capable when objects are, by definition, something that are more capable than primitives can be. -NRO: I think if you wanted to drop the primitives in the future, we should be—I mean, it’s a world in which we don’t have two different decimals. Where one is just, like, not just—like, the object wrapped around another. Because that will be a trigger for numbers in the language. 
And while today we don’t see a way to have primitives, I would still like to consider that possibility to be realistic one day, and having two different decimal types is just, like, not something that I wish for JavaScript to give us. +NRO: I think if you wanted to support the primitives in the future, we should. I mean, it’s a world in which we don’t have two different decimals. Where one is just, like, not just—like, the object wrapped around another. Because that is how other numbers in the language. And while today we don’t see a way to have primitives, I would still like to consider that possibility to be realistic one day, and having two different decimal types is just, like, not something that I wish for JavaScript to give us. SFC: Yeah, I don’t mean to have all my topics right next to each other, but, yeah, I mean, I would like to see these proposals, like—it—I don’t want to be in a situation where we base—like, where like, you know, the champions of this proposal basically said, like, oh, we have this other possible approach for numerics with precision, you can go solve it this way, go, shoo, get out my way, I’m wanting to move—force my way forward with this decimal proposal in a way we know doesn’t satisfy your requirements. I don’t want to be in that situation. I want to be in a situation where we have—where we’re introducing into the language a, you know, well-rounded solution to how you represent decimal numbers, right? And, you know, that—where we really think about the big picture. Because if we—if we force through this weird decimal object thing that’s designed for a possible future primitive, like, you know, like, what does that mean for, you know—for the—the ability to represent precision and other—you know, its impacts on insill. That seems silly on its face. So, like, if we want to move into this direction of having two objects, I wanted to see those coupled. I don’t want to see at some some future stage proposal, I want to see that together in one slide, like, here is how you can do these things, so that’s my next topic. @@ -649,7 +649,7 @@ NRO: I can reply first before you move. SFC: Sure, we have a reply. Okay, good. -NRO: So would you be okay—like, would but okay with having two separate proposals moved together? I believe that the concerns you’re trying to solve for decimal, while overlapping, they’re, like, we need to focus on different aspects in order to—for the two use case. So we’d be okay with two separate proposals that move step by step, like, side by side in the process instead of actually merging them. +NRO: So would you be okay—like, would but okay with having two separate proposals moved together? I believe that the concerns you’re trying to solve for decimal, while overlapping, they’re, like, we need to focus on different aspects for the two use case. So we’d be okay with two separate proposals that move step by step, like, side by side in the process instead of actually merging them. SFC: I mean, that’s a procedural concern. I mean, if there are two separate proposals, it means the committee could agree put one to Stage 2 and the other one, this is a silly proposal, I don’t any think it belongs in the language. We’re not going to get it to Stage 2. That is why I would be concerned about that. @@ -659,7 +659,7 @@ SFC: I say that’s a question for JMN, not for me. JMN: Yes, that’s right. And I take SFC’s point that—as well that this might be a kind of awkward object. 
But I guess the issue is, then, I mean, yes, I guess narrowly speaking we trying to not make it impossible. But if there’s some other—other elements that we’re missing here, I think we’re happy to incorporate them. -NRO: Yeah, so the proposal is not designed to behave exactly as the object of a primitive. Like, it behaves exactly like the number object, for example, except it has the various prototype methods. In another potential world that was the idea was pushing back against earlier where we have some type of decimal object now, and we say that doesn’t actually preclude us from having a primitive, the future primitive, just a different way. Like, something unrelated. I believe that’s the, like, worldview that you might consider to be just designed to not conflict with a primitive. While the one now, where we—the one I was, like, it’s designed to fit nicely with the decimal primitive. +NRO: Yeah, so the proposal is not designed to behave exactly as the object of a primitive. Like, it behaves exactly like the number object, for example, except it has the various prototype methods. In another potential world that was the idea was pushing back against earlier where we have some type of decimal object now, and we say that doesn’t actually preclude us from having a primitive, the future primitive, just a different way. Like, something unrelated. I believe that’s the worldview that you might consider to be just designed to not conflict with a primitive. While the one now, where we—the one I was, like, it’s designed to fit nicely with the decimal primitive. SFC: So, yeah, my next topic, so, you know, I appreciate that, you know, JMN and NRO have been, you know, trying to assuage my concerns with this over numerics with precision. But I just want to emphasize here, and I haven’t, you know, in my discussions with NRO and JMN and other, I haven’t really heard these topics be addressed, that, like, this the not just an Intl concern about precision. This concept of being able to represent, you know, the numbers with variable levels of precision has many other use cases, and I’ll just list a few here. So one is, you know, when we look at, you know, precedent in other decimal libraries and other languages, it’s, you know, basically the standard that, like, every other decimal or big decimal library you find in another programming language support this concept, and it’s been that way for a very long time. And given that—given that precedent, it means that, you know, I think JMN sort of—I very much disagree with the slide about halfway through that said, like, the 1, 2, 3 with big green check boxes. That’s just patently false, because we cannot round values to these other platforms if we don’t support precision in the data model. @@ -671,7 +671,7 @@ SFC: Now, all that said, I’m, you know, as I said previously, I’m very much CDA: All right, just noting we’ve got about 15 minutes left. There’s a reply from NRO. -NRO: Yeah. So actually I kept it in the queue moving and I want to reply different to SFC here. One about deviation from IEEE 754. Not in this case, where, like, the only deviation here is they’re exposing less info. For example, the equality within the spec already, it’s the number of trailing zeros, and the only thing we’re doing here is avoiding that precision to the user. Like, all the other—like, the way operation work, the way numbers round, like, we don’t have infinite precision here. So addition to numbers will round, and that will also round exactly how it’s defined at IEEE spec. 
Regarding the slide with the three check marks, yes, you’re right, this should have been a somewhat yellowish check mark. There are many cases where you actually do that exchange and talk about money is and do not care about, like, with exact precision number, because very often you are, like, working with some fixed precision, you’re working with dollars, you have fixed precision of two decimal digits. But it’s—it is true if cases where you do care about precision as defined by, like, about a number of years, in this case, this proposal does not cover it. +NRO: Yeah. So actually I kept it in the queue moving and I want to reply different to SFC here. One about deviation from IEEE 754. The only deviation here is they’re exposing less info. For example, the equality within the spec already, it’s the number of trailing zeros, and the only thing we’re doing here is avoid exposing that precision to the user. The way operations work, the way numbers round, like, we don’t have infinite precision here. So addition to numbers will round, and that will also round exactly how it’s defined at IEEE spec. Regarding the slide with the three check marks, yes, you’re right, this should have been a somewhat yellowish check mark. There are many cases where you actually do that exchange and talk about money is and do not care about, like, with exact precision number, because very often you are, like, working with some fixed precision, you’re working with dollars, you have fixed precision of two decimal digits. But it is true if cases where you do care about precision as defined by, like, about a number of years, in this case, this proposal does not cover it. NRO: Yeah, and that’s kind of like what we tried to get at with the first slide of the numeric values with precision idea. And that – @@ -1122,7 +1122,7 @@ SYG: Not event subclasses. But event instances. During the time of an applicatio ABO: If the page itself does not use AsyncContext at all, then the way I imagine is that you would always store the registration context. But those would point to the same internal mapping, which would be an internal object in the engines. And whether you create a JavaScript object from that—there was a possibility that you would have that property be a getter actually and only create the JavaScript object if that is needed. -NRO: So for context I help with the proposal, this proposal is now mostly blocked on figuring out exactly how events should work. When we proposed the idea of running the callback in the context where the event listener was registered, we received feedback that in some cases it’s desirable to run the callback in the context where the event was dispatched, if there is any. Mostly from framework authors, but also because this is how, for example, task attribution works in Chrome or how the `console.createTask` API works in Chrome. However, we believe one of the reasons we went with the registration time context was because it is easier to implement. Because you just need to capture that without propagating the snapshot through async steps in engines. +NRO: So, for context, I help with the proposal, this proposal is now mostly blocked on figuring out exactly how events should work. When we proposed the idea of running the callback in the context where the event listener was registered, we received feedback that in some cases it’s desirable to run the callback in the context where the event was dispatched, if there is any. 
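A minimal sketch of the two options being contrasted here, using the AsyncContext proposal’s draft `AsyncContext.Variable` API (nothing below is shipped; `button` is a hypothetical DOM element, and which of the two logged values you get is exactly the open question):

```js
const task = new AsyncContext.Variable();

// The listener is registered while `task` is set to "setup".
task.run("setup", () => {
  button.addEventListener("click", () => {
    // Registration-time semantics would log "setup";
    // dispatch-time semantics would log whatever value was active when the
    // event was dispatched ("user-click" below, or undefined for a real
    // user gesture).
    console.log(task.get());
  });
});

// The event is dispatched while `task` is set to "user-click".
task.run("user-click", () => {
  button.dispatchEvent(new Event("click"));
});
```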
Mostly from framework authors, but also because this is how, for example, task attribution works in Chrome or how the `console.createTask` API works in Chrome. However, we believe one of the reasons we went with the registration time context was because it is easier to implement. Because you just need to capture that without propagating the snapshot through async steps in engines.

NRO: So I know we cannot get an answer for this now, but it would be great if the browser people here could talk with their DOM colleagues to understand the feasibility of actually propagating the dispatch time snapshot for async events. For example, if I have an `XMLHttpRequest`, how feasible is it to propagate the context from when `.send()` is called to when the event listener is run? Because once we figure out what to do exactly with the events and once we have an answer people are happy with, then we can advance the proposal to 2.7.

@@ -1434,7 +1434,7 @@ YSV: Yes. We are much more nervous about—like, for example, what could potenti

RPR: Nicolo?

-NRO: Yeah. You said that core JS hits the path because it checks the species and polyfills it. So like given that I don’t understand why we’re concerned about core JS. The polytriggers, so web with old versions of correspond JS would still have that
+NRO: Yeah. You said that core JS hits the path because it checks the species and polyfills it. So like given that I don’t understand why we’re concerned about core JS. The polyfill triggers it, so websites with old versions of it would still have that polyfill.

YSV: So the problem comes from the fact that we are not certain about how much this is actually core JS and because we’re talking about millions of page views, this is going to be a lot to check by hand. In order to validate that things are not going to break. It’s a massive amount of work we’re talking about.

diff --git a/meetings/2024-10/october-10.md b/meetings/2024-10/october-10.md index d5e3ae2..4c032aa 100644 --- a/meetings/2024-10/october-10.md +++ b/meetings/2024-10/october-10.md @@ -593,16 +593,16 @@ Presenter: Nicolò Ribaudo (NRO)

- [proposal](https://github.com/tc39/proposal-defer-import-eval/) - [slides](https://docs.google.com/presentation/d/1yFbqn6px5rIwAVjBbXgrYgql1L90tKPTWZq2A5D6f5Q/edit)

-NRO: Okay. Thank you. Okay, so this is our Stage 2.7 update for the deferred import evaluation. So we were running tests, and we’re in parallel working on experiment 8 implementation. But just to make sure that these steps are all correct. And spoiler, we found some bugs. So just to very quickly recap, I’m not going to go through how it works at this point, but, like, what’s important for this presentation is to remember when a deferred module is evaluated. It’s not deferred when you evaluate. That’s the whole point. It’s not evaluate when you reference the deferredness space. It’s only deferred when you rate it. Specifically, when you try to read a string property, it always triggers evaluation regardless of whether that string is an actual export of the module or not. And reason in modules when accessing unknown export names is that tools or platforms that can skip actually in the module list don’t need to pre-collect a list of exported names, and also so that the behavior evaluation—you can have a function that says with this property access to the relation and return. When you get a symbol, that never triggers evaluation, because modules cannot export symbol key properties, so we already know that the property will not exist anyway.
This can be determined without PTM concept of modules, and this means, for example, you can get the symbol toStringTag, symbol is present on to object you have or not. For the other object operation, it depends on whether the internal can get or not. So by implementing this, we found that [[get]] is called less frequently than what is assumed when writing the spec text.
+NRO: Okay. Thank you. Okay, so this is our Stage 2.7 update for the deferred import evaluation. So we were running tests, and we’re in parallel working on an experimental implementation. But just to make sure that these steps are all correct. And spoiler, we found some bugs. So just to very quickly recap, I’m not going to go through how it works at this point, but, like, what’s important for this presentation is to remember when a deferred module is evaluated. It’s not evaluated when you import it. That’s the whole point. It’s not evaluated when you reference the deferred namespace. It’s only evaluated when you use it. Specifically, when you try to read a string property, it always triggers evaluation regardless of whether that string is an actual export of the module or not. And the reason we also trigger on unknown export names is that tools or platforms that can skip actually loading the module don’t need to pre-collect a list of exported names, and also so that the behavior of evaluation — you can have a function that says "access this random property to trigger evaluation". When you get a symbol, that never triggers evaluation, because modules cannot export symbol key properties, so we already know that the property will not exist anyway. This can be determined without looking at the internals of modules, and this means, for example, you can check whether the symbol toStringTag is present on the object you have or not. For the other object operations, it depends on whether the internal method calls [[Get]] or not. So by implementing this, we found that [[Get]] is called less frequently than what is assumed when writing the spec text.

-NRO: So, for example, [[GetOwnProperty]] we first check if it’s an unexplored name and then call get. So it behaves differently. And this means that actually only evaluating get, we don’t get the benefits because we can just use another of the object methods and we’ll assume it to nod list of expert names, so the processed solution here is always check deferred name of exported names. This means that also if, for example, trying to reflect on keys with the relation, object keys are already evaluation because it calls get internally, but there are other ways to get list of keys from an object that do not call get internally, so it would be to get it covered. So that the rules becomes whenever you try to query properties about the object, if this properties depend on the contents of the module, the module would be evaluated. The end second problem we found was that import.defer, which is the dynamic form proposal, does not truly work as we were expecting it to work. So if you have this example here, when would you expect evaluation to happen? Well, it happens there, in the import of defer call. And not when you actually write something await space, and this is, like, completely against the proposal. There’s not deferred execution going on here. And why is this happening? Well, if we try to just, like think of how import defer works and in the code, import defer, we create a promise, we load the module, load the premises and then resolve the premise with the deferred namespace object.
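As a rough illustration of the access rule recapped above (the module specifier is hypothetical and the syntax is the proposal’s `import defer`; this is a sketch, not spec text):

```js
import defer * as mod from "./heavy-module.js"; // loaded and linked, but not evaluated yet

mod[Symbol.toStringTag]; // symbol key: never triggers evaluation
mod.someExport;          // string key: triggers evaluation, even if "someExport"
                         // is not actually exported by the module
Object.keys(mod);        // per the update above, this ends up triggering too,
                         // because it calls [[Get]] internally
```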
And how does resolve work? Well, resolves check if the given value has a property and if it is thenable, otherwise it actually resolves the promise with the values it’s given. And it’s this check that’s causing problems. We’re performing of get of this `then` property on the namespace object, and this always triggering evaluation of the module before the promise to import the deferred is resolved.
+NRO: So, for example, for [[GetOwnProperty]] we first check if it’s an exported name and then call [[Get]]. So it behaves differently. And this means that if only [[Get]] triggers evaluation, we don’t get the benefits, because you can just use another of the object methods and you would still need to know the list of export names, so the proposed solution here is to always trigger, regardless of exported names. This means that also if, for example, you try to `Reflect.ownKeys` it does not trigger evaluation, while `Object.keys` triggers evaluation because it calls [[Get]] internally, but there are other ways to get the list of keys from an object that do not call [[Get]] internally, so we want to get those covered. So the rule becomes: whenever you try to query properties of the object, if those properties depend on the contents of the module, the module will be evaluated. The second problem we found was that `import.defer`, which is the dynamic form of the proposal, does not truly work as we were expecting it to work. So if you have this example here, when would you expect evaluation to happen? Well, it happens there, in the `import.defer` call. And not when you actually write something like `await namespace`, and this is, like, completely against the proposal. There’s no deferred execution going on here. And why is this happening? Well, if we try to just, like, think of how `import.defer` works internally: we create a promise, we load the module, load the dependencies and then resolve the promise with the deferred namespace object. And how does resolve work? Well, resolve checks if the given value has a `.then` property and if it is callable, otherwise it actually resolves the promise with the value it’s given. And it’s this check that’s causing problems. We’re performing a get of this `then` property on the namespace object, and this always triggers evaluation of the module before the promise returned by `import.defer` is resolved.

-NRO: What can we do about this? I’m going to list some solution here. And to be clear, I dislike all of them. But let’s go ahead and let’s see. So the first solution, like, that we could apply is that [[Get]] on the evaluation for known names, and these is probably good for some platforms, but it means that both tools and platforms that would like to skip actually a lot of the module will collect data, and so it makes implementation more difficult. And also there would be no way to trigger modules that have no exports. Maybe we could make import defer if a model is linking ever, but that will be—that was suggested in the past and was considered to be possibly surprising.
+NRO: What can we do about this? I’m going to list some solutions here. And to be clear, I dislike all of them. But let’s go ahead and let’s see. So the first solution, like, that we could apply is that [[Get]] only triggers evaluation for known export names, and this is probably good for some platforms, but it means that both tools and platforms that would like to skip actually loading the module will have to collect data, and so it makes implementation more difficult.
And also there would be no way to trigger modules that have no exports. Maybe we could make import defer if a module is linking ever, but that will be—that was suggested in the past and was considered to be possibly surprising.

-NRO: Okay. Then another solution is that [[Get]] specifically `then` the property on the trigger evaluation if there’s an export, which is the same, like, unknown as both. There is just moment that you collect the module. But it could be okay because you’re just bound to one property and not to however many exports the model has. A third solution is that [[Get]] and `then`
+NRO: Okay. Then another solution is that [[Get]] specifically of the `then` property only triggers evaluation if there’s actually a `then` export, which is the same, like, unknown as both. There is just moment that you collect the module. But it could be okay because you’re just bound to one property and not to however many exports the module has. A third solution is that [[Get]] of `then`
never triggers evaluation. So getting `then` from a namespace, from a deferred namespace would return undefined, but then if it triggers evaluation through some other way, like through some other property, then dot dev property would show the value of the then export.

-NRO: Another solution considered, and that we was, like, add some sort of symbol dot thenable to the defer model namespace and the promise protocol would check the symbol of thenable before checking if there’s a then property. This was proposed in the past for non-deferred module namespaces. But it was rejected because it was not web compatible. And at the time, it was already shipping.
+NRO: Another solution considered, and that was, like, to add some sort of `Symbol.unthenable` to the deferred module namespace, and the promise protocol would check the `Symbol.unthenable` before checking if there’s a `then` property. This was proposed in the past for non-deferred module namespaces. But it was rejected because it was not web compatible. And at the time, it was already shipping.

NRO: And there are other solution if there’s an object that has a property that points to the deferred namespace, so that the—so that even if the promise proposal tries to go, nothing happens. Or maybe we just don’t do `import.defer()` at all. This syntax that we went for with import defer was designed to be—to be compatible across this different, like, module loading layers, so it would actually stick with that and make sure there’s a symbol that the developers have to learn what the syntax is, if you have syntax error, like, per phase, and lastly, actually, this animation was as a result of that. We could also just special case defer to model namespaces in the promised protocol and just before checking them, check the protocol namespace, and in that case, just resolve the promise to the top.

@@ -613,7 +613,7 @@ exports from module namespaces, from deferred module namespaces, so trying to re

YSV: So just thinking about why you would want to use a deferred module. You’re trying to pay half of the cost of module ex-substantiation and evaluation up front. So you pay half and then you pay the second half later. It feels like using dynamic import for something like this, I’m wondering, like, I’m wondering how much you’re saving there, because already you’ve deferred the cost of loading and evaluation until a later point.
You’re already in an asynchronous context, if I understand correctly, it’s still going to be async, so you’re not getting the synchronous behavior, which I know is something that’s been requested, back when we doing dynamic import to begin with, they were asking for a synchronous import. As far as I understand, we’re not getting that here either. So it feels like we’re not getting much of benefit from `import.defer` or maybe I’m missing the benefit here.

-NRO: The use case is indeed much smaller than the static import. The only use case I could, like—I so mentioned using `import.defer` is to have, like, a conditional import of the top levels of module, so you could doing like `if(node)` and import this and import that and after the module, you—while still then deferring the relation of the module two later when needed in some other synchronous path. You still get the—some benefit from being able to do a synchronous relation, but, yes, it’s true that the use case is much smaller here.
+NRO: The use case is indeed much smaller than the static import. The only use case I could, like—I also mentioned: using `import.defer` is to have, like, a conditional import at the top level of a module, so you could do like `if (isNodeJS)` and import this and import that and after the module, you—while still then deferring the evaluation of the module to later when needed in some other synchronous path. You still get the—some benefit from being able to do a synchronous evaluation, but, yes, it’s true that the use case is much smaller here.

YSV: Okay, so in this case, what makes me uncomfortable about, like, hiding `then` or shadowing it is then we are changing how users’ modules are going to potentially work. My preference in that case is actually to not do import defer. Unless there’s a really strong reason why we might want to have it.

@@ -629,7 +629,7 @@ ACE: The door isn’t completely closed, so I can still be convinced.

SYG: Say more about why it’s harder—why we close the door more than usual by not doing it right now?

-NRO: I can answer that. So the reason is that we need to decide now how deferred import objects work, even if we do not give an `import.defer` API. So let’s say we do not do `import.defer` and do no special, like, hiding of `then`. Then today if you get a deferred module next case from a static import and you pass it to promise.resolve, that would trigger evaluation, and in that case, it’s perfectly fine to trigger evaluation, and it’s also how modules work. If we call them. However, this means that if the future we decide to actually let’s introduce import.defer, then behavior of the object returned to `import.defer` is already defined, and we cannot change it and so `import.defer` would already trigger evaluation. And the only way around this is to use option F on screen. Which is, like, not a very nice looking option, but, like, that’s what it means if we not doing `import.defer` now doesn’t mean that we are deferring the problem, just means we’re saying we’re not going to have import.defer.
+NRO: I can answer that. So the reason is that we need to decide now how deferred import objects work, even if we do not give an `import.defer` API. So let’s say we do not do `import.defer` and do no special, like, hiding of `then`. Then today if you get a deferred module namespace from a static import and you pass it to `Promise.resolve`, that would trigger evaluation, and in that case, it’s perfectly fine to trigger evaluation, and it’s also how modules work. If we call them.
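A short sketch of the `Promise.resolve` behaviour just described (hypothetical module specifier; illustrative only):

```js
import defer * as mod from "./heavy-module.js"; // not evaluated yet

const p = Promise.resolve(mod);
// Internally, resolving a promise with an object does roughly:
//   const then = mod.then;                  // [[Get]] of "then": a string-keyed access,
//                                           // so the deferred module is evaluated here
//   if (typeof then === "function") { ... } // adopt its state only if it is callable
// The module is therefore evaluated before `p` ever settles, which is the
// problem described for the promise returned by `import.defer()`.
```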
However, this means that if in the future we decide to actually introduce `import.defer`, then the behavior of the object returned by `import.defer` is already defined, and we cannot change it, and so `import.defer` would already trigger evaluation. And the only way around this is to use option F on screen. Which is, like, not a very nice looking option, but, like, that’s what it means: not doing `import.defer` now doesn’t mean that we are deferring the problem, it just means we’re saying we’re not going to have `import.defer`.

YSV: okay, I have a response to this. Import—the dynamic import is already a deferred import, but it’s deferring all the work. So by—so imagine the use case where you—you want to save the up front work of loading the file, and then executing it. So you put—in this case, you would be branching in two ways. If you—okay, so the idea of static import is you’ve got a long running static defer on import, sorry, oh, you I got words. Static import with defer allows you to write long running application that has infrequently used libraries, loaded statically, but you pay the cost for those infrequently used libraries later on, and these infrequently used libraries are ubiquitous and difficult to refactor out of the code base. That’s the goal there. It’s a transparent hint about optimization to the compiler. But if you want to defer the loading of the file, then the right tool for that is dynamic import, because otherwise you are putting an if statement at the top level of your—of your file, and importing either A or B, but you are going to need to branch later in order to import that—in order to execute that code, because there’s going to be a fork—you’re going to have forking twice. So I don’t see why you wouldn’t in fact just prefer using the regular dynamic import in this case.

@@ -666,15 +666,14 @@ YSV: The reason I was asking --

NRO: Sorry. Can I interrupt a second. Given that use case hypothetical, can I go through the queue and come back to this.

JRL: We have eight minutes left and four more items.
-[ WRITER SWITCHOVER ]

WH: I’m trying to understand why people use thenable modules. Some do. Why?

-NRO: It was to support last. And that was in the conversations when we were discussing this problem for the dynamic part before that existed. But then it only worked with the dynamic import and not the static import. But, yes, it’s generally considered to be bad practice.
+NRO: It was to support async initialization. And that was in the conversations when we were discussing this problem for the dynamic `import()` before that existed. But then it only worked with the dynamic import and not the static import. But, yes, it’s generally considered to be bad practice.

WH: But some people do it. So if we hide `then` for them, that will break them?

-NRO: No. Because deferred cases have separate from the mixed cases and we can hide it only from the deferred namespace objects and not for the old plastic space object. Not break existing code. It means that those people would be—those modules not expecting.
+NRO: No. Because deferred namespaces are separate from the sync cases and we can hide it only from the deferred namespace objects and not from the old namespace object. So it would not break existing code. It means that those people would be—those modules not expecting.

WH: If they did do dynamic import defer, they would get everything except the `then` method?

@@ -706,7 +705,7 @@ GB: Thanks JRL. Yeah, I mean, if I can speak briefly to that.
We definitely did JRL: So I have a slightly different perspective on this than what GB just said. In the promise machinery the reason that when you `promise.resolve` with the thenable the reason that we have to do access to then and invoke the then is because we have to adopt the state of whatever it is. If we just skip that step of calling `promise.resolve` and fulfill the promise object directly without going into the nested adoption phase, we can avoid censoring then in module, you can export whatever you want. We won’t do the dynamic promised behavior for the namespaces. Something like that seems better than censoring and preventing a then export from even existing. -NRO: So that would be—I guess `.then` to the promise, it changes to trigger – +NRO: So that would be—I access `.then` to the promise, it changes to not trigger – JRL: Somewhere in the module resolution there is a thing that wraps it in a promise and either calling promise resolve or choosing the promise capability and calling the resolve function on that. All that internally does the same stuff and tries to adopt the state of the internal thing. We have special promise handling of that thenables instead of trying to adopt the state of the value you’re resolving with just fulfill directly, turn the promise to the fulfilled state. Don’t try to adopt. We can skip this evaluation entirely. @@ -718,7 +717,7 @@ JHD: Okay. I guess what I meant is—like intended to mean I don’t think it sh JRL: We have three topics left on the queue and we’re now a minute over our time box. Okay to go on? -NRO: JRL was not suggesting that. JRL was suggesting special casing the first promise creation from import from that number import, the then property. Just the first axis. And, yes, so I see the rest in the queue. Deferred use case. Complete use case from SFC, but I believe that this case is perfectly solved because if you have a Wasm instance, you don’t need `import.defer` and import the source and then later go with the send substantiate. So I heard Agoric opinion for actually doing ACE’s. But given the general discussion, I would prefer to ask if anybody has any like call it trunk position to just dropping it for defer. +NRO: JRL was not suggesting that. JRL was suggesting special casing the first promise creation from import from that dynamic import, the then property. Just the first access. And, yes, so I see the rest in the queue. Deferred use case. Complete use case from SFC, but I believe that this case is perfectly solved because if you have a Wasm instance, you don’t need `import.defer` and import the source and then later go with the send substantiate. So I heard Agoric opinion for actually doing ACE’s. But given the general discussion, I would prefer to ask if anybody has any like call it trunk position to just dropping it for defer. JWK: I support A but it does increase the difficulty for some implementations that needed to cross-file information to transform a file. The next choice is to drop `import.defer` and if we really have a need for that, we can add that in the future. @@ -829,7 +828,7 @@ NRO: Thank you. That makes it clear. So I need a second to collect my thoughts. JKP: I’m just wondering if it would help for other people if we presented something and we can talk more concretely. Let me have the mic back. -NRO: Slash slash at the beginning. Yes. So the concrete problem here is that if we do the next parsing, then—so for more context there are two ways of extracting comments from the spec. 
This is because some tools like Chrome DevTools parse it and look for tokens and the tools in this case would find source map. There are other tools such as VS Code or JS that instead take a different approach where they start from the bottom of the file and run on each line looking for things that look like comments and they bail out as soon as they find something that is not a comment. So in this case here, those tools would see line 6 is a comment and proceed to line 5 and see line 5 as a comment and do this source map and search the source map by notifying the server that this 5 has been debugged. +NRO: `//` at the beginning. Yes. So the concrete problem here is that if we do the next parsing, then—so for more context there are two ways of extracting comments from the spec. This is because some tools like Chrome DevTools parse it and look for tokens and the tools in this case would find source map. There are other tools such as VS Code or JS that instead take a different approach where they start from the bottom of the file and run on each line looking for things that look like comments and they bail out as soon as they find something that is not a comment. So in this case here, those tools would see line 6 is a comment and proceed to line 5 and see line 5 as a comment and do this source map and search the source map by notifying the server that this 5 has been debugged. MM: To make a point I’m sure you would appreciate is you can write a REGEX that detects all of the closed bracket issues, you know, the back quote and the quote and the double quote, the closed multi-line comment that detects and rejects all of those. So it’s the requirement that this recognition happened by REGX does not prevent us from adopting the safer rule. @@ -976,7 +975,7 @@ NRO Yeah. Just I wanted to mention that I think getting useful results of home c DE: Historically in TC39, we sometimes overestimated what could be done by engines. We have said, “this optimization should be able to make the semantics free or very cheap” but it doesn’t always work out that way in actual JavaScript engines. We shouldn’t make the same mistakes with tools. Tools also face limitations on what they can do in practice. In particular, although there are some optimizations that sometimes happen in tools, the basics of transpiling is really about very local transforms that don’t cross lots of boundaries and don’t do a lot of analysis. So I think overall, JSSugar doesn’t allow us to do different kinds of things in JavaScript that aren’t possible to implement interpreters because transpilers and interpreters are both limited, in practice, to the analysis they could do in a very limited local online way. -NRO: I fully agree with what DE said. And I want to bring up some counterexamples thought about tools that did try to do like to adjust this based on very powerful code analysis. One of them was just called tray pack, developed by FaceBook, five to ten years ago, I don’t know exactly when doing analysis to see how various could line—what values would variables have with types, like be sure about the types, but about then—let’s hope developers get the approach. And it didn’t go, not get to go something concrete. Because although they worked in a lot of, like, types of analysis, they just kept the steps and it was difficult to cover. There is another very powerful one that was again developed by FaceBook and released very frequently. That is the—I don’t remember the name. 
The react compiler, which actually is a proper compiler, that is hair own, does their loan optimization passes and everything. But the only way that they can do that is by not actually following the semantics. They have react its own subset of JavaScript saying you can only call functions at this time and you can only call these types of functions. And so that compiler has to assume that developers have been very good at following those rules, just working with generic semantics.
+NRO: I fully agree with what DE said. And I want to bring up some counterexamples, though, about tools that did try to, like, address this based on very powerful code analysis. One of them was just called Prepack, developed by Facebook, five to ten years ago, I don’t know exactly when, doing analysis to see how variables could—what values would variables have, with types, like be sure about the types, but about then—let’s hope developers get the approach. And it didn’t—it did not get to be something concrete. Because although they worked in a lot of, like, types of analysis, they just kept the steps and it was difficult to cover. There is another very powerful one that was again developed by Facebook and released very recently. That is the—I don’t remember the name. The React compiler, which actually is a proper compiler that does its own optimization passes and everything. But the only way that they can do that is by not actually following the semantics. React has its own subset of JavaScript, saying you can only call functions at this time and you can only call these types of functions. And so that compiler has to assume that developers have been very good at following those rules, rather than working with generic semantics.

KM: Yeah. I think—I think there’s—my understanding and obviously SYG can correct me if I am wrong, the idea is not so much tools would remove these overheads. It’s that by existing in tools for incubating longer, that developers will start to see the potential, issues and performance from the APIs at a stage where we could actually revisit it. Rather than, like, after it’s too late. And so I don’t know. That’s just my two thoughts—two cents.