
Agenda for Mar 2, 2023 #291

Closed
nairnandu opened this issue Feb 24, 2023 · 2 comments
Labels: agenda (Agenda item for the next meeting)

Comments

nairnandu (Contributor) commented Feb 24, 2023

Here is the proposed agenda for our meeting on Mar 2, 2023.

nairnandu added the agenda label on Feb 24, 2023
cookiecrook commented:

Accessibility Investigation Scoring Criteria (web-platform-tests/interop-accessibility#3)

nairnandu (Contributor, Author) commented:
Attendees: Alex Russell, @bkardell, @chrishtr, @dandclark, @foolip, @gsneddon, @jamesgraham, @jensimmons, @meyerweb, @nairnandu, @nt1m, @zcorpan

Notes

Notetaker: foolip@

Interop 2023 process and launch retrospective #274

  • foolip: We got through a lot of proposals and launched!
  • jamesgraham: Big picture it seems to have worked very well, looks like a successful project.
  • jensimmons: Definitely agree. We worked through some complex issues and didn’t necessarily agree at the beginning of those discussions, but we ended up with a better outcome through our debate. I appreciate that! The 26 areas feel like they’re super important to web developers, and that we’re focusing on the right things. We’re moving the web forward by working on these things.
  • jamesgraham: Even the things we didn’t pick, the process of going through them improved our confidence about developer needs. So even if not on the horizon for Interop, it still helps.
  • jensimmons: I agree with that. From the outside, people may assume that things we don’t choose are things we don’t want to do, but the proposals spark discussions internally. Some proposals are rejected for unrelated reasons, and the work might get done all the same.
  • nairnandu: Feedback is summarized on the issue: #285
  • foolip: Agree, overall the feedback was positive. People appreciated being able to submit proposals.
  • jamesgraham: For us, the proposal selection process was exhausting. Maybe there’s some overhead we can cut out. There were a bunch of proposals not on a standards track. People said they would be, but most weren’t. We could set the expectation that it happen before, not during. We’ve also discussed having a fixed number of focus areas.
  • jensimmons: As successful and important as it is, it’s important to keep the work required in balance with the benefit of using the process. I can imagine we’ll get even more proposals next time, especially if we seek out even more feedback. We went from 10 to 26; we can’t go to 50. In the next few months, we should have a look at our process: will it really work to handle each proposal individually? Should we introduce ranking earlier in the process? The process worked well, but I don’t know that it’s scaling, especially for those of us in this group and the people we’re involving.
  • foolip: I agree, we did talk about limiting focus areas, and maybe ranking can help us solve that.
  • bkardell: Agree we have an issue with scaling. If we’re not successful at prioritizing the things we’ve taken on, we’ll lose some credibility. “If everything is important nothing is important” and we’ll be back where we started. We’ll know later in the year how tenable this was.
  • jamesgraham: I appreciate getting input from web developers, but how often are we taken by surprise? Can we get the same value from surveys? I’m not saying we shouldn’t be open to proposals from web developers of course. Can we tell developers that the best way to have input is to participate in these surveys?
  • jensimmons: In 2022 we relied on the proposal process to filter for the things that would rise to the top. That worked because most proposals came from us browser vendors. The closer we get to everything being proposed, the less of a filtering process we have; at that point we could just take all of the features from caniuse or something. We also had a hard time writing “rejection letters” because we had a hard time saying no, even though the point was prioritizing a certain number of features.
  • chrishtr: On proposals that weren’t accepted: this group strives to be open and transparent, and we derive credibility from that, compared to a secret process. Google and I were a bit disappointed that there wasn’t more transparency on proposals that weren’t included. For next time, it would be good to be explicit up front about what will be public and private.
  • nairnandu: That’s a good segue into the transparency discussion.

Transparency

  • Retrospective: transparency in proposal selection and feedback #290
  • foolip: Gave a quick summary of Retrospective: transparency in proposal selection and feedback (#290)
  • chrishtr: I think more transparency would be good, but at the very least it would be useful to make that more clear next year.
  • Alex: Thank you Philip for correcting me about the data collection in the issue. It would be helpful to me to track things back to developer feedback, a trail of evidence for why something was included or not. From my perspective the value of Interop is prioritizing things based on identified and identifiable needs. If there’s anything we can do to highlight the source of the data, I think that could bolster the credibility of the project.
  • bkardell: Are you saying from the dashboard you want an obvious way to learn why container queries (for example) was included?
  • Alex: Comparing this to the Blink launch process, there’s a public “vote”, and we might not always agree. When we don’t, it’s useful to have a paper trail. I don’t think that’s had a negative impact on our decision making. I’m not going to design the process and we shouldn’t in this meeting, but that paper trail seems extremely valuable.
  • bkardell: I’m not sure I wholly agree. Some of these things aren’t very easy to explain. Taking MathML as an example, what’s shipping in Chromium has a bit more than what’s in Firefox and Safari. Somehow you need to say we picked these things, and sometimes the reason is we had too many things and can’t do all of them.
  • chrishtr: As a data point, the Chrome team wasn’t able to staff MathML, and the reason it’s now shipping is because of Igalia. Even though I couldn’t convince Google to staff it.
  • bkardell: I’m OK that MathML wasn’t chosen, but the real answer is that there are too many things.
  • chrishtr: Igalia has gone above and beyond on MathML. For Google, Subgrid was more important than MathML. Maybe we were wrong, but putting that out there is a useful thing. It certainly helps long term to maintain the credibility.
  • bkardell: In the final vote it would have been Mozilla that said no, and I’d be worried about putting that out there in a way that makes Mozilla take a beating. If there’s a way to do it that has the appropriate nuance that’s fine.
  • foolip: Perhaps we can distinguish between evidence like survey results and the positions of browser vendors, an easy part and a hard part. For the hard problem, I’d like to suggest some changes, and maybe it would help if we don’t need to have a position on everything.
  • Alex: There’s credibility that comes from sunlight.
  • jamesgraham: I think credibility comes from results. I’m nervous about changes that don’t help us deliver better for users or developers. Starting from the position that we have to change to have credibility isn’t backed up by what I’m seeing.
  • Alex: I agree that delivering on what’s included is key to credibility. But there’s always going to be what’s not included.
  • jamesgraham: Re what Brian said: we can say even more clearly that not being included doesn’t mean it’s not important. What’s included is a priority list; not making the cut isn’t indicative of it being a bad feature. These aren’t standards positions, and we’re not making a judgment on the feature itself. Some things that went through the process might happen sooner because they did.
  • nairnandu: To summarize, we agree to discuss this more in the near future.
  • foolip: The natural point would be when looking at the process for Interop 2024. I’d like to get the charter done first. And let’s not forget we actually launched Interop 2023! Thank you, everyone!

Investigation efforts

  • nairnandu: We still need to figure out who the chair is going to be. Chris, Simon?
  • chrishtr: Boaz is willing to lead this, and Simon scheduled a kickoff meeting next week.
  • nairnandu: A question about scoring criteria was posted (web-platform-tests/interop-accessibility#3)
  • gsneddon: The comment was a question about which group should decide on the scoring.
  • chrishtr: I think the subgroup should be empowered to set their own goals and scoring.
  • jamesgraham: I’ll send an email on mobile testing, expect a kickoff meeting next week or after that.

Team charter

  • foolip: I haven’t made any proposed changes to the charter yet.
  • nairnandu: Push this to the next meeting.

Dashboard changes
