As currently specified in FLEDGE, `generate_bid` and `score_ad` are pure functions. This poses challenges for debugging and experimentation: how do you know whether your code is operating correctly on user devices? (e.g. #146) Event-level reporting would not be compatible with the privacy model, but what about aggregate reporting?
If the worklets had the ability to invoke the aggregate reporting API, they could transmit this information privately. For example, a seller might want, for their own debugging, to know why they rejected an ad, and could send `ad123 rejected for reason X`. If that were a sufficiently common occurrence, the seller could learn about it. The seller might also want to send more-aggregated information as two separate messages, `ad123 rejected` and `rejected for reason X`, to collect information in cases where the combination of the ad ID and the rejection reason might be insufficiently common. Alternatively, a seller might want to send additional information from the auction that they wouldn't have access to in `report_result` because of joining concerns.
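A minimal sketch of how that could look in a seller's scoring script, using the Private Aggregation API's `contributeToHistogram()` call. The bucket layout (an ad-ID hash packed next to a rejection-reason code), the `adIdHash` field, and the `checkAdAgainstSellerRules` / `computeDesirability` helpers are illustrative assumptions, not part of any spec:

```js
// Sketch only: bucket layout and helper names are illustrative assumptions.
// privateAggregation.contributeToHistogram() takes a 128-bit BigInt bucket
// and a non-negative integer value; reports arrive aggregated and noised.

const REJECTION_REASONS = {
  BELOW_PRICE_FLOOR: 1n,
  BLOCKED_CATEGORY: 2n,
  INVALID_CREATIVE: 3n,
};

function scoreAd(adMetadata, bid, auctionConfig, trustedScoringSignals,
                 browserSignals) {
  const reason = checkAdAgainstSellerRules(adMetadata, bid);  // hypothetical
  if (reason !== null) {
    // Fine-grained: "ad123 rejected for reason X" as a single bucket,
    // packing a (hypothetical) 64-bit ad-ID hash next to the reason code.
    const adIdHash = BigInt(adMetadata.adIdHash);             // assumed field
    privateAggregation.contributeToHistogram({
      bucket: (adIdHash << 64n) | REJECTION_REASONS[reason],
      value: 1,
    });
    // Coarser companions: "ad123 rejected" and "rejected for reason X" as
    // separate buckets, useful when the combination is too rare to clear
    // the aggregation threshold. In practice the seller would carve out
    // disjoint key spaces for these three kinds of buckets.
    privateAggregation.contributeToHistogram({ bucket: adIdHash, value: 1 });
    privateAggregation.contributeToHistogram({
      bucket: REJECTION_REASONS[reason],
      value: 1,
    });
    return 0;  // reject the ad
  }
  return computeDesirability(adMetadata, bid);                // hypothetical
}
```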
Similarly, a buyer or seller could run an experiment, perhaps to test a change in their logic or to verify that a release didn't introduce regressions. They could divert traffic when serving their logic (as in #149) or randomly during the auction, and then use aggregate reports to evaluate the effect of the change.
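A comparable sketch for a buyer randomly diverting a fraction of auctions to new bidding logic and recording an aggregate metric per experiment arm. The traffic split, bucket numbers, and the two bidding implementations are assumptions for illustration:

```js
// Sketch only: the experiment split, bucket numbers, and the two bidding
// implementations below are illustrative assumptions.

const EXPERIMENT_BUCKET = { control: 100n, treatment: 101n };

function generateBid(interestGroup, auctionSignals, perBuyerSignals,
                     trustedBiddingSignals, browserSignals) {
  // Divert roughly 5% of auctions to the new logic at random.
  const arm = Math.random() < 0.05 ? 'treatment' : 'control';

  const bid = arm === 'treatment'
      ? computeBidV2(interestGroup, trustedBiddingSignals)   // hypothetical
      : computeBidV1(interestGroup, trustedBiddingSignals);  // hypothetical

  // Record the (scaled) bid value per arm; a companion count bucket per arm
  // (not shown) would let the buyer compute averages. Comparing the
  // aggregated, noised totals across arms evaluates the effect of the change.
  privateAggregation.contributeToHistogram({
    bucket: EXPERIMENT_BUCKET[arm],
    value: Math.round(bid * 100),  // e.g. bid expressed in cents
  });

  return { ad: interestGroup.ads[0].metadata, bid,
           render: interestGroup.ads[0].renderURL };
}
```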
We believe this covers the use cases of `report_loss`, while also providing additional options for debugging and experimentation.
The Private Aggregation API is accessible from Protected Audience bidding, scoring, and reporting scripts. I'm closing this issue as I think the concern has been addressed. Feel free to reopen or file another issue if you have further concerns.