Testing the Removal of Artificial Latency behind a feature flag #5480
Conversation
Force-pushed: 78313ff to eb48cda
Is getting feature flags really fast when they're cached? I'm just wondering if making getFeatureFlag async means you're going to introduce another kind of random delay that depends on how busy the main thread is. Could we drill down on the point mentioned in the test?
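The concern above can be illustrated with a small sketch (the function name here is hypothetical, not Cody's actual API): even when the value is cached, an async accessor resolves on a later microtask, so the caller always yields to whatever is already queued on the main thread.

```typescript
// Hypothetical cached flag accessor: no I/O, yet still async.
async function getCachedFlag(): Promise<boolean> {
    return true // pretend the value is already cached
}

const order: string[] = []
const pending = getCachedFlag().then(() => {
    order.push('flag delivered')
})
order.push('synchronous work after the call')

pending.then(() => {
    // The synchronous line always runs first, even for a cached value.
    console.log(order) // ['synchronous work after the call', 'flag delivered']
})
```

This is why an async getFeatureFlag can add a small, load-dependent delay that a synchronous cached lookup would not.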
Force-pushed: e6c82fc to b954605
originalFeatureFlagProviderInstance = featureFlagProvider.instance
enabledFeatureFlags.clear()
featureFlagProvider.instance = {
We can use vi.spyOn here to spy on the relevant featureFlagProvider methods and replace their return values with spy.mockReturnValue(). Or we can mock the whole import by its path. With that in place, vi.restoreAllMocks() would make featureFlagProvider.instance = originalFeatureFlagProviderInstance obsolete.
Also, we can spy on the GraphQL client, which resolves feature flag values. That would test the feature flag provider logic, too.
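For readers unfamiliar with the suggestion, here is a plain-TypeScript approximation of what vi.spyOn plus mockReturnValue and vi.restoreAllMocks do under the hood (the provider object and method name here are hypothetical stand-ins, not the real Cody API):

```typescript
// Hypothetical stand-in for a provider whose method a test wants to stub.
const featureFlagProvider = {
    evaluateFeatureFlag: (_flag: string): boolean => false,
}

// Roughly what vi.spyOn(obj, key) + spy.mockReturnValue(value) do:
// save the original method, replace it, and hand back a restore step.
function spyOnWithReturn<T extends object, K extends keyof T>(
    obj: T,
    key: K,
    value: unknown
): () => void {
    const original = obj[key]
    obj[key] = ((..._args: unknown[]) => value) as unknown as T[K]
    return () => {
        obj[key] = original // what vi.restoreAllMocks() automates for every spy
    }
}

const restore = spyOnWithReturn(featureFlagProvider, 'evaluateFeatureFlag', true)
console.log(featureFlagProvider.evaluateFeatureFlag('disable-low-perf-lang-delay')) // true while stubbed
restore()
console.log(featureFlagProvider.evaluateFeatureFlag('disable-low-perf-lang-delay')) // false again
```

Because the spy framework tracks every replacement centrally, a single vi.restoreAllMocks() in afterEach removes the need to hand-save and reassign the provider instance.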
@valerybugakov I got something better. I adapted the new logic to get the value of the feature flag through a subscription, and then changed the function to take its inputs as a params object:
export function getArtificialDelay(params: {
featureFlags: LatencyFeatureFlags
uri: string
languageId: string
codyCompletionDisableLowPerfLangDelay: boolean
completionIntent?: CompletionIntent
}): number {
This made testing the function much easier, as I only have to tweak the codyCompletionDisableLowPerfLangDelay param.
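With the flag passed in as a plain parameter, each test case can drive it directly with no provider mocking. A minimal sketch of that testing win, assuming a simplified delay function (the body, the language list, and the 1000 ms figure are all illustrative, not the real implementation):

```typescript
// Illustrative stand-in mirroring only the parameter shape from the PR.
interface DelayParams {
    languageId: string
    codyCompletionDisableLowPerfLangDelay: boolean
}

// Hypothetical set of low-performance languages.
const LOW_PERFORMANCE_LANGUAGES = new Set(['css', 'plaintext', 'markdown'])

function getArtificialDelay(params: DelayParams): number {
    // When the flag disables the delay, always return 0.
    if (params.codyCompletionDisableLowPerfLangDelay) {
        return 0
    }
    // Hypothetical baseline delay for low-performance languages.
    return LOW_PERFORMANCE_LANGUAGES.has(params.languageId) ? 1000 : 0
}

// Tests only need to flip one parameter, with no provider setup:
console.log(getArtificialDelay({ languageId: 'css', codyCompletionDisableLowPerfLangDelay: false })) // 1000
console.log(getArtificialDelay({ languageId: 'css', codyCompletionDisableLowPerfLangDelay: true })) // 0
```

Moving the flag out of global provider state and into the function signature is what makes the tests pure and table-driven.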
NOTE:
- The subscription-to-a-feature-flag + GraphQL client logic is also tested here (but that's a separate thing). Since that logic is already tested, I don't see a reason to add an E2E test for it when those are two separate concerns.
- I also tested this logic locally, since Quinn recently merged make some services more reactive to config #5550: whenever the client config changes, the feature flags are "reloaded", so I could verify the subscription logic. The nuance here is that changing a feature flag on the server doesn't automatically push it to the client. You either reload the client or change the client's config settings for the feature flags to be reloaded (Quinn explained that to me and I tried it).
This is great!
Force-pushed: 29f1434 to fe00758
Force-pushed: 35730ba to 9e29f3b
@valerybugakov Apologies, I was still in the middle of fixing the PR when you saw the new changes; it wasn't in the right state. (I'll try to be more mindful about communicating when a PR isn't ready for review.)
Force-pushed: 7bf82ef to 760ff67
@valerybugakov I updated the PR again with all the recommendations you suggested, and it's now a lot cleaner. With the latest changes from Quinn, I also have the subscription/update logic for feature flags (read this comment). As for testing, I've incorporated the changes you recommended, so the tests are easy to understand too. Thanks again for such a thorough review; I'd like another one now.
Force-pushed: cc7d2cf to 04d3ae5
// Fetch the initial value for disableLowPerfLangDelay and set it
firstValueFrom(completionProviderConfig.completionDisableLowPerfLangDelay).then(
    disableLowPerfLangDelay => {
        this.disableLowPerfLangDelay = disableLowPerfLangDelay
    }
)

// Subscribe to changes in disableLowPerfLangDelay and update the value accordingly
this.disposables.push(
    subscriptionDisposable(
        completionProviderConfig.completionDisableLowPerfLangDelay.subscribe(
            disableLowPerfLangDelay => {
                this.disableLowPerfLangDelay = disableLowPerfLangDelay
            }
        )
    )
)
Do we need both of these? There's an example of using CodyAutocompleteTracing in the InlineCompletionItemProvider constructor, which relies on only one call sequence to initialize and keep the local value up to date.
Both of these are technically NOT needed, but I wanted to keep the logical consistency of "initializing" something and then subscribing to its value. Since it's redundant, though, I don't see a reason to keep it for anything beyond readability.
So I removed it.
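The reason the subscription alone suffices is that a BehaviorSubject-style observable replays its latest value to new subscribers, so one subscribe() call both initializes the local field and keeps it current. A minimal plain-TypeScript sketch of that behavior (the class and field names here are illustrative, not the real RxJS or Cody types):

```typescript
type Listener<T> = (value: T) => void

// Hypothetical BehaviorSubject-like container: holds a current value and
// replays it immediately to every new subscriber.
class LatestValue<T> {
    private listeners: Listener<T>[] = []
    constructor(private value: T) {}

    subscribe(listener: Listener<T>): () => void {
        listener(this.value) // replay the current value right away
        this.listeners.push(listener)
        return () => {
            this.listeners = this.listeners.filter(l => l !== listener)
        }
    }

    next(value: T): void {
        this.value = value
        for (const l of this.listeners) l(value)
    }
}

// One subscription covers both the initial fetch and later updates.
const flag = new LatestValue(false)
let disableLowPerfLangDelay = false
const unsubscribe = flag.subscribe(v => { disableLowPerfLangDelay = v })
console.log(disableLowPerfLangDelay) // false (initial value, delivered on subscribe)
flag.next(true)
console.log(disableLowPerfLangDelay) // true (updated via the same subscription)
unsubscribe()
```

This is why the separate firstValueFrom(...) initialization step was redundant: the subscribe callback already fires once with the current value.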
Force-pushed: 145bfb1 to 3590e29
Thank you for addressing all the comments! 💜
- user: false,
+ user: codyAutocompleteDisableLowPerfLangDelay,
We need to update the variable name here because it doesn't match the underlying feature flag.
Done
Force-pushed: 3590e29 to e8d9499
Solves [Linear Issue](https://linear.app/sourcegraph/issue/CODY-3759/remove-codyautocompleteuserlatency-feature-flag)

## History

While working on the removal of artificial latency [behind a feature flag](#5480), Valery said the FeatureFlag.CodyAutocompleteUserLatency feature flag and all of its related logic are no longer used, and the team plans [no further experiments with it](#5480 (comment)). So I removed it, along with a lot of related logic that made this function and its tests very intricate.

### This PR does

- Remove user-based latency adjustments and the related feature flag
- Simplify the getArtificialDelay function to consider only language and completion intent
- Update lowPerformanceConfig to include both language IDs and completion intents
- Remove the resetArtificialDelay function and related calls (this function reset user-based latency when completions were rejected, which is no longer relevant after the removal of the feature flag)
- Update tests to reflect the new, simplified logic
- Remove the unused LatencyFeatureFlags interface and related parameters

## Test plan

Added new tests and made the logic very clean and self-explanatory.

## Changelog
This PR aims to remove the artificial delay introduced for low-performance languages and specific AST nodes, as outlined in the issue. The artificial delay was originally implemented for low-performance languages so that users are not spammed with unhelpful suggestions, but it has become less relevant over time, and it may now hurt the predictability of autocomplete latency and the user experience (UX).
Changes:
- Introduce a feature flag: a new feature flag has been added to toggle the artificial delay.
- Objective: achieve predictable latency and a consistent autocomplete UX for all languages.
- After merging this PR, the feature flag will be enabled for controlled testing, and data will be collected to assess the impact on CAR and latency for all languages.
- Depending on the results, the team will decide whether to remove the artificial delay logic entirely.
Test Plan:
I tested this PR locally by enabling and disabling the feature flag on my local Sourcegraph instance (I added the feature flag there and set true/false values). I also used the debugger to run the VS Code extension on desktop.
Changelog