
Testing the Removal of Artificial Latency behind a feature flag #5480

Merged
merged 1 commit into main from artificial-latency-fix on Sep 17, 2024

Conversation


@arafatkatze arafatkatze commented Sep 5, 2024

This PR aims to remove the artificial delay introduced for low-performance languages and specific AST nodes, as outlined in this issue. The artificial delay was originally implemented for low-performance languages so that users are not spammed with unhelpful suggestions, but it has become less relevant over time, and it may now impact the predictability of autocomplete latency and the user experience (UX).

Changes:

Introduce a feature flag:

A new feature flag has been added to toggle the artificial delay.

  • Objective: The main objective is to achieve predictable latency and a consistent autocomplete UX for all languages.

  • After merging this PR, the feature flag will be enabled for controlled testing, and data will be collected to assess the impact on CAR and latency for all languages.

  • Depending on the results, the team will decide whether to remove the artificial delay logic entirely.
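To make the mechanism concrete, here is a minimal sketch of how a feature flag can gate the artificial delay. The function name, the delay value, and the language set below are illustrative only, not the exact identifiers or values used in the PR:

```typescript
// Hypothetical sketch: a feature flag gating the artificial delay.
// Names and values are illustrative, not the PR's actual identifiers.
const lowPerformanceLanguageIds = new Set(['css', 'markdown', 'plaintext'])

function getDelayMs(languageId: string, disableLowPerfLangDelay: boolean): number {
    // Flag on: every language gets the same (zero) added latency,
    // which is the predictable-latency behavior this PR tests.
    if (disableLowPerfLangDelay) {
        return 0
    }
    // Legacy behavior: penalize low-performance languages.
    return lowPerformanceLanguageIds.has(languageId) ? 1000 : 0
}
```

With the flag enabled server-side for a test cohort, the delay branch is skipped entirely, so any remaining latency variance comes from real model/network time rather than the artificial penalty.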

Test Plan:

I tested this PR locally by enabling and disabling the feature flag on my local Sourcegraph instance (I added the feature flag to it and then set true/false values). I also used the debugger to run the VS Code extension on desktop.

Changelog

@arafatkatze arafatkatze marked this pull request as ready for review September 5, 2024 15:06
@arafatkatze arafatkatze changed the title from "Artificial latency fix" to "Testing the Removal of Artificial Latency behind a feature flag" on Sep 5, 2024
@arafatkatze arafatkatze force-pushed the artificial-latency-fix branch 2 times, most recently from 78313ff to eb48cda Compare September 5, 2024 15:25

@dominiccooney dominiccooney left a comment


Is getting feature flags really fast, when they're cached? I'm just wondering if making getFeatureFlag async means you're going to introduce another kind of random delay that depends on how busy the main thread is.

Could we drill down on the point mentioned in the test?

vscode/src/completions/artificial-delay.test.ts (review thread: outdated, resolved)
@arafatkatze arafatkatze force-pushed the artificial-latency-fix branch 3 times, most recently from e6c82fc to b954605 Compare September 8, 2024 14:14
vscode/src/completions/completion-provider-config.ts (review thread: outdated, resolved)
vscode/src/completions/artificial-delay.ts (review thread: outdated, resolved)
vscode/src/completions/artificial-delay.ts (review thread: outdated, resolved)
vscode/src/completions/artificial-delay.test.ts (review thread: outdated, resolved)
vscode/src/completions/inline-completion-item-provider.ts (review thread: outdated, resolved)
vscode/src/completions/artificial-delay.test.ts (review thread: outdated, resolved)
Comment on lines 16 to 18:

```typescript
originalFeatureFlagProviderInstance = featureFlagProvider.instance
enabledFeatureFlags.clear()
featureFlagProvider.instance = {
```
valerybugakov (Member) commented:

We can use vi.spyOn here to spy on relevant featureFlagProvider methods and replace their return values with spy.mockReturnValue(). Or we can mock the whole import by the path. With that in place, vi.restoreAllMocks() would make featureFlagProvider.instance = originalFeatureFlagProviderInstance obsolete.

Also, we can spy on the graphql client, which resolves feature flag values. That would test the feature flag provider logic, too.
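The `vi.spyOn(obj, 'method').mockReturnValue(v)` pattern suggested above swaps a method on the live object for a stub and remembers the original so it can be restored (the role `vi.restoreAllMocks()` plays). A hand-rolled illustration of that mechanism, using a hypothetical `featureFlagProvider` shape:

```typescript
// Minimal illustration of spy-with-mocked-return-value: swap the method
// for a stub, keep the original, and hand back a restore function.
function spyOnWithReturnValue(
    obj: Record<string, any>,
    method: string,
    value: unknown
): () => void {
    const original = obj[method]
    obj[method] = () => value // stubbed return value
    return () => {
        obj[method] = original // restore the real implementation
    }
}

// Hypothetical stand-in for featureFlagProvider.instance.
const featureFlagProvider = {
    evaluateFeatureFlag: (_flag: string): boolean => false,
}

const restore = spyOnWithReturnValue(featureFlagProvider, 'evaluateFeatureFlag', true)
// While mocked, the flag resolves to true without any network call.
const mocked = featureFlagProvider.evaluateFeatureFlag('any-flag')
restore()
const real = featureFlagProvider.evaluateFeatureFlag('any-flag')
```

In real vitest code the same shape is `const spy = vi.spyOn(featureFlagProvider, 'evaluateFeatureFlag').mockReturnValue(true)` followed by `vi.restoreAllMocks()` in `afterEach`, which makes manual instance bookkeeping unnecessary.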

arafatkatze (Contributor, Author) commented Sep 15, 2024:

@valerybugakov I got something better. I adopted new logic that gets the feature flag value via a subscription, and then changed the function to take these args:


```typescript
export function getArtificialDelay(params: {
    featureFlags: LatencyFeatureFlags
    uri: string
    languageId: string
    codyCompletionDisableLowPerfLangDelay: boolean
    completionIntent?: CompletionIntent
}): number {
```

This made testing the function much easier, as I only have to tweak the codyCompletionDisableLowPerfLangDelay parameter.
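The testability win is that the flag arrives as a plain boolean, so a unit test needs no provider mocking at all. A sketch of that idea, with a simplified stand-in for the function (the delay value and the low-performance language set here are made up, not the PR's actual values):

```typescript
// Illustrative stand-in for the refactored getArtificialDelay:
// the flag is a plain param, so tests just pass different values.
function getArtificialDelaySketch(params: {
    languageId: string
    codyCompletionDisableLowPerfLangDelay: boolean
}): number {
    const lowPerfLanguages = new Set(['css', 'markdown', 'plaintext'])
    if (params.codyCompletionDisableLowPerfLangDelay) {
        return 0 // flag on: no artificial delay for any language
    }
    return lowPerfLanguages.has(params.languageId) ? 1000 : 0
}

// A test case is just a different params object:
const legacy = getArtificialDelaySketch({
    languageId: 'css',
    codyCompletionDisableLowPerfLangDelay: false,
})
const flagged = getArtificialDelaySketch({
    languageId: 'css',
    codyCompletionDisableLowPerfLangDelay: true,
})
```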

NOTE:

  1. The logic of subscribing to a feature flag plus the GraphQL client is also tested here (but that's a separate thing). Since that logic is already tested, I don't see a reason to add an extra E2E test for what are two separate concerns.
  2. I also tested this logic locally: Quinn recently merged PR "make some services more reactive to config" #5550, so whenever the client config changes, the feature flags are "reloaded", and I could verify the subscription logic. The nuance here is that changing a feature flag on the server doesn't automatically update it on the client; you either reload the client or change the client's config settings for the feature flags to be reloaded (Quinn explained that to me and I tried it).

valerybugakov (Member) commented:

This is great!


‼️ Hey @sourcegraph/cody-security, please review this PR carefully as it introduces the usage of an unsafe_ function or abuses PromptString.

@arafatkatze arafatkatze force-pushed the artificial-latency-fix branch 2 times, most recently from 35730ba to 9e29f3b Compare September 13, 2024 19:34
arafatkatze (Contributor, Author) commented:

@valerybugakov Apologies, I was still in the middle of fixing the PR when you saw the new changes; it wasn't in the right state. (I'll try to be more mindful about communicating when a PR isn't ready for review.)

@arafatkatze arafatkatze force-pushed the artificial-latency-fix branch 5 times, most recently from 7bf82ef to 760ff67 Compare September 16, 2024 01:54
arafatkatze (Contributor, Author) commented:

@valerybugakov I updated the PR again with all the recommendations you suggested, and now it's a lot cleaner.

With the latest changes from Quinn, I now have the subscription/update logic for feature flags as well (see this comment).

As for testing, I have also incorporated the changes you recommended, so the tests are easy to understand too.

Thanks again for such a thorough review, I would like another one now.

lib/shared/src/experimentation/FeatureFlagProvider.ts (review thread: outdated, resolved)
vscode/src/completions/artificial-delay.test.ts (review thread: outdated, resolved)
vscode/src/completions/inline-completion-item-provider.ts (review thread: outdated, resolved)
Comment on lines 184 to 206:

```typescript
// Fetch the initial value for disableLowPerfLangDelay and set it
firstValueFrom(completionProviderConfig.completionDisableLowPerfLangDelay).then(
    disableLowPerfLangDelay => {
        this.disableLowPerfLangDelay = disableLowPerfLangDelay
    }
)

// Subscribe to changes in disableLowPerfLangDelay and update the value accordingly
this.disposables.push(
    subscriptionDisposable(
        completionProviderConfig.completionDisableLowPerfLangDelay.subscribe(
            disableLowPerfLangDelay => {
                this.disableLowPerfLangDelay = disableLowPerfLangDelay
            }
        )
    )
)
```
valerybugakov (Member) commented:

Do we need both of these? There's an example of using CodyAutocompleteTracing in the InlineCompletionItemProvider constructor, which relies only on one call sequence to initialize and keep the local value up to date.

arafatkatze (Contributor, Author) commented Sep 17, 2024:

Technically, both of these are NOT needed. I wanted to keep the logical consistency of "initializing" something and then subscribing to its value, but since it's redundant, I don't see a reason to keep it for anything other than readability.
So I removed it.

vscode/src/completions/artificial-delay.test.ts (review thread: outdated, resolved)
vscode/src/completions/artificial-delay.test.ts (review thread: outdated, resolved)
@arafatkatze arafatkatze force-pushed the artificial-latency-fix branch 4 times, most recently from 145bfb1 to 3590e29 Compare September 17, 2024 04:02
@valerybugakov valerybugakov left a comment


Thank you for addressing all the comments! 💜

Comment on lines 195 to 259:

```typescript
user: false,
user: codyAutocompleteDisableLowPerfLangDelay,
```
valerybugakov (Member) commented:

We need to update the variable name here because it doesn't match the underlying feature flag.

arafatkatze (Contributor, Author) replied:

Done

@arafatkatze arafatkatze enabled auto-merge (squash) September 17, 2024 04:07
@arafatkatze arafatkatze merged commit 03fcec8 into main Sep 17, 2024
18 of 19 checks passed
@arafatkatze arafatkatze deleted the artificial-latency-fix branch September 17, 2024 04:12
arafatkatze added a commit that referenced this pull request Oct 1, 2024
Solves [Linear Issue](https://linear.app/sourcegraph/issue/CODY-3759/remove-codyautocompleteuserlatency-feature-flag)

## History:
While working on the removal of artificial latency [behind a feature flag](#5480), Valery noted that the FeatureFlag.CodyAutocompleteUserLatency feature flag and all its related logic are no longer used, and that the team plans [no further experiments with it](#5480 (comment)).
So I removed the flag and a lot of related logic that had made this function and its tests very intricate.


### This PR does 
- Remove user-based latency adjustments and the related feature flag
- Simplify the getArtificialDelay function to only consider language and completion intent
- Update lowPerformanceConfig to include both language IDs and completion intents
- Remove the resetArtificialDelay function and related calls (this function would reset user-based latency when completions were rejected; it is no longer relevant after the removal of the feature flag)
- Update tests to reflect the new, simplified logic
- Remove the unused LatencyFeatureFlags interface and related parameters
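The simplified shape these bullets describe could look roughly like the sketch below. The language IDs, completion intents, and the delay value are illustrative placeholders, not the exact contents of the repo's lowPerformanceConfig:

```typescript
// Sketch: after the cleanup, the delay depends only on language and
// completion intent — no user-based state, no feature flag.
type CompletionIntent = string

const lowPerformanceConfig = {
    languageIds: new Set(['css', 'markdown', 'plaintext']),
    completionIntents: new Set(['comment', 'import.statement']),
}

function getArtificialDelaySimplified(
    languageId: string,
    completionIntent?: CompletionIntent
): number {
    const isLowPerf =
        lowPerformanceConfig.languageIds.has(languageId) ||
        (completionIntent !== undefined &&
            lowPerformanceConfig.completionIntents.has(completionIntent))
    // A pure function of its two inputs, which is what makes the new tests simple.
    return isLowPerf ? 1000 : 0
}
```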



## Test plan
Added new tests and made the logic very clean and self-explanatory.


## Changelog
