Self-reporting error component #17676

Closed

Conversation

hujambo-dunia (Contributor) commented Mar 12, 2024

**WORK IN PROGRESS: not yet code complete; once complete, this should address #17560**

How to test the changes?

(Select all options that apply)

  • I've included appropriate automated tests.
  • This is a refactoring of components with existing test coverage.
  • Instructions for manual testing are as follows:
    1. [add testing steps and prerequisites here if you didn't write automated tests covering all your changes]

License

  • I agree to license these and all my past contributions to the core galaxy codebase under the MIT license.

Code context:

        }
        );
        } else {
        sendErrorReport(dataset, message, email, this.transcript).then(
jmchilton (Member):

I'm confused by this: didn't you just dispatch on `this.transcript` in the `if` above?

hujambo-dunia (Contributor, author):

Agreed. I didn't update the structure after earlier testing, so I can condense this in the next update.

Code context:

        import { getAppRoot } from "onload/loadConfig";
        import { rethrowSimple } from "utils/simple-error";

        export async function sendErrorReportTool(dataset, message, email, transcript) {
jmchilton (Member):

We should be using the TypeScript fetcher machinery for this, not axios. Want me to take a pass at adjusting that for you? Do you have a setup for testing this?

hujambo-dunia (Contributor, author):

Sure, it would be great to see an example (or even a scaffold) of what you had in mind with the "fetcher" amongst the existing code. Let me first push another update of the code so you have something more functional (will post back here when ready).

hujambo-dunia (Contributor, author):

@jmchilton Code updated and ready for your suggestions / adjustments.

jmchilton (Member):

I've made a modification to this file with #17969, which updates the older existing axios call to use a TypeScript fetcher. I think you'll need to rebuild your schema for your new API with `make update-client-api-schema` from Galaxy's root, then create a typed fetcher for your new API path; it will be something like `const errorReporter = fetcher.path("/api/user-reporting/error").method("post").create();`. If everything is set up, your IDE should be able to do typed checks of how you call `errorReporter` that it cannot do with axios, ensuring the API is only supplied with valid parameters.

Code context:

        transcript: Optional[str] = Field(
            default=None,
            title="Transcript",
            description="The optional tool transcript sent with the error report.",
jmchilton (Member):

I don't know what you mean by transcript here: is this language you have gotten from somewhere, or are you adding it?

hujambo-dunia (Contributor, author):

It's just the existing wording on usegalaxy.org:

[screenshot: Greenshot 2024-04-09 17 02 17]

jmchilton (Member):

I see. In that case it feels wrong to call it a tool transcript, because no tool has been executed; a "tool transcript" sounds like stdout or something to me. "Tool submission transcript" might make more sense, but that language is less than ideal IMO. I would just say "API request content" or "API request".

Noodling on it... it is a transcript, but only in the way everything on the screen is a transcript of something. If you're designing a component to report tool submission failures (the very specific thing in the screenshot), I would call it "Job Submission Request" in email and such, and probably just `job_request` in the FastAPI fields. If you're instead interested in building a more general component for reporting all sorts of API failures (this would be my preference for the direction of the work, but it doesn't seem to be the way the backend has been structured), I would call this "API Request" and call it `request` in the FastAPI fields.

hujambo-dunia (Contributor, author):

@jmchilton Agreed - "API Request" is a better naming convention.
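
For illustration, the agreed-upon naming might look like this in the FastAPI request model. This is a minimal sketch under assumptions: the model name `ReportErrorPayload` and the sibling fields are invented here, not taken from the PR's actual code.

```python
from typing import Optional

from pydantic import BaseModel, Field


class ReportErrorPayload(BaseModel):
    """Hypothetical request body for the error-reporting endpoint (a sketch)."""

    message: str = Field(
        title="Message",
        description="The error message supplied by the user.",
    )
    email: Optional[str] = Field(
        default=None,
        title="Email",
        description="An optional contact address for follow-up.",
    )
    request: Optional[str] = Field(
        default=None,
        title="API Request",
        description="The optional API request content sent with the error report.",
    )
```

With FastAPI, a body declared with this model would reject submissions missing `message` while leaving `email` and `request` optional.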

Code context:

        self.app = trans.app
        user = trans.get_user()
        self.tool = trans.app.toolbox.get_tool(transcriptJson["tool_id"], tool_version=transcriptJson["tool_version"])
        messages = trans.app.error_reports.default_error_plugin.submit_report(
jmchilton (Member):

It looks like `trans.app.error_reports` has a `submit_report` that will find all the configured plugins that are `user_submission`-enabled and send the report to each. I think that should be the preferred behavior over using only the default plugin here.
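
The difference between the two dispatch styles can be sketched as follows. The classes here are minimal stand-ins written for illustration, not Galaxy's actual implementations:

```python
class StubPlugin:
    """Minimal stand-in for a Galaxy error-report plugin (illustrative only)."""

    def __init__(self, name, user_submission):
        self.name = name
        self.user_submission = user_submission
        self.verbose = True

    def submit_report(self, dataset, job, tool, **kwargs):
        return f"{self.name}: report submitted"


class ErrorReports:
    """Sketch of the registry: submit_report fans out to every enabled plugin."""

    def __init__(self, plugins):
        self.plugins = plugins
        self.default_error_plugin = plugins[0]

    def submit_report(self, dataset, job, tool, user_submission=True, **kwargs):
        responses = []
        for plugin in self.plugins:
            # Only plugins configured for user submission receive the report.
            if user_submission == plugin.user_submission:
                response = plugin.submit_report(dataset, job, tool, **kwargs)
                if plugin.verbose and response:
                    responses.append(response)
        return responses


reports = ErrorReports([
    StubPlugin("email", user_submission=True),
    StubPlugin("sentry", user_submission=True),
    StubPlugin("json_log", user_submission=False),
])

# Going through the registry reaches both user-submission plugins,
# not just the default (first) one.
print(reports.submit_report(dataset=None, job=None, tool=None))
# → ['email: report submitted', 'sentry: report submitted']
```

Calling `reports.default_error_plugin.submit_report(...)` instead would reach only the first plugin, which is the behavior the comment suggests moving away from.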

hujambo-dunia (Contributor, author):

Thanks for the suggestion @jmchilton! I'll be sure to explore this further.

Code context:

        <pre style="white-space: pre-wrap;background: #eeeeee;border:1px solid black;padding:1em;">
        ${tool_transcript}
        </pre>

jmchilton (Member):

The report-type idea is good, I think, but we should push it down a level. These templates are going to have very different things filled in depending on the report type, right? It would simplify what we present to user support and to users to have a different template for each report type. That way we know we're only displaying information useful for the relevant report type, e.g. not displaying job details if no job was created.
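
One way to push the templates down a level is a simple lookup keyed by report type. This is a sketch; the type names and template paths below are illustrative and do not exist in Galaxy:

```python
# Hypothetical mapping from report type to a dedicated template, so each
# report only renders the fields relevant to it (all paths are invented).
REPORT_TEMPLATES = {
    "tool_error": "error_reports/tool_error.mako",      # includes job details
    "dataset_error": "error_reports/dataset_error.mako",
    "api_error": "error_reports/api_error.mako",        # no job section at all
}


def template_for(report_type):
    """Resolve the template for a report type, failing loudly on unknown types."""
    try:
        return REPORT_TEMPLATES[report_type]
    except KeyError:
        raise ValueError(f"No error-report template registered for {report_type!r}")
```

Failing loudly on an unknown type keeps a misrouted report from silently rendering the wrong fields.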

jmchilton (Member):

Actually, by not using the default plugin as I mentioned before, and instead calling submit_report on the ErrorReports object, we can also decorate each plugin with the report types it knows how to respond to; then, inside that loop, we can use the report_type information to submit errors only to the relevant plugins.

So this code:

        responses = []
        for plugin in self.plugins:
            if user_submission == plugin.user_submission:
                try:
                    response = plugin.submit_report(dataset, job, tool, **kwargs)
                    log.debug("Bug report plugin %s generated response %s", plugin, response)
                    if plugin.verbose and response:
                        responses.append(response)
                except Exception:
                    log.exception("Failed to generate submit_report commands for plugin %s", plugin)
        return responses

can become something like:

        responses = []
        for plugin in self.plugins:
            if user_submission == plugin.user_submission and kwargs["report_type"] in plugin.report_types:
                try:
                    response = plugin.submit_report(dataset, job, tool, **kwargs)
                    log.debug("Bug report plugin %s generated response %s", plugin, response)
                    if plugin.verbose and response:
                        responses.append(response)
                except Exception:
                    log.exception("Failed to generate submit_report commands for plugin %s", plugin)
        return responses
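
Declaring the report types on each plugin might look like the following. This is a sketch: `report_types` is the proposed attribute from the comment above, not an existing Galaxy API, and the plugin classes are invented for illustration.

```python
class EmailPlugin:
    user_submission = True
    verbose = True
    report_types = {"tool_error", "api_error"}  # proposed attribute

    def submit_report(self, dataset, job, tool, **kwargs):
        return "email sent"


class SentryPlugin:
    user_submission = True
    verbose = True
    report_types = {"tool_error"}  # opted out of API-error reports

    def submit_report(self, dataset, job, tool, **kwargs):
        return "sentry event created"


def submit_report(plugins, user_submission, **kwargs):
    """The filtered dispatch loop, as runnable code."""
    responses = []
    for plugin in plugins:
        # Skip plugins that did not opt into this report type.
        if user_submission == plugin.user_submission and kwargs["report_type"] in plugin.report_types:
            response = plugin.submit_report(None, None, None, **kwargs)
            if plugin.verbose and response:
                responses.append(response)
    return responses


# An api_error reaches only the email plugin; sentry opted out of that type.
print(submit_report([EmailPlugin(), SentryPlugin()], True, report_type="api_error"))
# → ['email sent']
```

A `tool_error` submitted the same way would reach both plugins, since both list it in `report_types`.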

hujambo-dunia (Contributor, author):

Your insight is helpful; this particular legacy code block was a sticking point.

jmchilton (Member):

I think this is trying to do too much at once. I think we should build up the framework for dispatching on multiple different kinds of report_types in a branch that is just focused on that and built on top of #17953 so we have test cases to ensure everything works.

I would also separate the refactoring of the Vue components from the new functionality, as its own PR: for instance, one PR where we add your client/src/components/Common/UserReportingError.vue component with none of the functionality changed, and then a separate PR where you use the now-tested API (integrated separately) and use your component in a new scope with a new report type.
