Prompt::fake() incompatible with PendingCommand created by $this->artisan()? #158

Closed
jesseleite opened this issue Aug 4, 2024 · 4 comments · Fixed by laravel/framework#52408

jesseleite commented Aug 4, 2024

Laravel Prompts Version

0.1.24

Laravel Version

11.19.0

PHP Version

8.3.7

Operating System & Version

macOS 14.2

Terminal Application

WezTerm

Description

While working on some tests for Statamic, I did a code dive and found your Prompt::fake() helper, which is genius! 🔥

However, I've noticed an issue where tests containing Prompt::fake() pass when run in isolation...

[Screenshot: the test passing when run in isolation]

But they fail when the whole test case and/or suite is run...

[Screenshot: the same test failing when run as part of the full suite]

We're hoping to use Prompt::fake() in our Statamic core test suite, but I can't get it to play nicely with other tests where we use $this->artisan().

It seems that PendingCommand is leaking mock expectations into other tests that use Prompt::fake(). I'm not sure how to solve this, or whether there's a way to improve Prompt::fake() for packages that wish to use it? 🤔

I've created a repo with an example test case to make it easier to reproduce, if that helps!

Any thoughts? 🙏

Steps To Reproduce

jesseleite (Author)

A coworker and I found the issue, which in hindsight I guess makes sense given that stack trace....

When $this->artisan() is run in a test, the ConfiguresPrompts trait sets fallbacks on all the different prompt types, storing them in a static property on the Prompt class. Because the property is static, those fallbacks leak between tests at runtime.
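For context, here's a simplified sketch of the kind of registration ConfiguresPrompts performs (not the exact framework code; $app and $command are just stand-ins for the application and the running command). The point is that both the flag and the fallback closures live in static state on the prompt classes for the rest of the PHP process:

use Laravel\Prompts\Prompt;
use Laravel\Prompts\TextPrompt;

// Flip the static "should fallback" flag whenever the app is running unit tests.
Prompt::fallbackWhen($app->runningUnitTests());

// Register a fallback implementation per prompt class, also stored statically.
TextPrompt::fallbackUsing(function (TextPrompt $prompt) use ($command) {
    return $command->ask($prompt->label, $prompt->default ?: null) ?? '';
});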

As a hacky workaround, we were able to use late static binding to reset the $shouldFallback boolean, which stops Prompt::fake() from trying to use those fallbacks.
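To illustrate, something along these lines in the test case's setUp() seems to do the trick. This is only a rough sketch, not the exact code from our PR, and it assumes the flag lives on the base Prompt class; the reflection call is just one way to reset it regardless of the property's visibility:

use Laravel\Prompts\Prompt;

protected function setUp(): void
{
    parent::setUp();

    // Clear the static flag that ConfiguresPrompts set during a previous
    // $this->artisan() call, so Prompt::fake() doesn't route through fallbacks.
    (new \ReflectionProperty(Prompt::class, 'shouldFallback'))->setValue(null, false);
}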

I'm open to creating a PR for laravel/prompts here, but I'm not sure of the best approach... For example, we could add a Prompt::resetFallbacks() method for people to call in their own test suites? Or maybe we could prevent Prompt::fake() from ever trying to use fallbacks altogether?

Curious on your thoughts?


jesseleite commented Aug 6, 2024

Hey @driesvints! Re: that "needs more info" label, is there something specific you're looking for?

You can see the actual PR where we're trying to use Prompt::fake() mentioned above, but I've distilled what we're doing into a more grokkable gist here.

Happy to help with a PR, just curious on @jessarcher's thoughts before I dig any further (this is NOT a rush by any means though!) ❤️

jessarcher (Member)

Hey @jesseleite!

Prompt::fake() never reached a point where I felt it was good enough for folks to use in their own test suites. It's essentially a low-level internal testing helper.

From a consuming app's POV, I don't know how useful it is to queue up fake keypresses and assert against rendered output. Generally, you just want to test the logic of your own command when a user chooses a specific response without worrying about which keys they pressed to get there.

At the moment, the recommended way to test a command that uses Prompts is to use Laravel's built-in test helpers. E.g.

// TestCommand.php

use function Laravel\Prompts\text;

public function handle()
{
    $result = text('Enter some text');

    $this->info($result);
}
// TestCommandTest.php

$this->artisan('statamic:test-command')
    ->expectsQuestion('Enter some text', 'Test command successful!')
    ->expectsOutput('Test command successful!')
    ->assertExitCode(0);

I've wanted to build some better userland test helpers into Prompts for a while, likely following a similar API to the above, where you're really just testing that your command prompted the user and that it handles the response.

I'm open to ideas, though! And if you still feel like Prompt::fake() is useful outside of internal testing, then I'm open to making it easier to work with.


jesseleite commented Aug 7, 2024

Yeah @jessarcher, I think you are absolutely right... Maybe I should be worrying about the expected questions/answers, rather than the exact keys they've pressed via Prompt::fake(). I appreciate your guidance here ❤️
