
Tests succeed locally and in Xcode on the CI runner, but fail when using fastlane run_tests #810

Open
poomwtd opened this issue Nov 15, 2023 · 5 comments


poomwtd commented Nov 15, 2023

I set the test precision to 95% and tried to run the tests in three ways:

  • Xcode locally: success
  • Xcode on the CI runner: success
  • Terminal on the CI runner using fastlane run_tests: failed with "snapshot does not match reference"

How can I solve this?
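
For reference, the assertion looks roughly like this (a minimal sketch assuming the precision is passed through swift-snapshot-testing's .image strategy; the view and test names here are placeholders):

```swift
import SnapshotTesting
import UIKit
import XCTest

final class MySnapshotTests: XCTestCase {
    func testMyView() {
        // Placeholder view; in the real test this is the view under test.
        let view = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
        view.backgroundColor = .systemBlue

        // precision: fraction of pixels that must match exactly.
        // perceptualPrecision: per-pixel perceptual similarity threshold,
        // evaluated via CoreImage/Metal on the simulator.
        assertSnapshot(
            matching: view,
            as: .image(precision: 0.95, perceptualPrecision: 0.95)
        )
    }
}
```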

@adozenlines
Copy link

If your local machine is Apple Silicon (arm64) and the CI runner is Intel, this is a known issue caused by the architecture difference.


TomaszLizer commented Dec 12, 2023

@adozenlines The author specifically mentioned that tests are run on CI both through Xcode and through fastlane, so there should be no architecture difference between those two runs (unless it is, e.g., an M1 CI machine with Xcode running through Rosetta).

@goodboygb1 Have you tried running fastlane locally too?

TL;DR: Try running the tests on a different simulator, or reset the simulator.

Full Story:
I am experiencing a similar issue.
Running a test that should fail (changing the compared view to something completely different, or removing test data so the view has definitely changed) ends up as a success in Xcode, while it fails as expected when run through fastlane.
This is on an M2 Max MBP 14 with macOS 14.0, Xcode 15.0.1 and swift-snapshot-testing 1.15.1.
I downgraded back to 1.15.0, but the results are the same.

I started debugging and it seems there is some kind of issue in the perceptuallyCompare function (UIImage.swift:193), more precisely in ThresholdImageProcessorKernel (UIImage.swift:245).
It seems the process function was never called, so the generated thresholdOutputImage was just an empty image.
The strange thing is that no error was thrown and nothing else happened.
At the same time, the delta image was showing obvious differences.

In the end it turned out to be some simulator (Metal?) issue. I changed the simulator from iPhone 15 Pro (iOS 17.0.1) to iPhone 15 (iOS 17.0.1) and it worked (fastlane was using iPhone 15).

[Attached images: deltaOutputImage, thresholdOutputImage]

Maybe it is possible to add some additional safeguard check, e.g. around

guard actualPixelPrecision < pixelPrecision else { return nil }

Could we add additional validation that checks whether deltaOutputImage is an empty image?
Also, why do we compute the actual pixel precision from thresholdOutputImage, which is generated with perceptualPrecision?
It does not seem right to me, but maybe I am missing something? 😬
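
Something along these lines (just a sketch, not the actual library code; the helper below is hypothetical and only illustrates the kind of check I mean):

```swift
import CoreImage

/// Hypothetical helper, not part of swift-snapshot-testing: returns true when
/// every pixel of the image is zero, i.e. the image is "empty".
func isEmptyImage(_ image: CIImage, context: CIContext = CIContext()) -> Bool {
    // CIAreaMaximum reduces the whole extent to a single pixel that holds the
    // per-channel maximum value found in the image.
    guard
        let filter = CIFilter(name: "CIAreaMaximum", parameters: [
            kCIInputImageKey: image,
            kCIInputExtentKey: CIVector(cgRect: image.extent),
        ]),
        let output = filter.outputImage
    else { return false }

    var pixel = [UInt8](repeating: 0, count: 4)
    context.render(
        output,
        toBitmap: &pixel,
        rowBytes: 4,
        bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
        format: .RGBA8,
        colorSpace: nil
    )
    // All channels zero means no pixel in the source image was non-zero.
    return pixel.allSatisfy { $0 == 0 }
}
```

With a check like this, perceptuallyCompare could fail explicitly when thresholdOutputImage comes back empty while deltaOutputImage does not, instead of silently passing.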

FYI: @stephencelis @mbrandonw
I think this is a critical issue, as it can cause false positives. We may need to investigate it deeper. 🤔
I can try to have a look later this week if no one gets there before me - I can also test it on my "broken" simulator.


poomwtd commented Jan 9, 2024

If your local machine is Apple Silicon (arm64) and the CI runner is Intel, this is a known issue caused by the architecture difference.

The local machine and the CI runner are both Apple Silicon.


poomwtd commented Jan 9, 2024

TL;DR: Try running the tests on a different simulator, or reset the simulator.

I already reset and changed the simulator, but I noticed that the result image from my local machine and the one from the CI runner are not the same: the resolution and image size are different, so I think this is the root cause. I also ran the tests with fastlane locally and they succeeded, but they failed with fastlane on the runner.

@TomaszLizer

I already reset and changed the simulator, but I noticed that the result image from my local machine and the one from the CI runner are not the same: the resolution and image size are different, so I think this is the root cause. I also ran the tests with fastlane locally and they succeeded, but they failed with fastlane on the runner.

@goodboygb1
Different image resolution indicates that you may be running a different simulator locally than in CI. This can occur when the tests are run on a 2x device in one place and a 3x device in the other. I would check that first.
In order to have reliable snapshot tests you need to strictly control the environment:

  • Xcode version (and hence the iOS base SDK the app is built against)
  • iOS version of the simulator (different simulator OS versions render the same content differently)
  • iPhone model (different screen resolutions can cause issues, see 2x/3x Retina screens)
  • Host platform - I believe in the past there were issues when tests were run across Intel and Apple Silicon architectures

In my personal experience, keeping the simulator model and OS version the same has the biggest impact. One way to guard this from the test side is sketched below.
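
A minimal sketch of such a test-side guard (the device name, OS version, and scale below are example values, not requirements; adjust them to whatever your project pins):

```swift
import UIKit
import XCTest

/// Sketch only: fail fast when snapshot tests run on an unexpected simulator,
/// so local and CI runs cannot silently diverge.
func assertSnapshotEnvironment(file: StaticString = #filePath, line: UInt = #line) {
    #if targetEnvironment(simulator)
    XCTAssertEqual(UIDevice.current.name, "iPhone 15",
                   "Snapshot tests must run on the pinned simulator model",
                   file: file, line: line)
    XCTAssertEqual(UIDevice.current.systemVersion, "17.0.1",
                   "Snapshot tests must run on the pinned iOS version",
                   file: file, line: line)
    XCTAssertEqual(UIScreen.main.scale, 3,
                   "Unexpected screen scale (2x vs 3x changes snapshot resolution)",
                   file: file, line: line)
    #endif
}
```

Calling this from setUp() (or at the start of each snapshot test) turns an environment mismatch into an explicit assertion failure instead of a confusing snapshot diff.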
