NoMethodError: undefined method rails for nil #2386

Closed
MrSerth opened this issue Aug 22, 2024 · 4 comments · Fixed by #2428

Comments

@MrSerth
Contributor

MrSerth commented Aug 22, 2024

Issue Description

In my server logs (but not in Sentry), I found another NoMethodError: This time, rails was called on nil.

Stack Trace of the Issue

NoMethodError (undefined method `rails' for nil):

sentry-rails (5.18.2) lib/sentry/rails/rescued_exception_interceptor.rb:20:in `report_rescued_exceptions?'
sentry-rails (5.18.2) lib/sentry/rails/rescued_exception_interceptor.rb:14:in `rescue in call'
sentry-rails (5.18.2) lib/sentry/rails/rescued_exception_interceptor.rb:11:in `call'
actionpack (7.1.3.4) lib/action_dispatch/middleware/debug_exceptions.rb:29:in `call'
sentry-ruby (5.18.2) lib/sentry/rack/capture_exceptions.rb:30:in `block (2 levels) in call'
sentry-ruby (5.18.2) lib/sentry/hub.rb:258:in `with_session_tracking'
sentry-ruby (5.18.2) lib/sentry-ruby.rb:404:in `with_session_tracking'
sentry-ruby (5.18.2) lib/sentry/rack/capture_exceptions.rb:21:in `block in call'
sentry-ruby (5.18.2) lib/sentry/hub.rb:59:in `with_scope'
sentry-ruby (5.18.2) lib/sentry-ruby.rb:384:in `with_scope'
sentry-ruby (5.18.2) lib/sentry/rack/capture_exceptions.rb:20:in `call'
actionpack (7.1.3.4) lib/action_dispatch/middleware/show_exceptions.rb:31:in `call'
railties (7.1.3.4) lib/rails/rack/logger.rb:37:in `call_app'
railties (7.1.3.4) lib/rails/rack/logger.rb:24:in `block in call'
activesupport (7.1.3.4) lib/active_support/tagged_logging.rb:135:in `block in tagged'
activesupport (7.1.3.4) lib/active_support/tagged_logging.rb:39:in `tagged'
activesupport (7.1.3.4) lib/active_support/tagged_logging.rb:135:in `tagged'
activesupport (7.1.3.4) lib/active_support/broadcast_logger.rb:240:in `method_missing'
railties (7.1.3.4) lib/rails/rack/logger.rb:24:in `call'
actionpack (7.1.3.4) lib/action_dispatch/middleware/remote_ip.rb:92:in `call'
actionpack (7.1.3.4) lib/action_dispatch/middleware/request_id.rb:28:in `call'
rack (3.1.7) lib/rack/method_override.rb:28:in `call'
rack (3.1.7) lib/rack/runtime.rb:24:in `call'
actionpack (7.1.3.4) lib/action_dispatch/middleware/executor.rb:14:in `call'
rack (3.1.7) lib/rack/sendfile.rb:114:in `call'
actionpack (7.1.3.4) lib/action_dispatch/middleware/ssl.rb:79:in `call'
actionpack (7.1.3.4) lib/action_dispatch/middleware/assume_ssl.rb:21:in `call'
lib/middleware/web_socket_sentry_headers.rb:12:in `call'
mnemosyne-ruby (2.0.0) lib/mnemosyne/middleware/rack.rb:78:in `call'
telegraf (3.0.0) lib/telegraf/rack.rb:67:in `call'
railties (7.1.3.4) lib/rails/engine.rb:536:in `call'
rack (3.1.7) lib/rack/urlmap.rb:76:in `block in call'
rack (3.1.7) lib/rack/urlmap.rb:60:in `each'
rack (3.1.7) lib/rack/urlmap.rb:60:in `call'
puma (6.4.2) lib/puma/configuration.rb:272:in `call'
puma (6.4.2) lib/puma/request.rb:100:in `block in handle_request'
puma (6.4.2) lib/puma/thread_pool.rb:378:in `with_force_shutdown'
puma (6.4.2) lib/puma/request.rb:99:in `handle_request'
puma (6.4.2) lib/puma/server.rb:464:in `process_client'
puma (6.4.2) lib/puma/server.rb:245:in `block in run'
puma (6.4.2) lib/puma/thread_pool.rb:155:in `block in spawn_thread'
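
For context: the error message implies that the receiver of the rails call is nil, which presumably means Sentry.configuration evaluated to nil at that point. A hypothetical reconstruction of the predicate named in the top frame (the actual source in sentry-rails 5.18.2 may differ):

```ruby
# Hypothetical reconstruction, not the verbatim sentry-rails source.
# The NoMethodError indicates that the receiver of `.rails` is nil here,
# i.e. `Sentry.configuration` apparently returned nil.
def report_rescued_exceptions?
  Sentry.configuration.rails.report_rescued_exceptions
end
```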

Reproduction Steps

Unfortunately, I don't have clear reproduction steps. I only know that the error occurred during this trace and that it doesn't occur regularly.

The issue occurred as part of a WebSocket connection in a Rack hijack block.

Expected Behavior

No error is generated in Sentry, and rails can be called successfully.

Actual Behavior

A NoMethodError is raised; see the stack trace above for details.

Ruby Version

3.3.4

SDK Version

5.18.2

Integration and Its Version

Rails 7.1.3.4

Sentry Config

No response

@sl0thentr0py
Member

Hmm, interesting, because we're already checking Sentry.initialized, so this shouldn't happen.

The issue occurred as part of a WebSocket connection in a Rack hijack block.

Could you point to some code on your side or in a gem showing how this is set up? Maybe that will help me repro.

@MrSerth
Contributor Author

MrSerth commented Aug 28, 2024

I wish I had a simple answer, but unfortunately I don't. I'll try to provide some pointers and answer your questions as well as possible.

The app in which I observed the issue is CodeOcean, which is fully open source. Within the app, we allow learners to write source code and execute it remotely (in a secure sandbox). The exception I reported here occurred during the execution of learners' code, specifically while running the SubmissionsController#run action.

During the run action, we proxy output received from another backend service to the learner's browser. Therefore, we keep two WebSocket connections: one to the learner's browser and one to the backend service.

For the learner, we set up the WebSocket connection using the tubesock gem with the gem's hijack method. This method will, in turn, use Rack hijack and handle the connection for us.
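
A simplified sketch of that setup (based on tubesock's documented usage, not the exact CodeOcean code):

```ruby
# Simplified sketch of a tubesock-based controller action; based on tubesock's
# documented usage, not the exact CodeOcean code.
class SubmissionsController < ApplicationController
  include Tubesock::Hijack

  def run
    hijack do |tubesock|
      tubesock.onopen do
        tubesock.send_data "connection established"
      end

      tubesock.onmessage do |data|
        # Forward the learner's input to the backend service here.
        tubesock.send_data "received: #{data}"
      end
    end
  end
end
```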

For the backend service, we use the faye-websocket gem as a WebSocket client. This gem makes heavy use of eventmachine, which might run code outside the regular main thread and request workflow (which would normally provide access to thread variables, such as the :sentry_hub information you store with the SDK). We start the WebSocket connection to the backend service here.
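
To illustrate the thread-variable point, here is a minimal sketch (assuming the hub lives in a per-thread variable named :sentry_hub, which is my reading of the SDK): a block scheduled on the EventMachine reactor thread does not see the thread variables of the request thread.

```ruby
require "eventmachine"

# Minimal sketch (not from our app): thread variables are per-thread, so a block
# that EventMachine runs on its reactor thread cannot see the request thread's
# :sentry_hub value.
Thread.current.thread_variable_set(:sentry_hub, "hub of the request thread")

reactor = Thread.new { EM.run }
Thread.pass until EM.reactor_running?

EM.next_tick do
  # Executed on the reactor thread, not on the thread that set :sentry_hub.
  puts Thread.current.thread_variable_get(:sentry_hub).inspect # => nil
  EM.stop
end

reactor.join
```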

Now, in this complex setup with two WebSocket connections, errors might occur. In most cases, the SDK is able to capture them and report them properly, allowing us to investigate. Still, there are occurrences where this is not the case, as reported here.


While investigating the current issue, I found another indication that the error in this issue might be related to #2378. Do you see a possible "combination", where one issue is triggered first and the other is just a consequence?

To be very clear: the stack trace of this issue was contained in the server's log. However, another error tracing tool that we still have for historical reasons shows the same stack trace that I included as a screenshot in #2378. To me, it currently looks like both exceptions occurred during the same request. So far, I don't have any indication that a non-library error caused the issue.

@sl0thentr0py
Member

which might run code outside the regular main thread and request workflow (which would normally provide access to thread variables, such as the :sentry_hub information)

This is very likely the source of the problem, but I don't really know how to repro. Either way, I guess we can just add extra guards; not ideal, but it should hopefully fix the problem.
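
Something along these lines, as a sketch (the eventual change in #2428 may well look different, and the fallback value here is a guess):

```ruby
# Sketch of an extra guard; not necessarily what the actual fix implements.
# Falling back to `true` when the configuration is unavailable is an assumption.
def report_rescued_exceptions?
  config = Sentry.configuration

  # `Sentry.configuration` (or its `rails` section) can apparently be nil when
  # this middleware is reached from a context without a fully set-up hub.
  return true unless config&.rails

  config.rails.report_rescued_exceptions
end
```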

@MrSerth
Contributor Author

MrSerth commented Aug 28, 2024

Sounds good, thanks! I can also add some further log statements before the respective line in case you are interested in specific variables, settings, etc.
