fix: multiaddress not constructed on startup (#524)
* Refactor network address handling and improve IP retrieval

  Restructure how multiaddresses and IP addresses are handled within the codebase to improve maintainability and efficiency. Add logic for obtaining GCP external IP addresses and restructure public address retrieval. Move HTTP client functionality to a new package and update references accordingly.

* Update max remote workers

* Improve worker selection resilience

  - Handle invalid multiaddresses gracefully
  - Continue searching for eligible workers on errors
  - Add more detailed logging for debugging
  - Prevent a potential nil pointer dereference
  - Log a warning if no workers are found

* Remove IsActive and timeout checks from the CanDoWork method

  Simplify the worker eligibility criteria in the CanDoWork method of the NodeData struct:

  - Remove the check for node active status (IsActive)
  - Remove the worker timeout check
  - Retain the check for staked status (IsStaked)

  The method now considers only whether a node is staked and configured for the specific worker type when determining eligibility. This allows for more inclusive worker participation, as nodes are no longer excluded based on active status or timeout conditions.

* Extend the context deadline timeout for a connection attempt

* Add `MergeMultiaddresses` method and update node management

  Introduce a `MergeMultiaddresses` method on `NodeData` to handle multiaddress management more efficiently. Update the oracle node logic to merge incoming multiaddresses instead of replacing them, and add a new `NewWorker` function to initialize workers, improving logging and error handling for multiaddress processing.

* Switch to DHT for peer address lookup

  Replace multiaddress-based peer information retrieval with a DHT lookup to simplify the process. Remove unnecessary imports and streamline the code for finding and connecting to peers in the Distributed Hash Table (DHT).
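The `MergeMultiaddresses` and `CanDoWork` changes above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: multiaddresses are plain strings here instead of libp2p `multiaddr` values, and the field names (`IsStaked`, `IsTwitterScraper`) and the variadic signature are assumptions for the example.

```go
package main

import "fmt"

// NodeData is a simplified stand-in for the project's NodeData struct.
type NodeData struct {
	Multiaddrs       []string
	IsStaked         bool
	IsTwitterScraper bool // illustrative worker-type flag
}

// MergeMultiaddresses adds incoming addresses that are not already known,
// merging into the existing set rather than replacing it.
func (n *NodeData) MergeMultiaddresses(addrs ...string) {
	seen := make(map[string]struct{}, len(n.Multiaddrs))
	for _, a := range n.Multiaddrs {
		seen[a] = struct{}{}
	}
	for _, a := range addrs {
		if _, ok := seen[a]; !ok {
			n.Multiaddrs = append(n.Multiaddrs, a)
			seen[a] = struct{}{}
		}
	}
}

// CanDoWork mirrors the simplified eligibility rule: only staked status and
// worker-type configuration matter; IsActive and timeout checks are gone.
func (n *NodeData) CanDoWork(isTwitterWork bool) bool {
	if !n.IsStaked {
		return false
	}
	if isTwitterWork {
		return n.IsTwitterScraper
	}
	return true
}

func main() {
	node := &NodeData{IsStaked: true, IsTwitterScraper: true}
	node.MergeMultiaddresses("/ip4/10.0.0.1/tcp/4001", "/ip4/10.0.0.2/tcp/4001")
	node.MergeMultiaddresses("/ip4/10.0.0.1/tcp/4001") // duplicate, ignored
	fmt.Println(len(node.Multiaddrs), node.CanDoWork(true))
}
```

Merging instead of replacing means a node that reconnects over a new transport keeps its previously advertised addresses rather than losing them.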
* Improve Twitter API rate limit error handling and propagation

  - Modify ScrapeTweetsByQuery to immediately return rate limit errors
  - Update TwitterQueryHandler to properly propagate scraper errors
  - Adjust handleWorkResponse to correctly handle and return errors to the client
  - Ensure rate limit errors are logged and returned with appropriate HTTP status codes
  - Improve error message clarity for better debugging and user feedback

  These changes enhance the system's ability to detect, log, and respond to Twitter API rate limit errors, providing clearer feedback to both developers and end users when such limits are encountered.

* chore: remove unused publishWorkRequest function

  - Delete the publishWorkRequest function from pkg/api/handlers_data.go
  - The function was not used anywhere in the codebase
  - Removing it simplifies the code and reduces maintenance overhead

* refactor: handleWorkResponse with functional programming concepts

  - Decompose handleWorkResponse into smaller, focused functions
  - Introduce a higher-order function for error response handling
  - Separate concerns for improved modularity and testability
  - Reduce mutable state and side effects where possible
  - Maintain idiomatic Go while incorporating functional principles
  - Improve error handling granularity and response structure

---------

Co-authored-by: Bob Stevens <[email protected]>