Kush/node runners #29

Merged: 9 commits merged from kush/node-runners into main on Jul 8, 2024
Conversation

kush-alloralabs (Contributor)

No description provided.

netlify bot commented Jun 30, 2024

Deploy Preview for alloradocs ready!

🔨 Latest commit: 312dfad
🔍 Latest deploy log: https://app.netlify.com/sites/alloradocs/deploys/668bcf4a3d776600083021b6
😎 Deploy Preview: https://deploy-preview-29--alloradocs.netlify.app

@kush-alloralabs marked this pull request as ready for review July 8, 2024 11:36
@kush-alloralabs merged commit ebdb56e into main Jul 8, 2024 (5 checks passed)
@kush-alloralabs deleted the kush/node-runners branch July 8, 2024 11:41
- The Allora Network's Public Head Node sends a request for inferences to the EC2 instance within the AWS environment.
- The request passes through the VPC Internet Gateway and reaches the Inference Base in the public subnet.
- The Inference Base forwards the request to the Node Function.
- The Node Function calls `Main.py` on the Model Server to generate the required inferences.

Contributor review comment: The python file name is lower case: `main.py`
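
For context on the flow quoted above, here is a minimal sketch of how a Node Function might invoke the Model Server's inference script. The lower-case `main.py` name follows the review comment; the command-line interface (a topic ID argument, JSON printed to stdout) is an assumption for illustration, not the repository's actual contract.

```python
import json
import subprocess


def handle_inference_request(topic_id: str) -> dict:
    """Hypothetical Node Function handler: forwards a request received from
    the Inference Base to the Model Server's inference script and returns
    the parsed result. The script name is lower-case main.py (per the review
    comment); its argument and output format are assumptions."""
    completed = subprocess.run(
        ["python3", "main.py", topic_id],
        capture_output=True,
        text=True,
        check=True,
    )
    # Assumed output: a JSON object such as {"value": "1234.56"}
    return json.loads(completed.stdout)


if __name__ == "__main__":
    print(handle_inference_request("1"))
```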

3. **EC2 Instance (Allora Worker Node)**
- **Inference Base**: This component handles network communication, receiving requests from the Allora Network's Public Head Node and sending responses back.
- **Node Function**: Processes requests by interfacing with the private model server. It acts as an intermediary, ensuring the requests are correctly formatted and the responses are appropriately handled.
- **Model Server**: Hosts the proprietary model. It executes the main inference script (`Main.py`) to generate inferences based on the received requests.

Contributor review comment: Same here (the file name is lower case: `main.py`).
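
To make the three components above concrete, here is a rough sketch of an Inference Base endpoint that forwards incoming requests to a Node Function, using only the Python standard library. The `/inference/<topic_id>` route, the port, and the response shape are illustrative assumptions, not the actual Allora worker configuration.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def node_function(topic_id: str) -> dict:
    """Stand-in for the Node Function: in the documented setup this would
    call main.py on the Model Server; here it returns a placeholder value."""
    return {"topic_id": topic_id, "inference": "0.0"}


class InferenceBaseHandler(BaseHTTPRequestHandler):
    """Stand-in for the Inference Base: receives requests arriving from the
    Public Head Node via the VPC Internet Gateway and forwards them to the
    Node Function. The /inference/<topic_id> route is an assumption."""

    def do_GET(self):
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "inference":
            body = json.dumps(node_function(parts[1])).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), InferenceBaseHandler).serve_forever()
```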
