Replies: 2 comments 2 replies
-
I propose to set a perf parameter per server: a simple value taken, for example, from CPU benchmark.net. So for example: master = 16217, slave1 = 10243, slave2 = 31216. Other info (number of cores, RAM) is accessible from the gns3 process. Data storage is irrelevant, since all GNS3 servers should share the same storage (to be able to run any node). The physical network design is also irrelevant, since it should already be in place so nodes can run and exchange data. That is a farm :-)

Another option: ignore the CPU model (meaning Intel vs AMD, or Intel generation X vs generation Y), since no node will use `-cpu host`. If any slave can run any node of the projects controlled by the master, then the master could propose (as an option) to decide where each node will be launched, or respect an enforced `compute_id`. Since console info etc. is provided by the master, there is no issue for the GNS3 client.

Potential issues:
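As a minimal sketch of the idea above: each server gets a single benchmark score set by the administrator, and the controller picks the highest-scoring one. Function and structure names here are hypothetical, not part of the GNS3 API.

```python
# Hypothetical sketch: pick a compute for a new node using one
# benchmark score per server, as proposed above.

def pick_compute(computes):
    """Return the id of the compute with the highest perf score.

    `computes` maps compute_id -> perf score, e.g. a number taken
    from a CPU benchmark site and set once per server.
    """
    if not computes:
        raise ValueError("no computes registered")
    return max(computes, key=computes.get)

# Example with the scores from the comment above:
farm = {"master": 16217, "slave1": 10243, "slave2": 31216}
print(pick_compute(farm))  # -> slave2
```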
-
Realizing something MAJOR: once a node can be run on any server, you have opened the door to... High Availability.
-
There are multiple options to do this.
Current situation
Currently, each node has a `compute_id` field that tells the controller which compute to run on. Some nodes, like cloud nodes, can have their `compute_id` set to null/none; in this case the user has to manually select which compute to use (this happens when you drag & drop the node in the user interface). Once created, a node cannot be moved to another compute.
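The current rules can be summarised in a few lines. This is only an illustration with hypothetical names (`resolve_compute`, `ask_user`); the real logic lives in gns3-server.

```python
# Hypothetical sketch of the current assignment rules described above.
# "ask_user" stands in for the UI prompt shown on drag & drop.

def resolve_compute(node, ask_user):
    """Return the compute a node will run on under the current rules."""
    if node.get("compute_id") is not None:
        # Fixed at creation time; the node cannot move afterwards.
        return node["compute_id"]
    # e.g. a cloud node with compute_id null: the user picks manually.
    return ask_user(node)

print(resolve_compute({"compute_id": "slave1"}, lambda n: "local"))  # -> slave1
print(resolve_compute({"compute_id": None}, lambda n: "local"))      # -> local
```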
Option 1 - Controller decides
Given a pool of computes, the controller automatically decides which compute to use when no `compute_id` is set. However, not all computes perform the same, so the controller must decide which compute is the "best". The decision could be based on multiple metrics.

Also, a way to migrate a node to another compute should be provided, but transferring images and project files could take some time.
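One way to combine multiple metrics is a weighted score per compute. The metric names and weights below are assumptions for illustration, not an agreed GNS3 design.

```python
# Hedged sketch of a multi-metric scoring rule for Option 1.
# Metrics and weights are illustrative assumptions.

def score(compute):
    """Higher is better; combines CPU benchmark, current load and free RAM."""
    return (compute["cpu_benchmark"] * (1 - compute["cpu_load"])
            + compute["free_ram_mb"] * 0.5)

def best_compute(computes):
    """Pick the compute with the highest score."""
    return max(computes, key=lambda cid: score(computes[cid]))

farm = {
    "master": {"cpu_benchmark": 16217, "cpu_load": 0.8, "free_ram_mb": 4096},
    "slave2": {"cpu_benchmark": 31216, "cpu_load": 0.1, "free_ram_mb": 16384},
}
print(best_compute(farm))  # -> slave2
```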
Option 2 - Central storage
A central storage on the controller, with images pushed to computes as needed, could be an option; the only problem would be the transfer time for large images. An alternative would be for the controller to store images and projects on a network drive shared with the computes.
One big advantage is that a node could easily be moved to another compute.
The downside is that this could be complicated for a user to set up. They would have to create an NFS or SMB network share and then update `images_path` and `projects_path` in `gns3_server.conf` to point to the network share.

Note
Both these options should be optional; users should retain the possibility to select or force a compute.
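For Option 2, the setup on each compute might look like this in `gns3_server.conf` (the mount point `/mnt/gns3-share` is illustrative; `images_path` and `projects_path` are the existing settings mentioned above):

```ini
; Hedged example: point an existing compute at a mounted network share.
; /mnt/gns3-share is an assumed NFS/SMB mount point, not a GNS3 default.
[Server]
images_path = /mnt/gns3-share/images
projects_path = /mnt/gns3-share/projects
```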