Rework hypervisor concept #47
I'm curious about how we should deal with …
We cannot really spawn new processes for a partition. This is why we should allow a partition to fork, with us intercepting the fork. Should the partition fork when it is not allowed to, we can take an action according to the Health Monitor table. Through the interception, we know the process id and can put it in its own cgroup.
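A minimal sketch of what such an interception could look like, assuming `ptrace` with `PTRACE_O_TRACEFORK` as the interception mechanism and a pre-created cgroup-v2 directory (the cgroup mount point and name are illustrative, not from the issue):

```c
/* Sketch: learn the PID of a forked partition process via ptrace and
 * move it into a dedicated cgroup. Paths and names are illustrative. */
#include <stdio.h>
#include <signal.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

static void move_to_cgroup(pid_t pid, const char *cgroup)
{
    char path[256];
    snprintf(path, sizeof(path), "/sys/fs/cgroup/%s/cgroup.procs", cgroup);
    FILE *f = fopen(path, "w");
    if (!f) { perror("fopen cgroup.procs"); return; }
    fprintf(f, "%d\n", pid);
    fclose(f);
}

/* Called with the PID of an already-traced partition main process. */
void supervise(pid_t partition)
{
    /* Get notified (PTRACE_EVENT_FORK) whenever the tracee forks. */
    ptrace(PTRACE_SETOPTIONS, partition, 0, (void *)PTRACE_O_TRACEFORK);
    ptrace(PTRACE_CONT, partition, 0, 0);

    for (;;) {
        int status;
        pid_t stopped = waitpid(-1, &status, 0);
        if (stopped < 0)
            break;

        if (WIFSTOPPED(status) &&
            status >> 8 == (SIGTRAP | (PTRACE_EVENT_FORK << 8))) {
            unsigned long child;
            ptrace(PTRACE_GETEVENTMSG, stopped, 0, &child);
            /* Here the Health Monitor table could decide whether the
             * fork was allowed at all; otherwise just confine it. */
            move_to_cgroup((pid_t)child, "partition-1");
        }
        ptrace(PTRACE_CONT, stopped, 0, 0);
    }
}
```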
The more I think about the possibility of using ptrace, the uglier it gets. Although I enjoyed ptrace at first, it seems like a hack to me now, especially after I dug a bit deeper into the material.

Why ptrace is terrible
What are the alternatives?

My idea would be that the parent inherits a socketpair (or pipe), through which the child sends syscalls in a fixed data structure. The requests would be executed in sequential order, with the parent sending back a fixed response data structure. If possible, we might use stdin and stdout for this, as they have fixed file descriptors. Alternatively, we could inherit a socketpair with a welcome message in its buffer; the child process would then, at its startup, try to read that welcome message from all file descriptors found inside … A sketch of the request/response scheme follows after this comment.

Some questions:
Some resources:
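A minimal sketch of the socketpair idea described above, assuming `SOCK_SEQPACKET` to keep message boundaries; the struct layout and the `APEX_GET_PARTITION_STATUS` id are made up for illustration:

```c
/* Sketch of the socketpair idea: the child sends fixed-size request
 * structs, the parent answers with fixed-size response structs. */
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

struct request  { uint32_t call_id; uint64_t args[4]; };
struct response { int32_t ret; uint64_t value; };

#define APEX_GET_PARTITION_STATUS 1  /* illustrative call id */

int main(void)
{
    int fds[2];
    /* SOCK_SEQPACKET preserves message boundaries, which suits fixed structs. */
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, fds) < 0)
        return 1;

    pid_t pid = fork();
    if (pid == 0) {                       /* partition process */
        close(fds[0]);
        struct request req = { .call_id = APEX_GET_PARTITION_STATUS };
        write(fds[1], &req, sizeof(req));
        struct response resp;
        read(fds[1], &resp, sizeof(resp));
        printf("child got ret=%d value=%llu\n", resp.ret,
               (unsigned long long)resp.value);
        _exit(0);
    }

    close(fds[1]);                        /* hypervisor side */
    struct request req;
    while (read(fds[0], &req, sizeof(req)) == (ssize_t)sizeof(req)) {
        struct response resp = { .ret = 0, .value = 42 };  /* dummy answer */
        write(fds[0], &resp, sizeof(resp));
    }
    waitpid(pid, NULL, 0);
    return 0;
}
```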
@emilengler could you do a simple performance analysis of your pipe idea as well?
Sure, but I will probably only be able to do so the week after next week, if that's okay. I want to use my overtime hours to study for my exams.
Okay, my benchmarks are effectively done. I'll do some adjustments tomorrow and publish the code afterwards. Emitting …
Done. I have published the code in this semi-public repository. The benchmark results are as follows:
The sockets approach truly wins.
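This is not the published benchmark code, but a rough illustration of how a socketpair round-trip measurement could be structured (iteration count and message size are arbitrary assumptions):

```c
/* Illustrative round-trip latency measurement over a socketpair. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

#define ITERATIONS 100000

int main(void)
{
    int fds[2];
    socketpair(AF_UNIX, SOCK_SEQPACKET, 0, fds);

    if (fork() == 0) {                 /* echo side, stands in for the hypervisor */
        close(fds[1]);
        char buf[64];
        ssize_t n;
        while ((n = read(fds[0], buf, sizeof(buf))) > 0)
            write(fds[0], buf, n);
        _exit(0);
    }
    close(fds[0]);

    char msg[64] = {0};
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ITERATIONS; i++) {   /* ping-pong round trips */
        write(fds[1], msg, sizeof(msg));
        read(fds[1], msg, sizeof(msg));
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ns = (end.tv_sec - start.tv_sec) * 1e9
              + (end.tv_nsec - start.tv_nsec);
    printf("avg round trip: %.0f ns\n", ns / ITERATIONS);

    close(fds[1]);                      /* lets the echo child see EOF and exit */
    wait(NULL);
    return 0;
}
```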
Update: The current solution will probably be centered around `ptrace(2)` with the hypervisor being the monitoring process. A combination of …
As it stands, it is not clear whether or not the current concept is ARINC 653 compliant.
Also, the current concept may not be extensible to ARINC 653 Part 1.
Issues:
Possible Solution:
Error System / Health Monitor

- Separate the error `level` from errors (errors only have a type)
- The `Error` state handles the error according to the Health Monitor table (see the sketch below)
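A minimal sketch of how separating the error `level` from the error type and consulting a Health Monitor table might look; all enum values and table entries are illustrative, not actual ARINC 653 HM contents:

```c
/* Sketch: errors only carry a type; the level and the recovery action
 * come from a Health Monitor table lookup. Values are illustrative. */
#include <stddef.h>

enum error_type  { DEADLINE_MISSED, APPLICATION_ERROR, ILLEGAL_REQUEST,
                   STACK_OVERFLOW, MEMORY_VIOLATION };
enum error_level { LEVEL_MODULE, LEVEL_PARTITION, LEVEL_PROCESS };
enum hm_action   { ACTION_IGNORE, ACTION_RESTART_PARTITION,
                   ACTION_IDLE_PARTITION, ACTION_SHUTDOWN_MODULE };

struct hm_entry {
    enum error_type  type;
    enum error_level level;   /* assigned by the table, not by the error */
    enum hm_action   action;
};

/* Example table: the same error type can map to different levels/actions
 * depending on the configuration. */
static const struct hm_entry hm_table[] = {
    { MEMORY_VIOLATION, LEVEL_PARTITION, ACTION_RESTART_PARTITION },
    { DEADLINE_MISSED,  LEVEL_PROCESS,   ACTION_IGNORE },
    { ILLEGAL_REQUEST,  LEVEL_PARTITION, ACTION_IDLE_PARTITION },
};

/* Look up how an error of a given type must be handled. */
static const struct hm_entry *hm_lookup(enum error_type type)
{
    for (size_t i = 0; i < sizeof(hm_table) / sizeof(hm_table[0]); i++)
        if (hm_table[i].type == type)
            return &hm_table[i];
    return NULL;
}
```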
"Systemcall" from partition to hypervisor

- Use `ptrace(2)` together with `PTRACE_SYSEMU` (which is already used for realizing user-mode Linux) for trapping partition processes on system calls, replacing the call with the desired behaviour within the hypervisor.
- Theoretically, non-existent system call ids could be used for identifying APEX functions when using `ptrace(2)` (see the sketch below).
- When `clone(3)` is used for spawning the main process of a partition, `PTRACE_TRACEME` can be called for allowing ptrace.
- The hypervisor can wait on the partitions' `SIGTRAP` with `sigtimedwait` from `sigwaitinfo(2)`, utilizing a timeout.
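A minimal sketch of the `PTRACE_SYSEMU` idea on x86_64; the `APEX_CALL_BASE` range for non-existent syscall ids is an assumption made for illustration:

```c
/* Sketch: trap a partition's syscalls with PTRACE_SYSEMU (x86_64).
 * With PTRACE_SYSEMU the syscall is not executed by the kernel, so the
 * hypervisor decides what happens and sets the return value itself. */
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>

#define APEX_CALL_BASE 0x10000  /* illustrative: well above real syscall ids */

/* `partition` has already called PTRACE_TRACEME before exec'ing. */
void emulate_syscalls(pid_t partition)
{
    int status;
    waitpid(partition, &status, 0);              /* initial stop */

    for (;;) {
        /* Resume until the next syscall entry; the call is suppressed. */
        ptrace(PTRACE_SYSEMU, partition, 0, 0);
        waitpid(partition, &status, 0);
        if (WIFEXITED(status))
            break;

        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, partition, 0, &regs);

        if (regs.orig_rax >= APEX_CALL_BASE) {
            /* A "syscall" id in this range is treated as an APEX call. */
            regs.rax = 0;                        /* e.g. NO_ERROR */
        } else {
            /* Real syscall: either emulate it here or reject it. */
            regs.rax = (unsigned long long)-1;
        }
        ptrace(PTRACE_SETREGS, partition, 0, &regs);
    }
}
```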
Hypervisor Main Loop

- Wait for `SIGCHLD` or timeout elapse (timeout from remaining time until next event in "EventList")
- On `SIGCHLD` or timeout elapse, serve the partitions' pending `SIGTRAP`s
- Repeat until all `SIGTRAP` are already served (a sketch of this loop follows below)
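A minimal sketch of such a main loop, assuming `SIGCHLD` is blocked and consumed with `sigtimedwait(2)`; `next_event_timeout()` is a placeholder standing in for the "EventList" lookup:

```c
/* Sketch of the hypervisor main loop: block SIGCHLD and wait for it with
 * sigtimedwait(2), bounded by the time until the next scheduled event. */
#include <signal.h>
#include <time.h>
#include <sys/types.h>
#include <sys/wait.h>

static struct timespec next_event_timeout(void)
{
    /* Placeholder: remaining time until the next event in the EventList. */
    struct timespec ts = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 };
    return ts;
}

void main_loop(void)
{
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGCHLD);
    sigprocmask(SIG_BLOCK, &set, NULL);  /* SIGCHLD is only consumed here */

    for (;;) {
        struct timespec timeout = next_event_timeout();
        siginfo_t info;

        if (sigtimedwait(&set, &info, &timeout) < 0) {
            /* Timeout elapsed: handle the due event from the EventList. */
            continue;
        }

        /* SIGCHLD: collect all stopped/exited tracees and serve their
         * pending SIGTRAPs (trapped syscalls / APEX calls). */
        int status;
        pid_t pid;
        while ((pid = waitpid(-1, &status, WNOHANG | WUNTRACED)) > 0) {
            if (WIFSTOPPED(status) && WSTOPSIG(status) == SIGTRAP) {
                /* handle the trapped call, then resume the partition */
            }
        }
    }
}
```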
TODO