Multiple users

Now that we have directories, we can combine them with the ability to customise the Virtual File System (VFS) of programs to create a multi-user system. Programs can only communicate with drivers and other processes if they hold a communication handle, so each user can be given only limited access to system drivers and services.

Specifying a subprocess VFS

New processes are created with the exec syscall. Its param argument (a pointer passed in the rdx register) has not been used so far: it was a placeholder for some way to specify the virtual file system of the new process.

The default is that the new process shares a VFS with its parent. Any new mount points added by one process will be seen by the other. Another option is to start with a copy of the parent VFS, removing or adding mount points to customise. Finally, we could start with an empty VFS and add paths to it.

To be consistent with other messages we could use JSON to specify the VFS paths, but that would mean adding a fairly complex parser to the kernel for what is a simple job. Instead the kernel reads a custom string format:

  • First byte specifies the VFS to start with: (S)hared with parent, (C)opy of parent’s VFS, (N)ew VFS.
  • Following clauses modify the VFS:
    • Remove path: A ‘-’ byte followed by a path terminated with a ‘:’, e.g. -/path: This unmounts the path from the VFS so that the new process can’t see it.
    • Mount communication handle: An ‘m’ followed by the handle number (in ASCII), a non-numerical delimiter, and a path terminated with a ‘:’, e.g. m3|/path/to/mount: This hands the communication handle over, removing it from the calling process. [Note: If the exec call fails after this clause is processed, the handle may be lost/closed.]

Other kinds of modifications could be added but this is sufficient to enable the caller to customise the VFS of the new process.
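
To make the parsing concrete, here is a minimal sketch of how the kernel might tokenise this format. The names (VfsAction, parse_vfs_param) are hypothetical, for illustration only, not the actual kernel code:

// Hypothetical sketch of tokenising the VFS parameter string
enum VfsAction {
    Remove(String),      // '-' clause: unmount a path
    Mount(u64, String),  // 'm' clause: handle number and mount path
}

fn parse_vfs_param(s: &str) -> Option<(char, Vec<VfsAction>)> {
    let mut chars = s.chars();
    let base = chars.next()?; // 'S', 'C' or 'N'
    let rest: String = chars.collect();

    let mut actions = Vec::new();
    // Each clause is terminated by a ':'
    for clause in rest.split_terminator(':') {
        if let Some(path) = clause.strip_prefix('-') {
            actions.push(VfsAction::Remove(path.to_string()));
        } else if let Some(body) = clause.strip_prefix('m') {
            // ASCII digits, then one delimiter byte, then the path
            let digits: String =
                body.chars().take_while(|c| c.is_ascii_digit()).collect();
            let handle = digits.parse().ok()?;
            let path = body.get(digits.len() + 1..)?;
            actions.push(VfsAction::Mount(handle, path.to_string()));
        } else {
            return None; // Unknown clause
        }
    }
    Some((base, actions))
}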

Standard library support

In euralios_std::syscalls we can define a builder to create the VFS parameter string

pub struct VFS {
    s: String
}

That string is going to be the parameter string sent to the sys_exec syscall. We then create three constructors: shared(), copy() and new(). For example

impl VFS {
    /// Copy current VFS for new process
    pub fn copy() -> Self {
        VFS{s: String::from("C")}
    }
    ...
}
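
The other two constructors differ only in the first byte, using the ‘S’ and ‘N’ codes from the format above; they should look something like:

impl VFS {
    /// Share the VFS with the parent (the default)
    pub fn shared() -> Self {
        VFS{s: String::from("S")}
    }

    /// Start the new process with an empty VFS
    pub fn new() -> Self {
        VFS{s: String::from("N")}
    }
}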

Modifications to this VFS then append strings

impl VFS {
    /// Add a communication handle as a path in the VFS
    pub fn mount(mut self, mut handle: CommHandle, path: &str) -> Self {
        // Take the handle number out of the CommHandle (the handle is
        // handed over to the new process) and format it in ASCII
        let handle_s = unsafe{handle.take().to_string()};

        self.s.push('m'); // Code for "mount"
        self.s.push_str(&handle_s);
        self.s.push('|'); // Terminates handle
        self.s.push_str(path);
        self.s.push(':'); // Terminates path
        self
    }

    /// Remove a path from the VFS
    pub fn remove(mut self, path: &str) -> Self {
        self.s.push('-');
        self.s.push_str(path);
        self.s.push(':'); // Terminates path
        self
    }
}
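
For example (handle number 7 is illustrative), building a new VFS and mounting a handle under /files produces the parameter string “Nm7|/files:”:

// Suppose `handle` is a CommHandle wrapping handle number 7;
// this builds the parameter string "Nm7|/files:"
let vfs = VFS::new().mount(handle, "/files");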

In init we can then start a shell process with

// Start the process
syscalls::exec(
    include_bytes!("../../user/shell"),
    0,
    input2,
    console.output.clone(),
    VFS::copy().remove("/pci").remove("/dev/nic")).expect("[init] Couldn't start user program");

which sends the parameter string “C-/pci:-/dev/nic:” to the sys_exec system call. Running the mount command in the shell now lists just ["/ramdisk","/tcp",], because both /pci and /dev/nic have been removed.

Running a shell inside a shell

In init we can make a directory to put all the system executables into, and add the shell:

fs::create_dir("/ramdisk/bin");
if let Ok(mut file) = File::create("/ramdisk/bin/shell") {
    file.write(include_bytes!("../../user/shell"));
}
...

Running /ramdisk/bin/shell from within the existing shell then starts a second shell inside the first:

[Figure ./img/26-01-shell-in-shell.png: a shell running inside a shell]

Login process

Using this VFS customisation method we can create processes that are in their own sandbox, and control the resources they and any of their child processes can access. To isolate users from each other we can now run shells with a different VFS, depending on which user logs in.

We’ll change init so that, rather than launching shell on each VGA console, it launches a new process, login. The login process enters a loop, waiting for a user to log in:

let stdin = io::stdin();
let mut username = String::new();

loop {
    print!("login: ");

    username.clear();
    stdin.read_line(&mut username);

    ...
}

Eventually we should have a password file that most users can’t directly access, but for now we can just hard-wire some user names and not bother with passwords. The VFS of the shell that is launched depends on whether the login is ‘root’ or ‘user’:

let vfs = match username.trim() {
    "root" => VFS::shared(), // Root sees everything
    "user" => {
        // Open bin directory read-only
        let bin = OpenOptions::new().open("/ramdisk/bin").unwrap();
        // User's home directory read-write
        let home = OpenOptions::new().write(true).open("/ramdisk/user").unwrap();
        // TCP stack read-write
        let tcp = OpenOptions::new().write(true).open("/tcp").unwrap();
        VFS::new()
            .mount(bin.to_CommHandle(), "/bin")
            .mount(home.to_CommHandle(), "/ramdisk")
            .mount(tcp.to_CommHandle(), "/tcp")
    },
    _ => {
        println!("Unknown login. Try 'root' or 'user'...");
        continue;
    }
};

Here, if “root” logs in then the shell shares its VFS with the login process; if “user” logs in then they get a new VFS with only some paths mounted:

  • The /ramdisk/bin directory is opened read-only, converted to a communication handle, and mounted in the user VFS as /bin. The user can therefore read (and execute) these files, but can’t modify or delete them.
  • A directory in the ramdisk, /ramdisk/user, is opened read-write and mounted as /ramdisk in the user’s VFS. The user can read and write that directory, but can’t access any other part of the ramdisk.
  • The /tcp directory is mounted in the user VFS as /tcp. This allows the user to open connections and run e.g. the gopher client. They can’t, however, access the /dev/nic driver or the /pci process directly.
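
Once the VFS is chosen, login can launch the shell with it using the same exec call as in init. A sketch, assuming login embeds the shell binary in the same way and passes along its own input and output handles (stdin_handle and stdout_handle are illustrative names):

// Launch the shell with the per-user VFS
syscalls::exec(
    include_bytes!("../../user/shell"),
    0,
    stdin_handle,
    stdout_handle,
    vfs).expect("[login] Couldn't start shell");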