doc: update release_2.2 branch documentation
Update documentation in the release_2.2 branch with changes made after
the branch was tagged for code freeze

Signed-off-by: David B. Kinder <[email protected]>
dbkinder committed Sep 30, 2020
1 parent 3b6b5fb commit 7e676db
Showing 61 changed files with 907 additions and 511 deletions.
2 changes: 2 additions & 0 deletions doc/develop.rst
@@ -33,6 +33,7 @@ Service VM Tutorials
:maxdepth: 1

tutorials/running_deb_as_serv_vm
tutorials/using_yp

User VM Tutorials
*****************
@@ -72,6 +73,7 @@ Enable ACRN Features
tutorials/acrn_on_qemu
tutorials/using_grub
tutorials/pre-launched-rt
tutorials/enable_ivshmem

Debug
*****
2 changes: 1 addition & 1 deletion doc/developer-guides/hld/hld-emulated-devices.rst
@@ -22,4 +22,4 @@ documented in this section.
Hostbridge emulation <hostbridge-virt-hld>
AT keyboard controller emulation <atkbdc-virt-hld>
Split Device Model <split-dm>
Shared memory based inter-vm communication <ivshmem-hld>
Shared memory based inter-VM communication <ivshmem-hld>
4 changes: 2 additions & 2 deletions doc/developer-guides/hld/hld-overview.rst
@@ -5,7 +5,7 @@ ACRN high-level design overview

ACRN is an open source reference hypervisor (HV) that runs on top of Intel
platforms (APL, KBL, etc) for heterogeneous scenarios such as the Software Defined
Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & Real-Time OS for industry. ACRN provides embedded hypervisor vendors with a reference
Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & real-time OS for industry. ACRN provides embedded hypervisor vendors with a reference
I/O mediation solution with a permissive license and provides auto makers and
industry users a reference software stack for corresponding use.

@@ -124,7 +124,7 @@ ACRN 2.0
========

ACRN 2.0 is extending ACRN to support pre-launched VM (mainly for safety VM)
and Real-Time (RT) VM.
and real-time (RT) VM.

:numref:`overview-arch2.0` shows the architecture of ACRN 2.0; the main difference
compared to ACRN 1.0 is that:
2 changes: 1 addition & 1 deletion doc/developer-guides/hld/hld-security.rst
@@ -1016,7 +1016,7 @@ access is like this:
#. If the verification is successful in eMMC RPMB controller, then the
data will be written into storage device.

This work flow of authenticated data read is very similar to this flow
This workflow of authenticated data read is very similar to this flow
above, but in reverse order.

Note that there are some security considerations in this design:
2 changes: 1 addition & 1 deletion doc/developer-guides/hld/hld-virtio-devices.rst
@@ -358,7 +358,7 @@ general workflow of ioeventfd.
:align: center
:name: ioeventfd-workflow

ioeventfd general work flow
ioeventfd general workflow

The workflow can be summarized as:

4 changes: 2 additions & 2 deletions doc/developer-guides/hld/hv-ioc-virt.rst
@@ -13,7 +13,7 @@ SoC and back, as well as signals the SoC uses to control onboard
peripherals.

.. note::
NUC and UP2 platforms do not support IOC hardware, and as such, IOC
Intel NUC and UP2 platforms do not support IOC hardware, and as such, IOC
virtualization is not supported on these platforms.

The main purpose of IOC virtualization is to transfer data between
@@ -131,7 +131,7 @@ There are five parts in this high-level design:
* State transfer introduces IOC mediator work states
* CBC protocol illustrates the CBC data packing/unpacking
* Power management involves boot/resume/suspend/shutdown flows
* Emulated CBC commands introduces some commands work flow
* Emulated CBC commands introduces some commands workflow

IOC mediator has three threads to transfer data between User VM and Service VM. The
core thread is responsible for data reception, and Tx and Rx threads are
4 changes: 2 additions & 2 deletions doc/developer-guides/hld/hv-partitionmode.rst
@@ -57,8 +57,8 @@ configuration and copies them to the corresponding guest memory.
.. figure:: images/partition-image18.png
:align: center

ACRN set-up for guests
**********************
ACRN setup for guests
*********************

Cores
=====
8 changes: 4 additions & 4 deletions doc/developer-guides/hld/hv-rdt.rst
@@ -39,7 +39,7 @@ resource allocator.) The user can check the cache capabilities such as cache
mask and max supported CLOS as described in :ref:`rdt_detection_capabilities`
and then program the IA32_type_MASK_n and IA32_PQR_ASSOC MSR with a
CLOS ID, to select a cache mask to take effect. These configurations can be
done in scenario xml file under ``FEATURES`` section as shown in the below example.
done in scenario XML file under ``FEATURES`` section as shown in the below example.
ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes
to enforce the settings.

@@ -52,7 +52,7 @@ to enforce the settings.
<CLOS_MASK desc="Cache Capacity Bitmask">0xF</CLOS_MASK>
Once the cache mask is set of each individual CPU, the respective CLOS ID
needs to be set in the scenario xml file under ``VM`` section. If user desires
needs to be set in the scenario XML file under ``VM`` section. If user desires
to use CDP feature, CDP_ENABLED should be set to ``y``.

.. code-block:: none
@@ -106,7 +106,7 @@ that corresponds to each CLOS and then setting IA32_PQR_ASSOC MSR with CLOS
users can check the MBA capabilities such as mba delay values and
max supported CLOS as described in :ref:`rdt_detection_capabilities` and
then program the IA32_MBA_MASK_n and IA32_PQR_ASSOC MSR with the CLOS ID.
These configurations can be done in scenario xml file under ``FEATURES`` section
These configurations can be done in scenario XML file under ``FEATURES`` section
as shown in the below example. ACRN uses VMCS MSR loads on every VM Entry/VM Exit
for non-root and root modes to enforce the settings.

@@ -120,7 +120,7 @@ for non-root and root modes to enforce the settings.
<MBA_DELAY desc="Memory Bandwidth Allocation delay value">0</MBA_DELAY>
Once the cache mask is set of each individual CPU, the respective CLOS ID
needs to be set in the scenario xml file under ``VM`` section.
needs to be set in the scenario XML file under ``VM`` section.

.. code-block:: none
:emphasize-lines: 2
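The per-VM snippet in the collapsed code block above is not shown in this view. As a rough, assumed illustration only (the ``clos``/``vcpu_clos`` element names and values here are not taken from this diff), the CLOS assignment under a ``VM`` section of the scenario XML would look something like:

.. code-block:: none

   <clos desc="Class of Service for Cache Allocation Technology settings">
       <vcpu_clos>0</vcpu_clos>
       <vcpu_clos>1</vcpu_clos>
   </clos>

Each ``vcpu_clos`` entry selects, for the corresponding vCPU, which CLOS (and therefore which ``CLOS_MASK`` or ``MBA_DELAY`` value programmed under ``FEATURES``) takes effect when that vCPU runs.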
109 changes: 17 additions & 92 deletions doc/developer-guides/hld/ivshmem-hld.rst
@@ -15,17 +15,24 @@ Inter-VM Communication Overview
:align: center
:name: ivshmem-architecture-overview

ACRN shared memory based inter-vm communication architecture
ACRN shared memory based inter-VM communication architecture

The ``ivshmem`` device is emulated in the ACRN device model (dm-land)
and its shared memory region is allocated from the Service VM's memory
space. This solution only supports communication between post-launched
VMs.
There are two ways ACRN can emulate the ``ivshmem`` device:

.. note:: In a future implementation, the ``ivshmem`` device could
instead be emulated in the hypervisor (hypervisor-land) and the shared
memory regions reserved in the hypervisor's memory space. This solution
would work for both pre-launched and post-launched VMs.
``ivshmem`` dm-land
The ``ivshmem`` device is emulated in the ACRN device model,
and the shared memory regions are reserved in the Service VM's
memory space. This solution only supports communication between
post-launched VMs.

``ivshmem`` hv-land
The ``ivshmem`` device is emulated in the hypervisor, and the
shared memory regions are reserved in the hypervisor's
memory space. This solution works for both pre-launched and
post-launched VMs.

While both solutions can be used at the same time, Inter-VM communication
may only be done between VMs using the same solution.

ivshmem hv:
The **ivshmem hv** implements register virtualization
@@ -98,89 +105,7 @@ MMIO Registers Definition
Usage
*****

To support two post-launched VMs communicating via an ``ivshmem`` device,
add this line as an ``acrn-dm`` boot parameter::

-s slot,ivshmem,shm_name,shm_size

where

- ``-s slot`` - Specify the virtual PCI slot number

- ``ivshmem`` - Virtual PCI device name

- ``shm_name`` - Specify a shared memory name. Post-launched VMs with the
same ``shm_name`` share a shared memory region.

- ``shm_size`` - Specify a shared memory size. The two communicating
VMs must define the same size.

.. note:: This device can be used with Real-Time VM (RTVM) as well.

Inter-VM Communication Example
******************************

The following example uses inter-vm communication between two Linux-based
post-launched VMs (VM1 and VM2).

.. note:: An ``ivshmem`` Windows driver exists and can be found `here <https://github.com/virtio-win/kvm-guest-drivers-windows/tree/master/ivshmem>`_

1. Add a new virtual PCI device for both VMs: the device type is
``ivshmem``, shared memory name is ``test``, and shared memory size is
4096 bytes. Both VMs must have the same shared memory name and size:

- VM1 Launch Script Sample

.. code-block:: none
:emphasize-lines: 7
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-s 2,pci-gvt -G "$2" \
-s 5,virtio-console,@stdio:stdio_port \
-s 6,virtio-hyper_dmabuf \
-s 3,virtio-blk,/home/acrn/uos1.img \
-s 4,virtio-net,tap0 \
-s 6,ivshmem,test,4096 \
-s 7,virtio-rnd \
--ovmf /usr/share/acrn/bios/OVMF.fd \
$vm_name
- VM2 Launch Script Sample

.. code-block:: none
:emphasize-lines: 5
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-s 2,pci-gvt -G "$2" \
-s 3,virtio-blk,/home/acrn/uos2.img \
-s 4,virtio-net,tap0 \
-s 5,ivshmem,test,4096 \
--ovmf /usr/share/acrn/bios/OVMF.fd \
$vm_name
2. Boot two VMs and use ``lspci | grep "shared memory"`` to verify that the virtual device is ready for each VM.

- For VM1, it shows ``00:06.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
- For VM2, it shows ``00:05.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``

3. Use these commands to probe the device::

$ sudo modprobe uio
$ sudo modprobe uio_pci_generic
$ sudo echo "1af4 1110" > /sys/bus/pci/drivers/uio_pci_generic/new_id

4. Finally, a user application can get the shared memory base address from
the ``ivshmem`` device BAR resource
(``/sys/class/uio/uioX/device/resource2``) and the shared memory size from
the ``ivshmem`` device config resource
(``/sys/class/uio/uioX/device/config``).

The ``X`` in ``uioX`` above, is a number that can be retrieved using the
``ls`` command:

- For VM1 use ``ls -lh /sys/bus/pci/devices/0000:00:06.0/uio``
- For VM2 use ``ls -lh /sys/bus/pci/devices/0000:00:05.0/uio``
For usage information, see :ref:`enable_ivshmem`
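The removed dm-land walkthrough above (the ``-s slot,ivshmem,shm_name,shm_size`` parameter and launch scripts) is the kind of material now covered in that tutorial. For the newly documented hv-land option, the shared regions are declared in the scenario XML instead. A minimal sketch, assuming the element names and a ``name, size, VM IDs`` value format (both assumptions; see :ref:`enable_ivshmem` for the authoritative syntax):

.. code-block:: none

   <IVSHMEM desc="Inter-VM shared memory configuration">
       <IVSHMEM_ENABLED>y</IVSHMEM_ENABLED>
       <IVSHMEM_REGION>shm_region_0, 2, 0:2</IVSHMEM_REGION>
   </IVSHMEM>

Here ``shm_region_0`` would be the shared memory name, ``2`` the region size in MB, and ``0:2`` the IDs of the two VMs allowed to communicate through the region.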

Inter-VM Communication Security hardening (BKMs)
************************************************
2 changes: 1 addition & 1 deletion doc/developer-guides/hld/system-timer-hld.rst
@@ -86,7 +86,7 @@ I/O ports definition::
RTC emulation
=============

ACRN supports RTC (Real-Time Clock) that can only be accessed through
ACRN supports RTC (real-time clock) that can only be accessed through
I/O ports (0x70 and 0x71).

0x70 is used to access CMOS address register and 0x71 is used to access
2 changes: 1 addition & 1 deletion doc/developer-guides/hld/virtio-gpio.rst
@@ -61,7 +61,7 @@ Add the following parameters into the command line::
controller_name, you can use it as controller_name directly. You can
also input ``cat /sys/bus/gpio/device/XXX/dev`` to get device id that can
be used to match /dev/XXX, then use XXX as the controller_name. On MRB
and NUC platforms, the controller_name are gpiochip0, gpiochip1,
and Intel NUC platforms, the controller_name are gpiochip0, gpiochip1,
gpiochip2.gpiochip3.

- **offset|name**: you can use gpio offset or its name to locate one
4 changes: 2 additions & 2 deletions doc/developer-guides/hld/watchdog-hld.rst
@@ -34,8 +34,8 @@ It receives read/write commands from the watchdog driver, does the
actions, and returns. In ACRN, the commands are from User VM
watchdog driver.

User VM watchdog work flow
**************************
User VM watchdog workflow
*************************

When the User VM does a read or write operation on the watchdog device's
registers or memory space (Port IO or Memory map I/O), it will trap into
12 changes: 6 additions & 6 deletions doc/developer-guides/l1tf.rst
@@ -77,7 +77,7 @@ PTEs (with present bit cleared, or reserved bit set) pointing to valid
host PFNs, a malicious guest may use those EPT PTEs to construct an attack.

A special aspect of L1TF in the context of virtualization is symmetric
multi threading (SMT), e.g. Intel |reg| Hyper-Threading Technology.
multi threading (SMT), e.g. Intel |reg| Hyper-threading Technology.
Logical processors on the affected physical cores share the L1 Data Cache
(L1D). This fact could make more variants of L1TF-based attack, e.g.
a malicious guest running on one logical processor can attack the data which
@@ -88,11 +88,11 @@ Guest -> guest Attack
=====================

The possibility of guest -> guest attack varies on specific configuration,
e.g. whether CPU partitioning is used, whether Hyper-Threading is on, etc.
e.g. whether CPU partitioning is used, whether Hyper-threading is on, etc.

If CPU partitioning is enabled (default policy in ACRN), there is
1:1 mapping between vCPUs and pCPUs i.e. no sharing of pCPU. There
may be an attack possibility when Hyper-Threading is on, where
may be an attack possibility when Hyper-threading is on, where
logical processors of same physical core may be allocated to two
different guests. Then one guest may be able to attack the other guest
on sibling thread due to shared L1D.
@@ -221,7 +221,7 @@ This mitigation is always enabled.
Core-based scheduling
=====================

If Hyper-Threading is enabled, it's important to avoid running
If Hyper-threading is enabled, it's important to avoid running
sensitive context (if containing security data which a given VM
has no permission to access) on the same physical core that runs
said VM. It requires scheduler enhancement to enable core-based
@@ -265,9 +265,9 @@ requirements:
- Doing 5) is not feasible, or
- CPU sharing is enabled (in the future)

If Hyper-Threading is enabled, there is no available mitigation
If Hyper-threading is enabled, there is no available mitigation
option before core scheduling is planned. User should understand
the security implication and only turn on Hyper-Threading
the security implication and only turn on Hyper-threading
when the potential risk is acceptable to their usage.

Mitigation Status
2 changes: 1 addition & 1 deletion doc/developer-guides/sw_design_guidelines.rst
@@ -566,7 +566,7 @@ The following table shows some use cases of module level configuration design:
- This module is used to virtualize part of LAPIC functionalities.
It can be done via APICv or software emulation depending on CPU
capabilities.
For example, KBL NUC doesn't support virtual-interrupt delivery, while
For example, KBL Intel NUC doesn't support virtual-interrupt delivery, while
other platforms support it.
- If a function pointer is used, the prerequisite is
"hv_operation_mode == OPERATIONAL".
4 changes: 2 additions & 2 deletions doc/faq.rst
@@ -31,8 +31,8 @@ details:
* :option:`CONFIG_UOS_RAM_SIZE`
* :option:`CONFIG_HV_RAM_SIZE`

For example, if the NUC's physical memory size is 32G, you may follow these steps
to make the new uefi ACRN hypervisor, and then deploy it onto the NUC board to boot
For example, if the Intel NUC's physical memory size is 32G, you may follow these steps
to make the new UEFI ACRN hypervisor, and then deploy it onto the Intel NUC to boot
the ACRN Service VM with the 32G memory size.

#. Use ``make menuconfig`` to change the ``RAM_SIZE``::
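The remaining steps in this FAQ entry are collapsed in this view. As a purely illustrative sketch (the values below are assumptions for a 32G system, not taken from the documentation), the options named above end up in the hypervisor ``.config`` along these lines:

.. code-block:: none

   # illustrative values only -- size these to match your platform
   CONFIG_HV_RAM_SIZE=0x11000000
   CONFIG_UOS_RAM_SIZE=0x800000000

After adjusting them through ``make menuconfig``, rebuild the hypervisor and redeploy it as described in the collapsed steps.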
16 changes: 11 additions & 5 deletions doc/getting-started/building-from-source.rst
@@ -54,7 +54,7 @@ distribution.

.. note::
ACRN uses ``menuconfig``, a python3 text-based user interface (TUI)
for configuring hypervisor options and using python's ``kconfiglib``
for configuring hypervisor options and using Python's ``kconfiglib``
library.

Install the necessary tools for the following systems:
@@ -79,8 +79,17 @@ Install the necessary tools for the following systems:
libblkid-dev \
e2fslibs-dev \
pkg-config \
libnuma-dev
libnuma-dev \
liblz4-tool \
flex \
bison
$ sudo pip3 install kconfiglib
$ wget https://acpica.org/sites/acpica/files/acpica-unix-20191018.tar.gz
$ tar zxvf acpica-unix-20191018.tar.gz
$ cd acpica-unix-20191018
$ make clean && make iasl
$ sudo cp ./generate/unix/bin/iasl /usr/sbin/
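After ``iasl`` is copied into ``/usr/sbin``, an optional sanity check is to confirm which ``iasl`` binary the build will pick up and its version:

.. code-block:: none

   $ which iasl
   $ iasl -v

The reported version should correspond to the ``acpica-unix-20191018`` tarball built above.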
.. note::
ACRN requires ``gcc`` version 7.3.* (or higher) and ``binutils`` version
Expand Down Expand Up @@ -274,7 +283,4 @@ of the acrn-hypervisor directory):
from XML files. If the ``TARGET_DIR`` is not specified, the original
configuration files of acrn-hypervisor would be overridden.

In the 2.1 release, there is a known issue (:acrn-issue:`5157`) that
``TARGET_DIR=xxx`` does not work.

Follow the same instructions to boot and test the images you created from your build.