Mirror of https://github.com/projectacrn/acrn-hypervisor.git (synced 2025-08-09 03:58:34 +00:00)

doc: spelling fixes

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>

parent 33901fd287
commit 69b207ac6a
@@ -839,7 +839,7 @@ In the system, there are three different schedulers for the GPU:
 - i915 Service VM scheduler

 Since User VM always uses the host-based command submission (ELSP) model,
-and it never accesses the GPU or the Graphic Micro Controller (GuC)
+and it never accesses the GPU or the Graphic Micro Controller (:term:`GuC`)
 directly, its scheduler cannot do any preemption by itself.
 The i915 scheduler does ensure batch buffers are
 submitted in dependency order, that is, if a compositor had to wait for
@@ -867,12 +867,12 @@ also implies that this scheduler does not do any preemption of
 workloads.

 Finally, there is the i915 scheduler in the Service VM. This scheduler uses the
-GuC or ELSP to do command submission of Service VM local content as well as any
+:term:`GuC` or ELSP to do command submission of Service VM local content as well as any
 content that GVT is submitting to it on behalf of the User VMs. This
-scheduler uses GuC or ELSP to preempt workloads. GuC has four different
+scheduler uses :term:`GuC` or ELSP to preempt workloads. :term:`GuC` has four different
 priority queues, but the Service VM i915 driver uses only two of them. One of
 them is considered high priority and the other is normal priority with a
-GuC rule being that any command submitted on the high priority queue
+:term:`GuC` rule being that any command submitted on the high priority queue
 would immediately try to preempt any workload submitted on the normal
 priority queue. For ELSP submission, the i915 will submit a preempt
 context to preempt the current running context and then wait for the GPU
@@ -881,14 +881,14 @@ engine to be idle.
 While the identification of workloads to be preempted is decided by
 customizable scheduling policies, once a candidate for preemption is
 identified, the i915 scheduler simply submits a preemption request to
-the GuC high-priority queue. Based on the HW's ability to preempt (on an
+the :term:`GuC` high-priority queue. Based on the HW's ability to preempt (on an
 Apollo Lake SoC, 3D workload is preemptible on a 3D primitive level with
 some exceptions), the currently executing workload is saved and
-preempted. The GuC informs the driver using an interrupt of a preemption
+preempted. The :term:`GuC` informs the driver using an interrupt of a preemption
 event occurring. After handling the interrupt, the driver submits the
-high-priority workload through the normal priority GuC queue. As such,
-the normal priority GuC queue is used for actual execbuf submission most
-of the time with the high-priority GuC queue only being used for the
+high-priority workload through the normal priority :term:`GuC` queue. As such,
+the normal priority :term:`GuC` queue is used for actual execbuf submission most
+of the time with the high-priority :term:`GuC` queue only being used for the
 preemption of lower-priority workload.

 Scheduling policies are customizable and left to customers to change if
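The two-queue policy in the hunk above lends itself to a small model. The following is an illustrative C sketch, not the i915/GuC driver code: every name in it is hypothetical, and it only mimics the rule that a high-priority submission preempts running work while actual execbuf submission flows through the normal-priority queue.

.. code-block:: c

   #include <stdio.h>

   enum guc_queue { GUC_QUEUE_NORMAL, GUC_QUEUE_HIGH };

   /* Submitting on the high-priority queue immediately tries to preempt
    * whatever is running from the normal-priority queue. */
   static void guc_submit(enum guc_queue q, const char *workload)
   {
       if (q == GUC_QUEUE_HIGH)
           printf("preempt request: displace current work for %s\n", workload);
       else
           printf("execbuf submission: %s\n", workload);
   }

   int main(void)
   {
       guc_submit(GUC_QUEUE_NORMAL, "render-batch");
       /* A preemption candidate is first flagged through the
        * high-priority queue... */
       guc_submit(GUC_QUEUE_HIGH, "compositor-flip");
       /* ...and after the preemption interrupt is handled, the actual
        * workload goes through the normal-priority queue. */
       guc_submit(GUC_QUEUE_NORMAL, "compositor-flip");
       return 0;
   }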
@@ -87,7 +87,7 @@ options:
 --intr_monitor: enable interrupt storm monitor
        its params: threshold/s,probe-period(s),delay_time(ms),delay_duration(ms),
 --virtio_poll: enable virtio poll mode with poll interval with ns
---acpidev_pt: acpi device ID args: HID in ACPI Table
+--acpidev_pt: ACPI device ID args: HID in ACPI Table
 --mmiodev_pt: MMIO resources args: physical MMIO regions
 --vtpm2: Virtual TPM2 args: sock_path=$PATH_OF_SWTPM_SOCKET
 --lapic_pt: enable local apic passthrough
@@ -698,16 +698,16 @@ The PIRQ routing for IOAPIC and PIC is dealt with differently.
   pins permitted for PCI devices. The IRQ information will be built
   into ACPI DSDT table then passed to guest VM.

-* For PIC, the pin2irq information is maintained in a pirqs[] array (the array size is 8
+* For PIC, the ``pin2irq`` information is maintained in a ``pirqs[]`` array (the array size is 8
   representing 8 shared PIRQs). When a PCI device tries to allocate a
-  pirq pin, it will do a balancing calculation to figure out a best pin
-  vs. IRQ pair. The irq# will be programed into PCI INTLINE config space
-  and the pin# will be built into ACPI DSDT table then passed to guest VM.
+  pIRQ pin, it will do a balancing calculation to figure out a best pin
+  vs. IRQ pair. The IRQ number will be programed into PCI INTLINE config space
+  and the pin number will be built into ACPI DSDT table then passed to guest VM.

 .. note:: "IRQ" here is also called as "GSI" in ACPI terminology.

 Regarding to INT A/B/C/D for PCI devices, DM just allocates them evenly
-prior to PIRQ routing and then programs into PCI INTPIN config space.
+prior to pIRQ routing and then programs into PCI INTPIN config space.

 ISA and PCI Emulation
 *********************
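The "balancing calculation" in the PIC bullet above reduces to picking the least-loaded of the eight shared pins. A hedged sketch with hypothetical names and bookkeeping; the actual device model code differs:

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   #define NR_SHARED_PIRQS 8

   static uint32_t pirq_use_count[NR_SHARED_PIRQS];

   /* Return the pin (0..7) currently carrying the fewest devices. */
   static int pirq_alloc_pin(void)
   {
       int best = 0;

       for (int pin = 1; pin < NR_SHARED_PIRQS; pin++) {
           if (pirq_use_count[pin] < pirq_use_count[best])
               best = pin;
       }
       pirq_use_count[best]++;
       return best;
   }

   int main(void)
   {
       /* Four PCI devices each get the least-loaded shared pin. */
       for (int dev = 0; dev < 4; dev++)
           printf("device %d -> PIRQ pin %d\n", dev, pirq_alloc_pin());
       return 0;
   }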
@@ -969,7 +969,7 @@ each of them, as shown here:
    :align: center


-For each VM, its ACPI tables are a standalone copy, not related to the
+For each VM, its ACPI tables are a stand-alone copy, not related to the
 tables for other VMs. Opregion also must be copied for different VMs.

 For each table, we make modifications, based on the physical table, to
@@ -1041,13 +1041,13 @@ including following elements:
        { basl_fwrite_dsdt, DSDT_OFFSET, true }
    };

-The main function to create virtual ACPI tables is acpi_build that calls
-basl_compile for each table. basl_compile does the following:
+The main function to create virtual ACPI tables is ``acpi_build`` that calls
+``basl_compile`` for each table. ``basl_compile`` does the following:

-1. create two temp files: infile and outfile
-2. with output handler, write table contents stream to infile
-3. use iasl tool to assemble infile into outfile
-4. load outfile contents to the required memory offset
+1. create two temp files: ``infile`` and ``outfile``
+2. with output handler, write table contents stream to ``infile``
+3. use ``iasl`` tool to assemble ``infile`` into ``outfile``
+4. load ``outfile`` contents to the required memory offset

 .. code-block:: c

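The four steps above can be sketched in C. This is a simplified illustration, not the real ``basl_compile`` in the ACRN device model (which has a different signature and fuller error handling); ``write_table_stream`` and the temp-file paths here are hypothetical stand-ins for the table's output handler.

.. code-block:: c

   #include <stdio.h>
   #include <stdlib.h>

   /* Hypothetical stand-in for a table's output handler (step 2). */
   static int write_table_stream(FILE *out)
   {
       return fprintf(out,
           "DefinitionBlock (\"\", \"DSDT\", 2, \"ACRN\", \"ACRNDSDT\", 1) {}\n") < 0;
   }

   static int basl_compile_sketch(const char *infile, const char *outfile)
   {
       char cmd[256];
       FILE *in = fopen(infile, "w");          /* step 1: temp files */

       if (in == NULL)
           return -1;
       if (write_table_stream(in) != 0) {      /* step 2: emit table source */
           fclose(in);
           return -1;
       }
       fclose(in);

       /* step 3: run the iasl assembler to turn infile into outfile */
       snprintf(cmd, sizeof(cmd), "iasl -p %s %s", outfile, infile);
       if (system(cmd) != 0)
           return -1;

       /* step 4 would copy outfile's AML contents into guest memory at
        * the table's required offset (omitted here). */
       return 0;
   }

   int main(void)
   {
       return basl_compile_sketch("/tmp/basl.in", "/tmp/basl.out") ? 1 : 0;
   }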
@@ -115,7 +115,7 @@ assumptions:
 7) S3 is only supported at platform level - not VM level.

 ACRN has a common implementation for notification between lifecycle manager
-in different guest. Which is vUART based cross-vm notification. But user
+in different guest. Which is vUART based cross-VM notification. But user
 could customize it according to their hardware/software requirements.

 :numref:`systempmdiag` shows the basic system level S3/S5 diagram.
@@ -165,7 +165,7 @@ For system power state entry:
    vUART for S5 request.
 3. Guest lifecycle manager initializes S5 action and guest enters S5.
 4. RTOS cleanup RT task, send response of S5 request back to Service
-   VM and RTVM enter S5.
+   VM and RTVM enters S5.
 5. After get response from RTVM and all User VM are shutdown, Service VM
    enter S5.
 6. OSPM in ACRN hypervisor checks all guests are in S5 state and shuts down
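The request/response exchange in the hunk above can be pictured with a minimal sketch of a lifecycle manager talking over the vUART device node. The device path and message strings here are hypothetical, not the actual ACRN lifecycle-manager protocol.

.. code-block:: c

   #include <fcntl.h>
   #include <stdio.h>
   #include <unistd.h>

   int main(void)
   {
       char ack[16] = { 0 };
       int fd = open("/dev/ttyS1", O_RDWR | O_NOCTTY);  /* vUART node */

       if (fd < 0)
           return 1;
       /* Send the S5 request to the guest's lifecycle manager... */
       if (write(fd, "S5_REQ\n", 7) == 7 &&
           /* ...and wait for the response it sends back after its
            * cleanup is done (step 4 above). */
           read(fd, ack, sizeof(ack) - 1) > 0)
           printf("guest replied: %s", ack);
       close(fd);
       return 0;
   }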
@@ -4,13 +4,13 @@ Tracing and Logging high-level design
 #####################################

 Both Trace and Log are built on top of a mechanism named shared
-buffer (Sbuf).
+buffer (sbuf).

 Shared Buffer
 *************

 Shared Buffer is a ring buffer divided into predetermined-size slots. There
-are two use scenarios of Sbuf:
+are two use scenarios of sbuf:

 - sbuf can serve as a lockless ring buffer to share data from ACRN HV to
   Service VM in non-overwritten mode. (Writing will fail if an overrun
@@ -19,15 +19,15 @@ are two use scenarios of Sbuf:
   over-written mode. A lock is required to synchronize access by the
   producer and consumer.

-Both ACRNTrace and ACRNLog use sbuf as a lockless ring buffer. The Sbuf
+Both ACRNTrace and ACRNLog use sbuf as a lockless ring buffer. The sbuf
 is allocated by Service VM and assigned to HV via a hypercall. To hold pointers
 to sbuf passed down via hypercall, an array ``sbuf[ACRN_SBUF_ID_MAX]``
 is defined in per_cpu region of HV, with predefined sbuf ID to identify
 the usage, such as ACRNTrace, ACRNLog, etc.

-For each physical CPU, there is a dedicated Sbuf. Only a single producer
-is allowed to put data into that Sbuf in HV, and a single consumer is
-allowed to get data from Sbuf in Service VM. Therefore, no lock is required to
+For each physical CPU, there is a dedicated sbuf. Only a single producer
+is allowed to put data into that sbuf in HV, and a single consumer is
+allowed to get data from sbuf in Service VM. Therefore, no lock is required to
 synchronize access by the producer and consumer.

 sbuf APIs
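The single-producer/single-consumer argument can be made concrete: each side writes only its own index, so no lock is needed. A minimal sketch assuming one fixed slot size; the real ``sbuf`` structure carries more fields (magic, element size and count, flags) and lives in memory shared between HV and Service VM.

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>
   #include <string.h>

   #define SLOT_SIZE  32U
   #define NR_SLOTS   64U

   struct sbuf_sketch {
       volatile uint32_t head;             /* consumer position */
       volatile uint32_t tail;             /* producer position */
       uint8_t slot[NR_SLOTS][SLOT_SIZE];
   };

   /* Producer side (HV): fail instead of overwriting unconsumed data,
    * matching the non-overwritten mode described above. */
   static int sbuf_put(struct sbuf_sketch *s, const void *data)
   {
       uint32_t next = (s->tail + 1U) % NR_SLOTS;

       if (next == s->head)
           return -1;                      /* overrun: drop the element */
       memcpy((void *)s->slot[s->tail], data, SLOT_SIZE);
       s->tail = next;                     /* publish after the copy */
       return 0;
   }

   /* Consumer side (Service VM): only ever advances head. */
   static int sbuf_get(struct sbuf_sketch *s, void *data)
   {
       if (s->head == s->tail)
           return -1;                      /* empty */
       memcpy(data, (const void *)s->slot[s->head], SLOT_SIZE);
       s->head = (s->head + 1U) % NR_SLOTS;
       return 0;
   }

   int main(void)
   {
       static struct sbuf_sketch s;
       uint8_t in[SLOT_SIZE] = "trace event";
       uint8_t out[SLOT_SIZE];

       if (sbuf_put(&s, in) == 0 && sbuf_get(&s, out) == 0)
           printf("round-tripped: %s\n", out);
       return 0;
   }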
@@ -46,17 +46,17 @@ hypervisor. Scripts to analyze the collected trace data are also
 provided.

 As shown in :numref:`acrntrace-arch`, ACRNTrace is built using
-Shared Buffers (Sbuf), and consists of three parts from bottom layer
+Shared Buffers (sbuf), and consists of three parts from bottom layer
 up:

 - **ACRNTrace userland app**: Userland application collecting trace data to
   files (Per Physical CPU)

-- **Service VM Trace Module**: allocates/frees SBufs, creates device for each
-  SBuf, sets up sbuf shared between Service VM and HV, and provides a dev node for the
-  userland app to retrieve trace data from Sbuf
+- **Service VM Trace Module**: allocates/frees sbufs, creates device for each
+  sbuf, sets up sbuf shared between Service VM and HV, and provides a dev node for the
+  userland app to retrieve trace data from sbuf

-- **Trace APIs**: provide APIs to generate trace event and insert to Sbuf.
+- **Trace APIs**: provide APIs to generate trace event and insert to sbuf.

 .. figure:: images/log-image50.png
    :align: center
@@ -77,19 +77,19 @@ Service VM Trace Module
 The Service VM trace module is responsible for:

 - allocating sbuf in Service VM memory range for each physical CPU, and assign
-  the GPA of Sbuf to ``per_cpu sbuf[ACRN_TRACE]``
+  the GPA of sbuf to ``per_cpu sbuf[ACRN_TRACE]``
 - create a misc device for each physical CPU
-- provide mmap operation to map entire Sbuf to userspace for high
+- provide mmap operation to map entire sbuf to userspace for high
   flexible and efficient access.

 On Service VM shutdown, the trace module is responsible to remove misc devices, free
-SBufs, and set ``per_cpu sbuf[ACRN_TRACE]`` to null.
+sbufs, and set ``per_cpu sbuf[ACRN_TRACE]`` to null.

 ACRNTrace Application
 =====================

 ACRNTrace application includes a binary to retrieve trace data from
-Sbuf, and Python scripts to convert trace data from raw format into
+sbuf, and Python scripts to convert trace data from raw format into
 readable text, and do analysis.

 .. note:: There was no Figure showing the sequence of trace
@@ -98,7 +98,7 @@ readable text, and do analysis.
 With a debug build, trace components are initialized at boot
 time. After initialization, HV writes trace event date into sbuf
 until sbuf is full, which can happen easily if the ACRNTrace app is not
-consuming trace data from Sbuf on Service VM user space.
+consuming trace data from sbuf on Service VM user space.

 Once ACRNTrace is launched, for each physical CPU a consumer thread is
 created to periodically read RAW trace data from sbuf and write to a
@@ -121,27 +121,27 @@ See :ref:`acrntrace` for details and usage.
 ACRN Log
 ********

-acrnlog is a tool used to capture ACRN hypervisor log to files on
+``acrnlog`` is a tool used to capture ACRN hypervisor log to files on
 Service VM filesystem. It can run as a Service VM service at boot, capturing two
 kinds of logs:

 - Current runtime logs;
-- Logs remaining in the buffer, from last crashed running.
+- Logs remaining in the buffer, from the last crashed run.

 Architectural diagram
 =====================

-Similar to the design of ACRN Trace, ACRN Log is built on the top of
-Shared Buffer (Sbuf), and consists of three parts from bottom layer
+Similar to the design of ACRN Trace, ACRN Log is built on top of
+Shared Buffer (sbuf), and consists of three parts from bottom layer
 up:

 - **ACRN Log app**: Userland application collecting hypervisor log to
   files;
-- **Service VM ACRN Log Module**: constructs/frees SBufs at reserved memory
+- **Service VM ACRN Log Module**: constructs/frees sbufs at reserved memory
   area, creates dev for current/last logs, sets up sbuf shared between
   Service VM and HV, and provides a dev node for the userland app to
   retrieve logs
-- **ACRN log support in HV**: put logs at specified loglevel to Sbuf.
+- **ACRN log support in HV**: put logs at specified loglevel to sbuf.

 .. figure:: images/log-image73.png
    :align: center
@@ -152,11 +152,11 @@ up:
 ACRN log support in Hypervisor
 ==============================

-To support acrn log, the following adaption was made to hypervisor log
+To support ``acrnlog``, the following adaption was made to hypervisor log
 system:

 - log messages with severity level higher than a specified value will
-  be put into Sbuf when calling logmsg in hypervisor
+  be put into sbuf when calling ``logmsg`` in hypervisor
 - allocate sbuf to accommodate early hypervisor logs before Service VM
   can allocate and set up sbuf

@@ -164,7 +164,7 @@ There are 6 different loglevels, as shown below. The specified
 severity loglevel is stored in ``mem_loglevel``, initialized
 by :option:`CONFIG_MEM_LOGLEVEL_DEFAULT`. The loglevel can
 be set to a new value
-at runtime via hypervisor shell command "loglevel".
+at runtime via hypervisor shell command ``loglevel``.

 .. code-block:: c

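As an illustration of the severity filtering (this is not the elided code block from the document), here is a hedged sketch assuming the convention that numerically smaller level values are more severe; the actual level values and the hypervisor's ``logmsg`` implementation may differ.

.. code-block:: c

   #include <stdarg.h>
   #include <stdio.h>

   #define LOG_FATAL    1U
   #define LOG_ERROR    3U
   #define LOG_INFO     5U
   #define LOG_DEBUG    6U

   static unsigned int mem_loglevel = LOG_INFO;  /* runtime-settable */

   /* Only messages at or above the configured severity would reach the
    * sbuf; here they simply go to stdout. */
   static void logmsg_sketch(unsigned int severity, const char *fmt, ...)
   {
       va_list ap;

       if (severity > mem_loglevel)
           return;                 /* filtered out */
       va_start(ap, fmt);
       vprintf(fmt, ap);
       va_end(ap);
   }

   int main(void)
   {
       logmsg_sketch(LOG_ERROR, "written: severity 3 <= loglevel 5\n");
       logmsg_sketch(LOG_DEBUG, "dropped: severity 6 > loglevel 5\n");
       return 0;
   }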
@@ -200,11 +200,11 @@ On Service VM boot, Service VM acrnlog module is responsible to:
   these last logs

 - construct sbuf in the usable buf range for each physical CPU,
-  assign the GPA of Sbuf to ``per_cpu sbuf[ACRN_LOG]`` and create a misc
+  assign the GPA of sbuf to ``per_cpu sbuf[ACRN_LOG]`` and create a misc
   device for each physical CPU

 - the misc devices implement read() file operation to allow
-  userspace app to read one Sbuf element.
+  userspace app to read one sbuf element.

 When checking the validity of sbuf for last logs examination, it sets the
 current sbuf with magic number ``0x5aa57aa71aa13aa3``, and changes the
@@ -212,7 +212,7 @@ magic number of last sbuf to ``0x5aa57aa71aa13aa2``, to distinguish which is
 the current/last.

 On Service VM shutdown, the module is responsible to remove misc devices,
-free SBufs, and set ``per_cpu sbuf[ACRN_TRACE]`` to null.
+free sbufs, and set ``per_cpu sbuf[ACRN_TRACE]`` to null.

 ACRN Log Application
 ====================
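The current/last magic-number handover described across the two hunks above is simple to sketch. The two constants come from the text; the structure and the rotation helper around them are illustrative.

.. code-block:: c

   #include <stdint.h>

   #define SBUF_MAGIC_CURRENT  0x5aa57aa71aa13aa3ULL
   #define SBUF_MAGIC_LAST     0x5aa57aa71aa13aa2ULL

   struct logbuf_sketch {
       uint64_t magic;
       /* ... log data ... */
   };

   /* On boot: whatever was "current" before the reboot becomes "last",
    * so logs from a crashed run can still be read out. */
   static void logbuf_rotate(struct logbuf_sketch *prev,
                             struct logbuf_sketch *next)
   {
       if (prev->magic == SBUF_MAGIC_CURRENT)
           prev->magic = SBUF_MAGIC_LAST;
       next->magic = SBUF_MAGIC_CURRENT;
   }

   int main(void)
   {
       struct logbuf_sketch last = { SBUF_MAGIC_CURRENT };
       struct logbuf_sketch current = { 0 };

       logbuf_rotate(&last, &current);
       return (last.magic == SBUF_MAGIC_LAST &&
               current.magic == SBUF_MAGIC_CURRENT) ? 0 : 1;
   }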
@@ -390,8 +390,8 @@ The workflow can be summarized as:
 1. vhost device init. Vhost proxy creates two eventfd for ioeventfd and
    irqfd.
 2. pass irqfd to vhost kernel driver.
-3. pass irq fd to vhm driver
-4. vhost device driver triggers irq eventfd signal once related native
+3. pass IRQ fd to vhm driver
+4. vhost device driver triggers IRQ eventfd signal once related native
    transfer is completed.
 5. irqfd related logic traverses the irqfd list to retrieve related irq
    information.
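Steps 1 and 4 above rest on the standard Linux eventfd primitive. A minimal sketch of creating and signaling the pair follows; the vhost and VHM ioctls that would consume these descriptors in steps 2-3 are omitted, so the flow shown is only illustrative.

.. code-block:: c

   #include <stdio.h>
   #include <sys/eventfd.h>
   #include <unistd.h>

   int main(void)
   {
       /* Step 1: the vhost proxy creates two eventfds. */
       int ioeventfd = eventfd(0, EFD_NONBLOCK);
       int irqfd = eventfd(0, EFD_NONBLOCK);
       eventfd_t val;

       if (ioeventfd < 0 || irqfd < 0)
           return 1;

       /* Steps 2-3 would pass irqfd to the vhost kernel driver and the
        * VHM driver via ioctls (not shown). Step 4: the driver signals
        * the irqfd once the related native transfer completes. */
       if (eventfd_write(irqfd, 1) == 0 && eventfd_read(irqfd, &val) == 0)
           printf("irqfd signaled, count=%llu\n", (unsigned long long)val);

       close(ioeventfd);
       close(irqfd);
       return 0;
   }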
@@ -397,7 +397,7 @@ that will trigger an error message and return without handling:
    * - VMX_EXIT_REASON_IO_INSTRUCTION
      - pio_instr_vmexit_handler
      - Emulate I/O access with range in IO_BITMAP,
-       which may have a handler in hypervisor (such as vuart or vpic),
+       which may have a handler in hypervisor (such as vUART or vPIC),
        or need to create an I/O request to DM

    * - VMX_EXIT_REASON_RDMSR
@@ -511,7 +511,7 @@ request as shown below.

    * - ACRN_REQUEST_EXTINT
      - Request for extint vector injection
-     - vcpu_inject_extint, triggered by vpic
+     - vcpu_inject_extint, triggered by vPIC
      - vcpu_do_pending_extint

    * - ACRN_REQUEST_NMI
@@ -24,8 +24,8 @@ There are some restrictions for hypercall and upcall:
 HV and Service VM both use the same vector (0xF3) reserved as x86 platform
 IPI vector for HV notification to the Service VM. This upcall is necessary whenever
 there is device emulation requirement to the Service VM. The upcall vector (0xF3) is
-injected to Service VM vCPU0. The Service VM will register the irq handler for vector (0xF3) and notify the I/O emulation
-module in the Service VM once the irq is triggered.
+injected to Service VM vCPU0. The Service VM will register the IRQ handler for vector (0xF3) and notify the I/O emulation
+module in the Service VM once the IRQ is triggered.
 View the detailed upcall process at :ref:`ipi-management`

 Hypercall APIs reference:
@@ -231,7 +231,7 @@ delivered to a specific Guest's CPU. Timer interrupts are an exception -
 these are always delivered to the CPU which programs the LAPIC timer.

 x86-64 supports per CPU IDTs, but ACRN uses a global shared IDT,
-with which the interrupt/irq to vector mapping is the same on all CPUs. Vector
+with which the interrupt/IRQ to vector mapping is the same on all CPUs. Vector
 allocation for CPUs is shown here:

 .. figure:: images/interrupt-image89.png
@@ -251,7 +251,7 @@ all CPUs.

 The *irq_desc[]* array's index represents IRQ number. A *handle_irq*
 will be called from *interrupt_dispatch* to commonly handle edge/level
-triggered irq and call the registered *action_fn*.
+triggered IRQ and call the registered *action_fn*.

 Another reverse mapping from vector to IRQ is used in addition to the
 IRQ descriptor table which maintains the mapping from IRQ to vector.
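Because the IDT is shared, one ``irq_desc[]`` table and one vector-to-IRQ reverse map serve all CPUs. A hypothetical sketch of the dispatch path described above; the real hypervisor descriptors carry more state (vector flags, a spinlock, statistics).

.. code-block:: c

   #include <stdio.h>

   #define NR_IRQS        256
   #define NR_VECTORS     256
   #define VECTOR_BASE    0x20

   typedef void (*irq_action_fn)(unsigned int irq, void *data);

   struct irq_desc_sketch {
       unsigned int irq;           /* index in irq_desc[] == IRQ number */
       unsigned int vector;        /* same on all CPUs: shared IDT      */
       irq_action_fn action_fn;
       void *action_data;
   };

   static struct irq_desc_sketch irq_desc[NR_IRQS];
   /* reverse map: vector -> IRQ, next to the IRQ -> vector map above */
   static unsigned int vector_to_irq[NR_VECTORS];

   static void handle_irq(struct irq_desc_sketch *desc)
   {
       if (desc->action_fn != NULL)
           desc->action_fn(desc->irq, desc->action_data);
   }

   static void interrupt_dispatch(unsigned int vector)
   {
       handle_irq(&irq_desc[vector_to_irq[vector]]);
   }

   static void timer_action(unsigned int irq, void *data)
   {
       (void)data;
       printf("IRQ %u handled\n", irq);
   }

   int main(void)
   {
       irq_desc[0].irq = 0;
       irq_desc[0].vector = VECTOR_BASE;
       irq_desc[0].action_fn = timer_action;
       vector_to_irq[VECTOR_BASE] = 0;

       interrupt_dispatch(VECTOR_BASE);    /* -> "IRQ 0 handled" */
       return 0;
   }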
@@ -12,23 +12,24 @@ VM structure
 ************

 The ``acrn_vm`` structure is defined to manage a VM instance, this structure
-maintained a VM's HW resources like vcpu, vpic, vioapic, vuart, vpci. And at
-the same time ``acrn_vm`` structure also recorded a bunch of SW information
-related with corresponding VM, like info for VM identifier, info for SW
-loader, info for memory e820 entries, info for IO/MMIO handlers, info for
-platform level cpuid entries, and so on.
+maintained a VM's HW resources such as vCPU, vPIC, vIOAPIC, vUART, and vPCI.
+At
+the same time ``acrn_vm`` structure also records SW information
+related with corresponding VM, such as info for VM identifier, info for SW
+loader, info for memory e820 entries, info for IO/MMIO handlers, and info for
+platform level cpuid entries.

-The ``acrn_vm`` structure instance will be created by create_vm API, and then
+The ``acrn_vm`` structure instance will be created by ``create_vm`` API, and then
 work as the first parameter for other VM APIs.

 VM state
 ********

 Generally, a VM is not running at the beginning: it is in a 'powered off'
-state. After it got created successfully, the VM enter a 'created' state.
-Then the VM could be kick to run, and enter a 'started' state. When the
+state. After it is created successfully, the VM enters a 'created' state.
+Then the VM could be kicked to run, and enter a 'started' state. When the
 VM powers off, the VM returns to a 'powered off' state again.
-A VM can be paused to wait some operation when it is running, so there is
+A VM can be paused to wait for some operation when it is running, so there is
 also a 'paused' state.

 :numref:`hvvm-state` illustrates the state-machine of a VM state transition,
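A sketch of that state machine, using the state names from the prose; the hypervisor's actual enum and transition checks differ in detail.

.. code-block:: c

   #include <stdbool.h>

   enum vm_state_sketch {
       VM_POWERED_OFF,
       VM_CREATED,     /* after create_vm succeeds      */
       VM_STARTED,     /* after the VM is kicked to run */
       VM_PAUSED       /* waiting for some operation    */
   };

   /* Allow only the transitions named in the text. */
   static bool vm_state_transition_ok(enum vm_state_sketch from,
                                      enum vm_state_sketch to)
   {
       switch (from) {
       case VM_POWERED_OFF:
           return to == VM_CREATED;
       case VM_CREATED:
           return to == VM_STARTED;
       case VM_STARTED:
           return to == VM_PAUSED || to == VM_POWERED_OFF;
       case VM_PAUSED:
           return to == VM_STARTED;
       default:
           return false;
       }
   }

   int main(void)
   {
       /* powered off -> created -> started is legal... */
       return (vm_state_transition_ok(VM_POWERED_OFF, VM_CREATED) &&
               vm_state_transition_ok(VM_CREATED, VM_STARTED) &&
               /* ...jumping straight from created to paused is not. */
               !vm_state_transition_ok(VM_CREATED, VM_PAUSED)) ? 0 : 1;
   }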
@@ -47,7 +48,7 @@ Pre-launched and Service VM
 ===========================

 The hypervisor is the owner to control pre-launched and Service VM's state
-by calling VM APIs directly, and it follows the design of system power
+by calling VM APIs directly, following the design of system power
 management. Please refer to ACRN power management design for more details.


@@ -17,7 +17,7 @@ control virtqueues are used to communicate information between the
 device and the driver, including: ports being opened and closed on
 either side of the connection, indication from the host about whether a
 particular port is a console port, adding new ports, port
-hot-plug/unplug, indication from the guest about whether a port or a
+hot-plug or unplug, indication from the guest about whether a port or a
 device was successfully added, or a port opened or closed.

 The virtio-console architecture diagram in ACRN is shown below.
@@ -36,7 +36,7 @@ OS. No changes are required in the frontend Linux virtio-console except
 that the guest (User VM) kernel should be built with
 ``CONFIG_VIRTIO_CONSOLE=y``.

-The virtio console FE driver registers a HVC console to the kernel if
+The virtio console FE driver registers an HVC console to the kernel if
 the port is configured as console. Otherwise it registers a char device
 named ``/dev/vportXpY`` to the kernel, and can be read and written from
 the user space. There are two virtqueues for a port, one is for
@@ -75,21 +75,21 @@ The device model configuration command syntax for virtio-console is::
    [,[@]stdio|tty|pty|file:portname[=portpath][:socket_type]]

 - Preceding with ``@`` marks the port as a console port, otherwise it is a
-  normal virtio serial port
+  normal virtio-serial port

-- The ``portpath`` can be omitted when backend is stdio or pty
+- The ``portpath`` can be omitted when backend is ``stdio`` or ``pty``

-- The ``stdio/tty/pty`` is tty capable, which means :kbd:`TAB` and
+- The ``stdio/tty/pty`` is TTY capable, which means :kbd:`TAB` and
   :kbd:`BACKSPACE` are supported, as on a regular terminal

-- When tty is used, please make sure the redirected tty is sleeping,
+- When TTY is used, please make sure the redirected TTY is sleeping,
   (e.g., by ``sleep 2d`` command), and will not read input from stdin before it
   is used by virtio-console to redirect guest output.

 - When virtio-console socket_type is appointed to client, please make sure
   server VM(socket_type is appointed to server) has started.

-- Claiming multiple virtio serial ports as consoles is supported,
+- Claiming multiple virtio-serial ports as consoles is supported,
   however the guest Linux OS will only use one of them, through the
   ``console=hvcN`` kernel parameter. For example, the following command
   defines two backend ports, which are both console ports, but the frontend
@@ -109,7 +109,7 @@ The following sections elaborate on each backend.
 STDIO
 =====

-1. Add a pci slot to the device model (``acrn-dm``) command line::
+1. Add a PCI slot to the device model (``acrn-dm``) command line::

       -s n,virtio-console,@stdio:stdio_port

@@ -120,11 +120,11 @@ STDIO
 PTY
 ===

-1. Add a pci slot to the device model (``acrn-dm``) command line::
+1. Add a PCI slot to the device model (``acrn-dm``) command line::

       -s n,virtio-console,@pty:pty_port

-#. Add the ``console`` parameter to the guest os kernel command line::
+#. Add the ``console`` parameter to the guest OS kernel command line::

       console=hvc0

@@ -136,14 +136,14 @@ PTY

       virt-console backend redirected to /dev/pts/0

-#. Use a terminal emulator, such as minicom or screen, to connect to the
-   tty node:
+#. Use a terminal emulator, such as ``minicom`` or ``screen``, to connect to the
+   TTY node:

    .. code-block:: console

       # minicom -D /dev/pts/0

-   or :
+   or:

    .. code-block:: console

@@ -152,10 +152,10 @@ PTY
 TTY
 ===

-1. Identify your tty that will be used as the User VM console:
+1. Identify your TTY that will be used as the User VM console:

-   - If you're connected to your device over the network via ssh, use
-     the linux ``tty`` command, and it will report the node (may be
+   - If you're connected to your device over the network via ``ssh``, use
+     the Linux ``tty`` command, and it will report the node (may be
      different in your use case):

      .. code-block:: console
@@ -164,7 +164,7 @@ TTY

         # sleep 2d

-   - If you do not have network access to your device, use screen
-     to create a new tty:
+   - If you do not have network access to your device, use screen
+     to create a new TTY:

      .. code-block:: console

@@ -177,15 +177,15 @@ TTY

         /dev/pts/0

-   Prevent the tty from responding by sleeping:
+   Prevent the TTY from responding by sleeping:

      .. code-block:: console

         # sleep 2d

-   and detach the tty by pressing :kbd:`CTRL-A` :kbd:`d`.
+   and detach the TTY by pressing :kbd:`CTRL-A` :kbd:`d`.

-#. Add a pci slot to the device model (``acrn-dm``) command line
+#. Add a PCI slot to the device model (``acrn-dm``) command line
    (changing the ``dev/pts/X`` to match your use case)::

       -s n,virtio-console,@tty:tty_port=/dev/pts/X
@@ -194,7 +194,7 @@ TTY

       console=hvc0

-#. Go back to the previous tty. For example, if you're using
+#. Go back to the previous TTY. For example, if you're using
    ``screen``, use:

    .. code-block:: console
@@ -207,7 +207,7 @@ FILE

 The File backend only supports console output to a file (no input).

-1. Add a pci slot to the device model (``acrn-dm``) command line,
+1. Add a PCI slot to the device model (``acrn-dm``) command line,
    adjusting the ``</path/to/file>`` to your use case::

       -s n,virtio-console,@file:file_port=</path/to/file>
@@ -220,16 +220,16 @@ SOCKET
 ======

 The virtio-console socket-type can be set as socket server or client. Device model will
-create an unix domain socket if appointed the socket_type as server, then server VM or
+create a Unix domain socket if appointed the socket_type as server, then server VM or
 another user VM can bind and listen for communication requirement. If appointed to
 client, please make sure the socket server is ready prior to launch device model.

-1. Add a pci slot to the device model (``acrn-dm``) command line, adjusting
+1. Add a PCI slot to the device model (``acrn-dm``) command line, adjusting
    the ``</path/to/file.sock>`` to your use case in the VM1 configuration::

       -s n,virtio-console,socket:socket_file_name=</path/to/file.sock>:server

-#. Add a pci slot to the device model (``acrn-dm``) command line, adjusting
+#. Add a PCI slot to the device model (``acrn-dm``) command line, adjusting
    the ``</path/to/file.sock>`` to your use case in the VM2 configuration::

       -s n,virtio-console,socket:socket_file_name=</path/to/file.sock>:client
@@ -248,6 +248,6 @@ client, please make sure the socket server is ready prior to launch device model.

       # minicom -D /dev/vport3p0

-#. Input into minicom window of VM1 or VM2, the minicom window of VM1
-   will indicate the input from VM2, the minicom window of VM2 will
+#. Input into ``minicom`` window of VM1 or VM2, the ``minicom`` window of VM1
+   will indicate the input from VM2, the ``minicom`` window of VM2 will
    indicate the input from VM1.
@@ -24,14 +24,14 @@ changes are required in the frontend Linux virtio-gpio except that the
 guest (User VM) kernel should be built with ``CONFIG_VIRTIO_GPIO=y``.

 There are three virtqueues used between FE and BE, one for gpio
-operations, one for irq request and one for irq event notification.
+operations, one for IRQ request and one for IRQ event notification.

 Virtio-gpio FE driver will register a gpiochip and irqchip when it is
 probed, the base and number of gpio are generated by the BE. Each
 gpiochip or irqchip operation(e.g. get_direction of gpiochip or
 irq_set_type of irqchip) will trigger a virtqueue_kick on its own
 virtqueue. If some gpio has been set to interrupt mode, the interrupt
-events will be handled within the irq virtqueue callback.
+events will be handled within the IRQ virtqueue callback.

 GPIO mapping
 ************
@@ -6,7 +6,7 @@ Virtio-i2c
 Virtio-i2c provides a virtual I2C adapter that supports mapping multiple
 client devices under multiple native I2C adapters to one virtio I2C
 adapter. The address for the client device is not changed. Virtio-i2c
-also provides an interface to add an acpi node for client devices so that
+also provides an interface to add an ACPI node for client devices so that
 the client device driver in the guest OS does not need to change.

 :numref:`virtio-i2c-1` below shows the virtio-i2c architecture.
@@ -19,14 +19,15 @@ the client device driver in the guest OS does not need to change.

 Virtio-i2c is implemented as a virtio legacy device in the ACRN device
 model (DM) and is registered as a PCI virtio device to the guest OS. The
-Device ID of virtio-i2c is 0x860A and the Sub Device ID is 0xFFF6.
+Device ID of virtio-i2c is ``0x860A`` and the Sub Device ID is
+``0xFFF6``.

 Virtio-i2c uses one **virtqueue** to transfer the I2C msg that is
 received from the I2C core layer. Each I2C msg is translated into three
 parts:

-- Header: includes addr, flags, and len.
-- Data buffer: includes the pointer to msg data.
+- Header: includes ``addr``, ``flags``, and ``len``.
+- Data buffer: includes the pointer to ``msg`` data.
 - Status: includes the process results at the backend.

 In the backend kick handler, data is obtained from the virtqueue, which
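The three parts map naturally onto a small structure. A hedged sketch loosely following the virtio-i2c request layout described above; the field names are illustrative, not copied from the ACRN sources.

.. code-block:: c

   #include <stdint.h>

   /* Part 1: header carried ahead of the data buffer. */
   struct virtio_i2c_hdr_sketch {
       uint16_t addr;    /* client device address      */
       uint16_t flags;   /* e.g., read vs. write       */
       uint16_t len;     /* length of the data buffer  */
   };

   /* One I2C msg as the three parts named in the text. */
   struct virtio_i2c_msg_sketch {
       struct virtio_i2c_hdr_sketch hdr;  /* part 1: header            */
       uint8_t *buf;                      /* part 2: pointer to data   */
       uint8_t status;                    /* part 3: backend's result  */
   };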
@@ -46,39 +47,47 @@ notifies the frontend. The msg process flow is shown in
    Message Process Flow

 **Usage:**
-   -s <slot>,virtio-i2c,<bus>[:<client_addr>[@<node>]][:<client_addr>[@<node>]][,<bus>[:<client_addr>[@<node>]][:<client_addr>][@<node>]]

-bus:
+.. code-block:: none
+
+   -s <slot>,virtio-i2c,<bus>[:<client_addr>[@<node>]][:<client_addr>[@<node>]][,<bus>[:<client_addr>[@<node>]][:<client_addr>][@<node>]]
+
+``bus``:
    The bus number for the native I2C adapter; ``2`` means ``/dev/i2c-2``.

-client_addr:
+``client_addr``:
    The address for the native client devices such as ``1C``, ``2F`` ...

-@:
-   The prefix for the acpi node.
+``@``:
+   The prefix for the ACPI node.

-node:
-   The acpi node name supported in the current code. You can find the
-   supported name in the acpi_node_table[] from the source code. Currently,
+``node``:
+   The ACPI node name supported in the current code. You can find the
+   supported name in the ``acpi_node_table[]`` from the source code. Currently,
    only ``cam1``, ``cam2``, and ``hdac`` are supported for MRB. These nodes are
    platform-specific.


 **Example:**

-   -s 19,virtio-i2c,0:70@cam1:2F,4:1C
+.. code-block:: none
+
+   -s 19,virtio-i2c,0:70@cam1:2F,4:1C

-This adds client devices 0x70 and 0x2F under the native adapter
-/dev/i2c-0, and 0x1C under /dev/i2c-6 to the virtio-i2c adapter. Since
-0x70 includes '@cam1', acpi info is also added to it. Since 0x2F and
-0x1C have '@<node>', no acpi info is added to them.
+This adds client devices ``0x70`` and ``0x2F`` under the native adapter
+``/dev/i2c-0``, and ``0x1C`` under ``/dev/i2c-6`` to the virtio-i2c
+adapter. Since ``0x70`` includes ``@cam1``, ACPI info is also added to
+it. Since ``0x2F`` and ``0x1C`` have ``@<node>``, no ACPI info is added
+to them.


 **Simple use case:**

 When launched with this cmdline:

-   -s 19,virtio-i2c,4:1C
+.. code-block:: none
+
+   -s 19,virtio-i2c,4:1C

 a virtual I2C adapter will appear in the guest OS:
@@ -93,7 +102,8 @@ a virtual I2C adapter will appear in the guest OS:
    i2c-0   i2c         i915 gmbus dpb                     I2C adapter
    i2c-5   i2c         DPDDC-C                            I2C adapter

-You can find the client device 0x1C under the virtio I2C adapter i2c-6:
+You can find the client device 0x1C under the virtio I2C adapter
+``i2c-6``:

 .. code-block:: none

@@ -108,7 +118,7 @@ You can find the client device 0x1C under the virtio I2C adapter i2c-6:
    60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
    70: -- -- -- -- -- -- -- --

-You can dump the i2c device if it is supported:
+You can dump the I2C device if it is supported:

 .. code-block:: none

@@ -115,7 +115,7 @@ Initialization in virtio-net Frontend Driver

 **virtio_pci_probe**

-- Construct virtio device using virtual pci device and register it to
+- Construct virtio device using virtual PCI device and register it to
   virtio bus

 **virtio_dev_probe --> virtnet_probe --> init_vqs**
@@ -480,7 +480,7 @@ Run ``brctl show`` to see the bridge ``acrn-br0`` and attached devices:
    acrn-br0    8000.b25041fef7a3    no        tap0
                                               enp3s0

-Add a pci slot to the device model acrn-dm command line (mac address is
+Add a PCI slot to the device model acrn-dm command line (mac address is
 optional):

 .. code-block:: none
@@ -31,7 +31,7 @@ them to the frontend.
 How to Use
 **********

-Add a pci slot to the device model acrn-dm command line; for example::
+Add a PCI slot to the device model acrn-dm command line; for example::

    -s <slot_number>,virtio-rnd

@@ -99,7 +99,7 @@ Usage
 port_base and IRQ in ``misc/vm_configs/scenarios/<scenario
 name>/vm_configurations.c``. If the IRQ number has been used in your
 system ( ``cat /proc/interrupt``), you can choose other IRQ number. Set
-the .irq =0, the vUART will work in polling mode.
+the ``.irq =0``, the vUART will work in polling mode.

 - COM1_BASE (0x3F8) + COM1_IRQ(4)

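A sketch of what such a vUART entry in ``vm_configurations.c`` might look like, assuming a structure shaped roughly like the scenario configs; the layout here is illustrative, so verify against your scenario's actual definitions. Setting ``.irq`` to ``0`` selects polling mode, as the text explains.

.. code-block:: c

   /* Hypothetical mirror of a scenario vUART config entry. */
   struct vuart_config_sketch {
       unsigned int type;        /* e.g., legacy PIO               */
       unsigned int port_base;   /* e.g., 0x3F8 for COM1           */
       unsigned int irq;         /* 0 -> polling mode, 4 for COM1  */
   };

   static const struct vuart_config_sketch vuart0 = {
       .type = 0U,
       .port_base = 0x3F8U,      /* COM1_BASE */
       .irq = 0U,                /* polling mode: no IRQ injection */
   };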