The Intel Trace Hub (a.k.a. North Peak, or NPK) is a trace aggregator for
software, firmware, and hardware. On the virtualization platform, it can be
used to output the traces from the SOS, the UOS, the hypervisor, and the
firmware together, with unified timestamps.

There are two software-visible MMIO spaces in the NPK PCI device. One is the
CSR, which maps the configuration registers; the other is the STMR, which is
organized as many Masters and is used to send the traces. Each Master has a
fixed number of Channels, which is 128 on GP. Each Channel occupies 64B, so
the stride of each Master is 8KB (64B * 128). Here is the detailed layout of
the STMR::

                            M=NPK_SW_MSTR_STP (1024 on GP)
      +-------------------+
      |    m[M],c[C-1]    | Base(M,C-1)
      +-------------------+
      |        ...        |
      +-------------------+
      |    m[M],c[0]      | Base(M,0)
      +-------------------+
      |        ...        |
      +-------------------+
      |    m[i+1],c[1]    | Base(i+1,1)
      +-------------------+
      |    m[i+1],c[0]    | Base(i+1,0)
      +-------------------+
      |        ...        |
      +-------------------+
      |    m[i],c[1]      | Base(i,1)=SW_BAR+0x40
      +-------------------+
      |    m[i],c[0]      | 64B, Base(i,0)=SW_BAR
      +-------------------+
                            i=NPK_SW_MSTR_STRT (256 on GP)

The CSR and the STMR are treated differently in NPK virtualization because:

1. The CSR configuration should come from just one OS, instead of from each
   OS. In our case, it should come from the SOS.

2. For performance and timing reasons, the traces from each OS should be
   written to the STMR directly.

Based on these points, NPK virtualization is implemented as follows:

1. The physical CSR is owned by the SOS, and dm/npk emulates a software CSR
   for the UOS, to keep the NPK driver on the UOS unchanged. Some CSR initial
   values are configured to make the UOS NPK driver believe it is working on
   a real NPK. The CSR configuration from the UOS is ignored by the dm, and
   this causes no side effects: because traces are the only things needed
   from the UOS, neither the location the traces are sent to nor the trace
   format is affected by the CSR configuration.

2. Part of the physical STMR is reserved for the SOS, and the rest is passed
   through to the UOS, so that the UOS can write the traces to the MMIO space
   directly. A parameter is needed to indicate the offset and the size (in
   Masters) of the region to pass through to the UOS. For example, with
   "-s 0:2,npk,512/256", 256 Masters starting from #768 (256 + 512; #256 is
   the first Master for software tracing) are passed through to the UOS::

                      CSR                       STMR
        SOS:   +--------------+   +------------------+---------------+
               | physical CSR |   | Reserved for SOS |               |
               +--------------+   +------------------+---------------+
        UOS:   +--------------+                      +---------------+
               | sw CSR by dm |                      | mapped to UOS |
               +--------------+                      +---------------+

Here is the overall flow of how it works:

1. The system boots up, and the NPK driver on the SOS is loaded.
2. The dm is launched with the parameters that enable NPK virtualization.
3. The dm/npk sets up a BAR for the CSR, and some values are initialized
   based on the parameters, for example, the total number of Masters for
   the UOS.
4. The dm/npk sets up a BAR for the STMR, and maps part of the physical STMR
   into it with an offset, according to the parameters.
5. The UOS boots up, and the native NPK driver on the UOS is loaded.
6. The traces from the UOS are enabled; they are written directly to the
   STMR, but not output by the NPK yet.
7. The NPK output is enabled on the SOS, and now the traces are output by
   the NPK to the selected target.
8. If memory is the selected target, the traces can be retrieved from memory
   on the SOS after the traces are stopped.

Signed-off-by: Zhi Jin <zhi.jin@intel.com>
Reviewed-by: Zhang Di <di.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Project ACRN Embedded Hypervisor
################################

The open source project ACRN defines a device hypervisor reference stack and
an architecture for running multiple software subsystems, managed securely,
on a consolidated system by means of a virtual machine manager. It also
defines a reference framework implementation for virtual device emulation,
called the "ACRN Device Model".

The ACRN Hypervisor is a Type 1 reference hypervisor stack, running directly
on the bare-metal hardware, and is suitable for a variety of IoT and embedded
device solutions. The ACRN hypervisor addresses the gap that currently exists
between datacenter hypervisors and hard partitioning hypervisors. The ACRN
hypervisor architecture partitions the system into different functional
domains, with carefully selected guest OS sharing optimizations for IoT and
embedded devices.

.. start_include_here

Community Support
*****************

The Project ACRN Developer Community includes developers from member
organizations and the general community, all joining in the development of
software within the project. Members contribute and discuss ideas, and submit
bugs and bug fixes. They also help those in need through the community's
forums, such as mailing lists and IRC channels. Anyone can join the developer
community, and the community is always willing to help its members and the
User Community get the most out of Project ACRN.

Welcome to the Project ACRN community!

We're now holding weekly Technical Community Meetings and encourage you to
call in and learn more about the project. Meeting information is on the
`TCM Meeting page`_ in our `ACRN wiki <https://wiki.projectacrn.org/>`_.

.. _TCM Meeting page:
   https://github.com/projectacrn/acrn-hypervisor/wiki/ACRN-Committee-and-Working-Group-Meetings#technical-community-meetings

Resources
*********

Here's a quick summary of resources to help you find your way around the
Project ACRN support systems:

* **Project ACRN Website**: The https://projectacrn.org website is the
  central source of information about the project. On this site, you'll find
  background and current information about the project as well as relevant
  links to project material. For a quick start, refer to the `Introduction`_
  and the `Getting Started Guide`_.

* **Source Code in GitHub**: Project ACRN source code is maintained on a
  public GitHub repository at https://github.com/projectacrn/acrn-hypervisor.
  You'll find information about getting access to the repository and how to
  contribute to the project in this `Contribution Guide`_ document.

* **Documentation**: Project technical documentation is developed along with
  the project's code, and can be found at https://projectacrn.github.io.
  Additional documentation is maintained in the `Project ACRN GitHub wiki`_.

* **Issue Reporting and Tracking**: Requirements and issue tracking is done
  in the GitHub issues system:
  https://github.com/projectacrn/acrn-hypervisor/issues. You can browse
  through the reported issues and submit issues of your own.

* **Mailing List**: The `Project ACRN Development mailing list`_ is perhaps
  the most convenient way to track developer discussions and to ask your own
  support questions to the Project ACRN community. There are also specific
  `ACRN mailing list subgroups`_ for builds, users, and Technical Steering
  Committee notes, for example. You can read through the message archives to
  follow past posts and discussions, a good way to discover more about the
  project.

.. _Introduction: https://projectacrn.github.io/latest/introduction/
.. _Getting Started Guide: https://projectacrn.github.io/latest/getting_started/
.. _Contribution Guide: https://projectacrn.github.io/latest/contribute.html
.. _Project ACRN GitHub wiki: https://github.com/projectacrn/acrn-hypervisor/wiki
.. _Project ACRN Development mailing list: https://lists.projectacrn.org/g/acrn-dev
.. _ACRN mailing list subgroups: https://lists.projectacrn.org/g/main/subgroups