Commit Graph

20 Commits

Author SHA1 Message Date
Stefan Kalkowski
6d48b5484d hw: correct the ARM cache maintenance operations
This commit fixes the following issues regarding cache maintenance
under ARM:

* read out the I- and D-cache line sizes at runtime and use the correct
  one (see the sketch after this entry)
* remove the 'update_data_region' call from unprivileged syscalls
* rename the 'update_instr_region' syscall to 'cache_coherent_region' to
  reflect what it does, namely making the I- and D-cache coherent
* restrict the 'cache_coherent_region' syscall to one page at a time
* look up the region given in a 'cache_coherent_region' syscall in the
  page table of the PD to prevent machine exceptions in the kernel
* only clean D-cache lines, do not invalidate them, when pages were
  added on Cortex-A8 and ARMv6 (the MMU sees phys. memory here)
* remove unused relics of the cache maintenance code

In addition, it introduces per-architecture memory-clearing functions
used by core when preparing new dataspaces. Thereby, it optimizes the
clearing:

* on ARMv7 by using per-word assignments
* on ARMv8 by using cache-line zeroing
* on x86_64 by using the 'rep stosq' assembler instruction

Fix #3685
2020-03-26 11:38:55 +01:00
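
The line-size detection and the x86_64 clearing mentioned above can be
illustrated as follows. This is only a minimal sketch assuming GCC-style
inline assembly; the helper names ('read_ctr', 'dcache_line_size',
'clear_region') are illustrative and not Genode's actual API.

  #include <cstddef>
  #include <cstdint>

  #if defined(__ARM_ARCH_7A__)

  /* read the Cache Type Register (CTR) to obtain cache-line sizes at runtime */
  static inline unsigned long read_ctr()
  {
      unsigned long ctr;
      asm volatile ("mrc p15, 0, %0, c0, c0, 1" : "=r" (ctr));
      return ctr;
  }

  /* DminLine, bits [19:16]: log2 of the smallest D-cache line in words */
  static inline std::size_t dcache_line_size() {
      return std::size_t(4) << ((read_ctr() >> 16) & 0xf); }

  /* IminLine, bits [3:0]: log2 of the smallest I-cache line in words */
  static inline std::size_t icache_line_size() {
      return std::size_t(4) << (read_ctr() & 0xf); }

  #elif defined(__x86_64__)

  /* clear a memory region eight bytes at a time via 'rep stosq' */
  static inline void clear_region(void *base, std::size_t size_bytes)
  {
      std::uint64_t *dst   = static_cast<std::uint64_t *>(base);
      std::size_t    count = size_bytes / sizeof(std::uint64_t);
      asm volatile ("rep stosq" : "+D" (dst), "+c" (count)
                                : "a" (std::uint64_t(0)) : "memory");
  }

  #endif
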
Stefan Kalkowski
8cc48d5688 hw: be more accurate in synchronizing ASID/Pages
Fix #3651
2020-02-20 12:11:23 +01:00
Stefan Kalkowski
e9b3569f44 hw: remove overall cache maintainance from core
This functionality is only needed in bootstrap now that kernel and
userland share the same address-space.

Fix #2699
2019-04-01 19:33:46 +02:00
Stefan Kalkowski
0635d5fffb hw: turn Cpu_idle into a Thread
Fix #2539
2017-11-06 13:57:20 +01:00
Stefan Kalkowski
4e97a6511b hw: switch page-tables only when necessary
* Instead of always re-loading the page tables when a thread context is
  switched, only do this when a thread of another user PD is the next
  target; core threads are always executed within the last PD's
  page-table set (a sketch of this check follows the entry)
* remove the concept of the mode transition
* instead, map the exception vector once, in bootstrap code, into the
  kernel's memory segment
* when a new page directory is constructed for a user PD, copy over the
  top-level kernel-segment entries on RISC-V and x86; on ARM we use a
  designated page-directory register for the kernel segment
* transfer the current CPU id from bootstrap to core/kernel in a register
  to ease the first stack-address calculation
* align the CPU-context member of threads and VMs because of x86
  constraints regarding stack-pointer loading
* introduce the Align_at template for members with alignment constraints
* let the x86 hardware do part of the context saving in ISS, by passing
  the thread context into the TSS before leaving to user-land
* use one exception vector for all ARM platforms including Arm_v6

Fix #2091
2017-10-19 13:31:18 +02:00
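
A minimal sketch of the "switch only when necessary" check, using
hypothetical Pd/Thread types; this is not Genode's actual interface,
only an illustration of the idea that core threads reuse the last PD's
page-table set:

  /* hypothetical types, for illustration only */
  struct Pd { void load_tables(); /* e.g., write TTBR0/CR3 and ASID */ };

  struct Thread {
      Pd *pd;  /* nullptr for core threads, which run in the last PD's tables */
  };

  void switch_to(Thread &next, Pd *&current_pd)
  {
      /* re-load page tables only if the next thread belongs to another user PD */
      if (next.pd && next.pd != current_pd) {
          next.pd->load_tables();
          current_pd = next.pd;
      }
      /* ... restore the CPU context of 'next' ... */
  }
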
Martin Stein
60a7fe5586 hw & arm: write whole SPSR in mode transition
Previously, we wrote the SPSR via an MSR instruction without additional
field flags. Unfortunately, this tells the CPU to write the register
only partially. This often isn't a problem, as the user's PSR reset
value normally conforms to our expectations, but in some cases (e.g.,
the PSR endianness bit on WandBoard core #4) the reset value is bad.
Thus, we have to add the CXSF flags (access Control + eXtension +
Status + Flags) so that the CPU overwrites the entire register.

Fixes #2254
2017-05-31 13:16:08 +02:00
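
A minimal sketch of the fix, assuming ARMv7 and GCC inline assembly;
'write_spsr' is an illustrative helper, not the kernel's actual code:

  /* naming all four fields (c, x, s, f) makes MSR write the whole SPSR */
  static inline void write_spsr(unsigned long psr)
  {
      asm volatile ("msr spsr_cxsf, %0" : : "r" (psr));
  }
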
Stefan Kalkowski
b9549e58d0 hw: cleanup core code (Ref #2394) 2017-05-31 13:15:53 +02:00
Stefan Kalkowski
76bc2b9e89 hw: remove core internal header directories
Fix #2393
2017-05-31 13:15:52 +02:00
Stefan Kalkowski
67ba7b89a7 hw: separate bootstrap and core strictly
* Introduce Hw namespace and library files under src/lib/hw
* Introduce Bootstrap namespace
* Move all initialization logic into Bootstrap namespace

Ref #2388
2017-05-31 13:15:52 +02:00
Norman Feske
29b8d609c9 Adjust file headers to refer to the AGPLv3 2017-02-28 12:59:29 +01:00
Stefan Kalkowski
cf943dac65 hw: bootstrap into kernel
Put the initialization of the CPU cores, the setup of the page tables,
and the enabling of the MMU and caches into a separate component that
is used only to bootstrap the kernel, i.e., core.

Ref #2092
2017-02-23 14:54:42 +01:00
Stefan Kalkowski
7aff1895bf hw: enable SMP for ARM Cortex A9
This commit enables multi-processing for all Cortex A9 SoCs we currently
support. Moreover, it enables the L2 cache on i.MX6, which had not been
enabled until now. However, the QEMU variants hw_pbxa9 and hw_zynq still
use only one core, because the busy CPU synchronization used when
initializing multiple Cortex A9 cores leads to horrible boot times on
QEMU (a sketch of such a spin-based boot barrier follows this entry).

During this work, the CPU initialization was reworked in general. Many
hardware specifics moved into the 'spec'-specific files, and some
generic hook functions and abstractions were thereby eliminated. This
results in leaner implementations, for instance on non-SMP platforms or
in the x86 case, where cache maintenance is a non-issue.

Because memory/cache coherency and SMP are closely coupled on ARM
Cortex A9, this commit combines these different aspects.

Fix #1312
Fix #1807
2016-01-26 16:20:18 +01:00
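
A minimal sketch of a busy (spin-based) boot synchronization of CPU
cores, as referenced above; it uses std::atomic for clarity and is not
the kernel's actual implementation:

  #include <atomic>

  static std::atomic<unsigned> cpus_ready { 0 };

  /* each core calls this once during early init and spins until all arrived */
  void cpu_boot_sync(unsigned nr_of_cpus)
  {
      cpus_ready.fetch_add(1, std::memory_order_release);

      while (cpus_ready.load(std::memory_order_acquire) < nr_of_cpus) { }
  }
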
Stefan Kalkowski
e05d26567d hw: make 'smp' property an aspect (Ref #1312)
This commit separates certain SMP aspects into 'spec/smp' subdirectories.
Thereby, it again simplifies non-SMP implementations, where locking and
several platform-specific maintenance operations are not needed.
Moreover, it moves several platform specifics to appropriate places,
removes dead code from x86, and starts to turn global static pointers
into references that are handed over.
2016-01-15 16:42:12 +01:00
Stefan Kalkowski
c850462f43 hw: replace kernel's object id allocators
Instead of having an ID allocator per object class, use one global
allocator for all of them. This makes the artificial limits for the
different object types superfluous. Moreover, replace the base-hw
specific ID allocator implementation with the generic Bit_allocator,
which also saves memory.

Ref #1443
2015-04-17 16:13:20 +02:00
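
A minimal sketch of the idea of one global, bit-array-based ID allocator
shared by all kernel object types; the class below only mirrors the
concept of the generic Bit_allocator and is not its actual
implementation:

  #include <cstddef>
  #include <cstdint>

  template <std::size_t IDS>
  class Bit_allocator_sketch
  {
      private:

          std::uint64_t _bits[(IDS + 63) / 64] { };

      public:

          /* return the first free id and mark it used, or IDS if exhausted */
          std::size_t alloc()
          {
              for (std::size_t id = 0; id < IDS; id++)
                  if (!(_bits[id / 64] & (std::uint64_t(1) << (id % 64)))) {
                      _bits[id / 64] |= std::uint64_t(1) << (id % 64);
                      return id;
                  }
              return IDS;
          }

          void free(std::size_t id) {
              _bits[id / 64] &= ~(std::uint64_t(1) << (id % 64)); }
  };

  /* one allocator for all kernel-object ids instead of one per object class */
  static Bit_allocator_sketch<1 << 16> global_object_ids;
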
Stefan Kalkowski
657646e76e hw: adjust core bootstrap to fit generic process
* Introduce a hw-specific crt0 for core that calls, e.g., init_main_thread
* re-map core's main thread's UTCB to fit the right context-area location
* switch core's main thread's stack to fit the right context-area location

Fix #1440
2015-03-19 08:57:19 +01:00
Stefan Kalkowski
322be1b4fb hw: LPAE for Cortex a15 (fix #1387) 2015-02-16 13:40:37 +01:00
Stefan Kalkowski
f0fae2a5f2 hw: set TTBR0 according to CPU facilities
Fixes #1195
2014-10-10 13:02:30 +02:00
Martin Stein
14e9a89cba hw: no superfluous ORing of zeros and clean up
fix #710
2014-08-15 10:19:49 +02:00
Martin Stein
9da42dde2f hw & arm_v7: mode transition via transit ttbr0
Previously, we did the protection-domain switches without a transitional
translation table that contains only global mappings. This was fine as
long as the CPU did no speculative memory accesses. However, enabling
branch prediction triggers such accesses. Thus, if we don't want to
invalidate the predictors on every context switch, we need to switch
more carefully.

ref #474
2014-08-15 10:19:48 +02:00
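
A minimal sketch of the PD switch via a transit table that holds only
global mappings, assuming ARMv7 system-register encodings; names are
illustrative, not the kernel's actual code:

  /* switch from the current PD to the next one via the transitional table */
  static inline void switch_via_transit_table(unsigned long transit_table,
                                              unsigned long next_table,
                                              unsigned long next_asid)
  {
      /* 1. point TTBR0 at the transit table that holds only global mappings */
      asm volatile ("mcr p15, 0, %0, c2, c0, 0" : : "r" (transit_table));
      asm volatile ("isb");

      /* 2. switch to the ASID of the next protection domain (CONTEXTIDR) */
      asm volatile ("mcr p15, 0, %0, c13, c0, 1" : : "r" (next_asid));
      asm volatile ("isb");

      /* 3. finally install the next PD's translation table */
      asm volatile ("mcr p15, 0, %0, c2, c0, 0" : : "r" (next_table));
      asm volatile ("isb");
  }
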
Martin Stein
a5cf09fa6e hw: re-organize file structure
fix #1197
2014-08-15 10:19:48 +02:00