Compare commits


151 Commits

Author SHA1 Message Date
Morph
e6f200b960 applets/swkbd: Split software keyboard initialization
Since the CalcArg struct has been updated with a new size and fields, we have to split the initialization of the keyboard into multiple functions.
This also adds support for parsing the new CalcArg struct used by updated versions of Monster Hunter Rise.
2022-03-21 23:58:50 -04:00
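As a rough illustration of the split described above, initialization can dispatch on the size of the transferred CalcArg blob. The struct names, fields, and sizes below are hypothetical, not yuzu's actual swkbd definitions:

#include <cstdint>
#include <cstring>
#include <span>
#include <stdexcept>

// Hypothetical layouts for an old and a new revision of the keyboard CalcArg.
struct CalcArgOld {
    std::uint32_t flags;
    char initial_text[64];
};
struct CalcArgNew {
    std::uint32_t flags;
    char initial_text[64];
    std::uint32_t extra_options; // assumed new field in updated firmware
};

class SoftwareKeyboard {
public:
    // Pick the parsing path from the size of the incoming storage instead of
    // assuming a single layout.
    void Initialize(std::span<const std::uint8_t> data) {
        if (data.size() == sizeof(CalcArgNew)) {
            InitializeNew(data);
        } else if (data.size() == sizeof(CalcArgOld)) {
            InitializeOld(data);
        } else {
            throw std::runtime_error{"unexpected CalcArg size"};
        }
    }

private:
    void InitializeOld(std::span<const std::uint8_t> data) {
        CalcArgOld arg{};
        std::memcpy(&arg, data.data(), sizeof(arg));
        // ... apply old-format settings ...
    }
    void InitializeNew(std::span<const std::uint8_t> data) {
        CalcArgNew arg{};
        std::memcpy(&arg, data.data(), sizeof(arg));
        // ... apply new-format settings (e.g. updated Monster Hunter Rise) ...
    }
};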
Morph
e7c1e6fc27 applets/swkbd: Add new inline software keyboard types
These were added in newer firmware versions.
2022-03-21 23:58:50 -04:00
Narr the Reg
f83cd2e8b9 Merge pull request #8067 from ameerj/qt-browser-include
qt_web_browser: Add missing includes
2022-03-21 21:33:09 -06:00
ameerj
b3cfccdb80 qt_web_browser: Add missing includes 2022-03-21 22:42:14 -04:00
Ameer J
75046a3351 Merge pull request #8038 from liamwhite/exit-register-detection
shader_recompiler/EXIT: increment output register on failed enable test
2022-03-21 21:24:07 -04:00
bunnei
82ac66f8a4 Merge pull request #8048 from ameerj/include-purge
general: Reduce unused includes across the project
2022-03-21 18:03:54 -07:00
bunnei
ff2e891022 Merge pull request #7812 from FernandoS27/made-straight-from-the-nut
BufferCache: Find direction of the stream buffer increase.
2022-03-20 15:23:53 -07:00
Fernando Sahmkow
3b0d233cbd BufferCache: Find direction of the stream buffer increase. 2022-03-20 21:37:23 +01:00
Mai M
628534a9ac Merge pull request #8054 from merryhime/dynarmic
dynarmic: Accelerate SHA256 and implement for A32 frontend
2022-03-20 10:37:38 -04:00
Merry
d1c0cdf4f2 dynarmic: Accelerate SHA256 and implement for A32 frontend
* Implements hardware acceleration for SHA256 instructions.
 * Adds SHA256 instructions introduced in ARMv8 to A32 frontend.
 * Implements polyfill for processors that do not support hardware
   accelerated SHA instructions.
2022-03-20 14:06:17 +00:00
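The general shape of this kind of fallback is runtime feature detection plus dispatch. A hedged sketch for GCC/Clang on x86-64; the compression helpers are empty placeholders rather than a real SHA-256 implementation, and none of these names come from dynarmic:

#include <cpuid.h>
#include <cstdint>

// Placeholder round functions; a real backend would emit SHA-NI instructions
// (e.g. sha256rnds2) in one and portable scalar rounds in the other.
void Sha256CompressHardware(std::uint32_t /*state*/[8], const std::uint8_t /*block*/[64]) {}
void Sha256CompressPolyfill(std::uint32_t /*state*/[8], const std::uint8_t /*block*/[64]) {}

bool HostHasSha() {
    unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        return false;
    }
    return (ebx & (1u << 29)) != 0; // CPUID.(EAX=7,ECX=0):EBX bit 29 = SHA extensions
}

void Sha256Compress(std::uint32_t state[8], const std::uint8_t block[64]) {
    static const bool has_sha = HostHasSha(); // detect once, then dispatch
    if (has_sha) {
        Sha256CompressHardware(state, block);
    } else {
        Sha256CompressPolyfill(state, block);
    }
}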
Fernando S
7c05c987a3 Merge pull request #8036 from ameerj/starbit-nv
vk_texture_cache: Do not reinterpret DepthStencil source images
2022-03-20 13:35:59 +01:00
bunnei
150f6db4d1 Merge pull request #7840 from lioncash/bitor
texture_cache: Amend unintended bitwise OR in SynchronizeAliases
2022-03-20 02:19:30 -07:00
ameerj
866b7c20a8 general: Fix clang/gcc build errors 2022-03-20 02:25:09 -04:00
ameerj
a367db44cf yuzu_cmd: Reduce unused includes 2022-03-20 02:25:09 -04:00
ameerj
936829e873 yuzu: Reduce unused includes 2022-03-20 02:25:09 -04:00
ameerj
9b505758dc web_service: Reduce unused includes 2022-03-20 02:25:09 -04:00
ameerj
967ed01fcf input_common: Reduce unused includes 2022-03-20 02:25:09 -04:00
ameerj
574a2c4b77 shader_recompiler: Reduce unused includes 2022-03-20 02:25:08 -04:00
bunnei
5960d54722 Merge pull request #8040 from Morph1984/handle-table
KHandleTable: Optimize table entry layout
2022-03-19 23:17:37 -07:00
bunnei
474318ee37 Merge pull request #8047 from ameerj/msvc-test-disable
.ci/build-msvc: Disable YUZU_TESTS cmake variable
2022-03-19 16:28:17 -07:00
ameerj
923decae5a common: Reduce unused includes 2022-03-19 15:01:31 -04:00
ameerj
1bc7d61b57 video_core: Reduce unused includes 2022-03-19 15:01:31 -04:00
bunnei
17ebe211ec Merge pull request #8025 from lat9nq/cmd-specify-config
yuzu_cmd: Allow user to specify config file location
2022-03-19 01:33:50 -07:00
ameerj
c85a3e5a28 build-msvc: Disable tests 2022-03-19 02:35:09 -04:00
ameerj
8a8ea65fae common: Reduce unused includes 2022-03-19 02:23:33 -04:00
ameerj
ade596121b core: Reduce unused includes 2022-03-19 02:23:32 -04:00
bunnei
8c8b5359f2 Merge pull request #8028 from v1993/patch-9
bsd: Allow inexact match for address length in AcceptImpl
2022-03-18 18:06:13 -07:00
Liam
536d7ed7b1 Address review comments 2022-03-18 15:55:46 -04:00
Liam
d400b618a7 shader_recompiler/EXIT: skip render targets with no outputs 2022-03-18 09:26:25 -04:00
Morph
fe1182e916 Merge pull request #8039 from ameerj/core-include
general: Reduce core.h includes
2022-03-18 02:45:30 -04:00
ameerj
d618bba8a6 general: Reduce core.h includes 2022-03-18 02:13:02 -04:00
Morph
8b7d571b66 KHandleTable: Optimize table entry layout
Since the handle type is not being used, we can reduce the amount of space each entry takes up by 4 bytes.
2022-03-18 00:28:25 -04:00
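A simplified illustration of the layout change; the structs are illustrative only, and the exact saving in KHandleTable depends on its real entry definition. Dropping a field that is never read shrinks every slot:

#include <cstdint>

// Hypothetical "before": the per-entry info carried an object type that was
// never consulted by the table.
struct EntryInfoOld {
    std::uint16_t linear_id;
    std::uint32_t type; // unused
};

// Hypothetical "after": only the data that is actually used remains.
struct EntryInfoNew {
    std::uint16_t linear_id;
};

static_assert(sizeof(EntryInfoNew) < sizeof(EntryInfoOld),
              "removing the unused field makes each handle-table entry smaller");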
Liam
6fa17f3372 shader_recompiler/EXIT: increment output register on failed enable test 2022-03-17 22:09:31 -04:00
ameerj
4d840aa903 vk_texture_cache: Do not reinterpret DepthStencil source images
Fixes star pointer interactions in Super Mario Galaxy on some drivers, notably Nvidia.

Co-Authored-By: Fernando S. <1731197+fernandos27@users.noreply.github.com>
2022-03-17 20:55:05 -04:00
Fernando S
cb86e7941b Merge pull request #8024 from liamwhite/const-indexing
Add shader support for const buffer indirect addressing
2022-03-18 00:36:31 +01:00
Fernando S
a616f49864 Merge pull request #8030 from liamwhite/s8d24-conversion
Vulkan: convert S8D24 <-> ABGR8
2022-03-18 00:36:06 +01:00
Liam
3009d0bd7d Address review comments 2022-03-17 14:48:18 -04:00
Liam
e228a40db8 shader_recompiler: Use functions for indirect const buffer accesses 2022-03-17 13:30:21 -04:00
Liam
3ac522ba41 Address review comments 2022-03-17 09:30:41 -04:00
bunnei
f55af65e82 Merge pull request #7964 from german77/miiii
applet: mii: Simple implementation of mii applet
2022-03-16 21:37:53 -07:00
Liam
6407f16d81 Address review comments 2022-03-16 18:00:42 -04:00
Liam
1415542f73 shader_recompiler: Implement LDC.IS address mode 2022-03-16 11:05:04 -04:00
Fernando S
2db5076ec9 Merge pull request #8013 from bunnei/kernel-slab-rework-v2
Kernel Memory Updates (Part 6): Use guest memory for slab heaps & update TLS.
2022-03-16 12:15:33 +01:00
Fernando S
c3c351e2c2 Merge pull request #8023 from ameerj/kirby-pop-in
maxwell_3d: Implement a safer CB data upload
2022-03-16 12:14:08 +01:00
bunnei
613558867c Merge pull request #8026 from lat9nq/ext-mem-ini
default_ini: List use_extended_memory_layout in default config file
2022-03-15 18:12:10 -07:00
Liam
bcc2d7e69b Vulkan: convert S8D24 <-> ABGR8 2022-03-15 20:05:21 -04:00
Valeri
9e633999d6 bsd: Allow inexact match for address length in AcceptImpl
Minecraft passes in zero for length, but this should account for all possible cases
2022-03-15 14:06:34 +03:00
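In effect the service copies at most what the guest asked for rather than rejecting a length that is not exactly sizeof(sockaddr_in). A hedged, generic sketch; the function and parameters are illustrative, not the BSD service's real interface:

#include <netinet/in.h>
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Copy the accepted peer address into a guest-sized buffer. A zero or short
// guest length (as Minecraft passes) is tolerated by truncating the copy
// instead of failing the whole Accept.
std::vector<std::uint8_t> WritePeerAddress(const sockaddr_in& peer, std::size_t guest_len) {
    std::vector<std::uint8_t> out(guest_len);
    const std::size_t copy_len = std::min(guest_len, sizeof(peer));
    std::memcpy(out.data(), &peer, copy_len);
    return out;
}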
lat9nq
24d51e1c92 yuzu_cmd: Allow user to specify config file location
Adds an option `-c` or `--config` with one required argument that allows
the user to specify to where the config file is located. Useful for
scripts that run specific games with different preferences for settings.
2022-03-15 03:48:40 -04:00
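yuzu_cmd's option handling is getopt-style, so the new flag amounts to one more entry in the long-option table. A self-contained sketch of that wiring; the variable names are illustrative:

#include <getopt.h>
#include <cstdio>
#include <string>

int main(int argc, char** argv) {
    std::string config_path;
    static const struct option long_options[] = {
        {"config", required_argument, nullptr, 'c'},
        {nullptr, 0, nullptr, 0},
    };

    int opt;
    while ((opt = getopt_long(argc, argv, "c:", long_options, nullptr)) != -1) {
        if (opt == 'c') {
            config_path = optarg; // user-specified config file location
        }
    }
    std::printf("using config: %s\n", config_path.empty() ? "<default>" : config_path.c_str());
    return 0;
}

An invocation would then look like yuzu-cmd -c /path/to/custom.ini <game> (paths here are hypothetical).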
lat9nq
cb32d9aff8 default_ini: List use_extended_memory_layout in default config file 2022-03-15 03:13:55 -04:00
bunnei
59d2a38daa Merge pull request #8006 from BytesGalore/fix_cmake_missing_qt5_dbus
build(cmake): fix missing Qt5::DBus link target for bundled linux package
2022-03-14 18:56:39 -07:00
bunnei
e95bb782f0 core: hle: kernel: init_slab_setup: Move CalculateSlabHeapGapSize to global namespace. 2022-03-14 18:14:54 -07:00
bunnei
5f3e77d93e core: hle: kernel: Allocate dummy threads on host thread storage.
- Fixes a crash where on subsequent boots, long-lived host threads would have their dummy threads freed.
2022-03-14 18:14:54 -07:00
bunnei
82a2463062 core: hle: kernel: Downgrade dangling objects warning to debug.
- It is not impossible to leak kernel objects, so this is not really an issue anymore (albeit still interesting).
2022-03-14 18:14:54 -07:00
bunnei
f7d1929816 core: hle: kernel: Make object list container global and ensure it is reset on each emulation session. 2022-03-14 18:14:54 -07:00
bunnei
51589c5e21 core: hle: kernel: Remove server session tracking.
- These are now allocated/managed by emulated memory, so we do not need to track and free them on shutdown.
2022-03-14 18:14:54 -07:00
bunnei
0defac2f2a core: hle: kernel: k_process: Remove handle table finalize, reset page table. 2022-03-14 18:14:54 -07:00
bunnei
813b2ef253 core: hle: kernel: k_process: Implement thread local storage accurately. 2022-03-14 18:14:54 -07:00
bunnei
3210bc2767 core: hle: kernel: k_page_table: Add implementations of MapPages, UnmapPages, and FindFreeArea for TLS. 2022-03-14 18:14:54 -07:00
bunnei
15d9b0418f core: hle: kernel: k_slab_heap: Refresh to use guest allocations. 2022-03-14 18:14:54 -07:00
bunnei
a25cd4bb4b core: hle: kernel: Update init_slab_heap, use device memory, and add KThreadLocalPage and KPageBuffer.
- Refreshes our slab initialization code to latest known behavior.
- Moves all guest kernel slabs into emulated device memory.
- Adds KThreadLocalPage and KPageBuffer, which we will use for accurate TLS management.
2022-03-14 18:14:54 -07:00
bunnei
91819726b1 core: hle: kernel: k_page_buffer: Add KThreadLocalPage primitive. 2022-03-14 18:14:53 -07:00
bunnei
08434842b3 core: hle: kernel: k_page_buffer: Add KPageBuffer primitive. 2022-03-14 18:14:53 -07:00
bunnei
4a28d8cebb core: hle: kernel: k_thread: Ensure host Fiber is freed. 2022-03-14 18:14:53 -07:00
bunnei
ed67e1dd10 core: hle: kernel: k_server_session: Ensure SessionRequestManager is freed. 2022-03-14 18:14:53 -07:00
bunnei
bfc4823e36 core: hle: service: kernel_helpers: Use system resource limit. 2022-03-14 18:14:53 -07:00
bunnei
8873c0c3db core: hle: service: sm: Fix KPort reference count. 2022-03-14 18:14:53 -07:00
bunnei
25c0acc388 core: hle: kernel: k_thread: Update to reflect tree changes. 2022-03-14 18:14:53 -07:00
bunnei
07c9d9bdbd core: hle: kernel: Use weak_ptr where possible for SessionRequestHandler and SessionRequestManager. 2022-03-14 18:14:53 -07:00
bunnei
ce33503adf core: hle: kernel: k_memory_layout: Update kernel slab memory sizes. 2022-03-14 18:14:53 -07:00
bunnei
0f0e1c25bc core: hle: kernel: svc_types: Add ThreadLocalRegionSize. 2022-03-14 18:14:53 -07:00
bunnei
944d9186ca core: hle: kernel: k_condition_variable: Update to reflect tree changes. 2022-03-14 18:14:53 -07:00
bunnei
158c5845ab core: hle: kernel: k_address_arbiter: Update to reflect tree changes. 2022-03-14 18:14:53 -07:00
bunnei
0fdf1d2a60 common: tree: Various updates. 2022-03-14 18:14:53 -07:00
bunnei
69c2faeb6a common: intrusive_red_black_tree: Various updates. 2022-03-14 18:14:53 -07:00
Liam
52895fab67 shader: add support for const buffer indirect addressing 2022-03-14 19:43:32 -04:00
ameerj
5119a57614 maxwell_3d: Implement a safer CB data upload
This makes constant buffer uploads safer and more accurate by updating the GPU memory as soon as the CB Data method is invoked. The previous implementation deferred the updates until a different Maxwell 3D method was detected, then wrote all CB data at once.
2022-03-14 19:18:36 -04:00
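Reduced to its essence, the change is about when the write to guest GPU memory happens. A generic sketch of the immediate approach; the class and method names are made up, not yuzu's Maxwell3D interface:

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

struct GuestMemory {
    std::vector<std::uint8_t> bytes = std::vector<std::uint8_t>(0x1000);
    void Write32(std::size_t offset, std::uint32_t value) {
        std::memcpy(bytes.data() + offset, &value, sizeof(value));
    }
};

class ConstBufferUpload {
public:
    explicit ConstBufferUpload(GuestMemory& memory) : memory_{memory} {}

    // Safer behaviour: commit the word as soon as the CB data method arrives,
    // rather than buffering it until some unrelated method triggers a flush.
    void OnCbData(std::size_t offset, std::uint32_t value) {
        memory_.Write32(offset, value);
    }

private:
    GuestMemory& memory_;
};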
Fernando S
cd07a43724 Merge pull request #8008 from ameerj/rescale-offsets-array
rescaling_pass: Fix rescaling Color2DArray ImageFetch offsets
2022-03-15 00:08:22 +01:00
Fernando S
f9e1f559b1 Merge pull request #8000 from liamwhite/hagi
Initial support for Wii Hagi emulator
2022-03-15 00:08:05 +01:00
bunnei
cc285b9924 Merge pull request #8015 from FernandoS27/fix-global-mem
Shader decompiler: Fix storage tracking in deko3d.
2022-03-14 16:03:23 -07:00
byte[]
be0e6a2bb4 Maxwell3D: Link to override constant definition in nouveau 2022-03-14 11:06:25 -04:00
Fernando S
0331b8d799 Merge pull request #8016 from merryhime/kill-mem-use
dynarmic: Reduce size of code caches
2022-03-14 16:04:46 +01:00
byte[]
364c67e49b Maxwell3D: restore original topology when topology overrides are disabled 2022-03-14 11:00:08 -04:00
Liam
37aa472269 Maxwell3D: Use override constants from nouveau
This fixes some incorrect rendering in Sunshine
2022-03-14 10:11:58 -04:00
Merry
220674d0d6 dynarmic: Reduce size of code caches 2022-03-13 22:17:14 +00:00
Fernando Sahmkow
185fc03c3c Shader decompiler: do constant propagation before texture pass. 2022-03-13 21:49:40 +01:00
Fernando Sahmkow
ec9f0f064e Shader decompiler: Fix storage tracking in deko3d. 2022-03-13 17:41:16 +01:00
bunnei
8decc8d1a5 Merge pull request #8007 from ameerj/vs-2022-errors
emit_spirv, vk_compute_pass: Resolve VS2022 compiler errors
2022-03-13 03:43:06 -07:00
merry
1f6bbb6257 Merge pull request #8009 from ameerj/dynarmic-exclusives-config
config: Write dynarmic exclusive memory configs
2022-03-13 07:42:38 +00:00
ameerj
6b164a80a1 config: Write dynarmic exclusive memory configs
Ensures the configs are written and saved between boots
2022-03-12 03:42:50 -05:00
ameerj
f87f8d4610 rescaling_pass: Fix rescaling Color2DArray ImageFetch offsets
ImageFetch offsets for 2D array coordinates have a different composite size than the coordinates. The rescaling pass was not taking this into account.

Fixes broken shaders when scaling is enabled in Astral Chain, and likely other titles.
2022-03-12 03:31:56 -05:00
ameerj
e8c50e709e emit_spirv, vk_compute_pass: Resolve VS2022 compiler errors 2022-03-12 02:54:33 -05:00
BytesGalore
948f6e1112 build(cmake): fix missing Qt5::DBus link target for bundled linux package 2022-03-12 08:40:33 +01:00
bunnei
27cc7b6a73 Merge pull request #7997 from Wunkolo/cpu_detect_more
cpu_detect: Add additional x86 flags and telemetry
2022-03-11 17:26:41 -08:00
Liam
56c646d82c Maxwell3D: Restrict topology override effect to after the register is set 2022-03-11 19:42:12 -05:00
bunnei
5c74dd6462 Merge pull request #8003 from yuzu-emu/revert-7982-fix_cmake_missing_qt5_dbus
Revert "build(cmake): fix missing Qt5::DBus target on linux"
2022-03-11 15:22:30 -08:00
bunnei
15fdc2cd09 Revert "build(cmake): fix missing Qt5::DBus target on linux" 2022-03-11 15:22:24 -08:00
Wunkolo
d248c1203e cpu_detect: Add additional x86 flags and telemetry
Adds detection of additional CPU flags to cpu_detect and reports them in the telemetry output.

This is not exhaustive but guided by features that [dynarmic utilizes](bcfe377aaa/src/dynarmic/backend/x64/host_feature.h (L12-L33)) as well as features that are currently utilized but not reported to telemetry (invariant_tsc). This is intended to guide future optimizations.

AVX512 in particular is broken up into its individual subsets and some other processor features such as [sha](https://en.wikipedia.org/wiki/Intel_SHA_extensions) and [gfni](https://en.wikipedia.org/wiki/AVX-512#GFNI) are added to have some forward-facing data-points.

What used to be a single `CPU_Extension_x64_AVX512` telemetry field
is also broken up into individual `CPU_Extension_x64_AVX512{F,VL,CD,...}` fields.
2022-03-11 10:27:00 -08:00
Wunkolo
29a7a61806 common/telemetry: Update AddField name type to string_view
Non-owning `string_view` is flexible and
avoids some of the many redundant copies made over `std::string`
2022-03-11 10:26:59 -08:00
Liam
70e632f153 Maxwell3D: mark index buffers as dirty after updating counts 2022-03-11 08:51:22 -05:00
bunnei
8180b262fc Merge pull request #7982 from BytesGalore/fix_cmake_missing_qt5_dbus
build(cmake): fix missing Qt5::DBus target on linux
2022-03-10 23:12:33 -08:00
Liam
82c3042c0f TextureCacheRuntime: allow converting D24S8 to ABGR8
I can't see how this would be useful, but Galaxy uses it.
2022-03-10 20:25:34 -05:00
Liam
f1521183f8 Maxwell3D: read small-index draw and primitive topology override registers
This allows Galaxy and Sunshine to render for the first time.
2022-03-10 19:21:04 -05:00
Mai M
e200161982 Merge pull request #7999 from merryhime/fix-7992
backend: Ensure backend_thread is destructed before message_queue
2022-03-10 08:07:41 -05:00
Merry
22f50c6bc1 backend: Ensure backend_thread is destructed before message_queue
Ensures the stop_token signals that stop has been requested before the condition_variable is destroyed
2022-03-10 10:49:15 +00:00
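The underlying C++ rule is that non-static data members are destroyed in reverse declaration order, so declaring the queue and its synchronization primitives before the worker thread guarantees the jthread is stopped and joined while they still exist. A minimal, self-contained illustration with generic names, not yuzu's:

#include <condition_variable>
#include <mutex>
#include <thread>

class Logger {
    // Declared first -> destroyed last.
    std::mutex mutex;
    std::condition_variable_any cv;

    // Declared last -> destroyed first: ~jthread() requests stop and joins,
    // so the worker can no longer touch cv/mutex after they are gone.
    std::jthread worker{[this](std::stop_token token) {
        std::unique_lock lock{mutex};
        // Returns once stop is requested during ~jthread().
        cv.wait(lock, token, [] { return false; });
    }};
};

int main() {
    Logger logger; // destruction order keeps the shutdown race-free
}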
Morph
52f8f00434 Merge pull request #7998 from Wunkolo/cpuid_array
cpu_detect: Revert `__cpuid{ex}` array-type argument
2022-03-10 00:09:36 -05:00
Wunkolo
d9b1199ffb cpu_detect: Revert __cpuid{ex} array-type argument
Restores compatibility with MSVC's `__cpuid` intrinsic.
2022-03-09 19:50:01 -08:00
bunnei
9a97ef4647 Merge pull request #7936 from Wunkolo/cpu_detect
cpu_detect: Refactor detection of processor features
2022-03-09 15:34:42 -08:00
Wunkolo
873a9fa7e5 cpu_detect: Add missing lzcnt detection 2022-03-09 13:57:47 -08:00
Wunkolo
ec5f3351b6 cpu_detect: Refactor cpu/manufacturer identification
Set the zero-enum value to Unknown
Move the Manufacturer enum into the CPUCaps structure namespace
Add "ParseManufacturer" utility-function
Fix cpu/brand string buffer sizes(!)
2022-03-09 13:57:47 -08:00
Wunkolo
86e9e60f07 cpu_detect: Update array-types to span and array
Update some uses of `int` to more explicitly sized types as well
2022-03-09 13:57:47 -08:00
Wunkolo
3c33ba7f18 cpu_detect: Utilize Bit<N> utility function 2022-03-09 13:57:47 -08:00
Wunkolo
d233de8194 cpu_detect: Compact capability fields
As this structure gets more explicit, bools can be bitfields and
small enums can use smaller types for their span of values.
2022-03-09 13:57:47 -08:00
Wunkolo
add2cfcb96 bit_util: Add bit utility function
Extracts a single bit, as a bool, at the specified compile-time index.
2022-03-09 13:57:47 -08:00
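For reference, the helper added here (its definition appears in the bit_util.h hunk further down) reads like this in use:

#include <cstdint>
#include "common/bit_util.h"

static_assert(Common::Bit<0>(std::uint32_t{0b0101}));  // bit 0 is set
static_assert(!Common::Bit<1>(std::uint32_t{0b0101})); // bit 1 is clear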
bunnei
6f670381cf Merge pull request #7975 from bunnei/ldr-fix
hle: service: ldr: Use deterministic addresses when mapping NROs.
2022-03-08 17:39:03 -08:00
bunnei
853e58e593 hle: service: ldr: Use deterministic addresses when mapping NROs.
- Instead of randomization, choose in-order addresses for where to map NROs into memory.
- This results in predictable behavior when debugging and consistent behavior when reproducing issues.
2022-03-08 17:38:20 -08:00
bunnei
f2743b41b0 Merge pull request #7986 from lat9nq/vk-callback
core, video_core: Fix two crashes when failing to create the emulated GPU instance
2022-03-08 12:36:57 -08:00
Fernando S
35309f27ed Merge pull request #7989 from degasus/maxwell_LUT3
shader_recompiler/LOP3: Use brute force python results within switch/case.
2022-03-08 15:40:31 +01:00
Markus Wick
c78c8190d5 shader_recompiler/LOP3: Use brute force python results within switch/case.
Thanks to @asLody for optimizing this function, which drew attention to the fact that it could be optimized further.

The current table assumes that the host GPU is able to invert for free, so only AND, OR, XOR are counted in the performance metric.

Performance results:

Instructions
0: 8
1: 30
2: 114
3: 80
4: 24

Latency
0: 8
1: 30
2: 194
3: 24
2022-03-08 09:44:28 +01:00
bunnei
1f37896066 Merge pull request #7974 from bunnei/improve-code-mem
Kernel Memory Updates (Part 5): Revamp MapCodeMemory and UnmapCodeMemory.
2022-03-07 20:28:39 -08:00
bunnei
749f76e6fe hle: kernel: KPageTable: Improve implementations of MapCodeMemory and UnmapCodeMemory.
- This makes these functions more accurate to the real HOS implementations.
- Fixes memory access issues in Super Smash Bros. Ultimate that occur when un/mapping NROs.
2022-03-07 17:18:20 -08:00
lat9nq
b5e60ae1b0 video_core: Cancel Scoped's exit call on GPU failure
When CreateRenderer fails, the GraphicsContext that was std::move'd into
it is destroyed before the Scoped that was created to keep it current.
In that case, the GraphicsContext::Scoped still calls its destructor at
the end of the function, and because the context is destroyed, the
Scoped crashes as it attempts to call DoneCurrent on a destroyed object.

Since we know when the call would be invalid, call the Scoped's Cancel
method. This prevents it from calling a method on a destroyed object.
2022-03-07 18:21:56 -05:00
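The pattern generalizes to any cancellable scope guard. A minimal stand-in; GraphicsContext::Scoped is the real class in yuzu, the guard below is illustrative:

#include <cstdio>
#include <utility>

// A tiny cancellable scope guard: runs the callable on destruction unless
// Cancel() was called first.
template <typename F>
class ScopeExit {
public:
    explicit ScopeExit(F func) : func_{std::move(func)} {}
    ~ScopeExit() {
        if (armed_) {
            func_();
        }
    }
    void Cancel() {
        armed_ = false; // the guarded object is gone; do not touch it on exit
    }

private:
    F func_;
    bool armed_ = true;
};

int main() {
    ScopeExit guard{[] { std::puts("DoneCurrent()"); }};
    const bool renderer_creation_failed = true; // e.g. CreateRenderer() failed
    if (renderer_creation_failed) {
        guard.Cancel(); // the context died with the renderer; skip the call
    }
}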
lat9nq
1f24a4e520 emu_window: Create a way to Cancel the exit of a Scoped
If a GraphicsContext is destroyed before its Scoped is destroyed, this
causes a crash as the Scoped tries to call a method in the destroyed
context on exit.

Add a way to Cancel the call when we know that calling the
GraphicsContext will not work.
2022-03-07 18:21:56 -05:00
Fernando S
58b52f4884 Merge pull request #7930 from asLody/dma-semaphore
MaxwellDMA: Implement semaphore operations
2022-03-07 21:53:38 +01:00
lat9nq
381f1dd2c9 core: Don't shutdown a null GPU
When CreateGPU fails, yuzu would try to shut down the GPU instance
regardless of whether any instance was actually created.

Check for nullptr before calling its methods to prevent a crash.
2022-03-07 15:25:20 -05:00
Lody
4498908e72 MaxwellDMA: Implement semaphore operations 2022-03-07 13:46:18 +08:00
Ameer J
370e480c8c gl_graphics_pipeline: Improve shader builder synchronization using fences (#7969)
* gl_graphics_pipeline: Improve shader builder synchronization

Make use of GLsync objects to ensure better synchronization between shader builder threads and the main context

* gl_graphics_pipeline: Make built_fence access threadsafe

* gl_graphics_pipeline: Use GLsync objects only when building in parallel

* gl_graphics_pipeline: Replace GetSync calls with non-blocking waits

The spec states that a ClientWait on a Fence object ensures the changes propagate to the calling context
2022-03-06 16:46:49 +01:00
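Roughly the GL calls involved: the builder thread drops a fence after compiling, and the main context polls it with a non-blocking client wait. A sketch assuming a glad-style loader header:

#include <glad/glad.h>

// Builder thread, after linking the program objects:
GLsync CreateBuiltFence() {
    return glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
}

// Main context, before first use of the pipeline. A zero timeout makes this a
// poll; GL_SYNC_FLUSH_COMMANDS_BIT avoids waiting on a fence that was never
// flushed, and per the spec note above, a client wait that reports the fence
// as signaled makes the builder context's changes visible here.
bool IsBuilt(GLsync fence) {
    const GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 0);
    return status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED;
}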
BytesGalore
fc84649aab build(cmake): fix missing Qt5::DBus link target 2022-03-06 12:21:46 +01:00
Fernando S
5192c64991 Merge pull request #7973 from Morph1984/debug-crash
host_memory: Fix fastmem crashes in debug builds
2022-03-06 04:49:27 +01:00
bunnei
a31c195749 Merge pull request #7935 from Wunkolo/logging-join-fix
logging: Convert `backend_thread` into an `std::jthread`
2022-03-02 19:09:26 -08:00
bunnei
3ab82e7582 Merge pull request #7956 from bunnei/improve-mem-manager
Kernel Memory Updates (Part 4): Revamp KMemoryManager & other fixes
2022-03-02 17:55:51 -08:00
Morph
b33f23cc46 host_memory: Fix fastmem crashes in debug builds
It is possible for virtual_offset to not be 0 when the iterator is at the beginning, and thus, std::prev(it) may be evaluated, leading to a crash in debug mode.

Co-Authored-By: Fernando S. <1731197+FernandoS27@users.noreply.github.com>
2022-03-02 18:36:59 -05:00
Fernando S
e06a133717 Merge pull request #7959 from merryhime/cmpxchg
dynarmic: Inline exclusive memory accesses
2022-03-01 22:50:52 +01:00
Mai M
3c47570563 Merge pull request #7967 from zhaobot/tx-update-20220301023432
Update translations (2022-03-01)
2022-03-01 00:50:28 -05:00
german77
03d671fabc applet: mii: Simple implementation of mii applet 2022-02-28 18:53:41 -06:00
merry
ec9689f200 dynarmic: Update to latest master 2022-02-28 20:10:13 +00:00
bunnei
14d28a043d hle: kernel: Re-create memory layout at initialization.
- As this can only be derived once.
2022-02-27 18:00:09 -08:00
bunnei
16e5954fcb hle: kernel: Remove unused pool locals. 2022-02-27 18:00:09 -08:00
bunnei
f87f076162 hle: kernel: k_memory_manager: Rework for latest kernel behavior.
- Updates the KMemoryManager implementation against latest documentation.
- Reworks KMemoryLayout to be accessed throughout the kernel.
- Fixes an issue with pool sizes being incorrectly reported.
2022-02-27 18:00:09 -08:00
Wunkolo
913c2bd2cb logging: Convert backend_thread into an std::jthread
Was getting an unhandled `invalid_argument` [exception](https://en.cppreference.com/w/cpp/thread/thread/join) during
shutdown on my linux machine. This removes the need for a `StopBackendThread` function entirely since `jthread`
[automatically handles both checking if the thread is joinable and stopping the token before attempting to join](https://en.cppreference.com/w/cpp/thread/jthread/~jthread) in the case that `StartBackendThread` was never called.
2022-02-27 16:23:52 -08:00
merry
16784e5bb3 dynarmic: Inline exclusive memory accesses
Inlines implementation of exclusive instructions into JITted code,
improving performance of applications relying heavily on these
instructions.

We also fastmem these instructions for additional speed, with
support for appropriate recompilation on fastmem failure.

An unsafe optimization to disable the intercore global_monitor is also
provided, should one wish to rely solely on cmpxchg semantics for
safety.

See also: merryhime/dynarmic#664
2022-02-27 19:40:05 +00:00
bunnei
adbb9c2b00 hle: kernel: k_page_heap: GetPhysicalAddr can be const. 2022-02-27 10:34:02 -08:00
bunnei
f7e65eb971 hle: kernel: k_page_heap: Remove superfluous consexpr. 2022-02-27 10:34:02 -08:00
bunnei
06e2b76c75 hle: kernel: k_page_heap: Various updates and improvements.
- KPageHeap tracks physical addresses, not virtual addresses.
- Various updates and improvements to match latest documentation for this type.
2022-02-27 10:34:02 -08:00
bunnei
5d1a81520c hle: kernel: Add initial_process.h header. 2022-02-27 10:34:02 -08:00
bunnei
a6496deeed hle: kernel: board: nx: Add k_memory_layout.h header. 2022-02-27 10:34:02 -08:00
bunnei
9b5e7971dc hle: kernel: k_system_control: Add GetRealMemorySize and update GetKernelPhysicalBaseAddress. 2022-02-27 10:34:02 -08:00
bunnei
18e77a54c3 hle: kernel: k_memory_layout: Add GetPhysicalLinearRegion. 2022-02-27 10:34:02 -08:00
bunnei
06a21ac229 hle: kernel: k_memory_region_types: Update for new regions. 2022-02-27 10:34:02 -08:00
Lioncash
e015dc8264 texture_cache: Ensure has_blacklisted is always initialized
Resolves a -Wmaybe-uninitialized warning
2022-02-02 14:37:27 -05:00
Lioncash
7367e55d1d texture_cache: Remove dead code within SynchronizeAliases
Since these were being copied by value, none of the changes applied in
the loop would be reflected.

However, from the looks of it, this would already be applied within
CopyImage() anyway, so this can be removed.
2022-02-02 14:37:22 -05:00
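The underlying pitfall, reduced to a self-contained example: iterating by value mutates copies, so the changes are silently dropped.

#include <cassert>
#include <vector>

struct AliasedImage {
    bool flushed = false;
};

int main() {
    std::vector<AliasedImage> aliases(3);

    for (auto alias : aliases) {      // copies: the flag is set on temporaries
        alias.flushed = true;
    }
    assert(!aliases.front().flushed); // the vector is untouched

    for (auto& alias : aliases) {     // references: the change sticks
        alias.flushed = true;
    }
    assert(aliases.front().flushed);
}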
Lioncash
856f576c05 texture_cache: Amend unintended bitwise OR in SynchronizeAliases 2022-02-02 14:20:58 -05:00
347 changed files with 4532 additions and 2440 deletions

View File

@@ -8,7 +8,7 @@ steps:
displayName: 'Install vulkan-sdk'
- script: python -m pip install --upgrade pip conan
displayName: 'Install conan'
- script: refreshenv && mkdir build && cd build && cmake -G "Visual Studio 16 2019" -A x64 -DYUZU_USE_BUNDLED_QT=1 -DYUZU_USE_BUNDLED_SDL2=1 -DYUZU_USE_QT_WEB_ENGINE=ON -DENABLE_COMPATIBILITY_LIST_DOWNLOAD=ON -DYUZU_ENABLE_COMPATIBILITY_REPORTING=${COMPAT} -DUSE_DISCORD_PRESENCE=ON -DENABLE_QT_TRANSLATION=ON -DDISPLAY_VERSION=${{ parameters['version'] }} -DCMAKE_BUILD_TYPE=Release .. && cd ..
- script: refreshenv && mkdir build && cd build && cmake -G "Visual Studio 16 2019" -A x64 -DYUZU_USE_BUNDLED_QT=1 -DYUZU_USE_BUNDLED_SDL2=1 -DYUZU_USE_QT_WEB_ENGINE=ON -DENABLE_COMPATIBILITY_LIST_DOWNLOAD=ON -DYUZU_ENABLE_COMPATIBILITY_REPORTING=${COMPAT} -DYUZU_TESTS=OFF -DUSE_DISCORD_PRESENCE=ON -DENABLE_QT_TRANSLATION=ON -DDISPLAY_VERSION=${{ parameters['version'] }} -DCMAKE_BUILD_TYPE=Release .. && cd ..
displayName: 'Configure CMake'
- task: MSBuild@1
displayName: 'Build'

View File

@@ -363,7 +363,11 @@ if(ENABLE_QT)
set(YUZU_QT_NO_CMAKE_SYSTEM_PATH "NO_CMAKE_SYSTEM_PATH")
endif()
find_package(Qt5 ${QT_VERSION} REQUIRED COMPONENTS Widgets ${QT_PREFIX_HINT} ${YUZU_QT_NO_CMAKE_SYSTEM_PATH})
if ((${CMAKE_SYSTEM_NAME} STREQUAL "Linux") AND YUZU_USE_BUNDLED_QT)
find_package(Qt5 ${QT_VERSION} REQUIRED COMPONENTS Widgets DBus ${QT_PREFIX_HINT} ${YUZU_QT_NO_CMAKE_SYSTEM_PATH})
else()
find_package(Qt5 ${QT_VERSION} REQUIRED COMPONENTS Widgets ${QT_PREFIX_HINT} ${YUZU_QT_NO_CMAKE_SYSTEM_PATH})
endif()
if (YUZU_USE_QT_WEB_ENGINE)
find_package(Qt5 COMPONENTS WebEngineCore WebEngineWidgets)
endif()

View File

@@ -4,13 +4,12 @@
#pragma once
#include <cstring>
#include <memory>
#include "common/common_types.h"
#if _MSC_VER
#include <intrin.h>
#else
#include <cstring>
#endif
namespace Common {

View File

@@ -33,7 +33,6 @@
#include <cstddef>
#include <limits>
#include <type_traits>
#include "common/common_funcs.h"
#include "common/swap.h"
/*

View File

@@ -57,4 +57,11 @@ requires std::is_integral_v<T>
return static_cast<T>(1ULL << ((8U * sizeof(T)) - std::countl_zero(value - 1U)));
}
template <size_t bit_index, typename T>
requires std::is_integral_v<T>
[[nodiscard]] constexpr bool Bit(const T value) {
static_assert(bit_index < BitSize<T>(), "bit_index must be smaller than size of T");
return ((value >> bit_index) & T(1)) == T(1);
}
} // namespace Common

View File

@@ -2,7 +2,6 @@
// Licensed under GPLv2+
// Refer to the license.txt file included.
#include <cstring>
#include <string>
#include <utility>

View File

@@ -4,7 +4,6 @@
#include "common/fs/file.h"
#include "common/fs/fs.h"
#include "common/fs/path_util.h"
#include "common/logging/log.h"
#ifdef _WIN32

View File

@@ -6,10 +6,8 @@
#include <cstdio>
#include <filesystem>
#include <fstream>
#include <span>
#include <type_traits>
#include <vector>
#include "common/concepts.h"
#include "common/fs/fs_types.h"

View File

@@ -7,7 +7,6 @@
#include <functional>
#include "common/common_funcs.h"
#include "common/common_types.h"
namespace Common::FS {

View File

@@ -8,7 +8,6 @@
#include <filesystem>
#include <span>
#include <string>
#include <string_view>
#include "common/common_types.h"

View File

@@ -7,7 +7,6 @@
#include <array>
#include <cstddef>
#include <string>
#include <type_traits>
#include <vector>
#include <fmt/format.h>
#include "common/common_types.h"

View File

@@ -18,6 +18,7 @@
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include "common/scope_exit.h"
#endif // ^^^ Linux ^^^
@@ -27,7 +28,6 @@
#include "common/assert.h"
#include "common/host_memory.h"
#include "common/logging/log.h"
#include "common/scope_exit.h"
namespace Common {
@@ -327,8 +327,8 @@ private:
bool IsNiechePlaceholder(size_t virtual_offset, size_t length) const {
const auto it = placeholders.upper_bound({virtual_offset, virtual_offset + length});
if (it != placeholders.end() && it->lower() == virtual_offset + length) {
const bool is_root = it == placeholders.begin() && virtual_offset == 0;
return is_root || std::prev(it)->upper() == virtual_offset;
return it == placeholders.begin() ? virtual_offset == 0
: std::prev(it)->upper() == virtual_offset;
}
return false;
}

View File

@@ -4,6 +4,7 @@
#pragma once
#include "common/common_funcs.h"
#include "common/parent_of_member.h"
#include "common/tree.h"
@@ -15,32 +16,33 @@ class IntrusiveRedBlackTreeImpl;
}
#pragma pack(push, 4)
struct IntrusiveRedBlackTreeNode {
YUZU_NON_COPYABLE(IntrusiveRedBlackTreeNode);
public:
using EntryType = RBEntry<IntrusiveRedBlackTreeNode>;
constexpr IntrusiveRedBlackTreeNode() = default;
void SetEntry(const EntryType& new_entry) {
entry = new_entry;
}
[[nodiscard]] EntryType& GetEntry() {
return entry;
}
[[nodiscard]] const EntryType& GetEntry() const {
return entry;
}
using RBEntry = freebsd::RBEntry<IntrusiveRedBlackTreeNode>;
private:
EntryType entry{};
RBEntry m_entry;
friend class impl::IntrusiveRedBlackTreeImpl;
public:
explicit IntrusiveRedBlackTreeNode() = default;
template <class, class, class>
friend class IntrusiveRedBlackTree;
[[nodiscard]] constexpr RBEntry& GetRBEntry() {
return m_entry;
}
[[nodiscard]] constexpr const RBEntry& GetRBEntry() const {
return m_entry;
}
constexpr void SetRBEntry(const RBEntry& entry) {
m_entry = entry;
}
};
static_assert(sizeof(IntrusiveRedBlackTreeNode) ==
3 * sizeof(void*) + std::max<size_t>(sizeof(freebsd::RBColor), 4));
#pragma pack(pop)
template <class T, class Traits, class Comparator>
class IntrusiveRedBlackTree;
@@ -48,12 +50,17 @@ class IntrusiveRedBlackTree;
namespace impl {
class IntrusiveRedBlackTreeImpl {
YUZU_NON_COPYABLE(IntrusiveRedBlackTreeImpl);
private:
template <class, class, class>
friend class ::Common::IntrusiveRedBlackTree;
using RootType = RBHead<IntrusiveRedBlackTreeNode>;
RootType root;
private:
using RootType = freebsd::RBHead<IntrusiveRedBlackTreeNode>;
private:
RootType m_root;
public:
template <bool Const>
@@ -81,149 +88,150 @@ public:
IntrusiveRedBlackTreeImpl::reference>;
private:
pointer node;
pointer m_node;
public:
explicit Iterator(pointer n) : node(n) {}
constexpr explicit Iterator(pointer n) : m_node(n) {}
bool operator==(const Iterator& rhs) const {
return this->node == rhs.node;
constexpr bool operator==(const Iterator& rhs) const {
return m_node == rhs.m_node;
}
bool operator!=(const Iterator& rhs) const {
constexpr bool operator!=(const Iterator& rhs) const {
return !(*this == rhs);
}
pointer operator->() const {
return this->node;
constexpr pointer operator->() const {
return m_node;
}
reference operator*() const {
return *this->node;
constexpr reference operator*() const {
return *m_node;
}
Iterator& operator++() {
this->node = GetNext(this->node);
constexpr Iterator& operator++() {
m_node = GetNext(m_node);
return *this;
}
Iterator& operator--() {
this->node = GetPrev(this->node);
constexpr Iterator& operator--() {
m_node = GetPrev(m_node);
return *this;
}
Iterator operator++(int) {
constexpr Iterator operator++(int) {
const Iterator it{*this};
++(*this);
return it;
}
Iterator operator--(int) {
constexpr Iterator operator--(int) {
const Iterator it{*this};
--(*this);
return it;
}
operator Iterator<true>() const {
return Iterator<true>(this->node);
constexpr operator Iterator<true>() const {
return Iterator<true>(m_node);
}
};
private:
// Define accessors using RB_* functions.
bool EmptyImpl() const {
return root.IsEmpty();
constexpr bool EmptyImpl() const {
return m_root.IsEmpty();
}
IntrusiveRedBlackTreeNode* GetMinImpl() const {
return RB_MIN(const_cast<RootType*>(&root));
constexpr IntrusiveRedBlackTreeNode* GetMinImpl() const {
return freebsd::RB_MIN(const_cast<RootType&>(m_root));
}
IntrusiveRedBlackTreeNode* GetMaxImpl() const {
return RB_MAX(const_cast<RootType*>(&root));
constexpr IntrusiveRedBlackTreeNode* GetMaxImpl() const {
return freebsd::RB_MAX(const_cast<RootType&>(m_root));
}
IntrusiveRedBlackTreeNode* RemoveImpl(IntrusiveRedBlackTreeNode* node) {
return RB_REMOVE(&root, node);
constexpr IntrusiveRedBlackTreeNode* RemoveImpl(IntrusiveRedBlackTreeNode* node) {
return freebsd::RB_REMOVE(m_root, node);
}
public:
static IntrusiveRedBlackTreeNode* GetNext(IntrusiveRedBlackTreeNode* node) {
return RB_NEXT(node);
static constexpr IntrusiveRedBlackTreeNode* GetNext(IntrusiveRedBlackTreeNode* node) {
return freebsd::RB_NEXT(node);
}
static IntrusiveRedBlackTreeNode* GetPrev(IntrusiveRedBlackTreeNode* node) {
return RB_PREV(node);
static constexpr IntrusiveRedBlackTreeNode* GetPrev(IntrusiveRedBlackTreeNode* node) {
return freebsd::RB_PREV(node);
}
static const IntrusiveRedBlackTreeNode* GetNext(const IntrusiveRedBlackTreeNode* node) {
static constexpr IntrusiveRedBlackTreeNode const* GetNext(
IntrusiveRedBlackTreeNode const* node) {
return static_cast<const IntrusiveRedBlackTreeNode*>(
GetNext(const_cast<IntrusiveRedBlackTreeNode*>(node)));
}
static const IntrusiveRedBlackTreeNode* GetPrev(const IntrusiveRedBlackTreeNode* node) {
static constexpr IntrusiveRedBlackTreeNode const* GetPrev(
IntrusiveRedBlackTreeNode const* node) {
return static_cast<const IntrusiveRedBlackTreeNode*>(
GetPrev(const_cast<IntrusiveRedBlackTreeNode*>(node)));
}
public:
constexpr IntrusiveRedBlackTreeImpl() {}
constexpr IntrusiveRedBlackTreeImpl() = default;
// Iterator accessors.
iterator begin() {
constexpr iterator begin() {
return iterator(this->GetMinImpl());
}
const_iterator begin() const {
constexpr const_iterator begin() const {
return const_iterator(this->GetMinImpl());
}
iterator end() {
constexpr iterator end() {
return iterator(static_cast<IntrusiveRedBlackTreeNode*>(nullptr));
}
const_iterator end() const {
constexpr const_iterator end() const {
return const_iterator(static_cast<const IntrusiveRedBlackTreeNode*>(nullptr));
}
const_iterator cbegin() const {
constexpr const_iterator cbegin() const {
return this->begin();
}
const_iterator cend() const {
constexpr const_iterator cend() const {
return this->end();
}
iterator iterator_to(reference ref) {
return iterator(&ref);
constexpr iterator iterator_to(reference ref) {
return iterator(std::addressof(ref));
}
const_iterator iterator_to(const_reference ref) const {
return const_iterator(&ref);
constexpr const_iterator iterator_to(const_reference ref) const {
return const_iterator(std::addressof(ref));
}
// Content management.
bool empty() const {
constexpr bool empty() const {
return this->EmptyImpl();
}
reference back() {
constexpr reference back() {
return *this->GetMaxImpl();
}
const_reference back() const {
constexpr const_reference back() const {
return *this->GetMaxImpl();
}
reference front() {
constexpr reference front() {
return *this->GetMinImpl();
}
const_reference front() const {
constexpr const_reference front() const {
return *this->GetMinImpl();
}
iterator erase(iterator it) {
constexpr iterator erase(iterator it) {
auto cur = std::addressof(*it);
auto next = GetNext(cur);
this->RemoveImpl(cur);
@@ -234,16 +242,16 @@ public:
} // namespace impl
template <typename T>
concept HasLightCompareType = requires {
{ std::is_same<typename T::LightCompareType, void>::value } -> std::convertible_to<bool>;
concept HasRedBlackKeyType = requires {
{ std::is_same<typename T::RedBlackKeyType, void>::value } -> std::convertible_to<bool>;
};
namespace impl {
template <typename T, typename Default>
consteval auto* GetLightCompareType() {
if constexpr (HasLightCompareType<T>) {
return static_cast<typename T::LightCompareType*>(nullptr);
consteval auto* GetRedBlackKeyType() {
if constexpr (HasRedBlackKeyType<T>) {
return static_cast<typename T::RedBlackKeyType*>(nullptr);
} else {
return static_cast<Default*>(nullptr);
}
@@ -252,16 +260,17 @@ namespace impl {
} // namespace impl
template <typename T, typename Default>
using LightCompareType = std::remove_pointer_t<decltype(impl::GetLightCompareType<T, Default>())>;
using RedBlackKeyType = std::remove_pointer_t<decltype(impl::GetRedBlackKeyType<T, Default>())>;
template <class T, class Traits, class Comparator>
class IntrusiveRedBlackTree {
YUZU_NON_COPYABLE(IntrusiveRedBlackTree);
public:
using ImplType = impl::IntrusiveRedBlackTreeImpl;
private:
ImplType impl{};
ImplType m_impl;
public:
template <bool Const>
@@ -277,9 +286,9 @@ public:
using iterator = Iterator<false>;
using const_iterator = Iterator<true>;
using light_value_type = LightCompareType<Comparator, value_type>;
using const_light_pointer = const light_value_type*;
using const_light_reference = const light_value_type&;
using key_type = RedBlackKeyType<Comparator, value_type>;
using const_key_pointer = const key_type*;
using const_key_reference = const key_type&;
template <bool Const>
class Iterator {
@@ -298,183 +307,201 @@ public:
IntrusiveRedBlackTree::reference>;
private:
ImplIterator iterator;
ImplIterator m_impl;
private:
explicit Iterator(ImplIterator it) : iterator(it) {}
constexpr explicit Iterator(ImplIterator it) : m_impl(it) {}
explicit Iterator(typename std::conditional<Const, ImplType::const_iterator,
ImplType::iterator>::type::pointer ptr)
: iterator(ptr) {}
constexpr explicit Iterator(typename ImplIterator::pointer p) : m_impl(p) {}
ImplIterator GetImplIterator() const {
return this->iterator;
constexpr ImplIterator GetImplIterator() const {
return m_impl;
}
public:
bool operator==(const Iterator& rhs) const {
return this->iterator == rhs.iterator;
constexpr bool operator==(const Iterator& rhs) const {
return m_impl == rhs.m_impl;
}
bool operator!=(const Iterator& rhs) const {
constexpr bool operator!=(const Iterator& rhs) const {
return !(*this == rhs);
}
pointer operator->() const {
return Traits::GetParent(std::addressof(*this->iterator));
constexpr pointer operator->() const {
return Traits::GetParent(std::addressof(*m_impl));
}
reference operator*() const {
return *Traits::GetParent(std::addressof(*this->iterator));
constexpr reference operator*() const {
return *Traits::GetParent(std::addressof(*m_impl));
}
Iterator& operator++() {
++this->iterator;
constexpr Iterator& operator++() {
++m_impl;
return *this;
}
Iterator& operator--() {
--this->iterator;
constexpr Iterator& operator--() {
--m_impl;
return *this;
}
Iterator operator++(int) {
constexpr Iterator operator++(int) {
const Iterator it{*this};
++this->iterator;
++m_impl;
return it;
}
Iterator operator--(int) {
constexpr Iterator operator--(int) {
const Iterator it{*this};
--this->iterator;
--m_impl;
return it;
}
operator Iterator<true>() const {
return Iterator<true>(this->iterator);
constexpr operator Iterator<true>() const {
return Iterator<true>(m_impl);
}
};
private:
static int CompareImpl(const IntrusiveRedBlackTreeNode* lhs,
const IntrusiveRedBlackTreeNode* rhs) {
static constexpr int CompareImpl(const IntrusiveRedBlackTreeNode* lhs,
const IntrusiveRedBlackTreeNode* rhs) {
return Comparator::Compare(*Traits::GetParent(lhs), *Traits::GetParent(rhs));
}
static int LightCompareImpl(const void* elm, const IntrusiveRedBlackTreeNode* rhs) {
return Comparator::Compare(*static_cast<const_light_pointer>(elm), *Traits::GetParent(rhs));
static constexpr int CompareKeyImpl(const_key_reference key,
const IntrusiveRedBlackTreeNode* rhs) {
return Comparator::Compare(key, *Traits::GetParent(rhs));
}
// Define accessors using RB_* functions.
IntrusiveRedBlackTreeNode* InsertImpl(IntrusiveRedBlackTreeNode* node) {
return RB_INSERT(&impl.root, node, CompareImpl);
constexpr IntrusiveRedBlackTreeNode* InsertImpl(IntrusiveRedBlackTreeNode* node) {
return freebsd::RB_INSERT(m_impl.m_root, node, CompareImpl);
}
IntrusiveRedBlackTreeNode* FindImpl(const IntrusiveRedBlackTreeNode* node) const {
return RB_FIND(const_cast<ImplType::RootType*>(&impl.root),
const_cast<IntrusiveRedBlackTreeNode*>(node), CompareImpl);
constexpr IntrusiveRedBlackTreeNode* FindImpl(IntrusiveRedBlackTreeNode const* node) const {
return freebsd::RB_FIND(const_cast<ImplType::RootType&>(m_impl.m_root),
const_cast<IntrusiveRedBlackTreeNode*>(node), CompareImpl);
}
IntrusiveRedBlackTreeNode* NFindImpl(const IntrusiveRedBlackTreeNode* node) const {
return RB_NFIND(const_cast<ImplType::RootType*>(&impl.root),
const_cast<IntrusiveRedBlackTreeNode*>(node), CompareImpl);
constexpr IntrusiveRedBlackTreeNode* NFindImpl(IntrusiveRedBlackTreeNode const* node) const {
return freebsd::RB_NFIND(const_cast<ImplType::RootType&>(m_impl.m_root),
const_cast<IntrusiveRedBlackTreeNode*>(node), CompareImpl);
}
IntrusiveRedBlackTreeNode* FindLightImpl(const_light_pointer lelm) const {
return RB_FIND_LIGHT(const_cast<ImplType::RootType*>(&impl.root),
static_cast<const void*>(lelm), LightCompareImpl);
constexpr IntrusiveRedBlackTreeNode* FindKeyImpl(const_key_reference key) const {
return freebsd::RB_FIND_KEY(const_cast<ImplType::RootType&>(m_impl.m_root), key,
CompareKeyImpl);
}
IntrusiveRedBlackTreeNode* NFindLightImpl(const_light_pointer lelm) const {
return RB_NFIND_LIGHT(const_cast<ImplType::RootType*>(&impl.root),
static_cast<const void*>(lelm), LightCompareImpl);
constexpr IntrusiveRedBlackTreeNode* NFindKeyImpl(const_key_reference key) const {
return freebsd::RB_NFIND_KEY(const_cast<ImplType::RootType&>(m_impl.m_root), key,
CompareKeyImpl);
}
constexpr IntrusiveRedBlackTreeNode* FindExistingImpl(
IntrusiveRedBlackTreeNode const* node) const {
return freebsd::RB_FIND_EXISTING(const_cast<ImplType::RootType&>(m_impl.m_root),
const_cast<IntrusiveRedBlackTreeNode*>(node), CompareImpl);
}
constexpr IntrusiveRedBlackTreeNode* FindExistingKeyImpl(const_key_reference key) const {
return freebsd::RB_FIND_EXISTING_KEY(const_cast<ImplType::RootType&>(m_impl.m_root), key,
CompareKeyImpl);
}
public:
constexpr IntrusiveRedBlackTree() = default;
// Iterator accessors.
iterator begin() {
return iterator(this->impl.begin());
constexpr iterator begin() {
return iterator(m_impl.begin());
}
const_iterator begin() const {
return const_iterator(this->impl.begin());
constexpr const_iterator begin() const {
return const_iterator(m_impl.begin());
}
iterator end() {
return iterator(this->impl.end());
constexpr iterator end() {
return iterator(m_impl.end());
}
const_iterator end() const {
return const_iterator(this->impl.end());
constexpr const_iterator end() const {
return const_iterator(m_impl.end());
}
const_iterator cbegin() const {
constexpr const_iterator cbegin() const {
return this->begin();
}
const_iterator cend() const {
constexpr const_iterator cend() const {
return this->end();
}
iterator iterator_to(reference ref) {
return iterator(this->impl.iterator_to(*Traits::GetNode(std::addressof(ref))));
constexpr iterator iterator_to(reference ref) {
return iterator(m_impl.iterator_to(*Traits::GetNode(std::addressof(ref))));
}
const_iterator iterator_to(const_reference ref) const {
return const_iterator(this->impl.iterator_to(*Traits::GetNode(std::addressof(ref))));
constexpr const_iterator iterator_to(const_reference ref) const {
return const_iterator(m_impl.iterator_to(*Traits::GetNode(std::addressof(ref))));
}
// Content management.
bool empty() const {
return this->impl.empty();
constexpr bool empty() const {
return m_impl.empty();
}
reference back() {
return *Traits::GetParent(std::addressof(this->impl.back()));
constexpr reference back() {
return *Traits::GetParent(std::addressof(m_impl.back()));
}
const_reference back() const {
return *Traits::GetParent(std::addressof(this->impl.back()));
constexpr const_reference back() const {
return *Traits::GetParent(std::addressof(m_impl.back()));
}
reference front() {
return *Traits::GetParent(std::addressof(this->impl.front()));
constexpr reference front() {
return *Traits::GetParent(std::addressof(m_impl.front()));
}
const_reference front() const {
return *Traits::GetParent(std::addressof(this->impl.front()));
constexpr const_reference front() const {
return *Traits::GetParent(std::addressof(m_impl.front()));
}
iterator erase(iterator it) {
return iterator(this->impl.erase(it.GetImplIterator()));
constexpr iterator erase(iterator it) {
return iterator(m_impl.erase(it.GetImplIterator()));
}
iterator insert(reference ref) {
constexpr iterator insert(reference ref) {
ImplType::pointer node = Traits::GetNode(std::addressof(ref));
this->InsertImpl(node);
return iterator(node);
}
iterator find(const_reference ref) const {
constexpr iterator find(const_reference ref) const {
return iterator(this->FindImpl(Traits::GetNode(std::addressof(ref))));
}
iterator nfind(const_reference ref) const {
constexpr iterator nfind(const_reference ref) const {
return iterator(this->NFindImpl(Traits::GetNode(std::addressof(ref))));
}
iterator find_light(const_light_reference ref) const {
return iterator(this->FindLightImpl(std::addressof(ref)));
constexpr iterator find_key(const_key_reference ref) const {
return iterator(this->FindKeyImpl(ref));
}
iterator nfind_light(const_light_reference ref) const {
return iterator(this->NFindLightImpl(std::addressof(ref)));
constexpr iterator nfind_key(const_key_reference ref) const {
return iterator(this->NFindKeyImpl(ref));
}
constexpr iterator find_existing(const_reference ref) const {
return iterator(this->FindExistingImpl(Traits::GetNode(std::addressof(ref))));
}
constexpr iterator find_existing_key(const_key_reference ref) const {
return iterator(this->FindExistingKeyImpl(ref));
}
};
template <auto T, class Derived = impl::GetParentType<T>>
template <auto T, class Derived = Common::impl::GetParentType<T>>
class IntrusiveRedBlackTreeMemberTraits;
template <class Parent, IntrusiveRedBlackTreeNode Parent::*Member, class Derived>
@@ -498,19 +525,16 @@ private:
return std::addressof(parent->*Member);
}
static constexpr Derived* GetParent(IntrusiveRedBlackTreeNode* node) {
return GetParentPointer<Member, Derived>(node);
static Derived* GetParent(IntrusiveRedBlackTreeNode* node) {
return Common::GetParentPointer<Member, Derived>(node);
}
static constexpr Derived const* GetParent(const IntrusiveRedBlackTreeNode* node) {
return GetParentPointer<Member, Derived>(node);
static Derived const* GetParent(IntrusiveRedBlackTreeNode const* node) {
return Common::GetParentPointer<Member, Derived>(node);
}
private:
static constexpr TypedStorage<Derived> DerivedStorage = {};
};
template <auto T, class Derived = impl::GetParentType<T>>
template <auto T, class Derived = Common::impl::GetParentType<T>>
class IntrusiveRedBlackTreeMemberTraitsDeferredAssert;
template <class Parent, IntrusiveRedBlackTreeNode Parent::*Member, class Derived>
@@ -521,11 +545,6 @@ public:
IntrusiveRedBlackTree<Derived, IntrusiveRedBlackTreeMemberTraitsDeferredAssert, Comparator>;
using TreeTypeImpl = impl::IntrusiveRedBlackTreeImpl;
static constexpr bool IsValid() {
TypedStorage<Derived> DerivedStorage = {};
return GetParent(GetNode(GetPointer(DerivedStorage))) == GetPointer(DerivedStorage);
}
private:
template <class, class, class>
friend class IntrusiveRedBlackTree;
@@ -540,30 +559,36 @@ private:
return std::addressof(parent->*Member);
}
static constexpr Derived* GetParent(IntrusiveRedBlackTreeNode* node) {
return GetParentPointer<Member, Derived>(node);
static Derived* GetParent(IntrusiveRedBlackTreeNode* node) {
return Common::GetParentPointer<Member, Derived>(node);
}
static constexpr Derived const* GetParent(const IntrusiveRedBlackTreeNode* node) {
return GetParentPointer<Member, Derived>(node);
static Derived const* GetParent(IntrusiveRedBlackTreeNode const* node) {
return Common::GetParentPointer<Member, Derived>(node);
}
};
template <class Derived>
class IntrusiveRedBlackTreeBaseNode : public IntrusiveRedBlackTreeNode {
class alignas(void*) IntrusiveRedBlackTreeBaseNode : public IntrusiveRedBlackTreeNode {
public:
using IntrusiveRedBlackTreeNode::IntrusiveRedBlackTreeNode;
constexpr Derived* GetPrev() {
return static_cast<Derived*>(impl::IntrusiveRedBlackTreeImpl::GetPrev(this));
return static_cast<Derived*>(static_cast<IntrusiveRedBlackTreeBaseNode*>(
impl::IntrusiveRedBlackTreeImpl::GetPrev(this)));
}
constexpr const Derived* GetPrev() const {
return static_cast<const Derived*>(impl::IntrusiveRedBlackTreeImpl::GetPrev(this));
return static_cast<const Derived*>(static_cast<const IntrusiveRedBlackTreeBaseNode*>(
impl::IntrusiveRedBlackTreeImpl::GetPrev(this)));
}
constexpr Derived* GetNext() {
return static_cast<Derived*>(impl::IntrusiveRedBlackTreeImpl::GetNext(this));
return static_cast<Derived*>(static_cast<IntrusiveRedBlackTreeBaseNode*>(
impl::IntrusiveRedBlackTreeImpl::GetNext(this)));
}
constexpr const Derived* GetNext() const {
return static_cast<const Derived*>(impl::IntrusiveRedBlackTreeImpl::GetNext(this));
return static_cast<const Derived*>(static_cast<const IntrusiveRedBlackTreeBaseNode*>(
impl::IntrusiveRedBlackTreeImpl::GetNext(this)));
}
};
@@ -581,19 +606,22 @@ private:
friend class impl::IntrusiveRedBlackTreeImpl;
static constexpr IntrusiveRedBlackTreeNode* GetNode(Derived* parent) {
return static_cast<IntrusiveRedBlackTreeNode*>(parent);
return static_cast<IntrusiveRedBlackTreeNode*>(
static_cast<IntrusiveRedBlackTreeBaseNode<Derived>*>(parent));
}
static constexpr IntrusiveRedBlackTreeNode const* GetNode(Derived const* parent) {
return static_cast<const IntrusiveRedBlackTreeNode*>(parent);
return static_cast<const IntrusiveRedBlackTreeNode*>(
static_cast<const IntrusiveRedBlackTreeBaseNode<Derived>*>(parent));
}
static constexpr Derived* GetParent(IntrusiveRedBlackTreeNode* node) {
return static_cast<Derived*>(node);
return static_cast<Derived*>(static_cast<IntrusiveRedBlackTreeBaseNode<Derived>*>(node));
}
static constexpr Derived const* GetParent(const IntrusiveRedBlackTreeNode* node) {
return static_cast<const Derived*>(node);
static constexpr Derived const* GetParent(IntrusiveRedBlackTreeNode const* node) {
return static_cast<const Derived*>(
static_cast<const IntrusiveRedBlackTreeBaseNode<Derived>*>(node));
}
};

View File

@@ -5,10 +5,8 @@
#include <atomic>
#include <chrono>
#include <climits>
#include <exception>
#include <stop_token>
#include <thread>
#include <vector>
#include <fmt/format.h>
@@ -218,19 +216,17 @@ private:
Impl(const std::filesystem::path& file_backend_filename, const Filter& filter_)
: filter{filter_}, file_backend{file_backend_filename} {}
~Impl() {
StopBackendThread();
}
~Impl() = default;
void StartBackendThread() {
backend_thread = std::thread([this] {
backend_thread = std::jthread([this](std::stop_token stop_token) {
Common::SetCurrentThreadName("yuzu:Log");
Entry entry;
const auto write_logs = [this, &entry]() {
ForEachBackend([&entry](Backend& backend) { backend.Write(entry); });
};
while (!stop.stop_requested()) {
entry = message_queue.PopWait(stop.get_token());
while (!stop_token.stop_requested()) {
entry = message_queue.PopWait(stop_token);
if (entry.filename != nullptr) {
write_logs();
}
@@ -244,11 +240,6 @@ private:
});
}
void StopBackendThread() {
stop.request_stop();
backend_thread.join();
}
Entry CreateEntry(Class log_class, Level log_level, const char* filename, unsigned int line_nr,
const char* function, std::string&& message) const {
using std::chrono::duration_cast;
@@ -283,10 +274,9 @@ private:
ColorConsoleBackend color_console_backend{};
FileBackend file_backend;
std::stop_source stop;
std::thread backend_thread;
MPSCQueue<Entry, true> message_queue{};
std::chrono::steady_clock::time_point time_origin{std::chrono::steady_clock::now()};
std::jthread backend_thread;
};
} // namespace

View File

@@ -4,7 +4,6 @@
#pragma once
#include <filesystem>
#include "common/logging/filter.h"
namespace Common::Log {

View File

@@ -7,7 +7,6 @@
#include <array>
#include <chrono>
#include <cstddef>
#include <string_view>
#include "common/logging/log.h"
namespace Common::Log {

View File

@@ -10,12 +10,10 @@
#endif
#include "common/assert.h"
#include "common/common_funcs.h"
#include "common/logging/filter.h"
#include "common/logging/log.h"
#include "common/logging/log_entry.h"
#include "common/logging/text_formatter.h"
#include "common/string_util.h"
namespace Common::Log {

View File

@@ -4,7 +4,6 @@
#pragma once
#include <cstddef>
#include <string>
namespace Common::Log {

View File

@@ -70,4 +70,4 @@ const MemoryInfo& GetMemInfo() {
return mem_info;
}
} // namespace Common
} // namespace Common

View File

@@ -6,7 +6,6 @@
#include <fmt/format.h>
#include "common/fs/file.h"
#include "common/fs/fs.h"
#include "common/fs/path_util.h"
#include "common/nvidia_flags.h"

View File

@@ -5,7 +5,6 @@
#pragma once
#include <atomic>
#include <tuple>
#include "common/common_types.h"
#include "common/virtual_buffer.h"

View File

@@ -7,7 +7,6 @@
#include <type_traits>
#include "common/assert.h"
#include "common/common_types.h"
namespace Common {
namespace detail {

View File

@@ -12,7 +12,6 @@
#include <new>
#include <type_traits>
#include <vector>
#include "common/common_types.h"
namespace Common {

View File

@@ -176,6 +176,7 @@ void RestoreGlobalState(bool is_powered_on) {
values.cpuopt_unsafe_ignore_standard_fpcr.SetGlobal(true);
values.cpuopt_unsafe_inaccurate_nan.SetGlobal(true);
values.cpuopt_unsafe_fastmem_check.SetGlobal(true);
values.cpuopt_unsafe_ignore_global_monitor.SetGlobal(true);
// Renderer
values.renderer_backend.SetGlobal(true);

View File

@@ -484,12 +484,15 @@ struct Values {
BasicSetting<bool> cpuopt_misc_ir{true, "cpuopt_misc_ir"};
BasicSetting<bool> cpuopt_reduce_misalign_checks{true, "cpuopt_reduce_misalign_checks"};
BasicSetting<bool> cpuopt_fastmem{true, "cpuopt_fastmem"};
BasicSetting<bool> cpuopt_fastmem_exclusives{true, "cpuopt_fastmem_exclusives"};
BasicSetting<bool> cpuopt_recompile_exclusives{true, "cpuopt_recompile_exclusives"};
Setting<bool> cpuopt_unsafe_unfuse_fma{true, "cpuopt_unsafe_unfuse_fma"};
Setting<bool> cpuopt_unsafe_reduce_fp_error{true, "cpuopt_unsafe_reduce_fp_error"};
Setting<bool> cpuopt_unsafe_ignore_standard_fpcr{true, "cpuopt_unsafe_ignore_standard_fpcr"};
Setting<bool> cpuopt_unsafe_inaccurate_nan{true, "cpuopt_unsafe_inaccurate_nan"};
Setting<bool> cpuopt_unsafe_fastmem_check{true, "cpuopt_unsafe_fastmem_check"};
Setting<bool> cpuopt_unsafe_ignore_global_monitor{true, "cpuopt_unsafe_ignore_global_monitor"};
// Renderer
RangedSetting<RendererBackend> renderer_backend{

View File

@@ -5,11 +5,9 @@
#include <algorithm>
#include <cctype>
#include <codecvt>
#include <cstdlib>
#include <locale>
#include <sstream>
#include "common/logging/log.h"
#include "common/string_util.h"
#ifdef _WIN32

View File

@@ -4,7 +4,6 @@
#include <algorithm>
#include <cstring>
#include "common/assert.h"
#include "common/scm_rev.h"
#include "common/telemetry.h"
@@ -55,22 +54,50 @@ void AppendBuildInfo(FieldCollection& fc) {
void AppendCPUInfo(FieldCollection& fc) {
#ifdef ARCHITECTURE_x86_64
fc.AddField(FieldType::UserSystem, "CPU_Model", Common::GetCPUCaps().cpu_string);
fc.AddField(FieldType::UserSystem, "CPU_BrandString", Common::GetCPUCaps().brand_string);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_AES", Common::GetCPUCaps().aes);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_AVX", Common::GetCPUCaps().avx);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_AVX2", Common::GetCPUCaps().avx2);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_AVX512", Common::GetCPUCaps().avx512);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_BMI1", Common::GetCPUCaps().bmi1);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_BMI2", Common::GetCPUCaps().bmi2);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_FMA", Common::GetCPUCaps().fma);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_FMA4", Common::GetCPUCaps().fma4);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_SSE", Common::GetCPUCaps().sse);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_SSE2", Common::GetCPUCaps().sse2);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_SSE3", Common::GetCPUCaps().sse3);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_SSSE3", Common::GetCPUCaps().ssse3);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_SSE41", Common::GetCPUCaps().sse4_1);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_SSE42", Common::GetCPUCaps().sse4_2);
const auto& caps = Common::GetCPUCaps();
const auto add_field = [&fc](std::string_view field_name, const auto& field_value) {
fc.AddField(FieldType::UserSystem, field_name, field_value);
};
add_field("CPU_Model", caps.cpu_string);
add_field("CPU_BrandString", caps.brand_string);
add_field("CPU_Extension_x64_SSE", caps.sse);
add_field("CPU_Extension_x64_SSE2", caps.sse2);
add_field("CPU_Extension_x64_SSE3", caps.sse3);
add_field("CPU_Extension_x64_SSSE3", caps.ssse3);
add_field("CPU_Extension_x64_SSE41", caps.sse4_1);
add_field("CPU_Extension_x64_SSE42", caps.sse4_2);
add_field("CPU_Extension_x64_AVX", caps.avx);
add_field("CPU_Extension_x64_AVX_VNNI", caps.avx_vnni);
add_field("CPU_Extension_x64_AVX2", caps.avx2);
// Skylake-X/SP level AVX512, for compatibility with the previous telemetry field
add_field("CPU_Extension_x64_AVX512",
caps.avx512f && caps.avx512cd && caps.avx512vl && caps.avx512dq && caps.avx512bw);
add_field("CPU_Extension_x64_AVX512F", caps.avx512f);
add_field("CPU_Extension_x64_AVX512CD", caps.avx512cd);
add_field("CPU_Extension_x64_AVX512VL", caps.avx512vl);
add_field("CPU_Extension_x64_AVX512DQ", caps.avx512dq);
add_field("CPU_Extension_x64_AVX512BW", caps.avx512bw);
add_field("CPU_Extension_x64_AVX512BITALG", caps.avx512bitalg);
add_field("CPU_Extension_x64_AVX512VBMI", caps.avx512vbmi);
add_field("CPU_Extension_x64_AES", caps.aes);
add_field("CPU_Extension_x64_BMI1", caps.bmi1);
add_field("CPU_Extension_x64_BMI2", caps.bmi2);
add_field("CPU_Extension_x64_F16C", caps.f16c);
add_field("CPU_Extension_x64_FMA", caps.fma);
add_field("CPU_Extension_x64_FMA4", caps.fma4);
add_field("CPU_Extension_x64_GFNI", caps.gfni);
add_field("CPU_Extension_x64_INVARIANT_TSC", caps.invariant_tsc);
add_field("CPU_Extension_x64_LZCNT", caps.lzcnt);
add_field("CPU_Extension_x64_MOVBE", caps.movbe);
add_field("CPU_Extension_x64_PCLMULQDQ", caps.pclmulqdq);
add_field("CPU_Extension_x64_POPCNT", caps.popcnt);
add_field("CPU_Extension_x64_SHA", caps.sha);
#else
fc.AddField(FieldType::UserSystem, "CPU_Model", "Other");
#endif
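The rewrite above funnels every telemetry field through one local lambda, so the FieldType and the fc.AddField boilerplate are written once instead of per feature flag. A self-contained sketch of the same pattern (Collector and AppendCpuCaps are stand-ins, not the real telemetry types):

#include <iostream>
#include <string_view>

struct Collector {
    void AddField(std::string_view name, bool value) {
        std::cout << name << '=' << value << '\n';
    }
};

void AppendCpuCaps(Collector& fc, bool sse, bool avx) {
    // Capture the collector once; each feature then costs a single short line.
    const auto add_field = [&fc](std::string_view name, bool value) { fc.AddField(name, value); };
    add_field("CPU_Extension_x64_SSE", sse);
    add_field("CPU_Extension_x64_AVX", avx);
}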

View File

@@ -55,8 +55,8 @@ class Field : public FieldInterface {
public:
YUZU_NON_COPYABLE(Field);
Field(FieldType type_, std::string name_, T value_)
: name(std::move(name_)), type(type_), value(std::move(value_)) {}
Field(FieldType type_, std::string_view name_, T value_)
: name(name_), type(type_), value(std::move(value_)) {}
~Field() override = default;
@@ -123,7 +123,7 @@ public:
* @param value Value for the field to add.
*/
template <typename T>
void AddField(FieldType type, const char* name, T value) {
void AddField(FieldType type, std::string_view name, T value) {
return AddField(std::make_unique<Field<T>>(type, name, std::move(value)));
}
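The signature change above (const char*/std::string to std::string_view) lets callers pass string literals, std::string objects, or other views without an allocation or strlen at the call boundary; Field still copies the name into its own std::string where it needs ownership. A small sketch of why the view is the convenient parameter type here:

#include <string>
#include <string_view>

// Sketch: a string_view parameter binds to any contiguous character source;
// the callee copies into an owning std::string only where ownership is needed.
std::string MakeFieldName(std::string_view suffix) {
    return "CPU_Extension_x64_" + std::string{suffix};
}
// MakeFieldName("AVX");                // from a literal
// MakeFieldName(std::string{"SSE42"}); // from an owning string, no extra copy at the call site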

View File

@@ -43,246 +43,445 @@
* The maximum height of a red-black tree is 2lg (n+1).
*/
#include "common/assert.h"
namespace Common::freebsd {
namespace Common {
template <typename T>
class RBHead {
public:
[[nodiscard]] T* Root() {
return rbh_root;
}
[[nodiscard]] const T* Root() const {
return rbh_root;
}
void SetRoot(T* root) {
rbh_root = root;
}
[[nodiscard]] bool IsEmpty() const {
return Root() == nullptr;
}
private:
T* rbh_root = nullptr;
};
enum class EntryColor {
Black,
Red,
enum class RBColor {
RB_BLACK = 0,
RB_RED = 1,
};
#pragma pack(push, 4)
template <typename T>
class RBEntry {
public:
[[nodiscard]] T* Left() {
return rbe_left;
constexpr RBEntry() = default;
[[nodiscard]] constexpr T* Left() {
return m_rbe_left;
}
[[nodiscard]] constexpr const T* Left() const {
return m_rbe_left;
}
[[nodiscard]] const T* Left() const {
return rbe_left;
constexpr void SetLeft(T* e) {
m_rbe_left = e;
}
void SetLeft(T* left) {
rbe_left = left;
[[nodiscard]] constexpr T* Right() {
return m_rbe_right;
}
[[nodiscard]] constexpr const T* Right() const {
return m_rbe_right;
}
[[nodiscard]] T* Right() {
return rbe_right;
constexpr void SetRight(T* e) {
m_rbe_right = e;
}
[[nodiscard]] const T* Right() const {
return rbe_right;
[[nodiscard]] constexpr T* Parent() {
return m_rbe_parent;
}
[[nodiscard]] constexpr const T* Parent() const {
return m_rbe_parent;
}
void SetRight(T* right) {
rbe_right = right;
constexpr void SetParent(T* e) {
m_rbe_parent = e;
}
[[nodiscard]] T* Parent() {
return rbe_parent;
[[nodiscard]] constexpr bool IsBlack() const {
return m_rbe_color == RBColor::RB_BLACK;
}
[[nodiscard]] constexpr bool IsRed() const {
return m_rbe_color == RBColor::RB_RED;
}
[[nodiscard]] constexpr RBColor Color() const {
return m_rbe_color;
}
[[nodiscard]] const T* Parent() const {
return rbe_parent;
}
void SetParent(T* parent) {
rbe_parent = parent;
}
[[nodiscard]] bool IsBlack() const {
return rbe_color == EntryColor::Black;
}
[[nodiscard]] bool IsRed() const {
return rbe_color == EntryColor::Red;
}
[[nodiscard]] EntryColor Color() const {
return rbe_color;
}
void SetColor(EntryColor color) {
rbe_color = color;
constexpr void SetColor(RBColor c) {
m_rbe_color = c;
}
private:
T* rbe_left = nullptr;
T* rbe_right = nullptr;
T* rbe_parent = nullptr;
EntryColor rbe_color{};
T* m_rbe_left{};
T* m_rbe_right{};
T* m_rbe_parent{};
RBColor m_rbe_color{RBColor::RB_BLACK};
};
#pragma pack(pop)
template <typename T>
struct CheckRBEntry {
static constexpr bool value = false;
};
template <typename T>
struct CheckRBEntry<RBEntry<T>> {
static constexpr bool value = true;
};
template <typename Node>
[[nodiscard]] RBEntry<Node>& RB_ENTRY(Node* node) {
return node->GetEntry();
template <typename T>
concept IsRBEntry = CheckRBEntry<T>::value;
template <typename T>
concept HasRBEntry = requires(T& t, const T& ct) {
{ t.GetRBEntry() } -> std::same_as<RBEntry<T>&>;
{ ct.GetRBEntry() } -> std::same_as<const RBEntry<T>&>;
};
template <typename T>
requires HasRBEntry<T>
class RBHead {
private:
T* m_rbh_root = nullptr;
public:
[[nodiscard]] constexpr T* Root() {
return m_rbh_root;
}
[[nodiscard]] constexpr const T* Root() const {
return m_rbh_root;
}
constexpr void SetRoot(T* root) {
m_rbh_root = root;
}
[[nodiscard]] constexpr bool IsEmpty() const {
return this->Root() == nullptr;
}
};
template <typename T>
requires HasRBEntry<T>
[[nodiscard]] constexpr RBEntry<T>& RB_ENTRY(T* t) {
return t->GetRBEntry();
}
template <typename T>
requires HasRBEntry<T>
[[nodiscard]] constexpr const RBEntry<T>& RB_ENTRY(const T* t) {
return t->GetRBEntry();
}
template <typename Node>
[[nodiscard]] const RBEntry<Node>& RB_ENTRY(const Node* node) {
return node->GetEntry();
template <typename T>
requires HasRBEntry<T>
[[nodiscard]] constexpr T* RB_LEFT(T* t) {
return RB_ENTRY(t).Left();
}
template <typename T>
requires HasRBEntry<T>
[[nodiscard]] constexpr const T* RB_LEFT(const T* t) {
return RB_ENTRY(t).Left();
}
template <typename Node>
[[nodiscard]] Node* RB_PARENT(Node* node) {
return RB_ENTRY(node).Parent();
template <typename T>
requires HasRBEntry<T>
[[nodiscard]] constexpr T* RB_RIGHT(T* t) {
return RB_ENTRY(t).Right();
}
template <typename T>
requires HasRBEntry<T>
[[nodiscard]] constexpr const T* RB_RIGHT(const T* t) {
return RB_ENTRY(t).Right();
}
template <typename Node>
[[nodiscard]] const Node* RB_PARENT(const Node* node) {
return RB_ENTRY(node).Parent();
template <typename T>
requires HasRBEntry<T>
[[nodiscard]] constexpr T* RB_PARENT(T* t) {
return RB_ENTRY(t).Parent();
}
template <typename T>
requires HasRBEntry<T>
[[nodiscard]] constexpr const T* RB_PARENT(const T* t) {
return RB_ENTRY(t).Parent();
}
template <typename Node>
void RB_SET_PARENT(Node* node, Node* parent) {
return RB_ENTRY(node).SetParent(parent);
template <typename T>
requires HasRBEntry<T>
constexpr void RB_SET_LEFT(T* t, T* e) {
RB_ENTRY(t).SetLeft(e);
}
template <typename T>
requires HasRBEntry<T>
constexpr void RB_SET_RIGHT(T* t, T* e) {
RB_ENTRY(t).SetRight(e);
}
template <typename T>
requires HasRBEntry<T>
constexpr void RB_SET_PARENT(T* t, T* e) {
RB_ENTRY(t).SetParent(e);
}
template <typename Node>
[[nodiscard]] Node* RB_LEFT(Node* node) {
return RB_ENTRY(node).Left();
template <typename T>
requires HasRBEntry<T>
[[nodiscard]] constexpr bool RB_IS_BLACK(const T* t) {
return RB_ENTRY(t).IsBlack();
}
template <typename T>
requires HasRBEntry<T>
[[nodiscard]] constexpr bool RB_IS_RED(const T* t) {
return RB_ENTRY(t).IsRed();
}
template <typename Node>
[[nodiscard]] const Node* RB_LEFT(const Node* node) {
return RB_ENTRY(node).Left();
template <typename T>
requires HasRBEntry<T>
[[nodiscard]] constexpr RBColor RB_COLOR(const T* t) {
return RB_ENTRY(t).Color();
}
template <typename Node>
void RB_SET_LEFT(Node* node, Node* left) {
return RB_ENTRY(node).SetLeft(left);
template <typename T>
requires HasRBEntry<T>
constexpr void RB_SET_COLOR(T* t, RBColor c) {
RB_ENTRY(t).SetColor(c);
}
template <typename Node>
[[nodiscard]] Node* RB_RIGHT(Node* node) {
return RB_ENTRY(node).Right();
template <typename T>
requires HasRBEntry<T>
constexpr void RB_SET(T* elm, T* parent) {
auto& rb_entry = RB_ENTRY(elm);
rb_entry.SetParent(parent);
rb_entry.SetLeft(nullptr);
rb_entry.SetRight(nullptr);
rb_entry.SetColor(RBColor::RB_RED);
}
template <typename Node>
[[nodiscard]] const Node* RB_RIGHT(const Node* node) {
return RB_ENTRY(node).Right();
template <typename T>
requires HasRBEntry<T>
constexpr void RB_SET_BLACKRED(T* black, T* red) {
RB_SET_COLOR(black, RBColor::RB_BLACK);
RB_SET_COLOR(red, RBColor::RB_RED);
}
template <typename Node>
void RB_SET_RIGHT(Node* node, Node* right) {
return RB_ENTRY(node).SetRight(right);
}
template <typename Node>
[[nodiscard]] bool RB_IS_BLACK(const Node* node) {
return RB_ENTRY(node).IsBlack();
}
template <typename Node>
[[nodiscard]] bool RB_IS_RED(const Node* node) {
return RB_ENTRY(node).IsRed();
}
template <typename Node>
[[nodiscard]] EntryColor RB_COLOR(const Node* node) {
return RB_ENTRY(node).Color();
}
template <typename Node>
void RB_SET_COLOR(Node* node, EntryColor color) {
return RB_ENTRY(node).SetColor(color);
}
template <typename Node>
void RB_SET(Node* node, Node* parent) {
auto& entry = RB_ENTRY(node);
entry.SetParent(parent);
entry.SetLeft(nullptr);
entry.SetRight(nullptr);
entry.SetColor(EntryColor::Red);
}
template <typename Node>
void RB_SET_BLACKRED(Node* black, Node* red) {
RB_SET_COLOR(black, EntryColor::Black);
RB_SET_COLOR(red, EntryColor::Red);
}
template <typename Node>
void RB_ROTATE_LEFT(RBHead<Node>* head, Node* elm, Node*& tmp) {
template <typename T>
requires HasRBEntry<T>
constexpr void RB_ROTATE_LEFT(RBHead<T>& head, T* elm, T*& tmp) {
tmp = RB_RIGHT(elm);
RB_SET_RIGHT(elm, RB_LEFT(tmp));
if (RB_RIGHT(elm) != nullptr) {
if (RB_SET_RIGHT(elm, RB_LEFT(tmp)); RB_RIGHT(elm) != nullptr) {
RB_SET_PARENT(RB_LEFT(tmp), elm);
}
RB_SET_PARENT(tmp, RB_PARENT(elm));
if (RB_PARENT(tmp) != nullptr) {
if (RB_SET_PARENT(tmp, RB_PARENT(elm)); RB_PARENT(tmp) != nullptr) {
if (elm == RB_LEFT(RB_PARENT(elm))) {
RB_SET_LEFT(RB_PARENT(elm), tmp);
} else {
RB_SET_RIGHT(RB_PARENT(elm), tmp);
}
} else {
head->SetRoot(tmp);
head.SetRoot(tmp);
}
RB_SET_LEFT(tmp, elm);
RB_SET_PARENT(elm, tmp);
}
template <typename Node>
void RB_ROTATE_RIGHT(RBHead<Node>* head, Node* elm, Node*& tmp) {
template <typename T>
requires HasRBEntry<T>
constexpr void RB_ROTATE_RIGHT(RBHead<T>& head, T* elm, T*& tmp) {
tmp = RB_LEFT(elm);
RB_SET_LEFT(elm, RB_RIGHT(tmp));
if (RB_LEFT(elm) != nullptr) {
if (RB_SET_LEFT(elm, RB_RIGHT(tmp)); RB_LEFT(elm) != nullptr) {
RB_SET_PARENT(RB_RIGHT(tmp), elm);
}
RB_SET_PARENT(tmp, RB_PARENT(elm));
if (RB_PARENT(tmp) != nullptr) {
if (RB_SET_PARENT(tmp, RB_PARENT(elm)); RB_PARENT(tmp) != nullptr) {
if (elm == RB_LEFT(RB_PARENT(elm))) {
RB_SET_LEFT(RB_PARENT(elm), tmp);
} else {
RB_SET_RIGHT(RB_PARENT(elm), tmp);
}
} else {
head->SetRoot(tmp);
head.SetRoot(tmp);
}
RB_SET_RIGHT(tmp, elm);
RB_SET_PARENT(elm, tmp);
}
template <typename Node>
void RB_INSERT_COLOR(RBHead<Node>* head, Node* elm) {
Node* parent = nullptr;
Node* tmp = nullptr;
template <typename T>
requires HasRBEntry<T>
constexpr void RB_REMOVE_COLOR(RBHead<T>& head, T* parent, T* elm) {
T* tmp;
while ((elm == nullptr || RB_IS_BLACK(elm)) && elm != head.Root()) {
if (RB_LEFT(parent) == elm) {
tmp = RB_RIGHT(parent);
if (RB_IS_RED(tmp)) {
RB_SET_BLACKRED(tmp, parent);
RB_ROTATE_LEFT(head, parent, tmp);
tmp = RB_RIGHT(parent);
}
if ((RB_LEFT(tmp) == nullptr || RB_IS_BLACK(RB_LEFT(tmp))) &&
(RB_RIGHT(tmp) == nullptr || RB_IS_BLACK(RB_RIGHT(tmp)))) {
RB_SET_COLOR(tmp, RBColor::RB_RED);
elm = parent;
parent = RB_PARENT(elm);
} else {
if (RB_RIGHT(tmp) == nullptr || RB_IS_BLACK(RB_RIGHT(tmp))) {
T* oleft;
if ((oleft = RB_LEFT(tmp)) != nullptr) {
RB_SET_COLOR(oleft, RBColor::RB_BLACK);
}
RB_SET_COLOR(tmp, RBColor::RB_RED);
RB_ROTATE_RIGHT(head, tmp, oleft);
tmp = RB_RIGHT(parent);
}
RB_SET_COLOR(tmp, RB_COLOR(parent));
RB_SET_COLOR(parent, RBColor::RB_BLACK);
if (RB_RIGHT(tmp)) {
RB_SET_COLOR(RB_RIGHT(tmp), RBColor::RB_BLACK);
}
RB_ROTATE_LEFT(head, parent, tmp);
elm = head.Root();
break;
}
} else {
tmp = RB_LEFT(parent);
if (RB_IS_RED(tmp)) {
RB_SET_BLACKRED(tmp, parent);
RB_ROTATE_RIGHT(head, parent, tmp);
tmp = RB_LEFT(parent);
}
if ((RB_LEFT(tmp) == nullptr || RB_IS_BLACK(RB_LEFT(tmp))) &&
(RB_RIGHT(tmp) == nullptr || RB_IS_BLACK(RB_RIGHT(tmp)))) {
RB_SET_COLOR(tmp, RBColor::RB_RED);
elm = parent;
parent = RB_PARENT(elm);
} else {
if (RB_LEFT(tmp) == nullptr || RB_IS_BLACK(RB_LEFT(tmp))) {
T* oright;
if ((oright = RB_RIGHT(tmp)) != nullptr) {
RB_SET_COLOR(oright, RBColor::RB_BLACK);
}
RB_SET_COLOR(tmp, RBColor::RB_RED);
RB_ROTATE_LEFT(head, tmp, oright);
tmp = RB_LEFT(parent);
}
RB_SET_COLOR(tmp, RB_COLOR(parent));
RB_SET_COLOR(parent, RBColor::RB_BLACK);
if (RB_LEFT(tmp)) {
RB_SET_COLOR(RB_LEFT(tmp), RBColor::RB_BLACK);
}
RB_ROTATE_RIGHT(head, parent, tmp);
elm = head.Root();
break;
}
}
}
if (elm) {
RB_SET_COLOR(elm, RBColor::RB_BLACK);
}
}
template <typename T>
requires HasRBEntry<T>
constexpr T* RB_REMOVE(RBHead<T>& head, T* elm) {
T* child = nullptr;
T* parent = nullptr;
T* old = elm;
RBColor color = RBColor::RB_BLACK;
if (RB_LEFT(elm) == nullptr) {
child = RB_RIGHT(elm);
} else if (RB_RIGHT(elm) == nullptr) {
child = RB_LEFT(elm);
} else {
T* left;
elm = RB_RIGHT(elm);
while ((left = RB_LEFT(elm)) != nullptr) {
elm = left;
}
child = RB_RIGHT(elm);
parent = RB_PARENT(elm);
color = RB_COLOR(elm);
if (child) {
RB_SET_PARENT(child, parent);
}
if (parent) {
if (RB_LEFT(parent) == elm) {
RB_SET_LEFT(parent, child);
} else {
RB_SET_RIGHT(parent, child);
}
} else {
head.SetRoot(child);
}
if (RB_PARENT(elm) == old) {
parent = elm;
}
elm->SetRBEntry(old->GetRBEntry());
if (RB_PARENT(old)) {
if (RB_LEFT(RB_PARENT(old)) == old) {
RB_SET_LEFT(RB_PARENT(old), elm);
} else {
RB_SET_RIGHT(RB_PARENT(old), elm);
}
} else {
head.SetRoot(elm);
}
RB_SET_PARENT(RB_LEFT(old), elm);
if (RB_RIGHT(old)) {
RB_SET_PARENT(RB_RIGHT(old), elm);
}
if (parent) {
left = parent;
}
if (color == RBColor::RB_BLACK) {
RB_REMOVE_COLOR(head, parent, child);
}
return old;
}
parent = RB_PARENT(elm);
color = RB_COLOR(elm);
if (child) {
RB_SET_PARENT(child, parent);
}
if (parent) {
if (RB_LEFT(parent) == elm) {
RB_SET_LEFT(parent, child);
} else {
RB_SET_RIGHT(parent, child);
}
} else {
head.SetRoot(child);
}
if (color == RBColor::RB_BLACK) {
RB_REMOVE_COLOR(head, parent, child);
}
return old;
}
template <typename T>
requires HasRBEntry<T>
constexpr void RB_INSERT_COLOR(RBHead<T>& head, T* elm) {
T *parent = nullptr, *tmp = nullptr;
while ((parent = RB_PARENT(elm)) != nullptr && RB_IS_RED(parent)) {
Node* gparent = RB_PARENT(parent);
T* gparent = RB_PARENT(parent);
if (parent == RB_LEFT(gparent)) {
tmp = RB_RIGHT(gparent);
if (tmp && RB_IS_RED(tmp)) {
RB_SET_COLOR(tmp, EntryColor::Black);
RB_SET_COLOR(tmp, RBColor::RB_BLACK);
RB_SET_BLACKRED(parent, gparent);
elm = gparent;
continue;
@@ -300,7 +499,7 @@ void RB_INSERT_COLOR(RBHead<Node>* head, Node* elm) {
} else {
tmp = RB_LEFT(gparent);
if (tmp && RB_IS_RED(tmp)) {
RB_SET_COLOR(tmp, EntryColor::Black);
RB_SET_COLOR(tmp, RBColor::RB_BLACK);
RB_SET_BLACKRED(parent, gparent);
elm = gparent;
continue;
@@ -318,194 +517,14 @@ void RB_INSERT_COLOR(RBHead<Node>* head, Node* elm) {
}
}
RB_SET_COLOR(head->Root(), EntryColor::Black);
RB_SET_COLOR(head.Root(), RBColor::RB_BLACK);
}
template <typename Node>
void RB_REMOVE_COLOR(RBHead<Node>* head, Node* parent, Node* elm) {
Node* tmp;
while ((elm == nullptr || RB_IS_BLACK(elm)) && elm != head->Root() && parent != nullptr) {
if (RB_LEFT(parent) == elm) {
tmp = RB_RIGHT(parent);
if (!tmp) {
ASSERT_MSG(false, "tmp is invalid!");
break;
}
if (RB_IS_RED(tmp)) {
RB_SET_BLACKRED(tmp, parent);
RB_ROTATE_LEFT(head, parent, tmp);
tmp = RB_RIGHT(parent);
}
if ((RB_LEFT(tmp) == nullptr || RB_IS_BLACK(RB_LEFT(tmp))) &&
(RB_RIGHT(tmp) == nullptr || RB_IS_BLACK(RB_RIGHT(tmp)))) {
RB_SET_COLOR(tmp, EntryColor::Red);
elm = parent;
parent = RB_PARENT(elm);
} else {
if (RB_RIGHT(tmp) == nullptr || RB_IS_BLACK(RB_RIGHT(tmp))) {
Node* oleft;
if ((oleft = RB_LEFT(tmp)) != nullptr) {
RB_SET_COLOR(oleft, EntryColor::Black);
}
RB_SET_COLOR(tmp, EntryColor::Red);
RB_ROTATE_RIGHT(head, tmp, oleft);
tmp = RB_RIGHT(parent);
}
RB_SET_COLOR(tmp, RB_COLOR(parent));
RB_SET_COLOR(parent, EntryColor::Black);
if (RB_RIGHT(tmp)) {
RB_SET_COLOR(RB_RIGHT(tmp), EntryColor::Black);
}
RB_ROTATE_LEFT(head, parent, tmp);
elm = head->Root();
break;
}
} else {
tmp = RB_LEFT(parent);
if (RB_IS_RED(tmp)) {
RB_SET_BLACKRED(tmp, parent);
RB_ROTATE_RIGHT(head, parent, tmp);
tmp = RB_LEFT(parent);
}
if (!tmp) {
ASSERT_MSG(false, "tmp is invalid!");
break;
}
if ((RB_LEFT(tmp) == nullptr || RB_IS_BLACK(RB_LEFT(tmp))) &&
(RB_RIGHT(tmp) == nullptr || RB_IS_BLACK(RB_RIGHT(tmp)))) {
RB_SET_COLOR(tmp, EntryColor::Red);
elm = parent;
parent = RB_PARENT(elm);
} else {
if (RB_LEFT(tmp) == nullptr || RB_IS_BLACK(RB_LEFT(tmp))) {
Node* oright;
if ((oright = RB_RIGHT(tmp)) != nullptr) {
RB_SET_COLOR(oright, EntryColor::Black);
}
RB_SET_COLOR(tmp, EntryColor::Red);
RB_ROTATE_LEFT(head, tmp, oright);
tmp = RB_LEFT(parent);
}
RB_SET_COLOR(tmp, RB_COLOR(parent));
RB_SET_COLOR(parent, EntryColor::Black);
if (RB_LEFT(tmp)) {
RB_SET_COLOR(RB_LEFT(tmp), EntryColor::Black);
}
RB_ROTATE_RIGHT(head, parent, tmp);
elm = head->Root();
break;
}
}
}
if (elm) {
RB_SET_COLOR(elm, EntryColor::Black);
}
}
template <typename Node>
Node* RB_REMOVE(RBHead<Node>* head, Node* elm) {
Node* child = nullptr;
Node* parent = nullptr;
Node* old = elm;
EntryColor color{};
const auto finalize = [&] {
if (color == EntryColor::Black) {
RB_REMOVE_COLOR(head, parent, child);
}
return old;
};
if (RB_LEFT(elm) == nullptr) {
child = RB_RIGHT(elm);
} else if (RB_RIGHT(elm) == nullptr) {
child = RB_LEFT(elm);
} else {
Node* left;
elm = RB_RIGHT(elm);
while ((left = RB_LEFT(elm)) != nullptr) {
elm = left;
}
child = RB_RIGHT(elm);
parent = RB_PARENT(elm);
color = RB_COLOR(elm);
if (child) {
RB_SET_PARENT(child, parent);
}
if (parent) {
if (RB_LEFT(parent) == elm) {
RB_SET_LEFT(parent, child);
} else {
RB_SET_RIGHT(parent, child);
}
} else {
head->SetRoot(child);
}
if (RB_PARENT(elm) == old) {
parent = elm;
}
elm->SetEntry(old->GetEntry());
if (RB_PARENT(old)) {
if (RB_LEFT(RB_PARENT(old)) == old) {
RB_SET_LEFT(RB_PARENT(old), elm);
} else {
RB_SET_RIGHT(RB_PARENT(old), elm);
}
} else {
head->SetRoot(elm);
}
RB_SET_PARENT(RB_LEFT(old), elm);
if (RB_RIGHT(old)) {
RB_SET_PARENT(RB_RIGHT(old), elm);
}
if (parent) {
left = parent;
}
return finalize();
}
parent = RB_PARENT(elm);
color = RB_COLOR(elm);
if (child) {
RB_SET_PARENT(child, parent);
}
if (parent) {
if (RB_LEFT(parent) == elm) {
RB_SET_LEFT(parent, child);
} else {
RB_SET_RIGHT(parent, child);
}
} else {
head->SetRoot(child);
}
return finalize();
}
// Inserts a node into the RB tree
template <typename Node, typename CompareFunction>
Node* RB_INSERT(RBHead<Node>* head, Node* elm, CompareFunction cmp) {
Node* parent = nullptr;
Node* tmp = head->Root();
template <typename T, typename Compare>
requires HasRBEntry<T>
constexpr T* RB_INSERT(RBHead<T>& head, T* elm, Compare cmp) {
T* parent = nullptr;
T* tmp = head.Root();
int comp = 0;
while (tmp) {
@@ -529,17 +548,17 @@ Node* RB_INSERT(RBHead<Node>* head, Node* elm, CompareFunction cmp) {
RB_SET_RIGHT(parent, elm);
}
} else {
head->SetRoot(elm);
head.SetRoot(elm);
}
RB_INSERT_COLOR(head, elm);
return nullptr;
}
// Finds the node with the same key as elm
template <typename Node, typename CompareFunction>
Node* RB_FIND(RBHead<Node>* head, Node* elm, CompareFunction cmp) {
Node* tmp = head->Root();
template <typename T, typename Compare>
requires HasRBEntry<T>
constexpr T* RB_FIND(RBHead<T>& head, T* elm, Compare cmp) {
T* tmp = head.Root();
while (tmp) {
const int comp = cmp(elm, tmp);
@@ -555,11 +574,11 @@ Node* RB_FIND(RBHead<Node>* head, Node* elm, CompareFunction cmp) {
return nullptr;
}
// Finds the first node greater than or equal to the search key
template <typename Node, typename CompareFunction>
Node* RB_NFIND(RBHead<Node>* head, Node* elm, CompareFunction cmp) {
Node* tmp = head->Root();
Node* res = nullptr;
template <typename T, typename Compare>
requires HasRBEntry<T>
constexpr T* RB_NFIND(RBHead<T>& head, T* elm, Compare cmp) {
T* tmp = head.Root();
T* res = nullptr;
while (tmp) {
const int comp = cmp(elm, tmp);
@@ -576,13 +595,13 @@ Node* RB_NFIND(RBHead<Node>* head, Node* elm, CompareFunction cmp) {
return res;
}
// Finds the node with the same key as lelm
template <typename Node, typename CompareFunction>
Node* RB_FIND_LIGHT(RBHead<Node>* head, const void* lelm, CompareFunction lcmp) {
Node* tmp = head->Root();
template <typename T, typename U, typename Compare>
requires HasRBEntry<T>
constexpr T* RB_FIND_KEY(RBHead<T>& head, const U& key, Compare cmp) {
T* tmp = head.Root();
while (tmp) {
const int comp = lcmp(lelm, tmp);
const int comp = cmp(key, tmp);
if (comp < 0) {
tmp = RB_LEFT(tmp);
} else if (comp > 0) {
@@ -595,14 +614,14 @@ Node* RB_FIND_LIGHT(RBHead<Node>* head, const void* lelm, CompareFunction lcmp)
return nullptr;
}
// Finds the first node greater than or equal to the search key
template <typename Node, typename CompareFunction>
Node* RB_NFIND_LIGHT(RBHead<Node>* head, const void* lelm, CompareFunction lcmp) {
Node* tmp = head->Root();
Node* res = nullptr;
template <typename T, typename U, typename Compare>
requires HasRBEntry<T>
constexpr T* RB_NFIND_KEY(RBHead<T>& head, const U& key, Compare cmp) {
T* tmp = head.Root();
T* res = nullptr;
while (tmp) {
const int comp = lcmp(lelm, tmp);
const int comp = cmp(key, tmp);
if (comp < 0) {
res = tmp;
tmp = RB_LEFT(tmp);
@@ -616,8 +635,43 @@ Node* RB_NFIND_LIGHT(RBHead<Node>* head, const void* lelm, CompareFunction lcmp)
return res;
}
template <typename Node>
Node* RB_NEXT(Node* elm) {
template <typename T, typename Compare>
requires HasRBEntry<T>
constexpr T* RB_FIND_EXISTING(RBHead<T>& head, T* elm, Compare cmp) {
T* tmp = head.Root();
while (true) {
const int comp = cmp(elm, tmp);
if (comp < 0) {
tmp = RB_LEFT(tmp);
} else if (comp > 0) {
tmp = RB_RIGHT(tmp);
} else {
return tmp;
}
}
}
template <typename T, typename U, typename Compare>
requires HasRBEntry<T>
constexpr T* RB_FIND_EXISTING_KEY(RBHead<T>& head, const U& key, Compare cmp) {
T* tmp = head.Root();
while (true) {
const int comp = cmp(key, tmp);
if (comp < 0) {
tmp = RB_LEFT(tmp);
} else if (comp > 0) {
tmp = RB_RIGHT(tmp);
} else {
return tmp;
}
}
}
template <typename T>
requires HasRBEntry<T>
constexpr T* RB_NEXT(T* elm) {
if (RB_RIGHT(elm)) {
elm = RB_RIGHT(elm);
while (RB_LEFT(elm)) {
@@ -636,8 +690,9 @@ Node* RB_NEXT(Node* elm) {
return elm;
}
template <typename Node>
Node* RB_PREV(Node* elm) {
template <typename T>
requires HasRBEntry<T>
constexpr T* RB_PREV(T* elm) {
if (RB_LEFT(elm)) {
elm = RB_LEFT(elm);
while (RB_RIGHT(elm)) {
@@ -656,30 +711,32 @@ Node* RB_PREV(Node* elm) {
return elm;
}
template <typename Node>
Node* RB_MINMAX(RBHead<Node>* head, bool is_min) {
Node* tmp = head->Root();
Node* parent = nullptr;
template <typename T>
requires HasRBEntry<T>
constexpr T* RB_MIN(RBHead<T>& head) {
T* tmp = head.Root();
T* parent = nullptr;
while (tmp) {
parent = tmp;
if (is_min) {
tmp = RB_LEFT(tmp);
} else {
tmp = RB_RIGHT(tmp);
}
tmp = RB_LEFT(tmp);
}
return parent;
}
template <typename Node>
Node* RB_MIN(RBHead<Node>* head) {
return RB_MINMAX(head, true);
template <typename T>
requires HasRBEntry<T>
constexpr T* RB_MAX(RBHead<T>& head) {
T* tmp = head.Root();
T* parent = nullptr;
while (tmp) {
parent = tmp;
tmp = RB_RIGHT(tmp);
}
return parent;
}
template <typename Node>
Node* RB_MAX(RBHead<Node>* head) {
return RB_MINMAX(head, false);
}
} // namespace Common
} // namespace Common::freebsd
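Taken together, the rewritten header is an intrusive red-black tree: a node type opts in by storing an RBEntry<T> and exposing it through GetRBEntry() (checked by the HasRBEntry concept) plus SetRBEntry() (used by RB_REMOVE), and the free functions now take the RBHead by reference rather than by pointer. A minimal usage sketch, assuming the header above is common/tree.h and using the usual strcmp-style comparator contract (negative, zero, positive):

#include "common/tree.h" // assumed path of the header above

namespace rb = Common::freebsd;

struct IntNode {
    int key{};
    rb::RBEntry<IntNode> entry;

    // Required by the HasRBEntry concept; SetRBEntry is used by RB_REMOVE.
    rb::RBEntry<IntNode>& GetRBEntry() { return entry; }
    const rb::RBEntry<IntNode>& GetRBEntry() const { return entry; }
    void SetRBEntry(const rb::RBEntry<IntNode>& e) { entry = e; }
};

// Comparators follow the strcmp convention.
constexpr auto NodeCmp = [](const IntNode* lhs, const IntNode* rhs) { return lhs->key - rhs->key; };
constexpr auto KeyCmp = [](int key, const IntNode* node) { return key - node->key; };

void Example() {
    rb::RBHead<IntNode> head;
    IntNode a, b;
    a.key = 1;
    b.key = 2;
    rb::RB_INSERT(head, &a, NodeCmp);
    rb::RB_INSERT(head, &b, NodeCmp);
    IntNode* two = rb::RB_FIND_KEY(head, 2, KeyCmp); // &b
    IntNode* min = rb::RB_MIN(head);                 // &a
    rb::RB_REMOVE(head, two);
    (void)min;
}

The *_KEY variants replace the old *_LIGHT ones and accept an arbitrary key type instead of a type-erased const void*, which is what the k_address_arbiter and k_condition_variable call sites further below switch to with nfind_key.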

View File

@@ -4,7 +4,6 @@
#pragma once
#include <cstring>
#include <utility>
#ifdef _MSC_VER
@@ -13,6 +12,7 @@
#pragma intrinsic(_umul128)
#pragma intrinsic(_udiv128)
#else
#include <cstring>
#include <x86intrin.h>
#endif

View File

@@ -7,7 +7,6 @@
#include <array>
#include <functional>
#include <string>
#include <string_view>
#include "common/common_types.h"

View File

@@ -4,7 +4,6 @@
#pragma once
#include <type_traits>
#include <utility>
namespace Common {

View File

@@ -2,8 +2,6 @@
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <cstdint>
#include "common/uint128.h"
#include "common/wall_clock.h"

View File

@@ -1,8 +1,11 @@
// Copyright 2013 Dolphin Emulator Project / 2015 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
// Copyright 2013 Dolphin Emulator Project / 2015 Citra Emulator Project / 2022 Yuzu Emulator
// Project Licensed under GPLv2 or any later version Refer to the license.txt file included.
#include <array>
#include <cstring>
#include <iterator>
#include <string_view>
#include "common/bit_util.h"
#include "common/common_types.h"
#include "common/x64/cpu_detect.h"
@@ -17,7 +20,7 @@
// clang-format on
#endif
static inline void __cpuidex(int info[4], int function_id, int subfunction_id) {
static inline void __cpuidex(int info[4], u32 function_id, u32 subfunction_id) {
#if defined(__DragonFly__) || defined(__FreeBSD__)
// Despite the name, this is just do_cpuid() with ECX as second input.
cpuid_count((u_int)function_id, (u_int)subfunction_id, (u_int*)info);
@@ -30,7 +33,7 @@ static inline void __cpuidex(int info[4], int function_id, int subfunction_id) {
#endif
}
static inline void __cpuid(int info[4], int function_id) {
static inline void __cpuid(int info[4], u32 function_id) {
return __cpuidex(info, function_id, 0);
}
@@ -45,6 +48,17 @@ static inline u64 _xgetbv(u32 index) {
namespace Common {
CPUCaps::Manufacturer CPUCaps::ParseManufacturer(std::string_view brand_string) {
if (brand_string == "GenuineIntel") {
return Manufacturer::Intel;
} else if (brand_string == "AuthenticAMD") {
return Manufacturer::AMD;
} else if (brand_string == "HygonGenuine") {
return Manufacturer::Hygon;
}
return Manufacturer::Unknown;
}
// Detects the various CPU features
static CPUCaps Detect() {
CPUCaps caps = {};
@@ -53,75 +67,74 @@ static CPUCaps Detect() {
// yuzu at all anyway
int cpu_id[4];
memset(caps.brand_string, 0, sizeof(caps.brand_string));
// Detect CPU's CPUID capabilities and grab CPU string
// Detect CPU's CPUID capabilities and grab manufacturer string
__cpuid(cpu_id, 0x00000000);
u32 max_std_fn = cpu_id[0]; // EAX
const u32 max_std_fn = cpu_id[0]; // EAX
std::memcpy(&caps.brand_string[0], &cpu_id[1], sizeof(int));
std::memcpy(&caps.brand_string[4], &cpu_id[3], sizeof(int));
std::memcpy(&caps.brand_string[8], &cpu_id[2], sizeof(int));
if (cpu_id[1] == 0x756e6547 && cpu_id[2] == 0x6c65746e && cpu_id[3] == 0x49656e69)
caps.manufacturer = Manufacturer::Intel;
else if (cpu_id[1] == 0x68747541 && cpu_id[2] == 0x444d4163 && cpu_id[3] == 0x69746e65)
caps.manufacturer = Manufacturer::AMD;
else if (cpu_id[1] == 0x6f677948 && cpu_id[2] == 0x656e6975 && cpu_id[3] == 0x6e65476e)
caps.manufacturer = Manufacturer::Hygon;
else
caps.manufacturer = Manufacturer::Unknown;
std::memset(caps.brand_string, 0, std::size(caps.brand_string));
std::memcpy(&caps.brand_string[0], &cpu_id[1], sizeof(u32));
std::memcpy(&caps.brand_string[4], &cpu_id[3], sizeof(u32));
std::memcpy(&caps.brand_string[8], &cpu_id[2], sizeof(u32));
caps.manufacturer = CPUCaps::ParseManufacturer(caps.brand_string);
// Set reasonable default cpu string even if brand string not available
std::strncpy(caps.cpu_string, caps.brand_string, std::size(caps.brand_string));
__cpuid(cpu_id, 0x80000000);
u32 max_ex_fn = cpu_id[0];
// Set reasonable default brand string even if brand string not available
strcpy(caps.cpu_string, caps.brand_string);
const u32 max_ex_fn = cpu_id[0];
// Detect family and other miscellaneous features
if (max_std_fn >= 1) {
__cpuid(cpu_id, 0x00000001);
if ((cpu_id[3] >> 25) & 1)
caps.sse = true;
if ((cpu_id[3] >> 26) & 1)
caps.sse2 = true;
if ((cpu_id[2]) & 1)
caps.sse3 = true;
if ((cpu_id[2] >> 9) & 1)
caps.ssse3 = true;
if ((cpu_id[2] >> 19) & 1)
caps.sse4_1 = true;
if ((cpu_id[2] >> 20) & 1)
caps.sse4_2 = true;
if ((cpu_id[2] >> 25) & 1)
caps.aes = true;
caps.sse = Common::Bit<25>(cpu_id[3]);
caps.sse2 = Common::Bit<26>(cpu_id[3]);
caps.sse3 = Common::Bit<0>(cpu_id[2]);
caps.pclmulqdq = Common::Bit<1>(cpu_id[2]);
caps.ssse3 = Common::Bit<9>(cpu_id[2]);
caps.sse4_1 = Common::Bit<19>(cpu_id[2]);
caps.sse4_2 = Common::Bit<20>(cpu_id[2]);
caps.movbe = Common::Bit<22>(cpu_id[2]);
caps.popcnt = Common::Bit<23>(cpu_id[2]);
caps.aes = Common::Bit<25>(cpu_id[2]);
caps.f16c = Common::Bit<29>(cpu_id[2]);
// AVX support requires 3 separate checks:
// - Is the AVX bit set in CPUID?
// - Is the XSAVE bit set in CPUID?
// - XGETBV result has the XCR bit set.
if (((cpu_id[2] >> 28) & 1) && ((cpu_id[2] >> 27) & 1)) {
if (Common::Bit<28>(cpu_id[2]) && Common::Bit<27>(cpu_id[2])) {
if ((_xgetbv(_XCR_XFEATURE_ENABLED_MASK) & 0x6) == 0x6) {
caps.avx = true;
if ((cpu_id[2] >> 12) & 1)
if (Common::Bit<12>(cpu_id[2]))
caps.fma = true;
}
}
if (max_std_fn >= 7) {
__cpuidex(cpu_id, 0x00000007, 0x00000000);
// Can't enable AVX2 unless the XSAVE/XGETBV checks above passed
if ((cpu_id[1] >> 5) & 1)
caps.avx2 = caps.avx;
if ((cpu_id[1] >> 3) & 1)
caps.bmi1 = true;
if ((cpu_id[1] >> 8) & 1)
caps.bmi2 = true;
// Checks for AVX512F, AVX512CD, AVX512VL, AVX512DQ, AVX512BW (Intel Skylake-X/SP)
if ((cpu_id[1] >> 16) & 1 && (cpu_id[1] >> 28) & 1 && (cpu_id[1] >> 31) & 1 &&
(cpu_id[1] >> 17) & 1 && (cpu_id[1] >> 30) & 1) {
caps.avx512 = caps.avx2;
// Can't enable AVX{2,512} unless the XSAVE/XGETBV checks above passed
if (caps.avx) {
caps.avx2 = Common::Bit<5>(cpu_id[1]);
caps.avx512f = Common::Bit<16>(cpu_id[1]);
caps.avx512dq = Common::Bit<17>(cpu_id[1]);
caps.avx512cd = Common::Bit<28>(cpu_id[1]);
caps.avx512bw = Common::Bit<30>(cpu_id[1]);
caps.avx512vl = Common::Bit<31>(cpu_id[1]);
caps.avx512vbmi = Common::Bit<1>(cpu_id[2]);
caps.avx512bitalg = Common::Bit<12>(cpu_id[2]);
}
caps.bmi1 = Common::Bit<3>(cpu_id[1]);
caps.bmi2 = Common::Bit<8>(cpu_id[1]);
caps.sha = Common::Bit<29>(cpu_id[1]);
caps.gfni = Common::Bit<8>(cpu_id[2]);
__cpuidex(cpu_id, 0x00000007, 0x00000001);
caps.avx_vnni = caps.avx && Common::Bit<4>(cpu_id[0]);
}
}
@@ -138,15 +151,13 @@ static CPUCaps Detect() {
if (max_ex_fn >= 0x80000001) {
// Check for more features
__cpuid(cpu_id, 0x80000001);
if ((cpu_id[2] >> 16) & 1)
caps.fma4 = true;
caps.lzcnt = Common::Bit<5>(cpu_id[2]);
caps.fma4 = Common::Bit<16>(cpu_id[2]);
}
if (max_ex_fn >= 0x80000007) {
__cpuid(cpu_id, 0x80000007);
if (cpu_id[3] & (1 << 8)) {
caps.invariant_tsc = true;
}
caps.invariant_tsc = Common::Bit<8>(cpu_id[3]);
}
if (max_std_fn >= 0x16) {
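Two things worth spelling out in the rewrite above: feature bits are now read with Common::Bit<N> instead of hand-written shifts, and the AVX family stays gated behind the three-step check the comment describes (CPUID advertises AVX, CPUID advertises OSXSAVE, and XGETBV confirms the OS saves SSE+AVX state, i.e. (_xgetbv(0) & 0x6) == 0x6). A standalone sketch of the bit-test idea; Bit here is a hypothetical stand-in shaped like the helper from common/bit_util.h:

#include <cstdint>

// Hypothetical stand-in for Common::Bit<N>: test bit N of a CPUID register value.
template <int N>
constexpr bool Bit(std::uint32_t reg) {
    return ((reg >> N) & 1u) != 0;
}

// CPUID.01H examples: EDX bit 25 = SSE, ECX bit 27 = OSXSAVE, ECX bit 28 = AVX.
constexpr std::uint32_t edx = 1u << 25;
constexpr std::uint32_t ecx = (1u << 28) | (1u << 27);
static_assert(Bit<25>(edx));                 // caps.sse
static_assert(Bit<28>(ecx) && Bit<27>(ecx)); // AVX advertised and XSAVE enabled by the OS
// The remaining run-time gate is (_xgetbv(0) & 0x6) == 0x6: XCR0 bits 1 and 2 confirm
// the OS actually preserves SSE and AVX register state across context switches.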

View File

@@ -1,42 +1,65 @@
// Copyright 2013 Dolphin Emulator Project / 2015 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
// Copyright 2013 Dolphin Emulator Project / 2015 Citra Emulator Project / 2022 Yuzu Emulator
// Project Project Licensed under GPLv2 or any later version Refer to the license.txt file included.
// Project Licensed under GPLv2 or any later version Refer to the license.txt file included.
#pragma once
namespace Common {
#include <string_view>
#include "common/common_types.h"
enum class Manufacturer : u32 {
Intel = 0,
AMD = 1,
Hygon = 2,
Unknown = 3,
};
namespace Common {
/// x86/x64 CPU capabilities that may be detected by this module
struct CPUCaps {
enum class Manufacturer : u8 {
Unknown = 0,
Intel = 1,
AMD = 2,
Hygon = 3,
};
static Manufacturer ParseManufacturer(std::string_view brand_string);
Manufacturer manufacturer;
char cpu_string[0x21];
char brand_string[0x41];
bool sse;
bool sse2;
bool sse3;
bool ssse3;
bool sse4_1;
bool sse4_2;
bool lzcnt;
bool avx;
bool avx2;
bool avx512;
bool bmi1;
bool bmi2;
bool fma;
bool fma4;
bool aes;
bool invariant_tsc;
char brand_string[13];
char cpu_string[48];
u32 base_frequency;
u32 max_frequency;
u32 bus_frequency;
bool sse : 1;
bool sse2 : 1;
bool sse3 : 1;
bool ssse3 : 1;
bool sse4_1 : 1;
bool sse4_2 : 1;
bool avx : 1;
bool avx_vnni : 1;
bool avx2 : 1;
bool avx512f : 1;
bool avx512dq : 1;
bool avx512cd : 1;
bool avx512bw : 1;
bool avx512vl : 1;
bool avx512vbmi : 1;
bool avx512bitalg : 1;
bool aes : 1;
bool bmi1 : 1;
bool bmi2 : 1;
bool f16c : 1;
bool fma : 1;
bool fma4 : 1;
bool gfni : 1;
bool invariant_tsc : 1;
bool lzcnt : 1;
bool movbe : 1;
bool pclmulqdq : 1;
bool popcnt : 1;
bool sha : 1;
};
/**

View File

@@ -4,8 +4,6 @@
#include <array>
#include <chrono>
#include <limits>
#include <mutex>
#include <thread>
#include "common/atomic_ops.h"

View File

@@ -4,8 +4,6 @@
#pragma once
#include <optional>
#include "common/wall_clock.h"
namespace Common {

View File

@@ -122,6 +122,8 @@ add_library(core STATIC
frontend/applets/error.h
frontend/applets/general_frontend.cpp
frontend/applets/general_frontend.h
frontend/applets/mii.cpp
frontend/applets/mii.h
frontend/applets/profile_select.cpp
frontend/applets/profile_select.h
frontend/applets/software_keyboard.cpp
@@ -152,6 +154,7 @@ add_library(core STATIC
hle/api_version.h
hle/ipc.h
hle/ipc_helpers.h
hle/kernel/board/nintendo/nx/k_memory_layout.h
hle/kernel/board/nintendo/nx/k_system_control.cpp
hle/kernel/board/nintendo/nx/k_system_control.h
hle/kernel/board/nintendo/nx/secure_monitor.h
@@ -164,6 +167,7 @@ add_library(core STATIC
hle/kernel/hle_ipc.h
hle/kernel/init/init_slab_setup.cpp
hle/kernel/init/init_slab_setup.h
hle/kernel/initial_process.h
hle/kernel/k_address_arbiter.cpp
hle/kernel/k_address_arbiter.h
hle/kernel/k_address_space_info.cpp
@@ -205,6 +209,8 @@ add_library(core STATIC
hle/kernel/k_memory_region.h
hle/kernel/k_memory_region_type.h
hle/kernel/k_page_bitmap.h
hle/kernel/k_page_buffer.cpp
hle/kernel/k_page_buffer.h
hle/kernel/k_page_heap.cpp
hle/kernel/k_page_heap.h
hle/kernel/k_page_linked_list.h
@@ -242,6 +248,8 @@ add_library(core STATIC
hle/kernel/k_system_control.h
hle/kernel/k_thread.cpp
hle/kernel/k_thread.h
hle/kernel/k_thread_local_page.cpp
hle/kernel/k_thread_local_page.h
hle/kernel/k_thread_queue.cpp
hle/kernel/k_thread_queue.h
hle/kernel/k_trace.h
@@ -298,6 +306,8 @@ add_library(core STATIC
hle/service/am/applets/applet_error.h
hle/service/am/applets/applet_general_backend.cpp
hle/service/am/applets/applet_general_backend.h
hle/service/am/applets/applet_mii.cpp
hle/service/am/applets/applet_mii.h
hle/service/am/applets/applet_profile_select.cpp
hle/service/am/applets/applet_profile_select.h
hle/service/am/applets/applet_software_keyboard.cpp

View File

@@ -137,6 +137,8 @@ std::shared_ptr<Dynarmic::A32::Jit> ARM_Dynarmic_32::MakeJit(Common::PageTable*
config.page_table_pointer_mask_bits = Common::PageTable::ATTRIBUTE_BITS;
config.detect_misaligned_access_via_page_table = 16 | 32 | 64 | 128;
config.only_detect_misalignment_via_page_table_on_page_boundary = true;
config.fastmem_exclusive_access = true;
config.recompile_on_exclusive_fastmem_failure = true;
// Multi-process state
config.processor_id = core_index;
@@ -146,8 +148,8 @@ std::shared_ptr<Dynarmic::A32::Jit> ARM_Dynarmic_32::MakeJit(Common::PageTable*
config.wall_clock_cntpct = uses_wall_clock;
// Code cache size
config.code_cache_size = 512_MiB;
config.far_code_offset = 400_MiB;
config.code_cache_size = 128_MiB;
config.far_code_offset = 100_MiB;
// Safe optimizations
if (Settings::values.cpu_debug_mode) {
@@ -178,6 +180,12 @@ std::shared_ptr<Dynarmic::A32::Jit> ARM_Dynarmic_32::MakeJit(Common::PageTable*
if (!Settings::values.cpuopt_fastmem) {
config.fastmem_pointer = nullptr;
}
if (!Settings::values.cpuopt_fastmem_exclusives) {
config.fastmem_exclusive_access = false;
}
if (!Settings::values.cpuopt_recompile_exclusives) {
config.recompile_on_exclusive_fastmem_failure = false;
}
}
// Unsafe optimizations
@@ -195,6 +203,9 @@ std::shared_ptr<Dynarmic::A32::Jit> ARM_Dynarmic_32::MakeJit(Common::PageTable*
if (Settings::values.cpuopt_unsafe_inaccurate_nan) {
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_InaccurateNaN;
}
if (Settings::values.cpuopt_unsafe_ignore_global_monitor) {
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_IgnoreGlobalMonitor;
}
}
// Curated optimizations
@@ -203,6 +214,7 @@ std::shared_ptr<Dynarmic::A32::Jit> ARM_Dynarmic_32::MakeJit(Common::PageTable*
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_UnfuseFMA;
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_IgnoreStandardFPCRValue;
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_InaccurateNaN;
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_IgnoreGlobalMonitor;
}
return std::make_unique<Dynarmic::A32::Jit>(config);

View File

@@ -185,6 +185,9 @@ std::shared_ptr<Dynarmic::A64::Jit> ARM_Dynarmic_64::MakeJit(Common::PageTable*
config.fastmem_pointer = page_table->fastmem_arena;
config.fastmem_address_space_bits = address_space_bits;
config.silently_mirror_fastmem = false;
config.fastmem_exclusive_access = true;
config.recompile_on_exclusive_fastmem_failure = true;
}
// Multi-process state
@@ -205,8 +208,8 @@ std::shared_ptr<Dynarmic::A64::Jit> ARM_Dynarmic_64::MakeJit(Common::PageTable*
config.wall_clock_cntpct = uses_wall_clock;
// Code cache size
config.code_cache_size = 512_MiB;
config.far_code_offset = 400_MiB;
config.code_cache_size = 128_MiB;
config.far_code_offset = 100_MiB;
// Safe optimizations
if (Settings::values.cpu_debug_mode) {
@@ -237,6 +240,12 @@ std::shared_ptr<Dynarmic::A64::Jit> ARM_Dynarmic_64::MakeJit(Common::PageTable*
if (!Settings::values.cpuopt_fastmem) {
config.fastmem_pointer = nullptr;
}
if (!Settings::values.cpuopt_fastmem_exclusives) {
config.fastmem_exclusive_access = false;
}
if (!Settings::values.cpuopt_recompile_exclusives) {
config.recompile_on_exclusive_fastmem_failure = false;
}
}
// Unsafe optimizations
@@ -254,6 +263,9 @@ std::shared_ptr<Dynarmic::A64::Jit> ARM_Dynarmic_64::MakeJit(Common::PageTable*
if (Settings::values.cpuopt_unsafe_fastmem_check) {
config.fastmem_address_space_bits = 64;
}
if (Settings::values.cpuopt_unsafe_ignore_global_monitor) {
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_IgnoreGlobalMonitor;
}
}
// Curated optimizations
@@ -262,6 +274,7 @@ std::shared_ptr<Dynarmic::A64::Jit> ARM_Dynarmic_64::MakeJit(Common::PageTable*
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_UnfuseFMA;
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_InaccurateNaN;
config.fastmem_address_space_bits = 64;
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_IgnoreGlobalMonitor;
}
return std::make_shared<Dynarmic::A64::Jit>(config);

View File

@@ -37,8 +37,8 @@ u128 DynarmicExclusiveMonitor::ExclusiveRead128(std::size_t core_index, VAddr ad
});
}
void DynarmicExclusiveMonitor::ClearExclusive() {
monitor.Clear();
void DynarmicExclusiveMonitor::ClearExclusive(std::size_t core_index) {
monitor.ClearProcessor(core_index);
}
bool DynarmicExclusiveMonitor::ExclusiveWrite8(std::size_t core_index, VAddr vaddr, u8 value) {

View File

@@ -4,8 +4,6 @@
#pragma once
#include <unordered_map>
#include <dynarmic/interface/exclusive_monitor.h>
#include "common/common_types.h"
@@ -29,7 +27,7 @@ public:
u32 ExclusiveRead32(std::size_t core_index, VAddr addr) override;
u64 ExclusiveRead64(std::size_t core_index, VAddr addr) override;
u128 ExclusiveRead128(std::size_t core_index, VAddr addr) override;
void ClearExclusive() override;
void ClearExclusive(std::size_t core_index) override;
bool ExclusiveWrite8(std::size_t core_index, VAddr vaddr, u8 value) override;
bool ExclusiveWrite16(std::size_t core_index, VAddr vaddr, u16 value) override;

View File

@@ -23,7 +23,7 @@ public:
virtual u32 ExclusiveRead32(std::size_t core_index, VAddr addr) = 0;
virtual u64 ExclusiveRead64(std::size_t core_index, VAddr addr) = 0;
virtual u128 ExclusiveRead128(std::size_t core_index, VAddr addr) = 0;
virtual void ClearExclusive() = 0;
virtual void ClearExclusive(std::size_t core_index) = 0;
virtual bool ExclusiveWrite8(std::size_t core_index, VAddr vaddr, u8 value) = 0;
virtual bool ExclusiveWrite16(std::size_t core_index, VAddr vaddr, u16 value) = 0;

View File

@@ -38,7 +38,6 @@
#include "core/hle/service/apm/apm_controller.h"
#include "core/hle/service/filesystem/filesystem.h"
#include "core/hle/service/glue/glue_manager.h"
#include "core/hle/service/hid/hid.h"
#include "core/hle/service/service.h"
#include "core/hle/service/sm/sm.h"
#include "core/hle/service/time/time_manager.h"
@@ -326,7 +325,9 @@ struct System::Impl {
is_powered_on = false;
exit_lock = false;
gpu_core->NotifyShutdown();
if (gpu_core != nullptr) {
gpu_core->NotifyShutdown();
}
services.reset();
service_manager.reset();

View File

@@ -5,7 +5,6 @@
#pragma once
#include <cstddef>
#include <iterator>
#include "common/common_funcs.h"
#include "common/common_types.h"

View File

@@ -10,7 +10,10 @@
#include "common/hex_util.h"
#include "common/logging/log.h"
#include "common/settings.h"
#ifndef _WIN32
#include "common/string_util.h"
#endif
#include "core/core.h"
#include "core/file_sys/common_funcs.h"
#include "core/file_sys/content_archive.h"

View File

@@ -6,7 +6,9 @@
#include <array>
#include <vector>
#include "common/bit_field.h"
#include "common/common_funcs.h"
#include "common/common_types.h"
#include "common/swap.h"
#include "core/file_sys/vfs_types.h"

View File

@@ -0,0 +1,19 @@
// Copyright 2022 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/logging/log.h"
#include "core/frontend/applets/mii.h"
namespace Core::Frontend {
MiiApplet::~MiiApplet() = default;
void DefaultMiiApplet::ShowMii(
const MiiParameters& parameters,
const std::function<void(const Core::Frontend::MiiParameters& parameters)> callback) const {
LOG_INFO(Service_HID, "(STUBBED) called");
callback(parameters);
}
} // namespace Core::Frontend

View File

@@ -0,0 +1,34 @@
// Copyright 2022 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <functional>
#include "core/hle/service/mii/mii_manager.h"
namespace Core::Frontend {
struct MiiParameters {
bool is_editable;
Service::Mii::MiiInfo mii_data{};
};
class MiiApplet {
public:
virtual ~MiiApplet();
virtual void ShowMii(const MiiParameters& parameters,
const std::function<void(const Core::Frontend::MiiParameters& parameters)>
callback) const = 0;
};
class DefaultMiiApplet final : public MiiApplet {
public:
void ShowMii(const MiiParameters& parameters,
const std::function<void(const Core::Frontend::MiiParameters& parameters)>
callback) const override;
};
} // namespace Core::Frontend
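The new applet interface is callback-based: the service side hands the frontend a MiiParameters plus a completion callback, and the stub DefaultMiiApplet simply echoes the parameters back. A sketch of what a custom frontend implementation might look like, assuming only the interface shown above (the include path, namespace, and class name here are illustrative):

#include "core/frontend/applets/mii.h"

namespace Example {

class LoggingMiiApplet final : public Core::Frontend::MiiApplet {
public:
    void ShowMii(const Core::Frontend::MiiParameters& parameters,
                 const std::function<void(const Core::Frontend::MiiParameters&)> callback)
        const override {
        // A real frontend would open an editor UI here; this sketch completes
        // immediately with the unmodified parameters, like DefaultMiiApplet.
        callback(parameters);
    }
};

} // namespace Example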

View File

@@ -42,11 +42,20 @@ public:
context.MakeCurrent();
}
~Scoped() {
context.DoneCurrent();
if (active) {
context.DoneCurrent();
}
}
/// In the event that context was destroyed before the Scoped is destroyed, this provides a
/// mechanism to prevent calling a destroyed object's method during the destructor
void Cancel() {
active = false;
}
private:
GraphicsContext& context;
bool active{true};
};
/// Calls MakeCurrent on the context and calls DoneCurrent when the scope for the returned value
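Usage note for the Cancel() addition above: when the GraphicsContext is torn down before a live Scoped unwinds, calling Cancel() keeps the destructor from invoking DoneCurrent() on an already-destroyed context. A sketch, assuming the surrounding GraphicsContext class and an Acquire()-style helper that returns a Scoped (neither is shown in this hunk):

void RenderOnce(GraphicsContext& context, bool context_is_being_destroyed) {
    auto scope = context.Acquire(); // assumed helper returning GraphicsContext::Scoped
    // ... submit work while the context is current ...
    if (context_is_being_destroyed) {
        scope.Cancel(); // the context will be gone first, so skip DoneCurrent()
    }
} // ~Scoped() calls DoneCurrent() only if Cancel() was never called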

View File

@@ -385,7 +385,7 @@ public:
T PopRaw();
template <class T>
std::shared_ptr<T> PopIpcInterface() {
std::weak_ptr<T> PopIpcInterface() {
ASSERT(context->Session()->IsDomain());
ASSERT(context->GetDomainMessageHeader().input_object_count > 0);
return context->GetDomainHandler<T>(Pop<u32>() - 1);

View File

@@ -0,0 +1,13 @@
// Copyright 2022 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "common/common_types.h"
namespace Kernel {
constexpr inline PAddr MainMemoryAddress = 0x80000000;
} // namespace Kernel

View File

@@ -39,6 +39,10 @@ Smc::MemoryArrangement GetMemoryArrangeForInit() {
}
} // namespace
size_t KSystemControl::Init::GetRealMemorySize() {
return GetIntendedMemorySize();
}
// Initialization.
size_t KSystemControl::Init::GetIntendedMemorySize() {
switch (GetMemorySizeForInit()) {
@@ -53,7 +57,13 @@ size_t KSystemControl::Init::GetIntendedMemorySize() {
}
PAddr KSystemControl::Init::GetKernelPhysicalBaseAddress(u64 base_address) {
return base_address;
const size_t real_dram_size = KSystemControl::Init::GetRealMemorySize();
const size_t intended_dram_size = KSystemControl::Init::GetIntendedMemorySize();
if (intended_dram_size * 2 < real_dram_size) {
return base_address;
} else {
return base_address + ((real_dram_size - intended_dram_size) / 2);
}
}
bool KSystemControl::Init::ShouldIncreaseThreadResourceLimit() {
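Worked example of the new base-address logic: if real_dram_size were 6 GiB and intended_dram_size 4 GiB, then intended * 2 = 8 GiB is not less than real, so the kernel base is shifted by (6 - 4) / 2 = 1 GiB to center the intended region within DRAM; only when the intended size is less than half the real size (say 4 GiB intended against 16 GiB real) does the base come back unchanged. As implemented here, GetRealMemorySize() simply forwards to GetIntendedMemorySize(), so the shift is currently zero; the split mirrors the kernel's interface and leaves room for the two sizes to diverge later.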

View File

@@ -13,6 +13,7 @@ public:
class Init {
public:
// Initialization.
static std::size_t GetRealMemorySize();
static std::size_t GetIntendedMemorySize();
static PAddr GetKernelPhysicalBaseAddress(u64 base_address);
static bool ShouldIncreaseThreadResourceLimit();

View File

@@ -17,7 +17,6 @@
#include "core/hle/kernel/k_auto_object.h"
#include "core/hle/kernel/k_handle_table.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/k_readable_event.h"
#include "core/hle/kernel/k_server_session.h"
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/kernel.h"
@@ -45,7 +44,7 @@ bool SessionRequestManager::HasSessionRequestHandler(const HLERequestContext& co
LOG_CRITICAL(IPC, "object_id {} is too big!", object_id);
return false;
}
return DomainHandler(object_id - 1) != nullptr;
return DomainHandler(object_id - 1).lock() != nullptr;
} else {
return session_handler != nullptr;
}
@@ -53,9 +52,6 @@ bool SessionRequestManager::HasSessionRequestHandler(const HLERequestContext& co
void SessionRequestHandler::ClientConnected(KServerSession* session) {
session->ClientConnected(shared_from_this());
// Ensure our server session is tracked globally.
kernel.RegisterServerSession(session);
}
void SessionRequestHandler::ClientDisconnected(KServerSession* session) {

View File

@@ -94,6 +94,7 @@ protected:
std::weak_ptr<ServiceThread> service_thread;
};
using SessionRequestHandlerWeakPtr = std::weak_ptr<SessionRequestHandler>;
using SessionRequestHandlerPtr = std::shared_ptr<SessionRequestHandler>;
/**
@@ -139,7 +140,7 @@ public:
}
}
SessionRequestHandlerPtr DomainHandler(std::size_t index) const {
SessionRequestHandlerWeakPtr DomainHandler(std::size_t index) const {
ASSERT_MSG(index < DomainHandlerCount(), "Unexpected handler index {}", index);
return domain_handlers.at(index);
}
@@ -328,10 +329,10 @@ public:
template <typename T>
std::shared_ptr<T> GetDomainHandler(std::size_t index) const {
return std::static_pointer_cast<T>(manager->DomainHandler(index));
return std::static_pointer_cast<T>(manager.lock()->DomainHandler(index).lock());
}
void SetSessionRequestManager(std::shared_ptr<SessionRequestManager> manager_) {
void SetSessionRequestManager(std::weak_ptr<SessionRequestManager> manager_) {
manager = std::move(manager_);
}
@@ -374,7 +375,7 @@ private:
u32 handles_offset{};
u32 domain_offset{};
std::shared_ptr<SessionRequestManager> manager;
std::weak_ptr<SessionRequestManager> manager;
KernelCore& kernel;
Core::Memory::Memory& memory;
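The ownership change above (the manager and domain handlers are now held as std::weak_ptr) means every consumer must lock() and null-check before dereferencing, and can no longer keep a handler alive on its own, which breaks shared_ptr cycles between sessions and their handlers. A self-contained sketch of that access pattern (Handler and Dispatch are illustrative names, not yuzu types):

#include <iostream>
#include <memory>

struct Handler {
    void Handle() { std::cout << "handled\n"; }
};

void Dispatch(const std::weak_ptr<Handler>& weak_handler) {
    if (auto handler = weak_handler.lock()) { // promotes to shared_ptr if still alive
        handler->Handle();
    } else {
        std::cout << "handler already destroyed\n";
    }
}

int main() {
    auto owner = std::make_shared<Handler>();
    std::weak_ptr<Handler> observer = owner;
    Dispatch(observer); // handled
    owner.reset();
    Dispatch(observer); // handler already destroyed
}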

View File

@@ -7,19 +7,23 @@
#include "common/common_funcs.h"
#include "common/common_types.h"
#include "core/core.h"
#include "core/device_memory.h"
#include "core/hardware_properties.h"
#include "core/hle/kernel/init/init_slab_setup.h"
#include "core/hle/kernel/k_code_memory.h"
#include "core/hle/kernel/k_event.h"
#include "core/hle/kernel/k_memory_layout.h"
#include "core/hle/kernel/k_memory_manager.h"
#include "core/hle/kernel/k_page_buffer.h"
#include "core/hle/kernel/k_port.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/k_resource_limit.h"
#include "core/hle/kernel/k_session.h"
#include "core/hle/kernel/k_shared_memory.h"
#include "core/hle/kernel/k_shared_memory_info.h"
#include "core/hle/kernel/k_system_control.h"
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/k_thread_local_page.h"
#include "core/hle/kernel/k_transfer_memory.h"
namespace Kernel::Init {
@@ -32,9 +36,13 @@ namespace Kernel::Init {
HANDLER(KEvent, (SLAB_COUNT(KEvent)), ##__VA_ARGS__) \
HANDLER(KPort, (SLAB_COUNT(KPort)), ##__VA_ARGS__) \
HANDLER(KSharedMemory, (SLAB_COUNT(KSharedMemory)), ##__VA_ARGS__) \
HANDLER(KSharedMemoryInfo, (SLAB_COUNT(KSharedMemory) * 8), ##__VA_ARGS__) \
HANDLER(KTransferMemory, (SLAB_COUNT(KTransferMemory)), ##__VA_ARGS__) \
HANDLER(KCodeMemory, (SLAB_COUNT(KCodeMemory)), ##__VA_ARGS__) \
HANDLER(KSession, (SLAB_COUNT(KSession)), ##__VA_ARGS__) \
HANDLER(KThreadLocalPage, \
(SLAB_COUNT(KProcess) + (SLAB_COUNT(KProcess) + SLAB_COUNT(KThread)) / 8), \
##__VA_ARGS__) \
HANDLER(KResourceLimit, (SLAB_COUNT(KResourceLimit)), ##__VA_ARGS__)
namespace {
@@ -50,38 +58,46 @@ enum KSlabType : u32 {
// Constexpr counts.
constexpr size_t SlabCountKProcess = 80;
constexpr size_t SlabCountKThread = 800;
constexpr size_t SlabCountKEvent = 700;
constexpr size_t SlabCountKEvent = 900;
constexpr size_t SlabCountKInterruptEvent = 100;
constexpr size_t SlabCountKPort = 256 + 0x20; // Extra 0x20 ports over Nintendo for homebrew.
constexpr size_t SlabCountKPort = 384;
constexpr size_t SlabCountKSharedMemory = 80;
constexpr size_t SlabCountKTransferMemory = 200;
constexpr size_t SlabCountKCodeMemory = 10;
constexpr size_t SlabCountKDeviceAddressSpace = 300;
constexpr size_t SlabCountKSession = 933;
constexpr size_t SlabCountKSession = 1133;
constexpr size_t SlabCountKLightSession = 100;
constexpr size_t SlabCountKObjectName = 7;
constexpr size_t SlabCountKResourceLimit = 5;
constexpr size_t SlabCountKDebug = Core::Hardware::NUM_CPU_CORES;
constexpr size_t SlabCountKAlpha = 1;
constexpr size_t SlabCountKBeta = 6;
constexpr size_t SlabCountKIoPool = 1;
constexpr size_t SlabCountKIoRegion = 6;
constexpr size_t SlabCountExtraKThread = 160;
/// Helper function to translate from the slab virtual address to the reserved location in physical
/// memory.
static PAddr TranslateSlabAddrToPhysical(KMemoryLayout& memory_layout, VAddr slab_addr) {
slab_addr -= memory_layout.GetSlabRegionAddress();
return slab_addr + Core::DramMemoryMap::SlabHeapBase;
}
template <typename T>
VAddr InitializeSlabHeap(Core::System& system, KMemoryLayout& memory_layout, VAddr address,
size_t num_objects) {
// TODO(bunnei): This is just a place holder. We should initialize the appropriate KSlabHeap for
// kernel object type T with the backing kernel memory pointer once we emulate kernel memory.
const size_t size = Common::AlignUp(sizeof(T) * num_objects, alignof(void*));
VAddr start = Common::AlignUp(address, alignof(T));
// This is intentionally empty. Once KSlabHeap is fully implemented, we can replace this with
// the pointer to emulated memory to pass along. Until then, KSlabHeap will just allocate/free
// host memory.
void* backing_kernel_memory{};
// This should use the virtual memory address passed in, but currently, we do not set up the
// kernel virtual memory layout. Instead, we simply map these at a region of physical memory
// that we reserve for the slab heaps.
// TODO(bunnei): Fix this once we support the kernel virtual memory layout.
if (size > 0) {
void* backing_kernel_memory{
system.DeviceMemory().GetPointer(TranslateSlabAddrToPhysical(memory_layout, start))};
const KMemoryRegion* region = memory_layout.FindVirtual(start + size - 1);
ASSERT(region != nullptr);
ASSERT(region->IsDerivedFrom(KMemoryRegionType_KernelSlab));
@@ -91,6 +107,12 @@ VAddr InitializeSlabHeap(Core::System& system, KMemoryLayout& memory_layout, VAd
return start + size;
}
size_t CalculateSlabHeapGapSize() {
constexpr size_t KernelSlabHeapGapSize = 2_MiB - 296_KiB;
static_assert(KernelSlabHeapGapSize <= KernelSlabHeapGapsSizeMax);
return KernelSlabHeapGapSize;
}
} // namespace
KSlabResourceCounts KSlabResourceCounts::CreateDefault() {
@@ -109,8 +131,8 @@ KSlabResourceCounts KSlabResourceCounts::CreateDefault() {
.num_KObjectName = SlabCountKObjectName,
.num_KResourceLimit = SlabCountKResourceLimit,
.num_KDebug = SlabCountKDebug,
.num_KAlpha = SlabCountKAlpha,
.num_KBeta = SlabCountKBeta,
.num_KIoPool = SlabCountKIoPool,
.num_KIoRegion = SlabCountKIoRegion,
};
}
@@ -136,11 +158,34 @@ size_t CalculateTotalSlabHeapSize(const KernelCore& kernel) {
#undef ADD_SLAB_SIZE
// Add the reserved size.
size += KernelSlabHeapGapsSize;
size += CalculateSlabHeapGapSize();
return size;
}
void InitializeKPageBufferSlabHeap(Core::System& system) {
auto& kernel = system.Kernel();
const auto& counts = kernel.SlabResourceCounts();
const size_t num_pages =
counts.num_KProcess + counts.num_KThread + (counts.num_KProcess + counts.num_KThread) / 8;
const size_t slab_size = num_pages * PageSize;
// Reserve memory from the system resource limit.
ASSERT(kernel.GetSystemResourceLimit()->Reserve(LimitableResource::PhysicalMemory, slab_size));
// Allocate memory for the slab.
constexpr auto AllocateOption = KMemoryManager::EncodeOption(
KMemoryManager::Pool::System, KMemoryManager::Direction::FromFront);
const PAddr slab_address =
kernel.MemoryManager().AllocateAndOpenContinuous(num_pages, 1, AllocateOption);
ASSERT(slab_address != 0);
// Initialize the slabheap.
KPageBuffer::InitializeSlabHeap(kernel, system.DeviceMemory().GetPointer(slab_address),
slab_size);
}
void InitializeSlabHeaps(Core::System& system, KMemoryLayout& memory_layout) {
auto& kernel = system.Kernel();
@@ -160,13 +205,13 @@ void InitializeSlabHeaps(Core::System& system, KMemoryLayout& memory_layout) {
}
// Create an array to represent the gaps between the slabs.
const size_t total_gap_size = KernelSlabHeapGapsSize;
const size_t total_gap_size = CalculateSlabHeapGapSize();
std::array<size_t, slab_types.size()> slab_gaps;
for (size_t i = 0; i < slab_gaps.size(); i++) {
for (auto& slab_gap : slab_gaps) {
// Note: This is an off-by-one error from Nintendo's intention, because GenerateRandomRange
// is inclusive. However, Nintendo also has the off-by-one error, and it's "harmless", so we
// will include it ourselves.
slab_gaps[i] = KSystemControl::GenerateRandomRange(0, total_gap_size);
slab_gap = KSystemControl::GenerateRandomRange(0, total_gap_size);
}
// Sort the array, so that we can treat differences between values as offsets to the starts of
@@ -177,13 +222,21 @@ void InitializeSlabHeaps(Core::System& system, KMemoryLayout& memory_layout) {
}
}
for (size_t i = 0; i < slab_types.size(); i++) {
// Track the gaps, so that we can free them to the unused slab tree.
VAddr gap_start = address;
size_t gap_size = 0;
for (size_t i = 0; i < slab_gaps.size(); i++) {
// Add the random gap to the address.
address += (i == 0) ? slab_gaps[0] : slab_gaps[i] - slab_gaps[i - 1];
const auto cur_gap = (i == 0) ? slab_gaps[0] : slab_gaps[i] - slab_gaps[i - 1];
address += cur_gap;
gap_size += cur_gap;
#define INITIALIZE_SLAB_HEAP(NAME, COUNT, ...) \
case KSlabType_##NAME: \
address = InitializeSlabHeap<NAME>(system, memory_layout, address, COUNT); \
if (COUNT > 0) { \
address = InitializeSlabHeap<NAME>(system, memory_layout, address, COUNT); \
} \
break;
// Initialize the slabheap.
@@ -192,7 +245,13 @@ void InitializeSlabHeaps(Core::System& system, KMemoryLayout& memory_layout) {
FOREACH_SLAB_TYPE(INITIALIZE_SLAB_HEAP)
// If we somehow get an invalid type, abort.
default:
UNREACHABLE();
UNREACHABLE_MSG("Unknown slab type: {}", slab_types[i]);
}
// If we've hit the end of a gap, free it.
if (gap_start + gap_size != address) {
gap_start = address;
gap_size = 0;
}
}
}
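Quick arithmetic on the new KPageBuffer slab, using the constexpr counts above (80 KProcess, 800 KThread) and assuming the usual 4 KiB PageSize: num_pages = 80 + 800 + (80 + 800) / 8 = 990, so slab_size = 990 * 4 KiB = 3960 KiB, a little under 4 MiB. That is the amount reserved against LimitableResource::PhysicalMemory before the pages are allocated contiguously from the System pool and handed to KPageBuffer::InitializeSlabHeap.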

View File

@@ -32,12 +32,13 @@ struct KSlabResourceCounts {
size_t num_KObjectName;
size_t num_KResourceLimit;
size_t num_KDebug;
size_t num_KAlpha;
size_t num_KBeta;
size_t num_KIoPool;
size_t num_KIoRegion;
};
void InitializeSlabResourceCounts(KernelCore& kernel);
size_t CalculateTotalSlabHeapSize(const KernelCore& kernel);
void InitializeKPageBufferSlabHeap(Core::System& system);
void InitializeSlabHeaps(Core::System& system, KMemoryLayout& memory_layout);
} // namespace Kernel::Init

View File

@@ -0,0 +1,23 @@
// Copyright 2022 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "common/common_types.h"
#include "common/literals.h"
#include "core/hle/kernel/board/nintendo/nx/k_memory_layout.h"
#include "core/hle/kernel/board/nintendo/nx/k_system_control.h"
namespace Kernel {
using namespace Common::Literals;
constexpr std::size_t InitialProcessBinarySizeMax = 12_MiB;
static inline PAddr GetInitialProcessBinaryPhysicalAddress() {
return Kernel::Board::Nintendo::Nx::KSystemControl::Init::GetKernelPhysicalBaseAddress(
MainMemoryAddress);
}
} // namespace Kernel

View File

@@ -49,7 +49,7 @@ bool DecrementIfLessThan(Core::System& system, s32* out, VAddr address, s32 valu
}
} else {
// Otherwise, clear our exclusive hold and finish
monitor.ClearExclusive();
monitor.ClearExclusive(current_core);
}
// We're done.
@@ -78,7 +78,7 @@ bool UpdateIfEqual(Core::System& system, s32* out, VAddr address, s32 value, s32
}
} else {
// Otherwise, clear our exclusive hold and finish.
monitor.ClearExclusive();
monitor.ClearExclusive(current_core);
}
// We're done.
@@ -115,7 +115,7 @@ ResultCode KAddressArbiter::Signal(VAddr addr, s32 count) {
{
KScopedSchedulerLock sl(kernel);
auto it = thread_tree.nfind_light({addr, -1});
auto it = thread_tree.nfind_key({addr, -1});
while ((it != thread_tree.end()) && (count <= 0 || num_waiters < count) &&
(it->GetAddressArbiterKey() == addr)) {
// End the thread's wait.
@@ -148,7 +148,7 @@ ResultCode KAddressArbiter::SignalAndIncrementIfEqual(VAddr addr, s32 value, s32
return ResultInvalidState;
}
auto it = thread_tree.nfind_light({addr, -1});
auto it = thread_tree.nfind_key({addr, -1});
while ((it != thread_tree.end()) && (count <= 0 || num_waiters < count) &&
(it->GetAddressArbiterKey() == addr)) {
// End the thread's wait.
@@ -171,7 +171,7 @@ ResultCode KAddressArbiter::SignalAndModifyByWaitingCountIfEqual(VAddr addr, s32
{
[[maybe_unused]] const KScopedSchedulerLock sl(kernel);
auto it = thread_tree.nfind_light({addr, -1});
auto it = thread_tree.nfind_key({addr, -1});
// Determine the updated value.
s32 new_value{};
if (count <= 0) {

View File

@@ -4,8 +4,6 @@
#pragma once
#include <atomic>
#include "common/bit_util.h"
#include "common/common_types.h"

View File

@@ -5,7 +5,6 @@
#include "common/alignment.h"
#include "common/common_types.h"
#include "core/device_memory.h"
#include "core/hle/kernel/k_auto_object.h"
#include "core/hle/kernel/k_code_memory.h"
#include "core/hle/kernel/k_light_lock.h"
#include "core/hle/kernel/k_memory_block.h"

View File

@@ -9,7 +9,6 @@
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/k_scoped_scheduler_lock_and_sleep.h"
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/k_thread_queue.h"
#include "core/hle/kernel/kernel.h"
@@ -244,7 +243,7 @@ void KConditionVariable::Signal(u64 cv_key, s32 count) {
{
KScopedSchedulerLock sl(kernel);
auto it = thread_tree.nfind_light({cv_key, -1});
auto it = thread_tree.nfind_key({cv_key, -1});
while ((it != thread_tree.end()) && (count <= 0 || num_waiters < count) &&
(it->GetConditionVariableKey() == cv_key)) {
KThread* target_thread = std::addressof(*it);

View File

@@ -63,7 +63,7 @@ bool KHandleTable::Remove(Handle handle) {
return true;
}
ResultCode KHandleTable::Add(Handle* out_handle, KAutoObject* obj, u16 type) {
ResultCode KHandleTable::Add(Handle* out_handle, KAutoObject* obj) {
KScopedDisableDispatch dd(kernel);
KScopedSpinLock lk(m_lock);
@@ -75,7 +75,7 @@ ResultCode KHandleTable::Add(Handle* out_handle, KAutoObject* obj, u16 type) {
const auto linear_id = this->AllocateLinearId();
const auto index = this->AllocateEntry();
m_entry_infos[index].info = {.linear_id = linear_id, .type = type};
m_entry_infos[index].linear_id = linear_id;
m_objects[index] = obj;
obj->Open();
@@ -116,7 +116,7 @@ void KHandleTable::Unreserve(Handle handle) {
}
}
void KHandleTable::Register(Handle handle, KAutoObject* obj, u16 type) {
void KHandleTable::Register(Handle handle, KAutoObject* obj) {
KScopedDisableDispatch dd(kernel);
KScopedSpinLock lk(m_lock);
@@ -132,7 +132,7 @@ void KHandleTable::Register(Handle handle, KAutoObject* obj, u16 type) {
// Set the entry.
ASSERT(m_objects[index] == nullptr);
m_entry_infos[index].info = {.linear_id = static_cast<u16>(linear_id), .type = type};
m_entry_infos[index].linear_id = static_cast<u16>(linear_id);
m_objects[index] = obj;
obj->Open();

View File

@@ -42,7 +42,7 @@ public:
m_free_head_index = -1;
// Free all entries.
for (s32 i = 0; i < static_cast<s32>(m_table_size); ++i) {
for (s16 i = 0; i < static_cast<s16>(m_table_size); ++i) {
m_objects[i] = nullptr;
m_entry_infos[i].next_free_index = i - 1;
m_free_head_index = i;
@@ -104,17 +104,8 @@ public:
ResultCode Reserve(Handle* out_handle);
void Unreserve(Handle handle);
template <typename T>
ResultCode Add(Handle* out_handle, T* obj) {
static_assert(std::is_base_of_v<KAutoObject, T>);
return this->Add(out_handle, obj, obj->GetTypeObj().GetClassToken());
}
template <typename T>
void Register(Handle handle, T* obj) {
static_assert(std::is_base_of_v<KAutoObject, T>);
return this->Register(handle, obj, obj->GetTypeObj().GetClassToken());
}
ResultCode Add(Handle* out_handle, KAutoObject* obj);
void Register(Handle handle, KAutoObject* obj);
template <typename T>
bool GetMultipleObjects(T** out, const Handle* handles, size_t num_handles) const {
@@ -160,9 +151,6 @@ public:
}
private:
ResultCode Add(Handle* out_handle, KAutoObject* obj, u16 type);
void Register(Handle handle, KAutoObject* obj, u16 type);
s32 AllocateEntry() {
ASSERT(m_count < m_table_size);
@@ -179,7 +167,7 @@ private:
ASSERT(m_count > 0);
m_objects[index] = nullptr;
m_entry_infos[index].next_free_index = m_free_head_index;
m_entry_infos[index].next_free_index = static_cast<s16>(m_free_head_index);
m_free_head_index = index;
@@ -278,19 +266,13 @@ private:
}
union EntryInfo {
struct {
u16 linear_id;
u16 type;
} info;
s32 next_free_index;
u16 linear_id;
s16 next_free_index;
constexpr u16 GetLinearId() const {
return info.linear_id;
return linear_id;
}
constexpr u16 GetType() const {
return info.type;
}
constexpr s32 GetNextFreeIndex() const {
constexpr s16 GetNextFreeIndex() const {
return next_free_index;
}
};
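The reworked EntryInfo drops the packed {linear_id, type} pair: a slot now stores a u16 linear id while in use and an s16 next-free index while free, which is also why the free-list indices above were narrowed to s16. A toy model of that storage scheme (a simplified illustration, not yuzu's actual class):

#include <array>
#include <cstdint>

// Each slot holds either the in-use linear id or, while free, the index of the next free slot.
union ToyEntryInfo {
    std::uint16_t linear_id;
    std::int16_t next_free_index;
};

struct ToyTable {
    std::array<ToyEntryInfo, 8> entries{};
    std::int16_t free_head = -1;

    void FreeAll() {
        for (std::int16_t i = 0; i < static_cast<std::int16_t>(entries.size()); ++i) {
            entries[i].next_free_index = i - 1; // slot 0 terminates the list with -1
            free_head = i;
        }
    }

    std::int16_t Allocate(std::uint16_t id) {
        // Assumes the free list is not empty; the real table checks this first.
        const std::int16_t index = free_head;
        free_head = entries[index].next_free_index;
        entries[index].linear_id = id; // the same storage is reused for the id
        return index;
    }
};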

View File

@@ -57,11 +57,11 @@ constexpr std::size_t KernelPageTableHeapSize = GetMaximumOverheadSize(MainMemor
constexpr std::size_t KernelInitialPageHeapSize = 128_KiB;
constexpr std::size_t KernelSlabHeapDataSize = 5_MiB;
constexpr std::size_t KernelSlabHeapGapsSize = 2_MiB - 64_KiB;
constexpr std::size_t KernelSlabHeapSize = KernelSlabHeapDataSize + KernelSlabHeapGapsSize;
constexpr std::size_t KernelSlabHeapGapsSizeMax = 2_MiB - 64_KiB;
constexpr std::size_t KernelSlabHeapSize = KernelSlabHeapDataSize + KernelSlabHeapGapsSizeMax;
// NOTE: This is calculated from KThread slab counts, assuming KThread size <= 0x860.
constexpr std::size_t KernelSlabHeapAdditionalSize = 416_KiB;
constexpr std::size_t KernelSlabHeapAdditionalSize = 0x68000;
constexpr std::size_t KernelResourceSize =
KernelPageTableHeapSize + KernelInitialPageHeapSize + KernelSlabHeapSize;
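The literal change to KernelSlabHeapAdditionalSize above is purely notational; a quick compile-time check (not part of the diff) that the value is unchanged:

static_assert(0x68000 == 416 * 1024); // KernelSlabHeapAdditionalSize is still 416 KiB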
@@ -173,6 +173,10 @@ public:
return Dereference(FindVirtualLinear(address));
}
const KMemoryRegion& GetPhysicalLinearRegion(PAddr address) const {
return Dereference(FindPhysicalLinear(address));
}
const KMemoryRegion* GetPhysicalKernelTraceBufferRegion() const {
return GetPhysicalMemoryRegionTree().FindFirstDerived(KMemoryRegionType_KernelTraceBuffer);
}

View File

@@ -10,189 +10,411 @@
#include "common/scope_exit.h"
#include "core/core.h"
#include "core/device_memory.h"
#include "core/hle/kernel/initial_process.h"
#include "core/hle/kernel/k_memory_manager.h"
#include "core/hle/kernel/k_page_linked_list.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/svc_results.h"
namespace Kernel {
KMemoryManager::KMemoryManager(Core::System& system_) : system{system_} {}
namespace {
std::size_t KMemoryManager::Impl::Initialize(Pool new_pool, u64 start_address, u64 end_address) {
const auto size{end_address - start_address};
// Calculate metadata sizes
const auto ref_count_size{(size / PageSize) * sizeof(u16)};
const auto optimize_map_size{(Common::AlignUp((size / PageSize), 64) / 64) * sizeof(u64)};
const auto manager_size{Common::AlignUp(optimize_map_size + ref_count_size, PageSize)};
const auto page_heap_size{KPageHeap::CalculateManagementOverheadSize(size)};
const auto total_metadata_size{manager_size + page_heap_size};
ASSERT(manager_size <= total_metadata_size);
ASSERT(Common::IsAligned(total_metadata_size, PageSize));
// Setup region
pool = new_pool;
// Initialize the manager's KPageHeap
heap.Initialize(start_address, size, page_heap_size);
// Free the memory to the heap
heap.Free(start_address, size / PageSize);
// Update the heap's used size
heap.UpdateUsedSize();
return total_metadata_size;
constexpr KMemoryManager::Pool GetPoolFromMemoryRegionType(u32 type) {
if ((type | KMemoryRegionType_DramApplicationPool) == type) {
return KMemoryManager::Pool::Application;
} else if ((type | KMemoryRegionType_DramAppletPool) == type) {
return KMemoryManager::Pool::Applet;
} else if ((type | KMemoryRegionType_DramSystemPool) == type) {
return KMemoryManager::Pool::System;
} else if ((type | KMemoryRegionType_DramSystemNonSecurePool) == type) {
return KMemoryManager::Pool::SystemNonSecure;
} else {
UNREACHABLE_MSG("InvalidMemoryRegionType for conversion to Pool");
return {};
}
}
void KMemoryManager::InitializeManager(Pool pool, u64 start_address, u64 end_address) {
ASSERT(pool < Pool::Count);
managers[static_cast<std::size_t>(pool)].Initialize(pool, start_address, end_address);
} // namespace
KMemoryManager::KMemoryManager(Core::System& system_)
: system{system_}, pool_locks{
KLightLock{system_.Kernel()},
KLightLock{system_.Kernel()},
KLightLock{system_.Kernel()},
KLightLock{system_.Kernel()},
} {}
void KMemoryManager::Initialize(VAddr management_region, size_t management_region_size) {
// Clear the management region to zero.
const VAddr management_region_end = management_region + management_region_size;
// Reset our manager count.
num_managers = 0;
// Traverse the virtual memory layout tree, initializing each manager as appropriate.
while (num_managers != MaxManagerCount) {
// Locate the region that should initialize the current manager.
PAddr region_address = 0;
size_t region_size = 0;
Pool region_pool = Pool::Count;
for (const auto& it : system.Kernel().MemoryLayout().GetPhysicalMemoryRegionTree()) {
// We only care about regions that we need to create managers for.
if (!it.IsDerivedFrom(KMemoryRegionType_DramUserPool)) {
continue;
}
// We want to initialize the managers in order.
if (it.GetAttributes() != num_managers) {
continue;
}
const PAddr cur_start = it.GetAddress();
const PAddr cur_end = it.GetEndAddress();
// Validate the region.
ASSERT(cur_end != 0);
ASSERT(cur_start != 0);
ASSERT(it.GetSize() > 0);
// Update the region's extents.
if (region_address == 0) {
region_address = cur_start;
region_size = it.GetSize();
region_pool = GetPoolFromMemoryRegionType(it.GetType());
} else {
ASSERT(cur_start == region_address + region_size);
// Update the size.
region_size = cur_end - region_address;
ASSERT(GetPoolFromMemoryRegionType(it.GetType()) == region_pool);
}
}
// If we didn't find a region, we're done.
if (region_size == 0) {
break;
}
// Initialize a new manager for the region.
Impl* manager = std::addressof(managers[num_managers++]);
ASSERT(num_managers <= managers.size());
const size_t cur_size = manager->Initialize(region_address, region_size, management_region,
management_region_end, region_pool);
management_region += cur_size;
ASSERT(management_region <= management_region_end);
// Insert the manager into the pool list.
const auto region_pool_index = static_cast<u32>(region_pool);
if (pool_managers_tail[region_pool_index] == nullptr) {
pool_managers_head[region_pool_index] = manager;
} else {
pool_managers_tail[region_pool_index]->SetNext(manager);
manager->SetPrev(pool_managers_tail[region_pool_index]);
}
pool_managers_tail[region_pool_index] = manager;
}
// Free each region to its corresponding heap.
size_t reserved_sizes[MaxManagerCount] = {};
const PAddr ini_start = GetInitialProcessBinaryPhysicalAddress();
const PAddr ini_end = ini_start + InitialProcessBinarySizeMax;
const PAddr ini_last = ini_end - 1;
for (const auto& it : system.Kernel().MemoryLayout().GetPhysicalMemoryRegionTree()) {
if (it.IsDerivedFrom(KMemoryRegionType_DramUserPool)) {
// Get the manager for the region.
auto index = it.GetAttributes();
auto& manager = managers[index];
const PAddr cur_start = it.GetAddress();
const PAddr cur_last = it.GetLastAddress();
const PAddr cur_end = it.GetEndAddress();
if (cur_start <= ini_start && ini_last <= cur_last) {
// Free memory before the ini to the heap.
if (cur_start != ini_start) {
manager.Free(cur_start, (ini_start - cur_start) / PageSize);
}
// Open/reserve the ini memory.
manager.OpenFirst(ini_start, InitialProcessBinarySizeMax / PageSize);
reserved_sizes[it.GetAttributes()] += InitialProcessBinarySizeMax;
// Free memory after the ini to the heap.
if (ini_last != cur_last) {
ASSERT(cur_end != 0);
manager.Free(ini_end, cur_end - ini_end);
}
} else {
// Ensure there's no partial overlap with the ini image.
if (cur_start <= ini_last) {
ASSERT(cur_last < ini_start);
} else {
// Otherwise, check the region for general validity.
ASSERT(cur_end != 0);
}
// Free the memory to the heap.
manager.Free(cur_start, it.GetSize() / PageSize);
}
}
}
// Update the used size for all managers.
for (size_t i = 0; i < num_managers; ++i) {
managers[i].SetInitialUsedHeapSize(reserved_sizes[i]);
}
}
VAddr KMemoryManager::AllocateAndOpenContinuous(std::size_t num_pages, std::size_t align_pages,
u32 option) {
// Early return if we're allocating no pages
PAddr KMemoryManager::AllocateAndOpenContinuous(size_t num_pages, size_t align_pages, u32 option) {
// Early return if we're allocating no pages.
if (num_pages == 0) {
return {};
return 0;
}
// Lock the pool that we're allocating from
// Lock the pool that we're allocating from.
const auto [pool, dir] = DecodeOption(option);
const auto pool_index{static_cast<std::size_t>(pool)};
std::lock_guard lock{pool_locks[pool_index]};
KScopedLightLock lk(pool_locks[static_cast<std::size_t>(pool)]);
// Choose a heap based on our page size request
const s32 heap_index{KPageHeap::GetAlignedBlockIndex(num_pages, align_pages)};
// Choose a heap based on our page size request.
const s32 heap_index = KPageHeap::GetAlignedBlockIndex(num_pages, align_pages);
// Loop, trying to iterate from each block
// TODO (bunnei): Support multiple managers
Impl& chosen_manager{managers[pool_index]};
VAddr allocated_block{chosen_manager.AllocateBlock(heap_index, false)};
// If we failed to allocate, quit now
if (!allocated_block) {
return {};
// Loop, trying to iterate from each block.
Impl* chosen_manager = nullptr;
PAddr allocated_block = 0;
for (chosen_manager = this->GetFirstManager(pool, dir); chosen_manager != nullptr;
chosen_manager = this->GetNextManager(chosen_manager, dir)) {
allocated_block = chosen_manager->AllocateBlock(heap_index, true);
if (allocated_block != 0) {
break;
}
}
// If we allocated more than we need, free some
const auto allocated_pages{KPageHeap::GetBlockNumPages(heap_index)};
// If we failed to allocate, quit now.
if (allocated_block == 0) {
return 0;
}
// If we allocated more than we need, free some.
const size_t allocated_pages = KPageHeap::GetBlockNumPages(heap_index);
if (allocated_pages > num_pages) {
chosen_manager.Free(allocated_block + num_pages * PageSize, allocated_pages - num_pages);
chosen_manager->Free(allocated_block + num_pages * PageSize, allocated_pages - num_pages);
}
// Open the first reference to the pages.
chosen_manager->OpenFirst(allocated_block, num_pages);
return allocated_block;
}
ResultCode KMemoryManager::Allocate(KPageLinkedList& page_list, std::size_t num_pages, Pool pool,
Direction dir, u32 heap_fill_value) {
ASSERT(page_list.GetNumPages() == 0);
ResultCode KMemoryManager::AllocatePageGroupImpl(KPageLinkedList* out, size_t num_pages, Pool pool,
Direction dir, bool random) {
// Choose a heap based on our page size request.
const s32 heap_index = KPageHeap::GetBlockIndex(num_pages);
R_UNLESS(0 <= heap_index, ResultOutOfMemory);
// Early return if we're allocating no pages
if (num_pages == 0) {
return ResultSuccess;
}
// Lock the pool that we're allocating from
const auto pool_index{static_cast<std::size_t>(pool)};
std::lock_guard lock{pool_locks[pool_index]};
// Choose a heap based on our page size request
const s32 heap_index{KPageHeap::GetBlockIndex(num_pages)};
if (heap_index < 0) {
return ResultOutOfMemory;
}
// TODO (bunnei): Support multiple managers
Impl& chosen_manager{managers[pool_index]};
// Ensure that we don't leave anything un-freed
auto group_guard = detail::ScopeExit([&] {
for (const auto& it : page_list.Nodes()) {
const auto min_num_pages{std::min<size_t>(
it.GetNumPages(), (chosen_manager.GetEndAddress() - it.GetAddress()) / PageSize)};
chosen_manager.Free(it.GetAddress(), min_num_pages);
// Ensure that we don't leave anything un-freed.
auto group_guard = SCOPE_GUARD({
for (const auto& it : out->Nodes()) {
auto& manager = this->GetManager(system.Kernel().MemoryLayout(), it.GetAddress());
const size_t num_pages_to_free =
std::min(it.GetNumPages(), (manager.GetEndAddress() - it.GetAddress()) / PageSize);
manager.Free(it.GetAddress(), num_pages_to_free);
}
});
// Keep allocating until we've allocated all our pages
for (s32 index{heap_index}; index >= 0 && num_pages > 0; index--) {
const auto pages_per_alloc{KPageHeap::GetBlockNumPages(index)};
while (num_pages >= pages_per_alloc) {
// Allocate a block
VAddr allocated_block{chosen_manager.AllocateBlock(index, false)};
if (!allocated_block) {
break;
}
// Safely add it to our group
{
auto block_guard = detail::ScopeExit(
[&] { chosen_manager.Free(allocated_block, pages_per_alloc); });
if (const ResultCode result{page_list.AddBlock(allocated_block, pages_per_alloc)};
result.IsError()) {
return result;
// Keep allocating until we've allocated all our pages.
for (s32 index = heap_index; index >= 0 && num_pages > 0; index--) {
const size_t pages_per_alloc = KPageHeap::GetBlockNumPages(index);
for (Impl* cur_manager = this->GetFirstManager(pool, dir); cur_manager != nullptr;
cur_manager = this->GetNextManager(cur_manager, dir)) {
while (num_pages >= pages_per_alloc) {
// Allocate a block.
PAddr allocated_block = cur_manager->AllocateBlock(index, random);
if (allocated_block == 0) {
break;
}
block_guard.Cancel();
}
// Safely add it to our group.
{
auto block_guard =
SCOPE_GUARD({ cur_manager->Free(allocated_block, pages_per_alloc); });
R_TRY(out->AddBlock(allocated_block, pages_per_alloc));
block_guard.Cancel();
}
num_pages -= pages_per_alloc;
num_pages -= pages_per_alloc;
}
}
}
// Clear allocated memory.
for (const auto& it : page_list.Nodes()) {
std::memset(system.DeviceMemory().GetPointer(it.GetAddress()), heap_fill_value,
it.GetSize());
}
// Only succeed if we allocated as many pages as we wanted
if (num_pages) {
return ResultOutOfMemory;
}
// Only succeed if we allocated as many pages as we wanted.
R_UNLESS(num_pages == 0, ResultOutOfMemory);
// We succeeded!
group_guard.Cancel();
return ResultSuccess;
}
ResultCode KMemoryManager::Free(KPageLinkedList& page_list, std::size_t num_pages, Pool pool,
Direction dir, u32 heap_fill_value) {
// Early return if we're freeing no pages
if (!num_pages) {
return ResultSuccess;
}
ResultCode KMemoryManager::AllocateAndOpen(KPageLinkedList* out, size_t num_pages, u32 option) {
ASSERT(out != nullptr);
ASSERT(out->GetNumPages() == 0);
// Lock the pool that we're freeing from
const auto pool_index{static_cast<std::size_t>(pool)};
std::lock_guard lock{pool_locks[pool_index]};
// Early return if we're allocating no pages.
R_SUCCEED_IF(num_pages == 0);
// TODO (bunnei): Support multiple managers
Impl& chosen_manager{managers[pool_index]};
// Lock the pool that we're allocating from.
const auto [pool, dir] = DecodeOption(option);
KScopedLightLock lk(pool_locks[static_cast<size_t>(pool)]);
// Free all of the pages
for (const auto& it : page_list.Nodes()) {
const auto min_num_pages{std::min<size_t>(
it.GetNumPages(), (chosen_manager.GetEndAddress() - it.GetAddress()) / PageSize)};
chosen_manager.Free(it.GetAddress(), min_num_pages);
// Allocate the page group.
R_TRY(this->AllocatePageGroupImpl(out, num_pages, pool, dir, false));
// Open the first reference to the pages.
for (const auto& block : out->Nodes()) {
PAddr cur_address = block.GetAddress();
size_t remaining_pages = block.GetNumPages();
while (remaining_pages > 0) {
// Get the manager for the current address.
auto& manager = this->GetManager(system.Kernel().MemoryLayout(), cur_address);
// Process part or all of the block.
const size_t cur_pages =
std::min(remaining_pages, manager.GetPageOffsetToEnd(cur_address));
manager.OpenFirst(cur_address, cur_pages);
// Advance.
cur_address += cur_pages * PageSize;
remaining_pages -= cur_pages;
}
}
return ResultSuccess;
}
std::size_t KMemoryManager::Impl::CalculateManagementOverheadSize(std::size_t region_size) {
const std::size_t ref_count_size = (region_size / PageSize) * sizeof(u16);
const std::size_t optimize_map_size =
ResultCode KMemoryManager::AllocateAndOpenForProcess(KPageLinkedList* out, size_t num_pages,
u32 option, u64 process_id, u8 fill_pattern) {
ASSERT(out != nullptr);
ASSERT(out->GetNumPages() == 0);
// Decode the option.
const auto [pool, dir] = DecodeOption(option);
// Allocate the memory.
{
// Lock the pool that we're allocating from.
KScopedLightLock lk(pool_locks[static_cast<size_t>(pool)]);
// Allocate the page group.
R_TRY(this->AllocatePageGroupImpl(out, num_pages, pool, dir, false));
// Open the first reference to the pages.
for (const auto& block : out->Nodes()) {
PAddr cur_address = block.GetAddress();
size_t remaining_pages = block.GetNumPages();
while (remaining_pages > 0) {
// Get the manager for the current address.
auto& manager = this->GetManager(system.Kernel().MemoryLayout(), cur_address);
// Process part or all of the block.
const size_t cur_pages =
std::min(remaining_pages, manager.GetPageOffsetToEnd(cur_address));
manager.OpenFirst(cur_address, cur_pages);
// Advance.
cur_address += cur_pages * PageSize;
remaining_pages -= cur_pages;
}
}
}
// Set all the allocated memory.
for (const auto& block : out->Nodes()) {
std::memset(system.DeviceMemory().GetPointer(block.GetAddress()), fill_pattern,
block.GetSize());
}
return ResultSuccess;
}
void KMemoryManager::Open(PAddr address, size_t num_pages) {
// Repeatedly open references until we've done so for all pages.
while (num_pages) {
auto& manager = this->GetManager(system.Kernel().MemoryLayout(), address);
const size_t cur_pages = std::min(num_pages, manager.GetPageOffsetToEnd(address));
{
KScopedLightLock lk(pool_locks[static_cast<size_t>(manager.GetPool())]);
manager.Open(address, cur_pages);
}
num_pages -= cur_pages;
address += cur_pages * PageSize;
}
}
void KMemoryManager::Close(PAddr address, size_t num_pages) {
// Repeatedly close references until we've done so for all pages.
while (num_pages) {
auto& manager = this->GetManager(system.Kernel().MemoryLayout(), address);
const size_t cur_pages = std::min(num_pages, manager.GetPageOffsetToEnd(address));
{
KScopedLightLock lk(pool_locks[static_cast<size_t>(manager.GetPool())]);
manager.Close(address, cur_pages);
}
num_pages -= cur_pages;
address += cur_pages * PageSize;
}
}
void KMemoryManager::Close(const KPageLinkedList& pg) {
for (const auto& node : pg.Nodes()) {
Close(node.GetAddress(), node.GetNumPages());
}
}
void KMemoryManager::Open(const KPageLinkedList& pg) {
for (const auto& node : pg.Nodes()) {
Open(node.GetAddress(), node.GetNumPages());
}
}
size_t KMemoryManager::Impl::Initialize(PAddr address, size_t size, VAddr management,
VAddr management_end, Pool p) {
// Calculate management sizes.
const size_t ref_count_size = (size / PageSize) * sizeof(u16);
const size_t optimize_map_size = CalculateOptimizedProcessOverheadSize(size);
const size_t manager_size = Common::AlignUp(optimize_map_size + ref_count_size, PageSize);
const size_t page_heap_size = KPageHeap::CalculateManagementOverheadSize(size);
const size_t total_management_size = manager_size + page_heap_size;
ASSERT(manager_size <= total_management_size);
ASSERT(management + total_management_size <= management_end);
ASSERT(Common::IsAligned(total_management_size, PageSize));
// Setup region.
pool = p;
management_region = management;
page_reference_counts.resize(
Kernel::Board::Nintendo::Nx::KSystemControl::Init::GetIntendedMemorySize() / PageSize);
ASSERT(Common::IsAligned(management_region, PageSize));
// Initialize the manager's KPageHeap.
heap.Initialize(address, size, management + manager_size, page_heap_size);
return total_management_size;
}
size_t KMemoryManager::Impl::CalculateManagementOverheadSize(size_t region_size) {
const size_t ref_count_size = (region_size / PageSize) * sizeof(u16);
const size_t optimize_map_size =
(Common::AlignUp((region_size / PageSize), Common::BitSize<u64>()) /
Common::BitSize<u64>()) *
sizeof(u64);
const std::size_t manager_meta_size =
Common::AlignUp(optimize_map_size + ref_count_size, PageSize);
const std::size_t page_heap_size = KPageHeap::CalculateManagementOverheadSize(region_size);
const size_t manager_meta_size = Common::AlignUp(optimize_map_size + ref_count_size, PageSize);
const size_t page_heap_size = KPageHeap::CalculateManagementOverheadSize(region_size);
return manager_meta_size + page_heap_size;
}
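GetPoolFromMemoryRegionType above relies on (type | flag) == type holding exactly when every bit of the pool flag is already set in the region type. A minimal illustration of that test (the bit patterns are made up for the example):

#include <cstdint>

constexpr bool HasAllBits(std::uint32_t type, std::uint32_t flag) {
    return (type | flag) == type;
}
static_assert(HasAllBits(0b1110u, 0b0110u));  // both flag bits are present
static_assert(!HasAllBits(0b1000u, 0b0110u)); // a flag bit is missing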

View File

@@ -5,11 +5,12 @@
#pragma once
#include <array>
#include <mutex>
#include <tuple>
#include "common/common_funcs.h"
#include "common/common_types.h"
#include "core/hle/kernel/k_light_lock.h"
#include "core/hle/kernel/k_memory_layout.h"
#include "core/hle/kernel/k_page_heap.h"
#include "core/hle/result.h"
@@ -52,22 +53,33 @@ public:
explicit KMemoryManager(Core::System& system_);
constexpr std::size_t GetSize(Pool pool) const {
return managers[static_cast<std::size_t>(pool)].GetSize();
void Initialize(VAddr management_region, size_t management_region_size);
constexpr size_t GetSize(Pool pool) const {
constexpr Direction GetSizeDirection = Direction::FromFront;
size_t total = 0;
for (auto* manager = this->GetFirstManager(pool, GetSizeDirection); manager != nullptr;
manager = this->GetNextManager(manager, GetSizeDirection)) {
total += manager->GetSize();
}
return total;
}
void InitializeManager(Pool pool, u64 start_address, u64 end_address);
PAddr AllocateAndOpenContinuous(size_t num_pages, size_t align_pages, u32 option);
ResultCode AllocateAndOpen(KPageLinkedList* out, size_t num_pages, u32 option);
ResultCode AllocateAndOpenForProcess(KPageLinkedList* out, size_t num_pages, u32 option,
u64 process_id, u8 fill_pattern);
VAddr AllocateAndOpenContinuous(size_t num_pages, size_t align_pages, u32 option);
ResultCode Allocate(KPageLinkedList& page_list, std::size_t num_pages, Pool pool, Direction dir,
u32 heap_fill_value = 0);
ResultCode Free(KPageLinkedList& page_list, std::size_t num_pages, Pool pool, Direction dir,
u32 heap_fill_value = 0);
static constexpr size_t MaxManagerCount = 10;
static constexpr std::size_t MaxManagerCount = 10;
void Close(PAddr address, size_t num_pages);
void Close(const KPageLinkedList& pg);
void Open(PAddr address, size_t num_pages);
void Open(const KPageLinkedList& pg);
public:
static std::size_t CalculateManagementOverheadSize(std::size_t region_size) {
static size_t CalculateManagementOverheadSize(size_t region_size) {
return Impl::CalculateManagementOverheadSize(region_size);
}
@@ -100,17 +112,26 @@ private:
Impl() = default;
~Impl() = default;
std::size_t Initialize(Pool new_pool, u64 start_address, u64 end_address);
size_t Initialize(PAddr address, size_t size, VAddr management, VAddr management_end,
Pool p);
VAddr AllocateBlock(s32 index, bool random) {
return heap.AllocateBlock(index, random);
}
void Free(VAddr addr, std::size_t num_pages) {
void Free(VAddr addr, size_t num_pages) {
heap.Free(addr, num_pages);
}
constexpr std::size_t GetSize() const {
void SetInitialUsedHeapSize(size_t reserved_size) {
heap.SetInitialUsedSize(reserved_size);
}
constexpr Pool GetPool() const {
return pool;
}
constexpr size_t GetSize() const {
return heap.GetSize();
}
@@ -122,10 +143,88 @@ private:
return heap.GetEndAddress();
}
static std::size_t CalculateManagementOverheadSize(std::size_t region_size);
constexpr size_t GetPageOffset(PAddr address) const {
return heap.GetPageOffset(address);
}
static constexpr std::size_t CalculateOptimizedProcessOverheadSize(
std::size_t region_size) {
constexpr size_t GetPageOffsetToEnd(PAddr address) const {
return heap.GetPageOffsetToEnd(address);
}
constexpr void SetNext(Impl* n) {
next = n;
}
constexpr void SetPrev(Impl* n) {
prev = n;
}
constexpr Impl* GetNext() const {
return next;
}
constexpr Impl* GetPrev() const {
return prev;
}
void OpenFirst(PAddr address, size_t num_pages) {
size_t index = this->GetPageOffset(address);
const size_t end = index + num_pages;
while (index < end) {
const RefCount ref_count = (++page_reference_counts[index]);
ASSERT(ref_count == 1);
index++;
}
}
void Open(PAddr address, size_t num_pages) {
size_t index = this->GetPageOffset(address);
const size_t end = index + num_pages;
while (index < end) {
const RefCount ref_count = (++page_reference_counts[index]);
ASSERT(ref_count > 1);
index++;
}
}
void Close(PAddr address, size_t num_pages) {
size_t index = this->GetPageOffset(address);
const size_t end = index + num_pages;
size_t free_start = 0;
size_t free_count = 0;
while (index < end) {
ASSERT(page_reference_counts[index] > 0);
const RefCount ref_count = (--page_reference_counts[index]);
// Keep track of how many zero refcounts we see in a row, to minimize calls to free.
if (ref_count == 0) {
if (free_count > 0) {
free_count++;
} else {
free_start = index;
free_count = 1;
}
} else {
if (free_count > 0) {
this->Free(heap.GetAddress() + free_start * PageSize, free_count);
free_count = 0;
}
}
index++;
}
if (free_count > 0) {
this->Free(heap.GetAddress() + free_start * PageSize, free_count);
}
}
static size_t CalculateManagementOverheadSize(size_t region_size);
static constexpr size_t CalculateOptimizedProcessOverheadSize(size_t region_size) {
return (Common::AlignUp((region_size / PageSize), Common::BitSize<u64>()) /
Common::BitSize<u64>()) *
sizeof(u64);
@@ -135,13 +234,45 @@ private:
using RefCount = u16;
KPageHeap heap;
std::vector<RefCount> page_reference_counts;
VAddr management_region{};
Pool pool{};
Impl* next{};
Impl* prev{};
};
private:
Impl& GetManager(const KMemoryLayout& memory_layout, PAddr address) {
return managers[memory_layout.GetPhysicalLinearRegion(address).GetAttributes()];
}
const Impl& GetManager(const KMemoryLayout& memory_layout, PAddr address) const {
return managers[memory_layout.GetPhysicalLinearRegion(address).GetAttributes()];
}
constexpr Impl* GetFirstManager(Pool pool, Direction dir) const {
return dir == Direction::FromBack ? pool_managers_tail[static_cast<size_t>(pool)]
: pool_managers_head[static_cast<size_t>(pool)];
}
constexpr Impl* GetNextManager(Impl* cur, Direction dir) const {
if (dir == Direction::FromBack) {
return cur->GetPrev();
} else {
return cur->GetNext();
}
}
ResultCode AllocatePageGroupImpl(KPageLinkedList* out, size_t num_pages, Pool pool,
Direction dir, bool random);
private:
Core::System& system;
std::array<std::mutex, static_cast<std::size_t>(Pool::Count)> pool_locks;
std::array<KLightLock, static_cast<size_t>(Pool::Count)> pool_locks;
std::array<Impl*, MaxManagerCount> pool_managers_head{};
std::array<Impl*, MaxManagerCount> pool_managers_tail{};
std::array<Impl, MaxManagerCount> managers;
size_t num_managers{};
};
} // namespace Kernel
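Impl::Close above batches adjacent pages whose reference count reaches zero so the heap is freed in runs rather than page by page. A standalone sketch of that run-length pattern over a plain refcount array (hypothetical types and callback, not the kernel's):

#include <cstddef>
#include <cstdint>
#include <vector>

// Decrement refcounts over [first, first + count) and report each contiguous run of pages
// that just hit zero via a single callback, mirroring the minimize-calls-to-free idea.
template <typename FreeRun>
void CloseRange(std::vector<std::uint16_t>& refcounts, std::size_t first, std::size_t count,
                FreeRun&& free_run) {
    std::size_t free_start = 0;
    std::size_t free_count = 0;
    for (std::size_t index = first; index < first + count; ++index) {
        if (--refcounts[index] == 0) {
            if (free_count == 0) {
                free_start = index; // start a new run of freeable pages
            }
            ++free_count;
        } else if (free_count > 0) {
            free_run(free_start, free_count); // flush the finished run
            free_count = 0;
        }
    }
    if (free_count > 0) {
        free_run(free_start, free_count); // flush a run that reaches the end of the range
    }
}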

View File

@@ -14,7 +14,8 @@
namespace Kernel {
enum KMemoryRegionType : u32 {
KMemoryRegionAttr_CarveoutProtected = 0x04000000,
KMemoryRegionAttr_CarveoutProtected = 0x02000000,
KMemoryRegionAttr_Uncached = 0x04000000,
KMemoryRegionAttr_DidKernelMap = 0x08000000,
KMemoryRegionAttr_ShouldKernelMap = 0x10000000,
KMemoryRegionAttr_UserReadOnly = 0x20000000,
@@ -239,6 +240,11 @@ static_assert(KMemoryRegionType_VirtualDramHeapBase.GetValue() == 0x1A);
static_assert(KMemoryRegionType_VirtualDramKernelPtHeap.GetValue() == 0x2A);
static_assert(KMemoryRegionType_VirtualDramKernelTraceBuffer.GetValue() == 0x4A);
// UNUSED: .DeriveSparse(2, 2, 0);
constexpr auto KMemoryRegionType_VirtualDramUnknownDebug =
KMemoryRegionType_Dram.DeriveSparse(2, 2, 1);
static_assert(KMemoryRegionType_VirtualDramUnknownDebug.GetValue() == (0x52));
constexpr auto KMemoryRegionType_VirtualDramKernelInitPt =
KMemoryRegionType_VirtualDramHeapBase.Derive(3, 0);
constexpr auto KMemoryRegionType_VirtualDramPoolManagement =
@@ -330,6 +336,8 @@ constexpr KMemoryRegionType GetTypeForVirtualLinearMapping(u32 type_id) {
return KMemoryRegionType_VirtualDramKernelTraceBuffer;
} else if (KMemoryRegionType_DramKernelPtHeap.IsAncestorOf(type_id)) {
return KMemoryRegionType_VirtualDramKernelPtHeap;
} else if ((type_id | KMemoryRegionAttr_ShouldKernelMap) == type_id) {
return KMemoryRegionType_VirtualDramUnknownDebug;
} else {
return KMemoryRegionType_Dram;
}

View File

@@ -0,0 +1,19 @@
// Copyright 2022 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/alignment.h"
#include "common/assert.h"
#include "core/core.h"
#include "core/device_memory.h"
#include "core/hle/kernel/k_page_buffer.h"
#include "core/hle/kernel/memory_types.h"
namespace Kernel {
KPageBuffer* KPageBuffer::FromPhysicalAddress(Core::System& system, PAddr phys_addr) {
ASSERT(Common::IsAligned(phys_addr, PageSize));
return reinterpret_cast<KPageBuffer*>(system.DeviceMemory().GetPointer(phys_addr));
}
} // namespace Kernel

View File

@@ -0,0 +1,28 @@
// Copyright 2022 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <array>
#include "common/common_types.h"
#include "core/hle/kernel/memory_types.h"
#include "core/hle/kernel/slab_helpers.h"
namespace Kernel {
class KPageBuffer final : public KSlabAllocated<KPageBuffer> {
public:
KPageBuffer() = default;
static KPageBuffer* FromPhysicalAddress(Core::System& system, PAddr phys_addr);
private:
[[maybe_unused]] alignas(PageSize) std::array<u8, PageSize> m_buffer{};
};
static_assert(sizeof(KPageBuffer) == PageSize);
static_assert(alignof(KPageBuffer) == PageSize);
} // namespace Kernel

View File

@@ -7,35 +7,51 @@
namespace Kernel {
void KPageHeap::Initialize(VAddr address, std::size_t size, std::size_t metadata_size) {
// Check our assumptions
ASSERT(Common::IsAligned((address), PageSize));
void KPageHeap::Initialize(PAddr address, size_t size, VAddr management_address,
size_t management_size, const size_t* block_shifts,
size_t num_block_shifts) {
// Check our assumptions.
ASSERT(Common::IsAligned(address, PageSize));
ASSERT(Common::IsAligned(size, PageSize));
ASSERT(0 < num_block_shifts && num_block_shifts <= NumMemoryBlockPageShifts);
const VAddr management_end = management_address + management_size;
// Set our members
heap_address = address;
heap_size = size;
// Set our members.
m_heap_address = address;
m_heap_size = size;
m_num_blocks = num_block_shifts;
// Setup bitmaps
metadata.resize(metadata_size / sizeof(u64));
u64* cur_bitmap_storage{metadata.data()};
for (std::size_t i = 0; i < MemoryBlockPageShifts.size(); i++) {
const std::size_t cur_block_shift{MemoryBlockPageShifts[i]};
const std::size_t next_block_shift{
(i != MemoryBlockPageShifts.size() - 1) ? MemoryBlockPageShifts[i + 1] : 0};
cur_bitmap_storage = blocks[i].Initialize(heap_address, heap_size, cur_block_shift,
next_block_shift, cur_bitmap_storage);
// Setup bitmaps.
m_management_data.resize(management_size / sizeof(u64));
u64* cur_bitmap_storage{m_management_data.data()};
for (size_t i = 0; i < num_block_shifts; i++) {
const size_t cur_block_shift = block_shifts[i];
const size_t next_block_shift = (i != num_block_shifts - 1) ? block_shifts[i + 1] : 0;
cur_bitmap_storage = m_blocks[i].Initialize(m_heap_address, m_heap_size, cur_block_shift,
next_block_shift, cur_bitmap_storage);
}
// Ensure we didn't overextend our bounds.
ASSERT(VAddr(cur_bitmap_storage) <= management_end);
}
VAddr KPageHeap::AllocateBlock(s32 index, bool random) {
const std::size_t needed_size{blocks[index].GetSize()};
size_t KPageHeap::GetNumFreePages() const {
size_t num_free = 0;
for (s32 i{index}; i < static_cast<s32>(MemoryBlockPageShifts.size()); i++) {
if (const VAddr addr{blocks[i].PopBlock(random)}; addr) {
if (const std::size_t allocated_size{blocks[i].GetSize()};
allocated_size > needed_size) {
Free(addr + needed_size, (allocated_size - needed_size) / PageSize);
for (size_t i = 0; i < m_num_blocks; i++) {
num_free += m_blocks[i].GetNumFreePages();
}
return num_free;
}
PAddr KPageHeap::AllocateBlock(s32 index, bool random) {
const size_t needed_size = m_blocks[index].GetSize();
for (s32 i = index; i < static_cast<s32>(m_num_blocks); i++) {
if (const PAddr addr = m_blocks[i].PopBlock(random); addr != 0) {
if (const size_t allocated_size = m_blocks[i].GetSize(); allocated_size > needed_size) {
this->Free(addr + needed_size, (allocated_size - needed_size) / PageSize);
}
return addr;
}
@@ -44,34 +60,34 @@ VAddr KPageHeap::AllocateBlock(s32 index, bool random) {
return 0;
}
void KPageHeap::FreeBlock(VAddr block, s32 index) {
void KPageHeap::FreeBlock(PAddr block, s32 index) {
do {
block = blocks[index++].PushBlock(block);
block = m_blocks[index++].PushBlock(block);
} while (block != 0);
}
void KPageHeap::Free(VAddr addr, std::size_t num_pages) {
// Freeing no pages is a no-op
void KPageHeap::Free(PAddr addr, size_t num_pages) {
// Freeing no pages is a no-op.
if (num_pages == 0) {
return;
}
// Find the largest block size that we can free, and free as many as possible
s32 big_index{static_cast<s32>(MemoryBlockPageShifts.size()) - 1};
const VAddr start{addr};
const VAddr end{(num_pages * PageSize) + addr};
VAddr before_start{start};
VAddr before_end{start};
VAddr after_start{end};
VAddr after_end{end};
// Find the largest block size that we can free, and free as many as possible.
s32 big_index = static_cast<s32>(m_num_blocks) - 1;
const PAddr start = addr;
const PAddr end = addr + num_pages * PageSize;
PAddr before_start = start;
PAddr before_end = start;
PAddr after_start = end;
PAddr after_end = end;
while (big_index >= 0) {
const std::size_t block_size{blocks[big_index].GetSize()};
const VAddr big_start{Common::AlignUp((start), block_size)};
const VAddr big_end{Common::AlignDown((end), block_size)};
const size_t block_size = m_blocks[big_index].GetSize();
const PAddr big_start = Common::AlignUp(start, block_size);
const PAddr big_end = Common::AlignDown(end, block_size);
if (big_start < big_end) {
// Free as many big blocks as we can
for (auto block{big_start}; block < big_end; block += block_size) {
FreeBlock(block, big_index);
// Free as many big blocks as we can.
for (auto block = big_start; block < big_end; block += block_size) {
this->FreeBlock(block, big_index);
}
before_end = big_start;
after_start = big_end;
@@ -81,31 +97,31 @@ void KPageHeap::Free(VAddr addr, std::size_t num_pages) {
}
ASSERT(big_index >= 0);
// Free space before the big blocks
for (s32 i{big_index - 1}; i >= 0; i--) {
const std::size_t block_size{blocks[i].GetSize()};
// Free space before the big blocks.
for (s32 i = big_index - 1; i >= 0; i--) {
const size_t block_size = m_blocks[i].GetSize();
while (before_start + block_size <= before_end) {
before_end -= block_size;
FreeBlock(before_end, i);
this->FreeBlock(before_end, i);
}
}
// Free space after the big blocks
for (s32 i{big_index - 1}; i >= 0; i--) {
const std::size_t block_size{blocks[i].GetSize()};
// Free space after the big blocks.
for (s32 i = big_index - 1; i >= 0; i--) {
const size_t block_size = m_blocks[i].GetSize();
while (after_start + block_size <= after_end) {
FreeBlock(after_start, i);
this->FreeBlock(after_start, i);
after_start += block_size;
}
}
}
std::size_t KPageHeap::CalculateManagementOverheadSize(std::size_t region_size) {
std::size_t overhead_size = 0;
for (std::size_t i = 0; i < MemoryBlockPageShifts.size(); i++) {
const std::size_t cur_block_shift{MemoryBlockPageShifts[i]};
const std::size_t next_block_shift{
(i != MemoryBlockPageShifts.size() - 1) ? MemoryBlockPageShifts[i + 1] : 0};
size_t KPageHeap::CalculateManagementOverheadSize(size_t region_size, const size_t* block_shifts,
size_t num_block_shifts) {
size_t overhead_size = 0;
for (size_t i = 0; i < num_block_shifts; i++) {
const size_t cur_block_shift = block_shifts[i];
const size_t next_block_shift = (i != num_block_shifts - 1) ? block_shifts[i + 1] : 0;
overhead_size += KPageHeap::Block::CalculateManagementOverheadSize(
region_size, cur_block_shift, next_block_shift);
}
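A worked partition for KPageHeap::Free above, using made-up block sizes of 4 pages and 1 page and treating addresses as page numbers for readability: freeing pages [3, 13) first carves out the aligned middle [4, 12) as two 4-page blocks, then frees [3, 4) before it and [12, 13) after it as single pages, which is exactly what the before/after loops do.

#include <cstddef>

constexpr std::size_t start = 3, end = 13, big_block = 4;
constexpr std::size_t big_start = (start + big_block - 1) / big_block * big_block; // AlignUp -> 4
constexpr std::size_t big_end = end / big_block * big_block;                       // AlignDown -> 12
static_assert(big_start == 4 && big_end == 12);
static_assert((big_end - big_start) / big_block == 2); // two 4-page blocks in the middle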

View File

@@ -23,54 +23,73 @@ public:
KPageHeap() = default;
~KPageHeap() = default;
constexpr VAddr GetAddress() const {
return heap_address;
constexpr PAddr GetAddress() const {
return m_heap_address;
}
constexpr std::size_t GetSize() const {
return heap_size;
constexpr size_t GetSize() const {
return m_heap_size;
}
constexpr VAddr GetEndAddress() const {
return GetAddress() + GetSize();
constexpr PAddr GetEndAddress() const {
return this->GetAddress() + this->GetSize();
}
constexpr std::size_t GetPageOffset(VAddr block) const {
return (block - GetAddress()) / PageSize;
constexpr size_t GetPageOffset(PAddr block) const {
return (block - this->GetAddress()) / PageSize;
}
constexpr size_t GetPageOffsetToEnd(PAddr block) const {
return (this->GetEndAddress() - block) / PageSize;
}
void Initialize(VAddr heap_address, std::size_t heap_size, std::size_t metadata_size);
VAddr AllocateBlock(s32 index, bool random);
void Free(VAddr addr, std::size_t num_pages);
void UpdateUsedSize() {
used_size = heap_size - (GetNumFreePages() * PageSize);
void Initialize(PAddr heap_address, size_t heap_size, VAddr management_address,
size_t management_size) {
return this->Initialize(heap_address, heap_size, management_address, management_size,
MemoryBlockPageShifts.data(), NumMemoryBlockPageShifts);
}
static std::size_t CalculateManagementOverheadSize(std::size_t region_size);
size_t GetFreeSize() const {
return this->GetNumFreePages() * PageSize;
}
static constexpr s32 GetAlignedBlockIndex(std::size_t num_pages, std::size_t align_pages) {
const auto target_pages{std::max(num_pages, align_pages)};
for (std::size_t i = 0; i < NumMemoryBlockPageShifts; i++) {
if (target_pages <=
(static_cast<std::size_t>(1) << MemoryBlockPageShifts[i]) / PageSize) {
void SetInitialUsedSize(size_t reserved_size) {
// Check that the reserved size is valid.
const size_t free_size = this->GetNumFreePages() * PageSize;
ASSERT(m_heap_size >= free_size + reserved_size);
// Set the initial used size.
m_initial_used_size = m_heap_size - free_size - reserved_size;
}
PAddr AllocateBlock(s32 index, bool random);
void Free(PAddr addr, size_t num_pages);
static size_t CalculateManagementOverheadSize(size_t region_size) {
return CalculateManagementOverheadSize(region_size, MemoryBlockPageShifts.data(),
NumMemoryBlockPageShifts);
}
static constexpr s32 GetAlignedBlockIndex(size_t num_pages, size_t align_pages) {
const size_t target_pages = std::max(num_pages, align_pages);
for (size_t i = 0; i < NumMemoryBlockPageShifts; i++) {
if (target_pages <= (size_t(1) << MemoryBlockPageShifts[i]) / PageSize) {
return static_cast<s32>(i);
}
}
return -1;
}
static constexpr s32 GetBlockIndex(std::size_t num_pages) {
for (s32 i{static_cast<s32>(NumMemoryBlockPageShifts) - 1}; i >= 0; i--) {
if (num_pages >= (static_cast<std::size_t>(1) << MemoryBlockPageShifts[i]) / PageSize) {
static constexpr s32 GetBlockIndex(size_t num_pages) {
for (s32 i = static_cast<s32>(NumMemoryBlockPageShifts) - 1; i >= 0; i--) {
if (num_pages >= (size_t(1) << MemoryBlockPageShifts[i]) / PageSize) {
return i;
}
}
return -1;
}
static constexpr std::size_t GetBlockSize(std::size_t index) {
return static_cast<std::size_t>(1) << MemoryBlockPageShifts[index];
static constexpr size_t GetBlockSize(size_t index) {
return size_t(1) << MemoryBlockPageShifts[index];
}
static constexpr std::size_t GetBlockNumPages(std::size_t index) {
static constexpr size_t GetBlockNumPages(size_t index) {
return GetBlockSize(index) / PageSize;
}
@@ -83,114 +102,116 @@ private:
Block() = default;
~Block() = default;
constexpr std::size_t GetShift() const {
return block_shift;
constexpr size_t GetShift() const {
return m_block_shift;
}
constexpr std::size_t GetNextShift() const {
return next_block_shift;
constexpr size_t GetNextShift() const {
return m_next_block_shift;
}
constexpr std::size_t GetSize() const {
return static_cast<std::size_t>(1) << GetShift();
constexpr size_t GetSize() const {
return u64(1) << this->GetShift();
}
constexpr std::size_t GetNumPages() const {
return GetSize() / PageSize;
constexpr size_t GetNumPages() const {
return this->GetSize() / PageSize;
}
constexpr std::size_t GetNumFreeBlocks() const {
return bitmap.GetNumBits();
constexpr size_t GetNumFreeBlocks() const {
return m_bitmap.GetNumBits();
}
constexpr std::size_t GetNumFreePages() const {
return GetNumFreeBlocks() * GetNumPages();
constexpr size_t GetNumFreePages() const {
return this->GetNumFreeBlocks() * this->GetNumPages();
}
u64* Initialize(VAddr addr, std::size_t size, std::size_t bs, std::size_t nbs,
u64* bit_storage) {
// Set shifts
block_shift = bs;
next_block_shift = nbs;
u64* Initialize(PAddr addr, size_t size, size_t bs, size_t nbs, u64* bit_storage) {
// Set shifts.
m_block_shift = bs;
m_next_block_shift = nbs;
// Align up the address
VAddr end{addr + size};
const auto align{(next_block_shift != 0) ? (1ULL << next_block_shift)
: (1ULL << block_shift)};
addr = Common::AlignDown((addr), align);
end = Common::AlignUp((end), align);
// Align up the address.
PAddr end = addr + size;
const size_t align = (m_next_block_shift != 0) ? (u64(1) << m_next_block_shift)
: (u64(1) << m_block_shift);
addr = Common::AlignDown(addr, align);
end = Common::AlignUp(end, align);
heap_address = addr;
end_offset = (end - addr) / (1ULL << block_shift);
return bitmap.Initialize(bit_storage, end_offset);
m_heap_address = addr;
m_end_offset = (end - addr) / (u64(1) << m_block_shift);
return m_bitmap.Initialize(bit_storage, m_end_offset);
}
VAddr PushBlock(VAddr address) {
// Set the bit for the free block
std::size_t offset{(address - heap_address) >> GetShift()};
bitmap.SetBit(offset);
PAddr PushBlock(PAddr address) {
// Set the bit for the free block.
size_t offset = (address - m_heap_address) >> this->GetShift();
m_bitmap.SetBit(offset);
// If we have a next shift, try to clear the blocks below and return the address
if (GetNextShift()) {
const auto diff{1ULL << (GetNextShift() - GetShift())};
// If we have a next shift, try to clear the blocks below this one and return the new
// address.
if (this->GetNextShift()) {
const size_t diff = u64(1) << (this->GetNextShift() - this->GetShift());
offset = Common::AlignDown(offset, diff);
if (bitmap.ClearRange(offset, diff)) {
return heap_address + (offset << GetShift());
if (m_bitmap.ClearRange(offset, diff)) {
return m_heap_address + (offset << this->GetShift());
}
}
// We couldn't coalesce, or we're already as big as possible
return 0;
// We couldn't coalesce, or we're already as big as possible.
return {};
}
VAddr PopBlock(bool random) {
// Find a free block
const s64 soffset{bitmap.FindFreeBlock(random)};
PAddr PopBlock(bool random) {
// Find a free block.
s64 soffset = m_bitmap.FindFreeBlock(random);
if (soffset < 0) {
return 0;
return {};
}
const auto offset{static_cast<std::size_t>(soffset)};
const size_t offset = static_cast<size_t>(soffset);
// Update our tracking and return it
bitmap.ClearBit(offset);
return heap_address + (offset << GetShift());
// Update our tracking and return it.
m_bitmap.ClearBit(offset);
return m_heap_address + (offset << this->GetShift());
}
static constexpr std::size_t CalculateManagementOverheadSize(std::size_t region_size,
std::size_t cur_block_shift,
std::size_t next_block_shift) {
const auto cur_block_size{(1ULL << cur_block_shift)};
const auto next_block_size{(1ULL << next_block_shift)};
const auto align{(next_block_shift != 0) ? next_block_size : cur_block_size};
public:
static constexpr size_t CalculateManagementOverheadSize(size_t region_size,
size_t cur_block_shift,
size_t next_block_shift) {
const size_t cur_block_size = (u64(1) << cur_block_shift);
const size_t next_block_size = (u64(1) << next_block_shift);
const size_t align = (next_block_shift != 0) ? next_block_size : cur_block_size;
return KPageBitmap::CalculateManagementOverheadSize(
(align * 2 + Common::AlignUp(region_size, align)) / cur_block_size);
}
private:
KPageBitmap bitmap;
VAddr heap_address{};
uintptr_t end_offset{};
std::size_t block_shift{};
std::size_t next_block_shift{};
KPageBitmap m_bitmap;
PAddr m_heap_address{};
uintptr_t m_end_offset{};
size_t m_block_shift{};
size_t m_next_block_shift{};
};
constexpr std::size_t GetNumFreePages() const {
std::size_t num_free{};
private:
void Initialize(PAddr heap_address, size_t heap_size, VAddr management_address,
size_t management_size, const size_t* block_shifts, size_t num_block_shifts);
size_t GetNumFreePages() const;
for (const auto& block : blocks) {
num_free += block.GetNumFreePages();
}
void FreeBlock(PAddr block, s32 index);
return num_free;
}
void FreeBlock(VAddr block, s32 index);
static constexpr std::size_t NumMemoryBlockPageShifts{7};
static constexpr std::array<std::size_t, NumMemoryBlockPageShifts> MemoryBlockPageShifts{
static constexpr size_t NumMemoryBlockPageShifts{7};
static constexpr std::array<size_t, NumMemoryBlockPageShifts> MemoryBlockPageShifts{
0xC, 0x10, 0x15, 0x16, 0x19, 0x1D, 0x1E,
};
VAddr heap_address{};
std::size_t heap_size{};
std::size_t used_size{};
std::array<Block, NumMemoryBlockPageShifts> blocks{};
std::vector<u64> metadata;
private:
static size_t CalculateManagementOverheadSize(size_t region_size, const size_t* block_shifts,
size_t num_block_shifts);
private:
PAddr m_heap_address{};
size_t m_heap_size{};
size_t m_initial_used_size{};
size_t m_num_blocks{};
std::array<Block, NumMemoryBlockPageShifts> m_blocks{};
std::vector<u64> m_management_data;
};
} // namespace Kernel
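A spot check on the block selection helpers (assuming the usual PageSize of 0x1000): the shift table {0xC, 0x10, 0x15, ...} yields blocks of 1, 16 and 512 pages, so GetAlignedBlockIndex(20, 1) picks index 2 and AllocateAndOpenContinuous later frees the unused 512 - 20 pages back to the heap.

#include <cstddef>

static_assert((std::size_t{1} << 0xC) / 0x1000 == 1);
static_assert((std::size_t{1} << 0x10) / 0x1000 == 16);
static_assert((std::size_t{1} << 0x15) / 0x1000 == 512);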

View File

@@ -273,87 +273,219 @@ ResultCode KPageTable::MapProcessCode(VAddr addr, std::size_t num_pages, KMemory
R_TRY(this->CheckMemoryState(addr, size, KMemoryState::All, KMemoryState::Free,
KMemoryPermission::None, KMemoryPermission::None,
KMemoryAttribute::None, KMemoryAttribute::None));
KPageLinkedList pg;
R_TRY(system.Kernel().MemoryManager().AllocateAndOpen(
&pg, num_pages,
KMemoryManager::EncodeOption(KMemoryManager::Pool::Application, allocation_option)));
KPageLinkedList page_linked_list;
R_TRY(system.Kernel().MemoryManager().Allocate(page_linked_list, num_pages, memory_pool,
allocation_option));
R_TRY(Operate(addr, num_pages, page_linked_list, OperationType::MapGroup));
R_TRY(Operate(addr, num_pages, pg, OperationType::MapGroup));
block_manager->Update(addr, num_pages, state, perm);
return ResultSuccess;
}
ResultCode KPageTable::MapCodeMemory(VAddr dst_addr, VAddr src_addr, std::size_t size) {
ResultCode KPageTable::MapCodeMemory(VAddr dst_address, VAddr src_address, std::size_t size) {
// Validate the mapping request.
R_UNLESS(this->CanContain(dst_address, size, KMemoryState::AliasCode),
ResultInvalidMemoryRegion);
// Lock the table.
KScopedLightLock lk(general_lock);
const std::size_t num_pages{size / PageSize};
// Verify that the source memory is normal heap.
KMemoryState src_state{};
KMemoryPermission src_perm{};
std::size_t num_src_allocator_blocks{};
R_TRY(this->CheckMemoryState(&src_state, &src_perm, nullptr, &num_src_allocator_blocks,
src_address, size, KMemoryState::All, KMemoryState::Normal,
KMemoryPermission::All, KMemoryPermission::UserReadWrite,
KMemoryAttribute::All, KMemoryAttribute::None));
KMemoryState state{};
KMemoryPermission perm{};
CASCADE_CODE(CheckMemoryState(&state, &perm, nullptr, nullptr, src_addr, size,
KMemoryState::All, KMemoryState::Normal, KMemoryPermission::All,
KMemoryPermission::UserReadWrite, KMemoryAttribute::Mask,
KMemoryAttribute::None, KMemoryAttribute::IpcAndDeviceMapped));
if (IsRegionMapped(dst_addr, size)) {
return ResultInvalidCurrentMemory;
}
KPageLinkedList page_linked_list;
AddRegionToPages(src_addr, num_pages, page_linked_list);
// Verify that the destination memory is unmapped.
std::size_t num_dst_allocator_blocks{};
R_TRY(this->CheckMemoryState(&num_dst_allocator_blocks, dst_address, size, KMemoryState::All,
KMemoryState::Free, KMemoryPermission::None,
KMemoryPermission::None, KMemoryAttribute::None,
KMemoryAttribute::None));
// Map the code memory.
{
auto block_guard = detail::ScopeExit(
[&] { Operate(src_addr, num_pages, perm, OperationType::ChangePermissions); });
// Determine the number of pages being operated on.
const std::size_t num_pages = size / PageSize;
CASCADE_CODE(Operate(src_addr, num_pages, KMemoryPermission::None,
OperationType::ChangePermissions));
CASCADE_CODE(MapPages(dst_addr, page_linked_list, KMemoryPermission::None));
// Create page groups for the memory being mapped.
KPageLinkedList pg;
AddRegionToPages(src_address, num_pages, pg);
block_guard.Cancel();
// Reprotect the source as kernel-read/not mapped.
const auto new_perm = static_cast<KMemoryPermission>(KMemoryPermission::KernelRead |
KMemoryPermission::NotMapped);
R_TRY(Operate(src_address, num_pages, new_perm, OperationType::ChangePermissions));
// Ensure that we unprotect the source pages on failure.
auto unprot_guard = SCOPE_GUARD({
ASSERT(this->Operate(src_address, num_pages, src_perm, OperationType::ChangePermissions)
.IsSuccess());
});
// Map the alias pages.
R_TRY(MapPages(dst_address, pg, new_perm));
// We successfully mapped the alias pages, so we don't need to unprotect the src pages on
// failure.
unprot_guard.Cancel();
// Apply the memory block updates.
block_manager->Update(src_address, num_pages, src_state, new_perm,
KMemoryAttribute::Locked);
block_manager->Update(dst_address, num_pages, KMemoryState::AliasCode, new_perm,
KMemoryAttribute::None);
}
block_manager->Update(src_addr, num_pages, state, KMemoryPermission::None,
KMemoryAttribute::Locked);
block_manager->Update(dst_addr, num_pages, KMemoryState::AliasCode);
return ResultSuccess;
}
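MapCodeMemory above leans on the cancelable-guard pattern: the source pages are reprotected first, a guard is armed to undo that on any failure path, and the guard is cancelled once the alias mapping succeeds. A generic sketch of such a guard (an illustration of the idiom, not yuzu's SCOPE_GUARD implementation; the functions in the usage comment are hypothetical):

#include <utility>

template <typename F>
class CancelableGuard {
public:
    explicit CancelableGuard(F f) : func{std::move(f)} {}
    ~CancelableGuard() {
        if (active) {
            func(); // roll back if nobody cancelled us
        }
    }
    void Cancel() {
        active = false; // the operation succeeded; keep the new state
    }

private:
    F func;
    bool active = true;
};

// Usage sketch:
//   CancelableGuard guard{[&] { RestoreOldPermissions(); }};
//   if (!MapAliasPages()) { return failure; } // guard fires on this path
//   guard.Cancel();                           // success path keeps the mapping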
ResultCode KPageTable::UnmapCodeMemory(VAddr dst_addr, VAddr src_addr, std::size_t size) {
ResultCode KPageTable::UnmapCodeMemory(VAddr dst_address, VAddr src_address, std::size_t size) {
// Validate the mapping request.
R_UNLESS(this->CanContain(dst_address, size, KMemoryState::AliasCode),
ResultInvalidMemoryRegion);
// Lock the table.
KScopedLightLock lk(general_lock);
if (!size) {
return ResultSuccess;
// Verify that the source memory is locked normal heap.
std::size_t num_src_allocator_blocks{};
R_TRY(this->CheckMemoryState(std::addressof(num_src_allocator_blocks), src_address, size,
KMemoryState::All, KMemoryState::Normal, KMemoryPermission::None,
KMemoryPermission::None, KMemoryAttribute::All,
KMemoryAttribute::Locked));
// Verify that the destination memory is aliasable code.
std::size_t num_dst_allocator_blocks{};
R_TRY(this->CheckMemoryStateContiguous(
std::addressof(num_dst_allocator_blocks), dst_address, size, KMemoryState::FlagCanCodeAlias,
KMemoryState::FlagCanCodeAlias, KMemoryPermission::None, KMemoryPermission::None,
KMemoryAttribute::All, KMemoryAttribute::None));
// Determine whether any pages being unmapped are code.
bool any_code_pages = false;
{
KMemoryBlockManager::const_iterator it = block_manager->FindIterator(dst_address);
while (true) {
// Get the memory info.
const KMemoryInfo info = it->GetMemoryInfo();
// Check if the memory has code flag.
if ((info.GetState() & KMemoryState::FlagCode) != KMemoryState::None) {
any_code_pages = true;
break;
}
// Check if we're done.
if (dst_address + size - 1 <= info.GetLastAddress()) {
break;
}
// Advance.
++it;
}
}
const std::size_t num_pages{size / PageSize};
// Ensure that we maintain the instruction cache.
bool reprotected_pages = false;
SCOPE_EXIT({
if (reprotected_pages && any_code_pages) {
system.InvalidateCpuInstructionCacheRange(dst_address, size);
}
});
CASCADE_CODE(CheckMemoryState(nullptr, nullptr, nullptr, nullptr, src_addr, size,
KMemoryState::All, KMemoryState::Normal, KMemoryPermission::None,
KMemoryPermission::None, KMemoryAttribute::Mask,
KMemoryAttribute::Locked, KMemoryAttribute::IpcAndDeviceMapped));
// Unmap.
{
// Determine the number of pages being operated on.
const std::size_t num_pages = size / PageSize;
KMemoryState state{};
CASCADE_CODE(CheckMemoryState(
&state, nullptr, nullptr, nullptr, dst_addr, PageSize, KMemoryState::FlagCanCodeAlias,
KMemoryState::FlagCanCodeAlias, KMemoryPermission::None, KMemoryPermission::None,
KMemoryAttribute::Mask, KMemoryAttribute::None, KMemoryAttribute::IpcAndDeviceMapped));
CASCADE_CODE(CheckMemoryState(dst_addr, size, KMemoryState::All, state, KMemoryPermission::None,
KMemoryPermission::None, KMemoryAttribute::Mask,
KMemoryAttribute::None));
CASCADE_CODE(Operate(dst_addr, num_pages, KMemoryPermission::None, OperationType::Unmap));
// Unmap the aliased copy of the pages.
R_TRY(Operate(dst_address, num_pages, KMemoryPermission::None, OperationType::Unmap));
block_manager->Update(dst_addr, num_pages, KMemoryState::Free);
block_manager->Update(src_addr, num_pages, KMemoryState::Normal,
KMemoryPermission::UserReadWrite);
// Try to set the permissions for the source pages back to what they should be.
R_TRY(Operate(src_address, num_pages, KMemoryPermission::UserReadWrite,
OperationType::ChangePermissions));
system.InvalidateCpuInstructionCacheRange(dst_addr, size);
// Apply the memory block updates.
block_manager->Update(dst_address, num_pages, KMemoryState::None);
block_manager->Update(src_address, num_pages, KMemoryState::Normal,
KMemoryPermission::UserReadWrite);
// Note that we reprotected pages.
reprotected_pages = true;
}
return ResultSuccess;
}
VAddr KPageTable::FindFreeArea(VAddr region_start, std::size_t region_num_pages,
std::size_t num_pages, std::size_t alignment, std::size_t offset,
std::size_t guard_pages) {
VAddr address = 0;
if (num_pages <= region_num_pages) {
if (this->IsAslrEnabled()) {
// Try to directly find a free area up to 8 times.
for (std::size_t i = 0; i < 8; i++) {
const std::size_t random_offset =
KSystemControl::GenerateRandomRange(
0, (region_num_pages - num_pages - guard_pages) * PageSize / alignment) *
alignment;
const VAddr candidate =
Common::AlignDown((region_start + random_offset), alignment) + offset;
KMemoryInfo info = this->QueryInfoImpl(candidate);
if (info.state != KMemoryState::Free) {
continue;
}
if (region_start > candidate) {
continue;
}
if (info.GetAddress() + guard_pages * PageSize > candidate) {
continue;
}
const VAddr candidate_end = candidate + (num_pages + guard_pages) * PageSize - 1;
if (candidate_end > info.GetLastAddress()) {
continue;
}
if (candidate_end > region_start + region_num_pages * PageSize - 1) {
continue;
}
address = candidate;
break;
}
// Fall back to finding the first free area with a random offset.
if (address == 0) {
// NOTE: Nintendo does not account for guard pages here.
// This may theoretically cause an offset to be chosen that cannot be mapped. We
// will account for guard pages.
const std::size_t offset_pages = KSystemControl::GenerateRandomRange(
0, region_num_pages - num_pages - guard_pages);
address = block_manager->FindFreeArea(region_start + offset_pages * PageSize,
region_num_pages - offset_pages, num_pages,
alignment, offset, guard_pages);
}
}
// Find the first free area.
if (address == 0) {
address = block_manager->FindFreeArea(region_start, region_num_pages, num_pages,
alignment, offset, guard_pages);
}
}
return address;
}
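Editor's note: the randomized ASLR branch above picks an alignment-granular offset that still leaves room for the request plus its guard pages before falling back to a first-fit search. A minimal standalone sketch of that offset computation follows; the region base, region size, page count, guard count, and the PickRandom helper are illustrative assumptions, not values from the diff.

#include <cstdint>
#include <cstdio>
#include <random>

// Hypothetical stand-in for KSystemControl::GenerateRandomRange().
static std::uint64_t PickRandom(std::uint64_t min, std::uint64_t max) {
    static std::mt19937_64 rng{std::random_device{}()};
    return std::uniform_int_distribution<std::uint64_t>{min, max}(rng);
}

int main() {
    constexpr std::uint64_t PageSize = 0x1000;
    constexpr std::uint64_t alignment = PageSize;
    constexpr std::uint64_t region_start = 0x10000000; // assumed region base
    constexpr std::uint64_t region_num_pages = 256;    // assumed region size
    constexpr std::uint64_t num_pages = 4;             // pages requested
    constexpr std::uint64_t guard_pages = 1;           // guard pages to keep free

    // Same shape as the ASLR branch: a random, alignment-granular offset that
    // cannot push the allocation (plus guards) past the end of the region.
    const std::uint64_t random_offset =
        PickRandom(0, (region_num_pages - num_pages - guard_pages) * PageSize / alignment) *
        alignment;
    const std::uint64_t candidate = (region_start + random_offset) & ~(alignment - 1);
    std::printf("candidate address: 0x%llx\n", static_cast<unsigned long long>(candidate));
    return 0;
}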
ResultCode KPageTable::UnmapProcessMemory(VAddr dst_addr, std::size_t size,
KPageTable& src_page_table, VAddr src_addr) {
KScopedLightLock lk(general_lock);
@@ -443,9 +575,10 @@ ResultCode KPageTable::MapPhysicalMemory(VAddr address, std::size_t size) {
R_UNLESS(memory_reservation.Succeeded(), ResultLimitReached);
// Allocate pages for the new memory.
KPageLinkedList page_linked_list;
R_TRY(system.Kernel().MemoryManager().Allocate(
page_linked_list, (size - mapped_size) / PageSize, memory_pool, allocation_option));
KPageLinkedList pg;
R_TRY(system.Kernel().MemoryManager().AllocateAndOpenForProcess(
&pg, (size - mapped_size) / PageSize,
KMemoryManager::EncodeOption(memory_pool, allocation_option), 0, 0));
// Map the memory.
{
@@ -547,7 +680,7 @@ ResultCode KPageTable::MapPhysicalMemory(VAddr address, std::size_t size) {
});
// Iterate over the memory.
auto pg_it = page_linked_list.Nodes().begin();
auto pg_it = pg.Nodes().begin();
PAddr pg_phys_addr = pg_it->GetAddress();
size_t pg_pages = pg_it->GetNumPages();
@@ -571,7 +704,7 @@ ResultCode KPageTable::MapPhysicalMemory(VAddr address, std::size_t size) {
// Check if we're at the end of the physical block.
if (pg_pages == 0) {
// Ensure there are more pages to map.
ASSERT(pg_it != page_linked_list.Nodes().end());
ASSERT(pg_it != pg.Nodes().end());
// Advance our physical block.
++pg_it;
@@ -841,10 +974,14 @@ ResultCode KPageTable::UnmapPhysicalMemory(VAddr address, std::size_t size) {
process->GetResourceLimit()->Release(LimitableResource::PhysicalMemory, mapped_size);
// Update memory blocks.
system.Kernel().MemoryManager().Free(pg, size / PageSize, memory_pool, allocation_option);
block_manager->Update(address, size / PageSize, KMemoryState::Free, KMemoryPermission::None,
KMemoryAttribute::None);
// TODO(bunnei): This is a workaround until the next set of changes, where we add reference
// counting for mapped pages. Until then, we must manually close the reference to the page
// group.
system.Kernel().MemoryManager().Close(pg);
// We succeeded.
remap_guard.Cancel();
@@ -980,6 +1117,46 @@ ResultCode KPageTable::MapPages(VAddr address, KPageLinkedList& page_linked_list
return ResultSuccess;
}
ResultCode KPageTable::MapPages(VAddr* out_addr, std::size_t num_pages, std::size_t alignment,
PAddr phys_addr, bool is_pa_valid, VAddr region_start,
std::size_t region_num_pages, KMemoryState state,
KMemoryPermission perm) {
ASSERT(Common::IsAligned(alignment, PageSize) && alignment >= PageSize);
// Ensure this is a valid map request.
R_UNLESS(this->CanContain(region_start, region_num_pages * PageSize, state),
ResultInvalidCurrentMemory);
R_UNLESS(num_pages < region_num_pages, ResultOutOfMemory);
// Lock the table.
KScopedLightLock lk(general_lock);
// Find a random address to map at.
VAddr addr = this->FindFreeArea(region_start, region_num_pages, num_pages, alignment, 0,
this->GetNumGuardPages());
R_UNLESS(addr != 0, ResultOutOfMemory);
ASSERT(Common::IsAligned(addr, alignment));
ASSERT(this->CanContain(addr, num_pages * PageSize, state));
ASSERT(this->CheckMemoryState(addr, num_pages * PageSize, KMemoryState::All, KMemoryState::Free,
KMemoryPermission::None, KMemoryPermission::None,
KMemoryAttribute::None, KMemoryAttribute::None)
.IsSuccess());
// Perform mapping operation.
if (is_pa_valid) {
R_TRY(this->Operate(addr, num_pages, perm, OperationType::Map, phys_addr));
} else {
UNIMPLEMENTED();
}
// Update the blocks.
block_manager->Update(addr, num_pages, state, perm);
// We successfully mapped the pages.
*out_addr = addr;
return ResultSuccess;
}
ResultCode KPageTable::UnmapPages(VAddr addr, const KPageLinkedList& page_linked_list) {
ASSERT(this->IsLockedByCurrentThread());
@@ -1022,6 +1199,30 @@ ResultCode KPageTable::UnmapPages(VAddr addr, KPageLinkedList& page_linked_list,
return ResultSuccess;
}
ResultCode KPageTable::UnmapPages(VAddr address, std::size_t num_pages, KMemoryState state) {
// Check that the unmap is in range.
const std::size_t size = num_pages * PageSize;
R_UNLESS(this->Contains(address, size), ResultInvalidCurrentMemory);
// Lock the table.
KScopedLightLock lk(general_lock);
// Check the memory state.
std::size_t num_allocator_blocks{};
R_TRY(this->CheckMemoryState(std::addressof(num_allocator_blocks), address, size,
KMemoryState::All, state, KMemoryPermission::None,
KMemoryPermission::None, KMemoryAttribute::All,
KMemoryAttribute::None));
// Perform the unmap.
R_TRY(Operate(address, num_pages, KMemoryPermission::None, OperationType::Unmap));
// Update the blocks.
block_manager->Update(address, num_pages, KMemoryState::Free, KMemoryPermission::None);
return ResultSuccess;
}
ResultCode KPageTable::SetProcessMemoryPermission(VAddr addr, std::size_t size,
Svc::MemoryPermission svc_perm) {
const size_t num_pages = size / PageSize;
@@ -1270,9 +1471,16 @@ ResultCode KPageTable::SetHeapSize(VAddr* out, std::size_t size) {
R_UNLESS(memory_reservation.Succeeded(), ResultLimitReached);
// Allocate pages for the heap extension.
KPageLinkedList page_linked_list;
R_TRY(system.Kernel().MemoryManager().Allocate(page_linked_list, allocation_size / PageSize,
memory_pool, allocation_option));
KPageLinkedList pg;
R_TRY(system.Kernel().MemoryManager().AllocateAndOpen(
&pg, allocation_size / PageSize,
KMemoryManager::EncodeOption(memory_pool, allocation_option)));
// Clear all the newly allocated pages.
for (const auto& it : pg.Nodes()) {
std::memset(system.DeviceMemory().GetPointer(it.GetAddress()), heap_fill_value,
it.GetSize());
}
// Map the pages.
{
@@ -1291,7 +1499,7 @@ ResultCode KPageTable::SetHeapSize(VAddr* out, std::size_t size) {
// Map the pages.
const auto num_pages = allocation_size / PageSize;
R_TRY(Operate(current_heap_end, num_pages, page_linked_list, OperationType::MapGroup));
R_TRY(Operate(current_heap_end, num_pages, pg, OperationType::MapGroup));
// Clear all the newly allocated pages.
for (std::size_t cur_page = 0; cur_page < num_pages; ++cur_page) {
@@ -1339,8 +1547,9 @@ ResultVal<VAddr> KPageTable::AllocateAndMapMemory(std::size_t needed_num_pages,
R_TRY(Operate(addr, needed_num_pages, perm, OperationType::Map, map_addr));
} else {
KPageLinkedList page_group;
R_TRY(system.Kernel().MemoryManager().Allocate(page_group, needed_num_pages, memory_pool,
allocation_option));
R_TRY(system.Kernel().MemoryManager().AllocateAndOpenForProcess(
&page_group, needed_num_pages,
KMemoryManager::EncodeOption(memory_pool, allocation_option), 0, 0));
R_TRY(Operate(addr, needed_num_pages, page_group, OperationType::MapGroup));
}
@@ -1547,7 +1756,7 @@ ResultCode KPageTable::Operate(VAddr addr, std::size_t num_pages, KMemoryPermiss
return ResultSuccess;
}
constexpr VAddr KPageTable::GetRegionAddress(KMemoryState state) const {
VAddr KPageTable::GetRegionAddress(KMemoryState state) const {
switch (state) {
case KMemoryState::Free:
case KMemoryState::Kernel:
@@ -1583,7 +1792,7 @@ constexpr VAddr KPageTable::GetRegionAddress(KMemoryState state) const {
}
}
constexpr std::size_t KPageTable::GetRegionSize(KMemoryState state) const {
std::size_t KPageTable::GetRegionSize(KMemoryState state) const {
switch (state) {
case KMemoryState::Free:
case KMemoryState::Kernel:

View File

@@ -36,8 +36,8 @@ public:
KMemoryManager::Pool pool);
ResultCode MapProcessCode(VAddr addr, std::size_t pages_count, KMemoryState state,
KMemoryPermission perm);
ResultCode MapCodeMemory(VAddr dst_addr, VAddr src_addr, std::size_t size);
ResultCode UnmapCodeMemory(VAddr dst_addr, VAddr src_addr, std::size_t size);
ResultCode MapCodeMemory(VAddr dst_address, VAddr src_address, std::size_t size);
ResultCode UnmapCodeMemory(VAddr dst_address, VAddr src_address, std::size_t size);
ResultCode UnmapProcessMemory(VAddr dst_addr, std::size_t size, KPageTable& src_page_table,
VAddr src_addr);
ResultCode MapPhysicalMemory(VAddr addr, std::size_t size);
@@ -46,7 +46,14 @@ public:
ResultCode UnmapMemory(VAddr dst_addr, VAddr src_addr, std::size_t size);
ResultCode MapPages(VAddr addr, KPageLinkedList& page_linked_list, KMemoryState state,
KMemoryPermission perm);
ResultCode MapPages(VAddr* out_addr, std::size_t num_pages, std::size_t alignment,
PAddr phys_addr, KMemoryState state, KMemoryPermission perm) {
return this->MapPages(out_addr, num_pages, alignment, phys_addr, true,
this->GetRegionAddress(state), this->GetRegionSize(state) / PageSize,
state, perm);
}
ResultCode UnmapPages(VAddr addr, KPageLinkedList& page_linked_list, KMemoryState state);
ResultCode UnmapPages(VAddr address, std::size_t num_pages, KMemoryState state);
ResultCode SetProcessMemoryPermission(VAddr addr, std::size_t size,
Svc::MemoryPermission svc_perm);
KMemoryInfo QueryInfo(VAddr addr);
@@ -91,6 +98,9 @@ private:
ResultCode InitializeMemoryLayout(VAddr start, VAddr end);
ResultCode MapPages(VAddr addr, const KPageLinkedList& page_linked_list,
KMemoryPermission perm);
ResultCode MapPages(VAddr* out_addr, std::size_t num_pages, std::size_t alignment,
PAddr phys_addr, bool is_pa_valid, VAddr region_start,
std::size_t region_num_pages, KMemoryState state, KMemoryPermission perm);
ResultCode UnmapPages(VAddr addr, const KPageLinkedList& page_linked_list);
bool IsRegionMapped(VAddr address, u64 size);
bool IsRegionContiguous(VAddr addr, u64 size) const;
@@ -102,8 +112,11 @@ private:
OperationType operation);
ResultCode Operate(VAddr addr, std::size_t num_pages, KMemoryPermission perm,
OperationType operation, PAddr map_addr = 0);
constexpr VAddr GetRegionAddress(KMemoryState state) const;
constexpr std::size_t GetRegionSize(KMemoryState state) const;
VAddr GetRegionAddress(KMemoryState state) const;
std::size_t GetRegionSize(KMemoryState state) const;
VAddr FindFreeArea(VAddr region_start, std::size_t region_num_pages, std::size_t num_pages,
std::size_t alignment, std::size_t offset, std::size_t guard_pages);
ResultCode CheckMemoryStateContiguous(std::size_t* out_blocks_needed, VAddr addr,
std::size_t size, KMemoryState state_mask,
@@ -137,7 +150,7 @@ private:
return CheckMemoryState(nullptr, nullptr, nullptr, out_blocks_needed, addr, size,
state_mask, state, perm_mask, perm, attr_mask, attr, ignore_attr);
}
ResultCode CheckMemoryState(VAddr addr, size_t size, KMemoryState state_mask,
ResultCode CheckMemoryState(VAddr addr, std::size_t size, KMemoryState state_mask,
KMemoryState state, KMemoryPermission perm_mask,
KMemoryPermission perm, KMemoryAttribute attr_mask,
KMemoryAttribute attr,
@@ -210,7 +223,7 @@ public:
constexpr VAddr GetAliasCodeRegionSize() const {
return alias_code_region_end - alias_code_region_start;
}
size_t GetNormalMemorySize() {
std::size_t GetNormalMemorySize() {
KScopedLightLock lk(general_lock);
return GetHeapSize() + mapped_physical_memory_size;
}
@@ -253,9 +266,10 @@ public:
constexpr bool IsInsideASLRRegion(VAddr address, std::size_t size) const {
return !IsOutsideASLRRegion(address, size);
}
PAddr GetPhysicalAddr(VAddr addr) {
ASSERT(IsLockedByCurrentThread());
constexpr std::size_t GetNumGuardPages() const {
return IsKernel() ? 1 : 4;
}
PAddr GetPhysicalAddr(VAddr addr) const {
const auto backing_addr = page_table_impl.backing_addr[addr >> PageBits];
ASSERT(backing_addr);
return backing_addr + addr;
@@ -276,10 +290,6 @@ private:
return is_aslr_enabled;
}
constexpr std::size_t GetNumGuardPages() const {
return IsKernel() ? 1 : 4;
}
constexpr bool ContainsPages(VAddr addr, std::size_t num_pages) const {
return (address_space_start <= addr) &&
(num_pages <= (address_space_end - address_space_start) / PageSize) &&
@@ -311,6 +321,8 @@ private:
bool is_kernel{};
bool is_aslr_enabled{};
u32 heap_fill_value{};
KMemoryManager::Pool memory_pool{KMemoryManager::Pool::Application};
KMemoryManager::Direction allocation_option{KMemoryManager::Direction::FromFront};

View File

@@ -57,7 +57,12 @@ ResultCode KPort::EnqueueSession(KServerSession* session) {
R_UNLESS(state == State::Normal, ResultPortClosed);
server.EnqueueSession(session);
server.GetSessionRequestHandler()->ClientConnected(server.AcceptSession());
if (auto session_ptr = server.GetSessionRequestHandler().lock()) {
session_ptr->ClientConnected(server.AcceptSession());
} else {
UNREACHABLE();
}
return ResultSuccess;
}
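Editor's note: the EnqueueSession change above replaces a direct handler call with a weak_ptr lock. A small self-contained sketch of that pattern follows, with a hypothetical Handler/Port pair standing in for SessionRequestHandler and KServerPort.

#include <cassert>
#include <cstdio>
#include <memory>

// Hypothetical stand-ins for SessionRequestHandler and KServerPort.
struct Handler {
    void ClientConnected() { std::puts("client connected"); }
};

struct Port {
    std::weak_ptr<Handler> session_handler;

    void EnqueueSession() {
        // Promote the weak reference before use: if the handler has already been
        // destroyed, lock() yields null instead of a dangling pointer.
        if (auto session_ptr = session_handler.lock()) {
            session_ptr->ClientConnected();
        } else {
            assert(false && "handler expired");
        }
    }
};

int main() {
    auto handler = std::make_shared<Handler>();
    Port port{handler};
    port.EnqueueSession(); // handler alive: the call goes through
    handler.reset();       // drop the last strong reference
    // A second EnqueueSession() would now take the expired branch.
    return 0;
}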

View File

@@ -13,7 +13,6 @@
#include "common/scope_exit.h"
#include "common/settings.h"
#include "core/core.h"
#include "core/device_memory.h"
#include "core/file_sys/program_metadata.h"
#include "core/hle/kernel/code_set.h"
#include "core/hle/kernel/k_memory_block_manager.h"
@@ -24,7 +23,6 @@
#include "core/hle/kernel/k_scoped_resource_reservation.h"
#include "core/hle/kernel/k_shared_memory.h"
#include "core/hle/kernel/k_shared_memory_info.h"
#include "core/hle/kernel/k_slab_heap.h"
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/svc_results.h"
@@ -70,58 +68,6 @@ void SetupMainThread(Core::System& system, KProcess& owner_process, u32 priority
}
} // Anonymous namespace
// Represents a page used for thread-local storage.
//
// Each TLS page contains slots that may be used by processes and threads.
// Every process and thread is created with a slot in some arbitrary page
// (whichever page happens to have an available slot).
class TLSPage {
public:
static constexpr std::size_t num_slot_entries =
Core::Memory::PAGE_SIZE / Core::Memory::TLS_ENTRY_SIZE;
explicit TLSPage(VAddr address) : base_address{address} {}
bool HasAvailableSlots() const {
return !is_slot_used.all();
}
VAddr GetBaseAddress() const {
return base_address;
}
std::optional<VAddr> ReserveSlot() {
for (std::size_t i = 0; i < is_slot_used.size(); i++) {
if (is_slot_used[i]) {
continue;
}
is_slot_used[i] = true;
return base_address + (i * Core::Memory::TLS_ENTRY_SIZE);
}
return std::nullopt;
}
void ReleaseSlot(VAddr address) {
// Ensure that all given addresses are consistent with how TLS pages
// are intended to be used when releasing slots.
ASSERT(IsWithinPage(address));
ASSERT((address % Core::Memory::TLS_ENTRY_SIZE) == 0);
const std::size_t index = (address - base_address) / Core::Memory::TLS_ENTRY_SIZE;
is_slot_used[index] = false;
}
private:
bool IsWithinPage(VAddr address) const {
return base_address <= address && address < base_address + Core::Memory::PAGE_SIZE;
}
VAddr base_address;
std::bitset<num_slot_entries> is_slot_used;
};
ResultCode KProcess::Initialize(KProcess* process, Core::System& system, std::string process_name,
ProcessType type, KResourceLimit* res_limit) {
auto& kernel = system.Kernel();
@@ -404,7 +350,7 @@ ResultCode KProcess::LoadFromMetadata(const FileSys::ProgramMetadata& metadata,
}
// Create TLS region
tls_region_address = CreateTLSRegion();
R_TRY(this->CreateThreadLocalRegion(std::addressof(tls_region_address)));
memory_reservation.Commit();
return handle_table.Initialize(capabilities.GetHandleTableSize());
@@ -444,7 +390,7 @@ void KProcess::PrepareForTermination() {
stop_threads(kernel.System().GlobalSchedulerContext().GetThreadList());
FreeTLSRegion(tls_region_address);
this->DeleteThreadLocalRegion(tls_region_address);
tls_region_address = 0;
if (resource_limit) {
@@ -456,9 +402,6 @@ void KProcess::PrepareForTermination() {
}
void KProcess::Finalize() {
// Finalize the handle table and close any open handles.
handle_table.Finalize();
// Free all shared memory infos.
{
auto it = shared_memory_list.begin();
@@ -483,67 +426,110 @@ void KProcess::Finalize() {
resource_limit = nullptr;
}
// Finalize the page table.
page_table.reset();
// Perform inherited finalization.
KAutoObjectWithSlabHeapAndContainer<KProcess, KWorkerTask>::Finalize();
}
/**
* Attempts to find a TLS page that contains a free slot for
* use by a thread.
*
* @returns If a page with an available slot is found, then an iterator
* pointing to the page is returned. Otherwise the end iterator
* is returned instead.
*/
static auto FindTLSPageWithAvailableSlots(std::vector<TLSPage>& tls_pages) {
return std::find_if(tls_pages.begin(), tls_pages.end(),
[](const auto& page) { return page.HasAvailableSlots(); });
}
ResultCode KProcess::CreateThreadLocalRegion(VAddr* out) {
KThreadLocalPage* tlp = nullptr;
VAddr tlr = 0;
VAddr KProcess::CreateTLSRegion() {
KScopedSchedulerLock lock(kernel);
if (auto tls_page_iter{FindTLSPageWithAvailableSlots(tls_pages)};
tls_page_iter != tls_pages.cend()) {
return *tls_page_iter->ReserveSlot();
// See if we can get a region from a partially used TLP.
{
KScopedSchedulerLock sl{kernel};
if (auto it = partially_used_tlp_tree.begin(); it != partially_used_tlp_tree.end()) {
tlr = it->Reserve();
ASSERT(tlr != 0);
if (it->IsAllUsed()) {
tlp = std::addressof(*it);
partially_used_tlp_tree.erase(it);
fully_used_tlp_tree.insert(*tlp);
}
*out = tlr;
return ResultSuccess;
}
}
Page* const tls_page_ptr{kernel.GetUserSlabHeapPages().Allocate()};
ASSERT(tls_page_ptr);
// Allocate a new page.
tlp = KThreadLocalPage::Allocate(kernel);
R_UNLESS(tlp != nullptr, ResultOutOfMemory);
auto tlp_guard = SCOPE_GUARD({ KThreadLocalPage::Free(kernel, tlp); });
const VAddr start{page_table->GetKernelMapRegionStart()};
const VAddr size{page_table->GetKernelMapRegionEnd() - start};
const PAddr tls_map_addr{kernel.System().DeviceMemory().GetPhysicalAddr(tls_page_ptr)};
const VAddr tls_page_addr{page_table
->AllocateAndMapMemory(1, PageSize, true, start, size / PageSize,
KMemoryState::ThreadLocal,
KMemoryPermission::UserReadWrite,
tls_map_addr)
.ValueOr(0)};
// Initialize the new page.
R_TRY(tlp->Initialize(kernel, this));
ASSERT(tls_page_addr);
// Reserve a TLR.
tlr = tlp->Reserve();
ASSERT(tlr != 0);
std::memset(tls_page_ptr, 0, PageSize);
tls_pages.emplace_back(tls_page_addr);
// Insert into our tree.
{
KScopedSchedulerLock sl{kernel};
if (tlp->IsAllUsed()) {
fully_used_tlp_tree.insert(*tlp);
} else {
partially_used_tlp_tree.insert(*tlp);
}
}
const auto reserve_result{tls_pages.back().ReserveSlot()};
ASSERT(reserve_result.has_value());
return *reserve_result;
// We succeeded!
tlp_guard.Cancel();
*out = tlr;
return ResultSuccess;
}
void KProcess::FreeTLSRegion(VAddr tls_address) {
KScopedSchedulerLock lock(kernel);
const VAddr aligned_address = Common::AlignDown(tls_address, Core::Memory::PAGE_SIZE);
auto iter =
std::find_if(tls_pages.begin(), tls_pages.end(), [aligned_address](const auto& page) {
return page.GetBaseAddress() == aligned_address;
});
ResultCode KProcess::DeleteThreadLocalRegion(VAddr addr) {
KThreadLocalPage* page_to_free = nullptr;
// Something has gone very wrong if we're freeing a region
// with no actual page available.
ASSERT(iter != tls_pages.cend());
// Release the region.
{
KScopedSchedulerLock sl{kernel};
iter->ReleaseSlot(tls_address);
// Try to find the page in the partially used list.
auto it = partially_used_tlp_tree.find_key(Common::AlignDown(addr, PageSize));
if (it == partially_used_tlp_tree.end()) {
// If we don't find it, it has to be in the fully used list.
it = fully_used_tlp_tree.find_key(Common::AlignDown(addr, PageSize));
R_UNLESS(it != fully_used_tlp_tree.end(), ResultInvalidAddress);
// Release the region.
it->Release(addr);
// Move the page out of the fully used list.
KThreadLocalPage* tlp = std::addressof(*it);
fully_used_tlp_tree.erase(it);
if (tlp->IsAllFree()) {
page_to_free = tlp;
} else {
partially_used_tlp_tree.insert(*tlp);
}
} else {
// Release the region.
it->Release(addr);
// Handle the all-free case.
KThreadLocalPage* tlp = std::addressof(*it);
if (tlp->IsAllFree()) {
partially_used_tlp_tree.erase(it);
page_to_free = tlp;
}
}
}
// If we should free the page it was in, do so.
if (page_to_free != nullptr) {
page_to_free->Finalize();
KThreadLocalPage::Free(kernel, page_to_free);
}
return ResultSuccess;
}
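Editor's note: CreateThreadLocalRegion/DeleteThreadLocalRegion above shuffle pages between a partially-used and a fully-used tree. A simplified sketch of that bookkeeping follows, using std::map keyed by page base address in place of the intrusive red-black trees; the 0x1000-byte page and 0x200-byte slot sizes match the TLS layout, everything else is illustrative.

#include <bitset>
#include <cstdint>
#include <map>
#include <optional>

// Toy TLS page: eight 0x200-byte regions inside one 0x1000-byte page.
struct TlsPage {
    std::uint64_t base{};
    std::bitset<8> used{};

    std::optional<std::uint64_t> Reserve() {
        for (std::size_t i = 0; i < used.size(); ++i) {
            if (!used[i]) {
                used[i] = true;
                return base + i * 0x200;
            }
        }
        return std::nullopt;
    }
    void Release(std::uint64_t addr) { used[(addr - base) / 0x200] = false; }
    bool AllUsed() const { return used.all(); }
    bool AllFree() const { return used.none(); }
};

// Pages with at least one free slot vs. completely full pages, mirroring
// partially_used_tlp_tree and fully_used_tlp_tree.
static std::map<std::uint64_t, TlsPage> partially_used;
static std::map<std::uint64_t, TlsPage> fully_used;

std::uint64_t CreateRegion(std::uint64_t new_page_base) {
    if (auto it = partially_used.begin(); it != partially_used.end()) {
        const std::uint64_t addr = *it->second.Reserve();
        if (it->second.AllUsed()) {
            fully_used.insert(*it); // page became full, move it over
            partially_used.erase(it);
        }
        return addr;
    }
    TlsPage page{new_page_base}; // no partially used page: "allocate" a new one
    const std::uint64_t addr = *page.Reserve();
    partially_used.emplace(page.base, page);
    return addr;
}

void DeleteRegion(std::uint64_t addr) {
    const std::uint64_t base = addr & ~std::uint64_t{0xFFF};
    if (auto it = fully_used.find(base); it != fully_used.end()) {
        it->second.Release(addr);
        partially_used.insert(*it); // no longer full
        fully_used.erase(it);
    } else if (auto it2 = partially_used.find(base); it2 != partially_used.end()) {
        it2->second.Release(addr);
        if (it2->second.AllFree()) {
            partially_used.erase(it2); // the real code frees the page here
        }
    }
}

int main() {
    const std::uint64_t a = CreateRegion(0x70000000);
    const std::uint64_t b = CreateRegion(0x70001000); // reuses the first page
    DeleteRegion(a);
    DeleteRegion(b);
    return 0;
}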
void KProcess::LoadModule(CodeSet code_set, VAddr base_addr) {

View File

@@ -8,13 +8,13 @@
#include <cstddef>
#include <list>
#include <string>
#include <vector>
#include "common/common_types.h"
#include "core/hle/kernel/k_address_arbiter.h"
#include "core/hle/kernel/k_auto_object.h"
#include "core/hle/kernel/k_condition_variable.h"
#include "core/hle/kernel/k_handle_table.h"
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/kernel/k_thread_local_page.h"
#include "core/hle/kernel/k_worker_task.h"
#include "core/hle/kernel/process_capability.h"
#include "core/hle/kernel/slab_helpers.h"
@@ -362,10 +362,10 @@ public:
// Thread-local storage management
// Marks the next available region as used and returns the address of the slot.
[[nodiscard]] VAddr CreateTLSRegion();
[[nodiscard]] ResultCode CreateThreadLocalRegion(VAddr* out);
// Frees a used TLS slot identified by the given address
void FreeTLSRegion(VAddr tls_address);
ResultCode DeleteThreadLocalRegion(VAddr addr);
private:
void PinThread(s32 core_id, KThread* thread) {
@@ -413,13 +413,6 @@ private:
/// The ideal CPU core for this process, threads are scheduled on this core by default.
u8 ideal_core = 0;
/// The Thread Local Storage area is allocated as processes create threads,
/// each TLS area is 0x200 bytes, so one page (0x1000) is split up in 8 parts, and each part
/// holds the TLS for a specific thread. This vector contains which parts are in use for each
/// page as a bitmask.
/// This vector will grow as more pages are allocated for new threads.
std::vector<TLSPage> tls_pages;
/// Contains the parsed process capability descriptors.
ProcessCapabilities capabilities;
@@ -482,6 +475,12 @@ private:
KThread* exception_thread{};
KLightLock state_lock;
using TLPTree =
Common::IntrusiveRedBlackTreeBaseTraits<KThreadLocalPage>::TreeType<KThreadLocalPage>;
using TLPIterator = TLPTree::iterator;
TLPTree fully_used_tlp_tree;
TLPTree partially_used_tlp_tree;
};
} // namespace Kernel

View File

@@ -22,7 +22,6 @@
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/physical_core.h"
#include "core/hle/kernel/time_manager.h"
namespace Kernel {

View File

@@ -30,11 +30,11 @@ public:
/// Whether or not this server port has an HLE handler available.
bool HasSessionRequestHandler() const {
return session_handler != nullptr;
return !session_handler.expired();
}
/// Gets the HLE handler for this port.
SessionRequestHandlerPtr GetSessionRequestHandler() const {
SessionRequestHandlerWeakPtr GetSessionRequestHandler() const {
return session_handler;
}
@@ -42,7 +42,7 @@ public:
* Sets the HLE handler template for the port. ServerSessions created by connecting to this port
* will inherit a reference to this handler.
*/
void SetSessionHandler(SessionRequestHandlerPtr&& handler) {
void SetSessionHandler(SessionRequestHandlerWeakPtr&& handler) {
session_handler = std::move(handler);
}
@@ -66,7 +66,7 @@ private:
void CleanupSessions();
SessionList session_list;
SessionRequestHandlerPtr session_handler;
SessionRequestHandlerWeakPtr session_handler;
KPort* parent{};
};

View File

@@ -27,10 +27,7 @@ namespace Kernel {
KServerSession::KServerSession(KernelCore& kernel_) : KSynchronizationObject{kernel_} {}
KServerSession::~KServerSession() {
// Ensure that the global list tracking server sessions does not hold on to a reference.
kernel.UnregisterServerSession(this);
}
KServerSession::~KServerSession() = default;
void KServerSession::Initialize(KSession* parent_session_, std::string&& name_,
std::shared_ptr<SessionRequestManager> manager_) {
@@ -49,6 +46,9 @@ void KServerSession::Destroy() {
parent->OnServerClosed();
parent->Close();
// Release host emulation members.
manager.reset();
}
void KServerSession::OnClientClosed() {
@@ -98,7 +98,12 @@ ResultCode KServerSession::HandleDomainSyncRequest(Kernel::HLERequestContext& co
UNREACHABLE();
return ResultSuccess; // Ignore error if asserts are off
}
return manager->DomainHandler(object_id - 1)->HandleSyncRequest(*this, context);
if (auto strong_ptr = manager->DomainHandler(object_id - 1).lock()) {
return strong_ptr->HandleSyncRequest(*this, context);
} else {
UNREACHABLE();
return ResultSuccess;
}
case IPC::DomainMessageHeader::CommandType::CloseVirtualHandle: {
LOG_DEBUG(IPC, "CloseVirtualHandle, object_id=0x{:08X}", object_id);

View File

@@ -16,39 +16,34 @@ class KernelCore;
namespace impl {
class KSlabHeapImpl final {
public:
class KSlabHeapImpl {
YUZU_NON_COPYABLE(KSlabHeapImpl);
YUZU_NON_MOVEABLE(KSlabHeapImpl);
public:
struct Node {
Node* next{};
};
public:
constexpr KSlabHeapImpl() = default;
constexpr ~KSlabHeapImpl() = default;
void Initialize(std::size_t size) {
ASSERT(head == nullptr);
obj_size = size;
}
constexpr std::size_t GetObjectSize() const {
return obj_size;
void Initialize() {
ASSERT(m_head == nullptr);
}
Node* GetHead() const {
return head;
return m_head;
}
void* Allocate() {
Node* ret = head.load();
Node* ret = m_head.load();
do {
if (ret == nullptr) {
break;
}
} while (!head.compare_exchange_weak(ret, ret->next));
} while (!m_head.compare_exchange_weak(ret, ret->next));
return ret;
}
@@ -56,170 +51,157 @@ public:
void Free(void* obj) {
Node* node = static_cast<Node*>(obj);
Node* cur_head = head.load();
Node* cur_head = m_head.load();
do {
node->next = cur_head;
} while (!head.compare_exchange_weak(cur_head, node));
} while (!m_head.compare_exchange_weak(cur_head, node));
}
private:
std::atomic<Node*> head{};
std::size_t obj_size{};
std::atomic<Node*> m_head{};
};
} // namespace impl
class KSlabHeapBase {
public:
template <bool SupportDynamicExpansion>
class KSlabHeapBase : protected impl::KSlabHeapImpl {
YUZU_NON_COPYABLE(KSlabHeapBase);
YUZU_NON_MOVEABLE(KSlabHeapBase);
private:
size_t m_obj_size{};
uintptr_t m_peak{};
uintptr_t m_start{};
uintptr_t m_end{};
private:
void UpdatePeakImpl(uintptr_t obj) {
static_assert(std::atomic_ref<uintptr_t>::is_always_lock_free);
std::atomic_ref<uintptr_t> peak_ref(m_peak);
const uintptr_t alloc_peak = obj + this->GetObjectSize();
uintptr_t cur_peak = m_peak;
do {
if (alloc_peak <= cur_peak) {
break;
}
} while (!peak_ref.compare_exchange_strong(cur_peak, alloc_peak));
}
public:
constexpr KSlabHeapBase() = default;
constexpr ~KSlabHeapBase() = default;
constexpr bool Contains(uintptr_t addr) const {
return start <= addr && addr < end;
bool Contains(uintptr_t address) const {
return m_start <= address && address < m_end;
}
constexpr std::size_t GetSlabHeapSize() const {
return (end - start) / GetObjectSize();
}
constexpr std::size_t GetObjectSize() const {
return impl.GetObjectSize();
}
constexpr uintptr_t GetSlabHeapAddress() const {
return start;
}
std::size_t GetObjectIndexImpl(const void* obj) const {
return (reinterpret_cast<uintptr_t>(obj) - start) / GetObjectSize();
}
std::size_t GetPeakIndex() const {
return GetObjectIndexImpl(reinterpret_cast<const void*>(peak));
}
void* AllocateImpl() {
return impl.Allocate();
}
void FreeImpl(void* obj) {
// Don't allow freeing an object that wasn't allocated from this heap
ASSERT(Contains(reinterpret_cast<uintptr_t>(obj)));
impl.Free(obj);
}
void InitializeImpl(std::size_t obj_size, void* memory, std::size_t memory_size) {
// Ensure we don't initialize a slab using null memory
void Initialize(size_t obj_size, void* memory, size_t memory_size) {
// Ensure we don't initialize a slab using null memory.
ASSERT(memory != nullptr);
// Initialize the base allocator
impl.Initialize(obj_size);
// Set our object size.
m_obj_size = obj_size;
// Set our tracking variables
const std::size_t num_obj = (memory_size / obj_size);
start = reinterpret_cast<uintptr_t>(memory);
end = start + num_obj * obj_size;
peak = start;
// Initialize the base allocator.
KSlabHeapImpl::Initialize();
// Free the objects
u8* cur = reinterpret_cast<u8*>(end);
// Set our tracking variables.
const size_t num_obj = (memory_size / obj_size);
m_start = reinterpret_cast<uintptr_t>(memory);
m_end = m_start + num_obj * obj_size;
m_peak = m_start;
for (std::size_t i{}; i < num_obj; i++) {
// Free the objects.
u8* cur = reinterpret_cast<u8*>(m_end);
for (size_t i = 0; i < num_obj; i++) {
cur -= obj_size;
impl.Free(cur);
KSlabHeapImpl::Free(cur);
}
}
private:
using Impl = impl::KSlabHeapImpl;
size_t GetSlabHeapSize() const {
return (m_end - m_start) / this->GetObjectSize();
}
Impl impl;
uintptr_t peak{};
uintptr_t start{};
uintptr_t end{};
size_t GetObjectSize() const {
return m_obj_size;
}
void* Allocate() {
void* obj = KSlabHeapImpl::Allocate();
return obj;
}
void Free(void* obj) {
// Don't allow freeing an object that wasn't allocated from this heap.
const bool contained = this->Contains(reinterpret_cast<uintptr_t>(obj));
ASSERT(contained);
KSlabHeapImpl::Free(obj);
}
size_t GetObjectIndex(const void* obj) const {
if constexpr (SupportDynamicExpansion) {
if (!this->Contains(reinterpret_cast<uintptr_t>(obj))) {
return std::numeric_limits<size_t>::max();
}
}
return (reinterpret_cast<uintptr_t>(obj) - m_start) / this->GetObjectSize();
}
size_t GetPeakIndex() const {
return this->GetObjectIndex(reinterpret_cast<const void*>(m_peak));
}
uintptr_t GetSlabHeapAddress() const {
return m_start;
}
size_t GetNumRemaining() const {
// Only calculate the number of remaining objects under debug configuration.
return 0;
}
};
template <typename T>
class KSlabHeap final : public KSlabHeapBase {
class KSlabHeap final : public KSlabHeapBase<false> {
private:
using BaseHeap = KSlabHeapBase<false>;
public:
enum class AllocationType {
Host,
Guest,
};
constexpr KSlabHeap() = default;
explicit constexpr KSlabHeap(AllocationType allocation_type_ = AllocationType::Host)
: KSlabHeapBase(), allocation_type{allocation_type_} {}
void Initialize(void* memory, std::size_t memory_size) {
if (allocation_type == AllocationType::Guest) {
InitializeImpl(sizeof(T), memory, memory_size);
}
void Initialize(void* memory, size_t memory_size) {
BaseHeap::Initialize(sizeof(T), memory, memory_size);
}
T* Allocate() {
switch (allocation_type) {
case AllocationType::Host:
// Fallback for cases where we do not yet support allocating guest memory from the slab
// heap, such as for kernel memory regions.
return new T;
T* obj = static_cast<T*>(BaseHeap::Allocate());
case AllocationType::Guest:
T* obj = static_cast<T*>(AllocateImpl());
if (obj != nullptr) {
new (obj) T();
}
return obj;
if (obj != nullptr) [[likely]] {
std::construct_at(obj);
}
UNREACHABLE_MSG("Invalid AllocationType {}", allocation_type);
return nullptr;
return obj;
}
T* AllocateWithKernel(KernelCore& kernel) {
switch (allocation_type) {
case AllocationType::Host:
// Fallback for cases where we do not yet support allocating guest memory from the slab
// heap, such as for kernel memory regions.
return new T(kernel);
T* Allocate(KernelCore& kernel) {
T* obj = static_cast<T*>(BaseHeap::Allocate());
case AllocationType::Guest:
T* obj = static_cast<T*>(AllocateImpl());
if (obj != nullptr) {
new (obj) T(kernel);
}
return obj;
if (obj != nullptr) [[likely]] {
std::construct_at(obj, kernel);
}
UNREACHABLE_MSG("Invalid AllocationType {}", allocation_type);
return nullptr;
return obj;
}
void Free(T* obj) {
switch (allocation_type) {
case AllocationType::Host:
// Fallback for cases where we do not yet support allocating guest memory from the slab
// heap, such as for kernel memory regions.
delete obj;
return;
case AllocationType::Guest:
FreeImpl(obj);
return;
}
UNREACHABLE_MSG("Invalid AllocationType {}", allocation_type);
BaseHeap::Free(obj);
}
constexpr std::size_t GetObjectIndex(const T* obj) const {
return GetObjectIndexImpl(obj);
size_t GetObjectIndex(const T* obj) const {
return BaseHeap::GetObjectIndex(obj);
}
private:
const AllocationType allocation_type;
};
} // namespace Kernel
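Editor's note: KSlabHeapImpl::Allocate/Free above implement a lock-free intrusive free list (a Treiber stack). A minimal standalone sketch of the same pattern follows, independent of the kernel types; FreeList and Node here are illustrative names, not part of the diff.

#include <atomic>
#include <cassert>

// Free objects are linked through a Node embedded at their start; the head
// pointer is swung with compare_exchange, the same shape as the code above.
struct Node {
    Node* next{};
};

class FreeList {
public:
    void* Allocate() {
        Node* ret = m_head.load();
        do {
            if (ret == nullptr) {
                break; // list empty
            }
        } while (!m_head.compare_exchange_weak(ret, ret->next));
        return ret;
    }

    void Free(void* obj) {
        Node* node = static_cast<Node*>(obj);
        Node* cur_head = m_head.load();
        do {
            node->next = cur_head;
        } while (!m_head.compare_exchange_weak(cur_head, node));
    }

private:
    std::atomic<Node*> m_head{};
};

int main() {
    Node storage[4]; // stand-in for the slab's backing memory
    FreeList list;
    for (Node& n : storage) {
        list.Free(&n); // seed the list, as Initialize does for each object
    }
    void* a = list.Allocate();
    void* b = list.Allocate();
    assert(a != nullptr && b != nullptr && a != b);
    list.Free(a);
    list.Free(b);
    return 0;
}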

View File

@@ -14,9 +14,7 @@
#include "common/common_types.h"
#include "common/fiber.h"
#include "common/logging/log.h"
#include "common/scope_exit.h"
#include "common/settings.h"
#include "common/thread_queue_list.h"
#include "core/core.h"
#include "core/cpu_manager.h"
#include "core/hardware_properties.h"
@@ -33,7 +31,6 @@
#include "core/hle/kernel/k_worker_task_manager.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/svc_results.h"
#include "core/hle/kernel/time_manager.h"
#include "core/hle/result.h"
#include "core/memory.h"
@@ -210,7 +207,7 @@ ResultCode KThread::Initialize(KThreadFunction func, uintptr_t arg, VAddr user_s
if (owner != nullptr) {
// Setup the TLS, if needed.
if (type == ThreadType::User) {
tls_address = owner->CreateTLSRegion();
R_TRY(owner->CreateThreadLocalRegion(std::addressof(tls_address)));
}
parent = owner;
@@ -305,7 +302,7 @@ void KThread::Finalize() {
// If the thread has a local region, delete it.
if (tls_address != 0) {
parent->FreeTLSRegion(tls_address);
ASSERT(parent->DeleteThreadLocalRegion(tls_address).IsSuccess());
}
// Release any waiters.
@@ -326,6 +323,9 @@ void KThread::Finalize() {
}
}
// Release host emulation members.
host_context.reset();
// Perform inherited finalization.
KSynchronizationObject::Finalize();
}

View File

@@ -656,7 +656,7 @@ private:
static_assert(sizeof(SyncObjectBuffer::sync_objects) == sizeof(SyncObjectBuffer::handles));
struct ConditionVariableComparator {
struct LightCompareType {
struct RedBlackKeyType {
u64 cv_key{};
s32 priority{};
@@ -672,8 +672,8 @@ private:
template <typename T>
requires(
std::same_as<T, KThread> ||
std::same_as<T, LightCompareType>) static constexpr int Compare(const T& lhs,
const KThread& rhs) {
std::same_as<T, RedBlackKeyType>) static constexpr int Compare(const T& lhs,
const KThread& rhs) {
const u64 l_key = lhs.GetConditionVariableKey();
const u64 r_key = rhs.GetConditionVariableKey();

View File

@@ -0,0 +1,68 @@
// Copyright 2022 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/scope_exit.h"
#include "core/core.h"
#include "core/hle/kernel/k_memory_block.h"
#include "core/hle/kernel/k_page_buffer.h"
#include "core/hle/kernel/k_page_table.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/k_thread_local_page.h"
#include "core/hle/kernel/kernel.h"
namespace Kernel {
ResultCode KThreadLocalPage::Initialize(KernelCore& kernel, KProcess* process) {
// Set that this process owns us.
m_owner = process;
m_kernel = &kernel;
// Allocate a new page.
KPageBuffer* page_buf = KPageBuffer::Allocate(kernel);
R_UNLESS(page_buf != nullptr, ResultOutOfMemory);
auto page_buf_guard = SCOPE_GUARD({ KPageBuffer::Free(kernel, page_buf); });
// Map the address in.
const auto phys_addr = kernel.System().DeviceMemory().GetPhysicalAddr(page_buf);
R_TRY(m_owner->PageTable().MapPages(std::addressof(m_virt_addr), 1, PageSize, phys_addr,
KMemoryState::ThreadLocal,
KMemoryPermission::UserReadWrite));
// We succeeded.
page_buf_guard.Cancel();
return ResultSuccess;
}
ResultCode KThreadLocalPage::Finalize() {
// Get the physical address of the page.
const PAddr phys_addr = m_owner->PageTable().GetPhysicalAddr(m_virt_addr);
ASSERT(phys_addr);
// Unmap the page.
R_TRY(m_owner->PageTable().UnmapPages(this->GetAddress(), 1, KMemoryState::ThreadLocal));
// Free the page.
KPageBuffer::Free(*m_kernel, KPageBuffer::FromPhysicalAddress(m_kernel->System(), phys_addr));
return ResultSuccess;
}
VAddr KThreadLocalPage::Reserve() {
for (size_t i = 0; i < m_is_region_free.size(); i++) {
if (m_is_region_free[i]) {
m_is_region_free[i] = false;
return this->GetRegionAddress(i);
}
}
return 0;
}
void KThreadLocalPage::Release(VAddr addr) {
m_is_region_free[this->GetRegionIndex(addr)] = true;
}
} // namespace Kernel

View File

@@ -0,0 +1,111 @@
// Copyright 2022 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <algorithm>
#include <array>
#include "common/alignment.h"
#include "common/assert.h"
#include "common/common_types.h"
#include "common/intrusive_red_black_tree.h"
#include "core/hle/kernel/memory_types.h"
#include "core/hle/kernel/slab_helpers.h"
#include "core/hle/result.h"
namespace Kernel {
class KernelCore;
class KProcess;
class KThreadLocalPage final : public Common::IntrusiveRedBlackTreeBaseNode<KThreadLocalPage>,
public KSlabAllocated<KThreadLocalPage> {
public:
static constexpr size_t RegionsPerPage = PageSize / Svc::ThreadLocalRegionSize;
static_assert(RegionsPerPage > 0);
public:
constexpr explicit KThreadLocalPage(VAddr addr = {}) : m_virt_addr(addr) {
m_is_region_free.fill(true);
}
constexpr VAddr GetAddress() const {
return m_virt_addr;
}
ResultCode Initialize(KernelCore& kernel, KProcess* process);
ResultCode Finalize();
VAddr Reserve();
void Release(VAddr addr);
bool IsAllUsed() const {
return std::ranges::all_of(m_is_region_free.begin(), m_is_region_free.end(),
[](bool is_free) { return !is_free; });
}
bool IsAllFree() const {
return std::ranges::all_of(m_is_region_free.begin(), m_is_region_free.end(),
[](bool is_free) { return is_free; });
}
bool IsAnyUsed() const {
return !this->IsAllFree();
}
bool IsAnyFree() const {
return !this->IsAllUsed();
}
public:
using RedBlackKeyType = VAddr;
static constexpr RedBlackKeyType GetRedBlackKey(const RedBlackKeyType& v) {
return v;
}
static constexpr RedBlackKeyType GetRedBlackKey(const KThreadLocalPage& v) {
return v.GetAddress();
}
template <typename T>
requires(std::same_as<T, KThreadLocalPage> ||
std::same_as<T, RedBlackKeyType>) static constexpr int Compare(const T& lhs,
const KThreadLocalPage&
rhs) {
const VAddr lval = GetRedBlackKey(lhs);
const VAddr rval = GetRedBlackKey(rhs);
if (lval < rval) {
return -1;
} else if (lval == rval) {
return 0;
} else {
return 1;
}
}
private:
constexpr VAddr GetRegionAddress(size_t i) const {
return this->GetAddress() + i * Svc::ThreadLocalRegionSize;
}
constexpr bool Contains(VAddr addr) const {
return this->GetAddress() <= addr && addr < this->GetAddress() + PageSize;
}
constexpr size_t GetRegionIndex(VAddr addr) const {
ASSERT(Common::IsAligned(addr, Svc::ThreadLocalRegionSize));
ASSERT(this->Contains(addr));
return (addr - this->GetAddress()) / Svc::ThreadLocalRegionSize;
}
private:
VAddr m_virt_addr{};
KProcess* m_owner{};
KernelCore* m_kernel{};
std::array<bool, RegionsPerPage> m_is_region_free{};
};
} // namespace Kernel
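Editor's note: the GetRedBlackKey/Compare pair above lets the intrusive tree be searched by a bare VAddr as well as by a node (as find_key does in DeleteThreadLocalRegion). The same heterogeneous-lookup idea, sketched with a standard container and a transparent comparator; the Page type and ByAddress comparator are illustrative only.

#include <cassert>
#include <cstdint>
#include <set>

using VAddr = std::uint64_t;

struct Page {
    VAddr address{};
};

// Transparent comparator: pages are ordered by base address, and lookups may
// pass either a Page or a bare VAddr, mirroring GetRedBlackKey/Compare.
struct ByAddress {
    using is_transparent = void;
    static VAddr Key(const Page& p) { return p.address; }
    static VAddr Key(VAddr v) { return v; }
    template <typename L, typename R>
    bool operator()(const L& lhs, const R& rhs) const {
        return Key(lhs) < Key(rhs);
    }
};

int main() {
    std::set<Page, ByAddress> pages;
    pages.insert(Page{0x1000});
    pages.insert(Page{0x3000});

    // Find a page by address alone, without constructing a Page.
    assert(pages.find(VAddr{0x3000}) != pages.end());
    assert(pages.find(VAddr{0x2000}) == pages.end());
    return 0;
}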

View File

@@ -22,9 +22,7 @@
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
#include "core/cpu_manager.h"
#include "core/device_memory.h"
#include "core/hardware_properties.h"
#include "core/hle/kernel/init/init_slab_setup.h"
#include "core/hle/kernel/k_client_port.h"
@@ -35,7 +33,6 @@
#include "core/hle/kernel/k_resource_limit.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/k_shared_memory.h"
#include "core/hle/kernel/k_slab_heap.h"
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/k_worker_task_manager.h"
#include "core/hle/kernel/kernel.h"
@@ -52,7 +49,7 @@ namespace Kernel {
struct KernelCore::Impl {
explicit Impl(Core::System& system_, KernelCore& kernel_)
: time_manager{system_}, object_list_container{kernel_},
: time_manager{system_},
service_threads_manager{1, "yuzu:ServiceThreadsManager"}, system{system_} {}
void SetMulticore(bool is_multi) {
@@ -60,6 +57,7 @@ struct KernelCore::Impl {
}
void Initialize(KernelCore& kernel) {
global_object_list_container = std::make_unique<KAutoObjectWithListContainer>(kernel);
global_scheduler_context = std::make_unique<Kernel::GlobalSchedulerContext>(kernel);
global_handle_table = std::make_unique<Kernel::KHandleTable>(kernel);
global_handle_table->Initialize(KHandleTable::MaxTableSize);
@@ -70,14 +68,13 @@ struct KernelCore::Impl {
// Derive the initial memory layout from the emulated board
Init::InitializeSlabResourceCounts(kernel);
KMemoryLayout memory_layout;
DeriveInitialMemoryLayout(memory_layout);
Init::InitializeSlabHeaps(system, memory_layout);
DeriveInitialMemoryLayout();
Init::InitializeSlabHeaps(system, *memory_layout);
// Initialize kernel memory and resources.
InitializeSystemResourceLimit(kernel, system.CoreTiming(), memory_layout);
InitializeMemoryLayout(memory_layout);
InitializePageSlab();
InitializeSystemResourceLimit(kernel, system.CoreTiming());
InitializeMemoryLayout();
Init::InitializeKPageBufferSlabHeap(system);
InitializeSchedulers();
InitializeSuspendThreads();
InitializePreemption(kernel);
@@ -108,19 +105,6 @@ struct KernelCore::Impl {
for (auto* server_port : server_ports_) {
server_port->Close();
}
// Close all open server sessions.
std::unordered_set<KServerSession*> server_sessions_;
{
std::lock_guard lk(server_sessions_lock);
server_sessions_ = server_sessions;
server_sessions.clear();
}
for (auto* server_session : server_sessions_) {
server_session->Close();
}
// Ensure that the object list container is finalized and properly shut down.
object_list_container.Finalize();
// Ensures all service threads gracefully shut down.
ClearServiceThreads();
@@ -195,11 +179,15 @@ struct KernelCore::Impl {
{
std::lock_guard lk(registered_objects_lock);
if (registered_objects.size()) {
LOG_WARNING(Kernel, "{} kernel objects were dangling on shutdown!",
registered_objects.size());
LOG_DEBUG(Kernel, "{} kernel objects were dangling on shutdown!",
registered_objects.size());
registered_objects.clear();
}
}
// Ensure that the object list container is finalized and properly shut down.
global_object_list_container->Finalize();
global_object_list_container.reset();
}
void InitializePhysicalCores() {
@@ -219,12 +207,11 @@ struct KernelCore::Impl {
// Creates the default system resource limit
void InitializeSystemResourceLimit(KernelCore& kernel,
const Core::Timing::CoreTiming& core_timing,
const KMemoryLayout& memory_layout) {
const Core::Timing::CoreTiming& core_timing) {
system_resource_limit = KResourceLimit::Create(system.Kernel());
system_resource_limit->Initialize(&core_timing);
const auto [total_size, kernel_size] = memory_layout.GetTotalAndKernelMemorySizes();
const auto [total_size, kernel_size] = memory_layout->GetTotalAndKernelMemorySizes();
// If setting the default system values fails, then something seriously wrong has occurred.
ASSERT(system_resource_limit->SetLimitValue(LimitableResource::PhysicalMemory, total_size)
@@ -293,15 +280,16 @@ struct KernelCore::Impl {
// Gets the dummy KThread for the caller, allocating a new one if this is the first time
KThread* GetHostDummyThread() {
auto make_thread = [this]() {
KThread* thread = KThread::Create(system.Kernel());
auto initialize = [this](KThread* thread) {
ASSERT(KThread::InitializeDummyThread(thread).IsSuccess());
thread->SetName(fmt::format("DummyThread:{}", GetHostThreadId()));
return thread;
};
thread_local KThread* saved_thread = make_thread();
return saved_thread;
thread_local auto raw_thread = KThread(system.Kernel());
thread_local auto thread = initialize(&raw_thread);
return thread;
}
/// Registers a CPU core thread by allocating a host thread ID for it
@@ -353,16 +341,18 @@ struct KernelCore::Impl {
return schedulers[thread_id]->GetCurrentThread();
}
void DeriveInitialMemoryLayout(KMemoryLayout& memory_layout) {
void DeriveInitialMemoryLayout() {
memory_layout = std::make_unique<KMemoryLayout>();
// Insert the root region for the virtual memory tree, from which all other regions will
// derive.
memory_layout.GetVirtualMemoryRegionTree().InsertDirectly(
memory_layout->GetVirtualMemoryRegionTree().InsertDirectly(
KernelVirtualAddressSpaceBase,
KernelVirtualAddressSpaceBase + KernelVirtualAddressSpaceSize - 1);
// Insert the root region for the physical memory tree, from which all other regions will
// derive.
memory_layout.GetPhysicalMemoryRegionTree().InsertDirectly(
memory_layout->GetPhysicalMemoryRegionTree().InsertDirectly(
KernelPhysicalAddressSpaceBase,
KernelPhysicalAddressSpaceBase + KernelPhysicalAddressSpaceSize - 1);
@@ -379,7 +369,7 @@ struct KernelCore::Impl {
if (!(kernel_region_start + KernelRegionSize - 1 <= KernelVirtualAddressSpaceLast)) {
kernel_region_size = KernelVirtualAddressSpaceEnd - kernel_region_start;
}
ASSERT(memory_layout.GetVirtualMemoryRegionTree().Insert(
ASSERT(memory_layout->GetVirtualMemoryRegionTree().Insert(
kernel_region_start, kernel_region_size, KMemoryRegionType_Kernel));
// Setup the code region.
@@ -388,11 +378,11 @@ struct KernelCore::Impl {
Common::AlignDown(code_start_virt_addr, CodeRegionAlign);
constexpr VAddr code_region_end = Common::AlignUp(code_end_virt_addr, CodeRegionAlign);
constexpr size_t code_region_size = code_region_end - code_region_start;
ASSERT(memory_layout.GetVirtualMemoryRegionTree().Insert(
ASSERT(memory_layout->GetVirtualMemoryRegionTree().Insert(
code_region_start, code_region_size, KMemoryRegionType_KernelCode));
// Setup board-specific device physical regions.
Init::SetupDevicePhysicalMemoryRegions(memory_layout);
Init::SetupDevicePhysicalMemoryRegions(*memory_layout);
// Determine the amount of space needed for the misc region.
size_t misc_region_needed_size;
@@ -401,7 +391,7 @@ struct KernelCore::Impl {
misc_region_needed_size = Core::Hardware::NUM_CPU_CORES * (3 * (PageSize + PageSize));
// Account for each auto-map device.
for (const auto& region : memory_layout.GetPhysicalMemoryRegionTree()) {
for (const auto& region : memory_layout->GetPhysicalMemoryRegionTree()) {
if (region.HasTypeAttribute(KMemoryRegionAttr_ShouldKernelMap)) {
// Check that the region is valid.
ASSERT(region.GetEndAddress() != 0);
@@ -426,22 +416,22 @@ struct KernelCore::Impl {
// Setup the misc region.
const VAddr misc_region_start =
memory_layout.GetVirtualMemoryRegionTree().GetRandomAlignedRegion(
memory_layout->GetVirtualMemoryRegionTree().GetRandomAlignedRegion(
misc_region_size, MiscRegionAlign, KMemoryRegionType_Kernel);
ASSERT(memory_layout.GetVirtualMemoryRegionTree().Insert(
ASSERT(memory_layout->GetVirtualMemoryRegionTree().Insert(
misc_region_start, misc_region_size, KMemoryRegionType_KernelMisc));
// Setup the stack region.
constexpr size_t StackRegionSize = 14_MiB;
constexpr size_t StackRegionAlign = KernelAslrAlignment;
const VAddr stack_region_start =
memory_layout.GetVirtualMemoryRegionTree().GetRandomAlignedRegion(
memory_layout->GetVirtualMemoryRegionTree().GetRandomAlignedRegion(
StackRegionSize, StackRegionAlign, KMemoryRegionType_Kernel);
ASSERT(memory_layout.GetVirtualMemoryRegionTree().Insert(
ASSERT(memory_layout->GetVirtualMemoryRegionTree().Insert(
stack_region_start, StackRegionSize, KMemoryRegionType_KernelStack));
// Determine the size of the resource region.
const size_t resource_region_size = memory_layout.GetResourceRegionSizeForInit();
const size_t resource_region_size = memory_layout->GetResourceRegionSizeForInit();
// Determine the size of the slab region.
const size_t slab_region_size =
@@ -458,23 +448,23 @@ struct KernelCore::Impl {
Common::AlignUp(code_end_phys_addr + slab_region_size, SlabRegionAlign) -
Common::AlignDown(code_end_phys_addr, SlabRegionAlign);
const VAddr slab_region_start =
memory_layout.GetVirtualMemoryRegionTree().GetRandomAlignedRegion(
memory_layout->GetVirtualMemoryRegionTree().GetRandomAlignedRegion(
slab_region_needed_size, SlabRegionAlign, KMemoryRegionType_Kernel) +
(code_end_phys_addr % SlabRegionAlign);
ASSERT(memory_layout.GetVirtualMemoryRegionTree().Insert(
ASSERT(memory_layout->GetVirtualMemoryRegionTree().Insert(
slab_region_start, slab_region_size, KMemoryRegionType_KernelSlab));
// Setup the temp region.
constexpr size_t TempRegionSize = 128_MiB;
constexpr size_t TempRegionAlign = KernelAslrAlignment;
const VAddr temp_region_start =
memory_layout.GetVirtualMemoryRegionTree().GetRandomAlignedRegion(
memory_layout->GetVirtualMemoryRegionTree().GetRandomAlignedRegion(
TempRegionSize, TempRegionAlign, KMemoryRegionType_Kernel);
ASSERT(memory_layout.GetVirtualMemoryRegionTree().Insert(temp_region_start, TempRegionSize,
KMemoryRegionType_KernelTemp));
ASSERT(memory_layout->GetVirtualMemoryRegionTree().Insert(temp_region_start, TempRegionSize,
KMemoryRegionType_KernelTemp));
// Automatically map in devices that have auto-map attributes.
for (auto& region : memory_layout.GetPhysicalMemoryRegionTree()) {
for (auto& region : memory_layout->GetPhysicalMemoryRegionTree()) {
// We only care about kernel regions.
if (!region.IsDerivedFrom(KMemoryRegionType_Kernel)) {
continue;
@@ -501,21 +491,21 @@ struct KernelCore::Impl {
const size_t map_size =
Common::AlignUp(region.GetEndAddress(), PageSize) - map_phys_addr;
const VAddr map_virt_addr =
memory_layout.GetVirtualMemoryRegionTree().GetRandomAlignedRegionWithGuard(
memory_layout->GetVirtualMemoryRegionTree().GetRandomAlignedRegionWithGuard(
map_size, PageSize, KMemoryRegionType_KernelMisc, PageSize);
ASSERT(memory_layout.GetVirtualMemoryRegionTree().Insert(
ASSERT(memory_layout->GetVirtualMemoryRegionTree().Insert(
map_virt_addr, map_size, KMemoryRegionType_KernelMiscMappedDevice));
region.SetPairAddress(map_virt_addr + region.GetAddress() - map_phys_addr);
}
Init::SetupDramPhysicalMemoryRegions(memory_layout);
Init::SetupDramPhysicalMemoryRegions(*memory_layout);
// Insert a physical region for the kernel code region.
ASSERT(memory_layout.GetPhysicalMemoryRegionTree().Insert(
ASSERT(memory_layout->GetPhysicalMemoryRegionTree().Insert(
code_start_phys_addr, code_region_size, KMemoryRegionType_DramKernelCode));
// Insert a physical region for the kernel slab region.
ASSERT(memory_layout.GetPhysicalMemoryRegionTree().Insert(
ASSERT(memory_layout->GetPhysicalMemoryRegionTree().Insert(
slab_start_phys_addr, slab_region_size, KMemoryRegionType_DramKernelSlab));
// Determine size available for kernel page table heaps, requiring > 8 MB.
@@ -524,12 +514,12 @@ struct KernelCore::Impl {
ASSERT(page_table_heap_size / 4_MiB > 2);
// Insert a physical region for the kernel page table heap region
ASSERT(memory_layout.GetPhysicalMemoryRegionTree().Insert(
ASSERT(memory_layout->GetPhysicalMemoryRegionTree().Insert(
slab_end_phys_addr, page_table_heap_size, KMemoryRegionType_DramKernelPtHeap));
// All DRAM regions that we haven't tagged by this point will be mapped under the linear
// mapping. Tag them.
for (auto& region : memory_layout.GetPhysicalMemoryRegionTree()) {
for (auto& region : memory_layout->GetPhysicalMemoryRegionTree()) {
if (region.GetType() == KMemoryRegionType_Dram) {
// Check that the region is valid.
ASSERT(region.GetEndAddress() != 0);
@@ -541,7 +531,7 @@ struct KernelCore::Impl {
// Get the linear region extents.
const auto linear_extents =
memory_layout.GetPhysicalMemoryRegionTree().GetDerivedRegionExtents(
memory_layout->GetPhysicalMemoryRegionTree().GetDerivedRegionExtents(
KMemoryRegionAttr_LinearMapped);
ASSERT(linear_extents.GetEndAddress() != 0);
@@ -553,7 +543,7 @@ struct KernelCore::Impl {
Common::AlignUp(linear_extents.GetEndAddress(), LinearRegionAlign) -
aligned_linear_phys_start;
const VAddr linear_region_start =
memory_layout.GetVirtualMemoryRegionTree().GetRandomAlignedRegionWithGuard(
memory_layout->GetVirtualMemoryRegionTree().GetRandomAlignedRegionWithGuard(
linear_region_size, LinearRegionAlign, KMemoryRegionType_None, LinearRegionAlign);
const u64 linear_region_phys_to_virt_diff = linear_region_start - aligned_linear_phys_start;
@@ -562,7 +552,7 @@ struct KernelCore::Impl {
{
PAddr cur_phys_addr = 0;
u64 cur_size = 0;
for (auto& region : memory_layout.GetPhysicalMemoryRegionTree()) {
for (auto& region : memory_layout->GetPhysicalMemoryRegionTree()) {
if (!region.HasTypeAttribute(KMemoryRegionAttr_LinearMapped)) {
continue;
}
@@ -581,55 +571,49 @@ struct KernelCore::Impl {
const VAddr region_virt_addr =
region.GetAddress() + linear_region_phys_to_virt_diff;
ASSERT(memory_layout.GetVirtualMemoryRegionTree().Insert(
ASSERT(memory_layout->GetVirtualMemoryRegionTree().Insert(
region_virt_addr, region.GetSize(),
GetTypeForVirtualLinearMapping(region.GetType())));
region.SetPairAddress(region_virt_addr);
KMemoryRegion* virt_region =
memory_layout.GetVirtualMemoryRegionTree().FindModifiable(region_virt_addr);
memory_layout->GetVirtualMemoryRegionTree().FindModifiable(region_virt_addr);
ASSERT(virt_region != nullptr);
virt_region->SetPairAddress(region.GetAddress());
}
}
// Insert regions for the initial page table region.
ASSERT(memory_layout.GetPhysicalMemoryRegionTree().Insert(
ASSERT(memory_layout->GetPhysicalMemoryRegionTree().Insert(
resource_end_phys_addr, KernelPageTableHeapSize, KMemoryRegionType_DramKernelInitPt));
ASSERT(memory_layout.GetVirtualMemoryRegionTree().Insert(
ASSERT(memory_layout->GetVirtualMemoryRegionTree().Insert(
resource_end_phys_addr + linear_region_phys_to_virt_diff, KernelPageTableHeapSize,
KMemoryRegionType_VirtualDramKernelInitPt));
// All linear-mapped DRAM regions that we haven't tagged by this point will be allocated to
// some pool partition. Tag them.
for (auto& region : memory_layout.GetPhysicalMemoryRegionTree()) {
for (auto& region : memory_layout->GetPhysicalMemoryRegionTree()) {
if (region.GetType() == (KMemoryRegionType_Dram | KMemoryRegionAttr_LinearMapped)) {
region.SetType(KMemoryRegionType_DramPoolPartition);
}
}
// Setup all other memory regions needed to arrange the pool partitions.
Init::SetupPoolPartitionMemoryRegions(memory_layout);
Init::SetupPoolPartitionMemoryRegions(*memory_layout);
// Cache all linear regions in their own trees for faster access, later.
memory_layout.InitializeLinearMemoryRegionTrees(aligned_linear_phys_start,
linear_region_start);
memory_layout->InitializeLinearMemoryRegionTrees(aligned_linear_phys_start,
linear_region_start);
}
void InitializeMemoryLayout(const KMemoryLayout& memory_layout) {
const auto system_pool = memory_layout.GetKernelSystemPoolRegionPhysicalExtents();
const auto applet_pool = memory_layout.GetKernelAppletPoolRegionPhysicalExtents();
const auto application_pool = memory_layout.GetKernelApplicationPoolRegionPhysicalExtents();
void InitializeMemoryLayout() {
const auto system_pool = memory_layout->GetKernelSystemPoolRegionPhysicalExtents();
// Initialize memory managers
// Initialize the memory manager.
memory_manager = std::make_unique<KMemoryManager>(system);
memory_manager->InitializeManager(KMemoryManager::Pool::Application,
application_pool.GetAddress(),
application_pool.GetEndAddress());
memory_manager->InitializeManager(KMemoryManager::Pool::Applet, applet_pool.GetAddress(),
applet_pool.GetEndAddress());
memory_manager->InitializeManager(KMemoryManager::Pool::System, system_pool.GetAddress(),
system_pool.GetEndAddress());
const auto& management_region = memory_layout->GetPoolManagementRegion();
ASSERT(management_region.GetEndAddress() != 0);
memory_manager->Initialize(management_region.GetAddress(), management_region.GetSize());
// Setup memory regions for emulated processes
// TODO(bunnei): These should not be hardcoded regions initialized within the kernel
@@ -666,22 +650,6 @@ struct KernelCore::Impl {
time_phys_addr, time_size, "Time:SharedMemory");
}
void InitializePageSlab() {
// Allocate slab heaps
user_slab_heap_pages =
std::make_unique<KSlabHeap<Page>>(KSlabHeap<Page>::AllocationType::Guest);
// TODO(ameerj): This should be derived, not hardcoded within the kernel
constexpr u64 user_slab_heap_size{0x3de000};
// Reserve slab heaps
ASSERT(
system_resource_limit->Reserve(LimitableResource::PhysicalMemory, user_slab_heap_size));
// Initialize slab heap
user_slab_heap_pages->Initialize(
system.DeviceMemory().GetPointer(Core::DramMemoryMap::SlabHeapBase),
user_slab_heap_size);
}
KClientPort* CreateNamedServicePort(std::string name) {
auto search = service_interface_factory.find(name);
if (search == service_interface_factory.end()) {
@@ -719,7 +687,6 @@ struct KernelCore::Impl {
}
std::mutex server_ports_lock;
std::mutex server_sessions_lock;
std::mutex registered_objects_lock;
std::mutex registered_in_use_objects_lock;
@@ -743,14 +710,13 @@ struct KernelCore::Impl {
// stores all the objects in place.
std::unique_ptr<KHandleTable> global_handle_table;
KAutoObjectWithListContainer object_list_container;
std::unique_ptr<KAutoObjectWithListContainer> global_object_list_container;
/// Map of named ports managed by the kernel, which can be retrieved using
/// the ConnectToPort SVC.
std::unordered_map<std::string, ServiceInterfaceFactory> service_interface_factory;
NamedPortTable named_ports;
std::unordered_set<KServerPort*> server_ports;
std::unordered_set<KServerSession*> server_sessions;
std::unordered_set<KAutoObject*> registered_objects;
std::unordered_set<KAutoObject*> registered_in_use_objects;
@@ -762,7 +728,6 @@ struct KernelCore::Impl {
// Kernel memory management
std::unique_ptr<KMemoryManager> memory_manager;
std::unique_ptr<KSlabHeap<Page>> user_slab_heap_pages;
// Shared memory for services
Kernel::KSharedMemory* hid_shared_mem{};
@@ -770,6 +735,9 @@ struct KernelCore::Impl {
Kernel::KSharedMemory* irs_shared_mem{};
Kernel::KSharedMemory* time_shared_mem{};
// Memory layout
std::unique_ptr<KMemoryLayout> memory_layout;
// Threads used for services
std::unordered_set<std::shared_ptr<Kernel::ServiceThread>> service_threads;
Common::ThreadWorker service_threads_manager;
@@ -918,11 +886,11 @@ const Core::ExclusiveMonitor& KernelCore::GetExclusiveMonitor() const {
}
KAutoObjectWithListContainer& KernelCore::ObjectListContainer() {
return impl->object_list_container;
return *impl->global_object_list_container;
}
const KAutoObjectWithListContainer& KernelCore::ObjectListContainer() const {
return impl->object_list_container;
return *impl->global_object_list_container;
}
void KernelCore::InvalidateAllInstructionCaches() {
@@ -952,16 +920,6 @@ KClientPort* KernelCore::CreateNamedServicePort(std::string name) {
return impl->CreateNamedServicePort(std::move(name));
}
void KernelCore::RegisterServerSession(KServerSession* server_session) {
std::lock_guard lk(impl->server_sessions_lock);
impl->server_sessions.insert(server_session);
}
void KernelCore::UnregisterServerSession(KServerSession* server_session) {
std::lock_guard lk(impl->server_sessions_lock);
impl->server_sessions.erase(server_session);
}
void KernelCore::RegisterKernelObject(KAutoObject* object) {
std::lock_guard lk(impl->registered_objects_lock);
impl->registered_objects.insert(object);
@@ -1034,14 +992,6 @@ const KMemoryManager& KernelCore::MemoryManager() const {
return *impl->memory_manager;
}
KSlabHeap<Page>& KernelCore::GetUserSlabHeapPages() {
return *impl->user_slab_heap_pages;
}
const KSlabHeap<Page>& KernelCore::GetUserSlabHeapPages() const {
return *impl->user_slab_heap_pages;
}
Kernel::KSharedMemory& KernelCore::GetHidSharedMem() {
return *impl->hid_shared_mem;
}
@@ -1135,6 +1085,10 @@ const KWorkerTaskManager& KernelCore::WorkerTaskManager() const {
return impl->worker_task_manager;
}
const KMemoryLayout& KernelCore::MemoryLayout() const {
return *impl->memory_layout;
}
bool KernelCore::IsPhantomModeForSingleCore() const {
return impl->IsPhantomModeForSingleCore();
}
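
The region-pairing loop near the top of this kernel.cpp diff takes every linear-mapped physical DRAM region, derives its virtual counterpart by adding linear_region_phys_to_virt_diff, inserts that counterpart into the virtual region tree, and records the pairing on both sides via SetPairAddress. The following is only a minimal, self-contained sketch of that pairing idea, with hypothetical types and an assumed offset value; it is not yuzu code:

// Minimal sketch of the phys<->virt pairing performed above; Region, the maps and
// the offset value are hypothetical stand-ins, not the kernel's real types.
#include <cstdint>
#include <map>

struct Region {
    uint64_t address;
    uint64_t size;
    uint64_t pair_address; // the other half of the phys<->virt pairing
};

int main() {
    const uint64_t linear_phys_to_virt_diff = 0xffff'8000'0000'0000ULL; // assumed offset
    std::map<uint64_t, Region> physical_tree;
    std::map<uint64_t, Region> virtual_tree;

    physical_tree[0x8000'0000] = {0x8000'0000, 0x1000'0000, 0};

    for (auto& [phys_addr, region] : physical_tree) {
        const uint64_t virt_addr = region.address + linear_phys_to_virt_diff;
        // Insert the virtual counterpart and let both regions remember each other,
        // mirroring the two SetPairAddress calls in the diff.
        virtual_tree[virt_addr] = {virt_addr, region.size, region.address};
        region.pair_address = virt_addr;
    }
    return 0;
}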


@@ -14,7 +14,6 @@
#include "core/hardware_properties.h"
#include "core/hle/kernel/k_auto_object.h"
#include "core/hle/kernel/k_slab_heap.h"
#include "core/hle/kernel/memory_types.h"
#include "core/hle/kernel/svc_common.h"
namespace Core {
@@ -41,7 +40,9 @@ class KClientSession;
class KEvent;
class KHandleTable;
class KLinkedListNode;
class KMemoryLayout;
class KMemoryManager;
class KPageBuffer;
class KPort;
class KProcess;
class KResourceLimit;
@@ -51,6 +52,7 @@ class KSession;
class KSharedMemory;
class KSharedMemoryInfo;
class KThread;
class KThreadLocalPage;
class KTransferMemory;
class KWorkerTaskManager;
class KWritableEvent;
@@ -193,14 +195,6 @@ public:
/// Opens a port to a service previously registered with RegisterNamedService.
KClientPort* CreateNamedServicePort(std::string name);
/// Registers a server session with the global emulation state, to be freed on shutdown. This is
/// necessary because we do not emulate processes for HLE sessions.
void RegisterServerSession(KServerSession* server_session);
/// Unregisters a server session previously registered with RegisterServerSession when it was
/// destroyed during the current emulation session.
void UnregisterServerSession(KServerSession* server_session);
/// Registers all kernel objects with the global emulation state, this is purely for tracking
/// leaks after emulation has been shutdown.
void RegisterKernelObject(KAutoObject* object);
@@ -238,12 +232,6 @@ public:
/// Gets the virtual memory manager for the kernel.
const KMemoryManager& MemoryManager() const;
/// Gets the slab heap allocated for user space pages.
KSlabHeap<Page>& GetUserSlabHeapPages();
/// Gets the slab heap allocated for user space pages.
const KSlabHeap<Page>& GetUserSlabHeapPages() const;
/// Gets the shared memory object for HID services.
Kernel::KSharedMemory& GetHidSharedMem();
@@ -335,6 +323,10 @@ public:
return slab_heap_container->writeable_event;
} else if constexpr (std::is_same_v<T, KCodeMemory>) {
return slab_heap_container->code_memory;
} else if constexpr (std::is_same_v<T, KPageBuffer>) {
return slab_heap_container->page_buffer;
} else if constexpr (std::is_same_v<T, KThreadLocalPage>) {
return slab_heap_container->thread_local_page;
}
}
@@ -350,6 +342,9 @@ public:
/// Gets the current worker task manager, used for dispatching KThread/KProcess tasks.
const KWorkerTaskManager& WorkerTaskManager() const;
/// Gets the memory layout.
const KMemoryLayout& MemoryLayout() const;
private:
friend class KProcess;
friend class KThread;
@@ -393,6 +388,8 @@ private:
KSlabHeap<KTransferMemory> transfer_memory;
KSlabHeap<KWritableEvent> writeable_event;
KSlabHeap<KCodeMemory> code_memory;
KSlabHeap<KPageBuffer> page_buffer;
KSlabHeap<KThreadLocalPage> thread_local_page;
};
std::unique_ptr<SlabHeapContainer> slab_heap_container;


@@ -49,12 +49,9 @@ ServiceThread::Impl::Impl(KernelCore& kernel, std::size_t num_threads, const std
return;
}
// Allocate a dummy guest thread for this host thread.
kernel.RegisterHostThread();
// Ensure the dummy thread allocated for this host thread is closed on exit.
auto* dummy_thread = kernel.GetCurrentEmuThread();
SCOPE_EXIT({ dummy_thread->Close(); });
while (true) {
std::function<void()> task;


@@ -59,7 +59,7 @@ class KAutoObjectWithSlabHeapAndContainer : public Base {
private:
static Derived* Allocate(KernelCore& kernel) {
return kernel.SlabHeap<Derived>().AllocateWithKernel(kernel);
return kernel.SlabHeap<Derived>().Allocate(kernel);
}
static void Free(KernelCore& kernel, Derived* obj) {


@@ -96,4 +96,6 @@ constexpr inline s32 IdealCoreNoUpdate = -3;
constexpr inline s32 LowestThreadPriority = 63;
constexpr inline s32 HighestThreadPriority = 0;
constexpr inline size_t ThreadLocalRegionSize = 0x200;
} // namespace Kernel::Svc


@@ -16,7 +16,6 @@
#include "core/file_sys/control_metadata.h"
#include "core/file_sys/patch_manager.h"
#include "core/hle/ipc_helpers.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/service/acc/acc.h"
#include "core/hle/service/acc/acc_aa.h"
#include "core/hle/service/acc/acc_su.h"


@@ -7,6 +7,7 @@
#include <array>
#include <optional>
#include "common/common_funcs.h"
#include "common/common_types.h"
#include "common/swap.h"
#include "common/uuid.h"


@@ -980,7 +980,7 @@ private:
LOG_DEBUG(Service_AM, "called");
IPC::RequestParser rp{ctx};
applet->GetBroker().PushNormalDataFromGame(rp.PopIpcInterface<IStorage>());
applet->GetBroker().PushNormalDataFromGame(rp.PopIpcInterface<IStorage>().lock());
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(ResultSuccess);
@@ -1007,7 +1007,7 @@ private:
LOG_DEBUG(Service_AM, "called");
IPC::RequestParser rp{ctx};
applet->GetBroker().PushInteractiveDataFromGame(rp.PopIpcInterface<IStorage>());
applet->GetBroker().PushInteractiveDataFromGame(rp.PopIpcInterface<IStorage>().lock());
ASSERT(applet->IsInitialized());
applet->ExecuteInteractive();


@@ -0,0 +1,100 @@
// Copyright 2022 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/assert.h"
#include "common/logging/log.h"
#include "core/core.h"
#include "core/frontend/applets/mii.h"
#include "core/hle/service/am/am.h"
#include "core/hle/service/am/applets/applet_mii.h"
namespace Service::AM::Applets {
Mii::Mii(Core::System& system_, LibraryAppletMode applet_mode_,
const Core::Frontend::MiiApplet& frontend_)
: Applet{system_, applet_mode_}, frontend{frontend_}, system{system_} {}
Mii::~Mii() = default;
void Mii::Initialize() {
is_complete = false;
const auto storage = broker.PopNormalDataToApplet();
ASSERT(storage != nullptr);
const auto data = storage->GetData();
ASSERT(data.size() == sizeof(MiiAppletInput));
std::memcpy(&input_data, data.data(), sizeof(MiiAppletInput));
}
bool Mii::TransactionComplete() const {
return is_complete;
}
ResultCode Mii::GetStatus() const {
return ResultSuccess;
}
void Mii::ExecuteInteractive() {
UNREACHABLE_MSG("Unexpected interactive applet data!");
}
void Mii::Execute() {
if (is_complete) {
return;
}
const auto callback = [this](const Core::Frontend::MiiParameters& parameters) {
DisplayCompleted(parameters);
};
switch (input_data.applet_mode) {
case MiiAppletMode::ShowMiiEdit: {
Service::Mii::MiiManager manager;
Core::Frontend::MiiParameters params{
.is_editable = false,
.mii_data = input_data.mii_char_info.mii_data,
};
frontend.ShowMii(params, callback);
break;
}
case MiiAppletMode::EditMii: {
Service::Mii::MiiManager manager;
Core::Frontend::MiiParameters params{
.is_editable = true,
.mii_data = input_data.mii_char_info.mii_data,
};
frontend.ShowMii(params, callback);
break;
}
case MiiAppletMode::CreateMii: {
Service::Mii::MiiManager manager;
Core::Frontend::MiiParameters params{
.is_editable = true,
.mii_data = manager.BuildDefault(0),
};
frontend.ShowMii(params, callback);
break;
}
default:
UNIMPLEMENTED_MSG("Unimplemented LibAppletMiiEdit mode={:02X}!", input_data.applet_mode);
}
}
void Mii::DisplayCompleted(const Core::Frontend::MiiParameters& parameters) {
is_complete = true;
std::vector<u8> reply(sizeof(AppletOutputForCharInfoEditing));
output_data = {
.result = ResultSuccess,
.mii_data = parameters.mii_data,
};
std::memcpy(reply.data(), &output_data, sizeof(AppletOutputForCharInfoEditing));
broker.PushNormalDataFromApplet(std::make_shared<IStorage>(system, std::move(reply)));
broker.SignalStateChanged();
}
} // namespace Service::AM::Applets


@@ -0,0 +1,90 @@
// Copyright 2022 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <array>
#include "core/hle/result.h"
#include "core/hle/service/am/applets/applets.h"
#include "core/hle/service/mii/mii_manager.h"
namespace Core {
class System;
}
namespace Service::AM::Applets {
// This is nn::mii::AppletMode
enum class MiiAppletMode : u32 {
ShowMiiEdit = 0,
AppendMii = 1,
AppendMiiImage = 2,
UpdateMiiImage = 3,
CreateMii = 4,
EditMii = 5,
};
struct MiiCharInfo {
Service::Mii::MiiInfo mii_data{};
INSERT_PADDING_BYTES(0x28);
};
static_assert(sizeof(MiiCharInfo) == 0x80, "MiiCharInfo has incorrect size.");
// This is nn::mii::AppletInput
struct MiiAppletInput {
s32 version{};
MiiAppletMode applet_mode{};
u32 special_mii_key_code{};
union {
std::array<Common::UUID, 8> valid_uuid;
MiiCharInfo mii_char_info;
};
Common::UUID used_uuid;
INSERT_PADDING_BYTES(0x64);
};
static_assert(sizeof(MiiAppletInput) == 0x100, "MiiAppletInput has incorrect size.");
// This is nn::mii::AppletOutput
struct MiiAppletOutput {
ResultCode result{ResultSuccess};
s32 index{};
INSERT_PADDING_BYTES(0x18);
};
static_assert(sizeof(MiiAppletOutput) == 0x20, "MiiAppletOutput has incorrect size.");
// This is nn::mii::AppletOutputForCharInfoEditing
struct AppletOutputForCharInfoEditing {
ResultCode result{ResultSuccess};
Service::Mii::MiiInfo mii_data{};
INSERT_PADDING_BYTES(0x24);
};
static_assert(sizeof(AppletOutputForCharInfoEditing) == 0x80,
"AppletOutputForCharInfoEditing has incorrect size.");
class Mii final : public Applet {
public:
explicit Mii(Core::System& system_, LibraryAppletMode applet_mode_,
const Core::Frontend::MiiApplet& frontend_);
~Mii() override;
void Initialize() override;
bool TransactionComplete() const override;
ResultCode GetStatus() const override;
void ExecuteInteractive() override;
void Execute() override;
void DisplayCompleted(const Core::Frontend::MiiParameters& parameters);
private:
const Core::Frontend::MiiApplet& frontend;
MiiAppletInput input_data{};
AppletOutputForCharInfoEditing output_data{};
bool is_complete = false;
Core::System& system;
};
} // namespace Service::AM::Applets


@@ -226,7 +226,7 @@ void SoftwareKeyboard::InitializeForeground() {
ASSERT(work_buffer_storage != nullptr);
if (swkbd_config_common.initial_string_length == 0) {
InitializeFrontendKeyboard();
InitializeFrontendNormalKeyboard();
return;
}
@@ -243,7 +243,7 @@ void SoftwareKeyboard::InitializeForeground() {
LOG_DEBUG(Service_AM, "\nInitial Text: {}", Common::UTF16ToUTF8(initial_text));
InitializeFrontendKeyboard();
InitializeFrontendNormalKeyboard();
}
void SoftwareKeyboard::InitializeBackground(LibraryAppletMode library_applet_mode) {
@@ -480,129 +480,173 @@ void SoftwareKeyboard::ChangeState(SwkbdState state) {
ReplyDefault();
}
void SoftwareKeyboard::InitializeFrontendKeyboard() {
if (is_background) {
const auto& appear_arg = swkbd_calc_arg.appear_arg;
void SoftwareKeyboard::InitializeFrontendNormalKeyboard() {
std::u16string ok_text = Common::UTF16StringFromFixedZeroTerminatedBuffer(
swkbd_config_common.ok_text.data(), swkbd_config_common.ok_text.size());
std::u16string ok_text = Common::UTF16StringFromFixedZeroTerminatedBuffer(
appear_arg.ok_text.data(), appear_arg.ok_text.size());
std::u16string header_text = Common::UTF16StringFromFixedZeroTerminatedBuffer(
swkbd_config_common.header_text.data(), swkbd_config_common.header_text.size());
const u32 max_text_length =
appear_arg.max_text_length > 0 && appear_arg.max_text_length <= DEFAULT_MAX_TEXT_LENGTH
? appear_arg.max_text_length
: DEFAULT_MAX_TEXT_LENGTH;
std::u16string sub_text = Common::UTF16StringFromFixedZeroTerminatedBuffer(
swkbd_config_common.sub_text.data(), swkbd_config_common.sub_text.size());
const u32 min_text_length =
appear_arg.min_text_length <= max_text_length ? appear_arg.min_text_length : 0;
std::u16string guide_text = Common::UTF16StringFromFixedZeroTerminatedBuffer(
swkbd_config_common.guide_text.data(), swkbd_config_common.guide_text.size());
const s32 initial_cursor_position =
current_cursor_position > 0 ? current_cursor_position : 0;
const u32 max_text_length =
swkbd_config_common.max_text_length > 0 &&
swkbd_config_common.max_text_length <= DEFAULT_MAX_TEXT_LENGTH
? swkbd_config_common.max_text_length
: DEFAULT_MAX_TEXT_LENGTH;
const auto text_draw_type =
max_text_length <= 32 ? SwkbdTextDrawType::Line : SwkbdTextDrawType::Box;
const u32 min_text_length = swkbd_config_common.min_text_length <= max_text_length
? swkbd_config_common.min_text_length
: 0;
Core::Frontend::KeyboardInitializeParameters initialize_parameters{
.ok_text{std::move(ok_text)},
.header_text{},
.sub_text{},
.guide_text{},
.initial_text{current_text},
.max_text_length{max_text_length},
.min_text_length{min_text_length},
.initial_cursor_position{initial_cursor_position},
.type{appear_arg.type},
.password_mode{SwkbdPasswordMode::Disabled},
.text_draw_type{text_draw_type},
.key_disable_flags{appear_arg.key_disable_flags},
.use_blur_background{false},
.enable_backspace_button{swkbd_calc_arg.enable_backspace_button},
.enable_return_button{appear_arg.enable_return_button},
.disable_cancel_button{appear_arg.disable_cancel_button},
};
const s32 initial_cursor_position = [this] {
switch (swkbd_config_common.initial_cursor_position) {
case SwkbdInitialCursorPosition::Start:
default:
return 0;
case SwkbdInitialCursorPosition::End:
return static_cast<s32>(initial_text.size());
}
}();
frontend.InitializeKeyboard(
true, std::move(initialize_parameters), {},
[this](SwkbdReplyType reply_type, std::u16string submitted_text, s32 cursor_position) {
SubmitTextInline(reply_type, submitted_text, cursor_position);
});
} else {
std::u16string ok_text = Common::UTF16StringFromFixedZeroTerminatedBuffer(
swkbd_config_common.ok_text.data(), swkbd_config_common.ok_text.size());
const auto text_draw_type = [this, max_text_length] {
switch (swkbd_config_common.text_draw_type) {
case SwkbdTextDrawType::Line:
default:
return max_text_length <= 32 ? SwkbdTextDrawType::Line : SwkbdTextDrawType::Box;
case SwkbdTextDrawType::Box:
case SwkbdTextDrawType::DownloadCode:
return swkbd_config_common.text_draw_type;
}
}();
std::u16string header_text = Common::UTF16StringFromFixedZeroTerminatedBuffer(
swkbd_config_common.header_text.data(), swkbd_config_common.header_text.size());
const auto enable_return_button =
text_draw_type == SwkbdTextDrawType::Box ? swkbd_config_common.enable_return_button : false;
std::u16string sub_text = Common::UTF16StringFromFixedZeroTerminatedBuffer(
swkbd_config_common.sub_text.data(), swkbd_config_common.sub_text.size());
const auto disable_cancel_button = swkbd_applet_version >= SwkbdAppletVersion::Version393227
? swkbd_config_new.disable_cancel_button
: false;
std::u16string guide_text = Common::UTF16StringFromFixedZeroTerminatedBuffer(
swkbd_config_common.guide_text.data(), swkbd_config_common.guide_text.size());
Core::Frontend::KeyboardInitializeParameters initialize_parameters{
.ok_text{std::move(ok_text)},
.header_text{std::move(header_text)},
.sub_text{std::move(sub_text)},
.guide_text{std::move(guide_text)},
.initial_text{initial_text},
.max_text_length{max_text_length},
.min_text_length{min_text_length},
.initial_cursor_position{initial_cursor_position},
.type{swkbd_config_common.type},
.password_mode{swkbd_config_common.password_mode},
.text_draw_type{text_draw_type},
.key_disable_flags{swkbd_config_common.key_disable_flags},
.use_blur_background{swkbd_config_common.use_blur_background},
.enable_backspace_button{true},
.enable_return_button{enable_return_button},
.disable_cancel_button{disable_cancel_button},
};
const u32 max_text_length =
swkbd_config_common.max_text_length > 0 &&
swkbd_config_common.max_text_length <= DEFAULT_MAX_TEXT_LENGTH
? swkbd_config_common.max_text_length
: DEFAULT_MAX_TEXT_LENGTH;
frontend.InitializeKeyboard(
false, std::move(initialize_parameters),
[this](SwkbdResult result, std::u16string submitted_text, bool confirmed) {
SubmitTextNormal(result, submitted_text, confirmed);
},
{});
}
const u32 min_text_length = swkbd_config_common.min_text_length <= max_text_length
? swkbd_config_common.min_text_length
: 0;
void SoftwareKeyboard::InitializeFrontendInlineKeyboard(
Core::Frontend::KeyboardInitializeParameters initialize_parameters) {
frontend.InitializeKeyboard(
true, std::move(initialize_parameters), {},
[this](SwkbdReplyType reply_type, std::u16string submitted_text, s32 cursor_position) {
SubmitTextInline(reply_type, submitted_text, cursor_position);
});
}
const s32 initial_cursor_position = [this] {
switch (swkbd_config_common.initial_cursor_position) {
case SwkbdInitialCursorPosition::Start:
default:
return 0;
case SwkbdInitialCursorPosition::End:
return static_cast<s32>(initial_text.size());
}
}();
void SoftwareKeyboard::InitializeFrontendInlineKeyboardOld() {
const auto& appear_arg = swkbd_calc_arg_old.appear_arg;
const auto text_draw_type = [this, max_text_length] {
switch (swkbd_config_common.text_draw_type) {
case SwkbdTextDrawType::Line:
default:
return max_text_length <= 32 ? SwkbdTextDrawType::Line : SwkbdTextDrawType::Box;
case SwkbdTextDrawType::Box:
case SwkbdTextDrawType::DownloadCode:
return swkbd_config_common.text_draw_type;
}
}();
std::u16string ok_text = Common::UTF16StringFromFixedZeroTerminatedBuffer(
appear_arg.ok_text.data(), appear_arg.ok_text.size());
const auto enable_return_button = text_draw_type == SwkbdTextDrawType::Box
? swkbd_config_common.enable_return_button
: false;
const u32 max_text_length =
appear_arg.max_text_length > 0 && appear_arg.max_text_length <= DEFAULT_MAX_TEXT_LENGTH
? appear_arg.max_text_length
: DEFAULT_MAX_TEXT_LENGTH;
const auto disable_cancel_button = swkbd_applet_version >= SwkbdAppletVersion::Version393227
? swkbd_config_new.disable_cancel_button
: false;
const u32 min_text_length =
appear_arg.min_text_length <= max_text_length ? appear_arg.min_text_length : 0;
Core::Frontend::KeyboardInitializeParameters initialize_parameters{
.ok_text{std::move(ok_text)},
.header_text{std::move(header_text)},
.sub_text{std::move(sub_text)},
.guide_text{std::move(guide_text)},
.initial_text{initial_text},
.max_text_length{max_text_length},
.min_text_length{min_text_length},
.initial_cursor_position{initial_cursor_position},
.type{swkbd_config_common.type},
.password_mode{swkbd_config_common.password_mode},
.text_draw_type{text_draw_type},
.key_disable_flags{swkbd_config_common.key_disable_flags},
.use_blur_background{swkbd_config_common.use_blur_background},
.enable_backspace_button{true},
.enable_return_button{enable_return_button},
.disable_cancel_button{disable_cancel_button},
};
const s32 initial_cursor_position = current_cursor_position > 0 ? current_cursor_position : 0;
frontend.InitializeKeyboard(
false, std::move(initialize_parameters),
[this](SwkbdResult result, std::u16string submitted_text, bool confirmed) {
SubmitTextNormal(result, submitted_text, confirmed);
},
{});
}
const auto text_draw_type =
max_text_length <= 32 ? SwkbdTextDrawType::Line : SwkbdTextDrawType::Box;
Core::Frontend::KeyboardInitializeParameters initialize_parameters{
.ok_text{std::move(ok_text)},
.header_text{},
.sub_text{},
.guide_text{},
.initial_text{current_text},
.max_text_length{max_text_length},
.min_text_length{min_text_length},
.initial_cursor_position{initial_cursor_position},
.type{appear_arg.type},
.password_mode{SwkbdPasswordMode::Disabled},
.text_draw_type{text_draw_type},
.key_disable_flags{appear_arg.key_disable_flags},
.use_blur_background{false},
.enable_backspace_button{swkbd_calc_arg_old.enable_backspace_button},
.enable_return_button{appear_arg.enable_return_button},
.disable_cancel_button{appear_arg.disable_cancel_button},
};
InitializeFrontendInlineKeyboard(std::move(initialize_parameters));
}
void SoftwareKeyboard::InitializeFrontendInlineKeyboardNew() {
const auto& appear_arg = swkbd_calc_arg_new.appear_arg;
std::u16string ok_text = Common::UTF16StringFromFixedZeroTerminatedBuffer(
appear_arg.ok_text.data(), appear_arg.ok_text.size());
const u32 max_text_length =
appear_arg.max_text_length > 0 && appear_arg.max_text_length <= DEFAULT_MAX_TEXT_LENGTH
? appear_arg.max_text_length
: DEFAULT_MAX_TEXT_LENGTH;
const u32 min_text_length =
appear_arg.min_text_length <= max_text_length ? appear_arg.min_text_length : 0;
const s32 initial_cursor_position = current_cursor_position > 0 ? current_cursor_position : 0;
const auto text_draw_type =
max_text_length <= 32 ? SwkbdTextDrawType::Line : SwkbdTextDrawType::Box;
Core::Frontend::KeyboardInitializeParameters initialize_parameters{
.ok_text{std::move(ok_text)},
.header_text{},
.sub_text{},
.guide_text{},
.initial_text{current_text},
.max_text_length{max_text_length},
.min_text_length{min_text_length},
.initial_cursor_position{initial_cursor_position},
.type{appear_arg.type},
.password_mode{SwkbdPasswordMode::Disabled},
.text_draw_type{text_draw_type},
.key_disable_flags{appear_arg.key_disable_flags},
.use_blur_background{false},
.enable_backspace_button{swkbd_calc_arg_new.enable_backspace_button},
.enable_return_button{appear_arg.enable_return_button},
.disable_cancel_button{appear_arg.disable_cancel_button},
};
InitializeFrontendInlineKeyboard(std::move(initialize_parameters));
}
void SoftwareKeyboard::ShowNormalKeyboard() {
@@ -614,14 +658,21 @@ void SoftwareKeyboard::ShowTextCheckDialog(SwkbdTextCheckResult text_check_resul
frontend.ShowTextCheckDialog(text_check_result, std::move(text_check_message));
}
void SoftwareKeyboard::ShowInlineKeyboard() {
void SoftwareKeyboard::ShowInlineKeyboard(
Core::Frontend::InlineAppearParameters appear_parameters) {
frontend.ShowInlineKeyboard(std::move(appear_parameters));
ChangeState(SwkbdState::InitializedIsShown);
}
void SoftwareKeyboard::ShowInlineKeyboardOld() {
if (swkbd_state != SwkbdState::InitializedIsHidden) {
return;
}
ChangeState(SwkbdState::InitializedIsAppearing);
const auto& appear_arg = swkbd_calc_arg.appear_arg;
const auto& appear_arg = swkbd_calc_arg_old.appear_arg;
const u32 max_text_length =
appear_arg.max_text_length > 0 && appear_arg.max_text_length <= DEFAULT_MAX_TEXT_LENGTH
@@ -634,21 +685,54 @@ void SoftwareKeyboard::ShowInlineKeyboard() {
Core::Frontend::InlineAppearParameters appear_parameters{
.max_text_length{max_text_length},
.min_text_length{min_text_length},
.key_top_scale_x{swkbd_calc_arg.key_top_scale_x},
.key_top_scale_y{swkbd_calc_arg.key_top_scale_y},
.key_top_translate_x{swkbd_calc_arg.key_top_translate_x},
.key_top_translate_y{swkbd_calc_arg.key_top_translate_y},
.key_top_scale_x{swkbd_calc_arg_old.key_top_scale_x},
.key_top_scale_y{swkbd_calc_arg_old.key_top_scale_y},
.key_top_translate_x{swkbd_calc_arg_old.key_top_translate_x},
.key_top_translate_y{swkbd_calc_arg_old.key_top_translate_y},
.type{appear_arg.type},
.key_disable_flags{appear_arg.key_disable_flags},
.key_top_as_floating{swkbd_calc_arg.key_top_as_floating},
.enable_backspace_button{swkbd_calc_arg.enable_backspace_button},
.key_top_as_floating{swkbd_calc_arg_old.key_top_as_floating},
.enable_backspace_button{swkbd_calc_arg_old.enable_backspace_button},
.enable_return_button{appear_arg.enable_return_button},
.disable_cancel_button{appear_arg.disable_cancel_button},
};
frontend.ShowInlineKeyboard(std::move(appear_parameters));
ShowInlineKeyboard(std::move(appear_parameters));
}
ChangeState(SwkbdState::InitializedIsShown);
void SoftwareKeyboard::ShowInlineKeyboardNew() {
if (swkbd_state != SwkbdState::InitializedIsHidden) {
return;
}
ChangeState(SwkbdState::InitializedIsAppearing);
const auto& appear_arg = swkbd_calc_arg_new.appear_arg;
const u32 max_text_length =
appear_arg.max_text_length > 0 && appear_arg.max_text_length <= DEFAULT_MAX_TEXT_LENGTH
? appear_arg.max_text_length
: DEFAULT_MAX_TEXT_LENGTH;
const u32 min_text_length =
appear_arg.min_text_length <= max_text_length ? appear_arg.min_text_length : 0;
Core::Frontend::InlineAppearParameters appear_parameters{
.max_text_length{max_text_length},
.min_text_length{min_text_length},
.key_top_scale_x{swkbd_calc_arg_new.key_top_scale_x},
.key_top_scale_y{swkbd_calc_arg_new.key_top_scale_y},
.key_top_translate_x{swkbd_calc_arg_new.key_top_translate_x},
.key_top_translate_y{swkbd_calc_arg_new.key_top_translate_y},
.type{appear_arg.type},
.key_disable_flags{appear_arg.key_disable_flags},
.key_top_as_floating{swkbd_calc_arg_new.key_top_as_floating},
.enable_backspace_button{swkbd_calc_arg_new.enable_backspace_button},
.enable_return_button{appear_arg.enable_return_button},
.disable_cancel_button{appear_arg.disable_cancel_button},
};
ShowInlineKeyboard(std::move(appear_parameters));
}
void SoftwareKeyboard::HideInlineKeyboard() {
@@ -693,6 +777,8 @@ void SoftwareKeyboard::RequestFinalize(const std::vector<u8>& request_data) {
void SoftwareKeyboard::RequestSetUserWordInfo(const std::vector<u8>& request_data) {
LOG_WARNING(Service_AM, "SetUserWordInfo is not implemented.");
ReplyReleasedUserWordInfo();
}
void SoftwareKeyboard::RequestSetCustomizeDic(const std::vector<u8>& request_data) {
@@ -702,53 +788,135 @@ void SoftwareKeyboard::RequestSetCustomizeDic(const std::vector<u8>& request_dat
void SoftwareKeyboard::RequestCalc(const std::vector<u8>& request_data) {
LOG_DEBUG(Service_AM, "Processing Request: Calc");
ASSERT(request_data.size() == sizeof(SwkbdRequestCommand) + sizeof(SwkbdCalcArg));
ASSERT(request_data.size() >= sizeof(SwkbdRequestCommand) + sizeof(SwkbdCalcArgCommon));
std::memcpy(&swkbd_calc_arg, request_data.data() + sizeof(SwkbdRequestCommand),
sizeof(SwkbdCalcArg));
std::memcpy(&swkbd_calc_arg_common, request_data.data() + sizeof(SwkbdRequestCommand),
sizeof(SwkbdCalcArgCommon));
if (swkbd_calc_arg.flags.set_input_text) {
switch (swkbd_calc_arg_common.calc_arg_size) {
case sizeof(SwkbdCalcArgCommon) + sizeof(SwkbdCalcArgOld):
ASSERT(request_data.size() ==
sizeof(SwkbdRequestCommand) + sizeof(SwkbdCalcArgCommon) + sizeof(SwkbdCalcArgOld));
std::memcpy(&swkbd_calc_arg_old,
request_data.data() + sizeof(SwkbdRequestCommand) + sizeof(SwkbdCalcArgCommon),
sizeof(SwkbdCalcArgOld));
RequestCalcOld();
break;
case sizeof(SwkbdCalcArgCommon) + sizeof(SwkbdCalcArgNew):
ASSERT(request_data.size() ==
sizeof(SwkbdRequestCommand) + sizeof(SwkbdCalcArgCommon) + sizeof(SwkbdCalcArgNew));
std::memcpy(&swkbd_calc_arg_new,
request_data.data() + sizeof(SwkbdRequestCommand) + sizeof(SwkbdCalcArgCommon),
sizeof(SwkbdCalcArgNew));
RequestCalcNew();
break;
default:
UNIMPLEMENTED_MSG("Unknown SwkbdCalcArg size={}", swkbd_calc_arg_common.calc_arg_size);
ASSERT(request_data.size() >=
sizeof(SwkbdRequestCommand) + sizeof(SwkbdCalcArgCommon) + sizeof(SwkbdCalcArgNew));
std::memcpy(&swkbd_calc_arg_new,
request_data.data() + sizeof(SwkbdRequestCommand) + sizeof(SwkbdCalcArgCommon),
sizeof(SwkbdCalcArgNew));
RequestCalcNew();
break;
}
}
void SoftwareKeyboard::RequestCalcOld() {
if (swkbd_calc_arg_common.flags.set_input_text) {
current_text = Common::UTF16StringFromFixedZeroTerminatedBuffer(
swkbd_calc_arg.input_text.data(), swkbd_calc_arg.input_text.size());
swkbd_calc_arg_old.input_text.data(), swkbd_calc_arg_old.input_text.size());
}
if (swkbd_calc_arg.flags.set_cursor_position) {
current_cursor_position = swkbd_calc_arg.cursor_position;
if (swkbd_calc_arg_common.flags.set_cursor_position) {
current_cursor_position = swkbd_calc_arg_old.cursor_position;
}
if (swkbd_calc_arg.flags.set_utf8_mode) {
inline_use_utf8 = swkbd_calc_arg.utf8_mode;
if (swkbd_calc_arg_common.flags.set_utf8_mode) {
inline_use_utf8 = swkbd_calc_arg_old.utf8_mode;
}
if (swkbd_state <= SwkbdState::InitializedIsHidden &&
swkbd_calc_arg.flags.unset_customize_dic) {
swkbd_calc_arg_common.flags.unset_customize_dic) {
ReplyUnsetCustomizeDic();
}
if (swkbd_state <= SwkbdState::InitializedIsHidden &&
swkbd_calc_arg.flags.unset_user_word_info) {
swkbd_calc_arg_common.flags.unset_user_word_info) {
ReplyReleasedUserWordInfo();
}
if (swkbd_state == SwkbdState::NotInitialized && swkbd_calc_arg.flags.set_initialize_arg) {
InitializeFrontendKeyboard();
if (swkbd_state == SwkbdState::NotInitialized &&
swkbd_calc_arg_common.flags.set_initialize_arg) {
InitializeFrontendInlineKeyboardOld();
ChangeState(SwkbdState::InitializedIsHidden);
ReplyFinishedInitialize();
}
if (!swkbd_calc_arg.flags.set_initialize_arg &&
(swkbd_calc_arg.flags.set_input_text || swkbd_calc_arg.flags.set_cursor_position)) {
if (!swkbd_calc_arg_common.flags.set_initialize_arg &&
(swkbd_calc_arg_common.flags.set_input_text ||
swkbd_calc_arg_common.flags.set_cursor_position)) {
InlineTextChanged();
}
if (swkbd_state == SwkbdState::InitializedIsHidden && swkbd_calc_arg.flags.appear) {
ShowInlineKeyboard();
if (swkbd_state == SwkbdState::InitializedIsHidden && swkbd_calc_arg_common.flags.appear) {
ShowInlineKeyboardOld();
return;
}
if (swkbd_state == SwkbdState::InitializedIsShown && swkbd_calc_arg.flags.disappear) {
if (swkbd_state == SwkbdState::InitializedIsShown && swkbd_calc_arg_common.flags.disappear) {
HideInlineKeyboard();
return;
}
}
void SoftwareKeyboard::RequestCalcNew() {
if (swkbd_calc_arg_common.flags.set_input_text) {
current_text = Common::UTF16StringFromFixedZeroTerminatedBuffer(
swkbd_calc_arg_new.input_text.data(), swkbd_calc_arg_new.input_text.size());
}
if (swkbd_calc_arg_common.flags.set_cursor_position) {
current_cursor_position = swkbd_calc_arg_new.cursor_position;
}
if (swkbd_calc_arg_common.flags.set_utf8_mode) {
inline_use_utf8 = swkbd_calc_arg_new.utf8_mode;
}
if (swkbd_state <= SwkbdState::InitializedIsHidden &&
swkbd_calc_arg_common.flags.unset_customize_dic) {
ReplyUnsetCustomizeDic();
}
if (swkbd_state <= SwkbdState::InitializedIsHidden &&
swkbd_calc_arg_common.flags.unset_user_word_info) {
ReplyReleasedUserWordInfo();
}
if (swkbd_state == SwkbdState::NotInitialized &&
swkbd_calc_arg_common.flags.set_initialize_arg) {
InitializeFrontendInlineKeyboardNew();
ChangeState(SwkbdState::InitializedIsHidden);
ReplyFinishedInitialize();
}
if (!swkbd_calc_arg_common.flags.set_initialize_arg &&
(swkbd_calc_arg_common.flags.set_input_text ||
swkbd_calc_arg_common.flags.set_cursor_position)) {
InlineTextChanged();
}
if (swkbd_state == SwkbdState::InitializedIsHidden && swkbd_calc_arg_common.flags.appear) {
ShowInlineKeyboardNew();
return;
}
if (swkbd_state == SwkbdState::InitializedIsShown && swkbd_calc_arg_common.flags.disappear) {
HideInlineKeyboard();
return;
}
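
RequestCalc() above now copies the fixed SwkbdCalcArgCommon header first and then switches on its calc_arg_size field to decide whether the remaining bytes follow the old or the new CalcArg layout, falling back to the newest known layout for unrecognized sizes. Below is a small, self-contained sketch of that header-then-versioned-payload pattern; the struct and function names are hypothetical stand-ins rather than the real swkbd types:

#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical stand-ins: a fixed header carries the total argument size, which
// selects how the trailing payload bytes are interpreted.
struct CommonHeader {
    uint32_t unknown;
    uint16_t total_size;
    uint16_t padding;
};
struct PayloadOld {
    uint32_t cursor_position;
};
struct PayloadNew {
    uint32_t cursor_position;
    uint32_t extra_field;
};

void ParseRequest(const std::vector<uint8_t>& data) {
    assert(data.size() >= sizeof(CommonHeader));
    CommonHeader header{};
    std::memcpy(&header, data.data(), sizeof(header));

    switch (header.total_size) {
    case sizeof(CommonHeader) + sizeof(PayloadOld): {
        assert(data.size() == sizeof(CommonHeader) + sizeof(PayloadOld));
        PayloadOld old_arg{};
        std::memcpy(&old_arg, data.data() + sizeof(header), sizeof(old_arg));
        // ... handle the old layout ...
        break;
    }
    case sizeof(CommonHeader) + sizeof(PayloadNew): {
        assert(data.size() == sizeof(CommonHeader) + sizeof(PayloadNew));
        PayloadNew new_arg{};
        std::memcpy(&new_arg, data.data() + sizeof(header), sizeof(new_arg));
        // ... handle the new layout ...
        break;
    }
    default: {
        // Unknown (likely future) size: read the newest layout we understand,
        // mirroring the fallback branch in RequestCalc() above.
        assert(data.size() >= sizeof(CommonHeader) + sizeof(PayloadNew));
        PayloadNew new_arg{};
        std::memcpy(&new_arg, data.data() + sizeof(header), sizeof(new_arg));
        break;
    }
    }
}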


@@ -13,6 +13,11 @@ namespace Core {
class System;
}
namespace Core::Frontend {
struct KeyboardInitializeParameters;
struct InlineAppearParameters;
} // namespace Core::Frontend
namespace Service::AM::Applets {
class SoftwareKeyboard final : public Applet {
@@ -78,13 +83,22 @@ private:
void ChangeState(SwkbdState state);
/**
* Signals the frontend to initialize the software keyboard with common parameters.
* This initializes either the normal software keyboard or the inline software keyboard
* depending on the state of is_background.
* Signals the frontend to initialize the normal software keyboard with common parameters.
* Note that this does not cause the keyboard to appear.
* Use the respective Show*Keyboard() functions to cause the respective keyboards to appear.
* Use the ShowNormalKeyboard() function to cause the keyboard to appear.
*/
void InitializeFrontendKeyboard();
void InitializeFrontendNormalKeyboard();
/**
* Signals the frontend to initialize the inline software keyboard with common parameters.
* Note that this does not cause the keyboard to appear.
* Use the ShowInlineKeyboard() function to cause the keyboard to appear.
*/
void InitializeFrontendInlineKeyboard(
Core::Frontend::KeyboardInitializeParameters initialize_parameters);
void InitializeFrontendInlineKeyboardOld();
void InitializeFrontendInlineKeyboardNew();
/// Signals the frontend to show the normal software keyboard.
void ShowNormalKeyboard();
@@ -94,7 +108,10 @@ private:
std::u16string text_check_message);
/// Signals the frontend to show the inline software keyboard.
void ShowInlineKeyboard();
void ShowInlineKeyboard(Core::Frontend::InlineAppearParameters appear_parameters);
void ShowInlineKeyboardOld();
void ShowInlineKeyboardNew();
/// Signals the frontend to hide the inline software keyboard.
void HideInlineKeyboard();
@@ -111,6 +128,8 @@ private:
void RequestSetUserWordInfo(const std::vector<u8>& request_data);
void RequestSetCustomizeDic(const std::vector<u8>& request_data);
void RequestCalc(const std::vector<u8>& request_data);
void RequestCalcOld();
void RequestCalcNew();
void RequestSetCustomizedDictionaries(const std::vector<u8>& request_data);
void RequestUnsetCustomizedDictionaries(const std::vector<u8>& request_data);
void RequestSetChangedStringV2Flag(const std::vector<u8>& request_data);
@@ -149,7 +168,9 @@ private:
SwkbdState swkbd_state{SwkbdState::NotInitialized};
SwkbdInitializeArg swkbd_initialize_arg;
SwkbdCalcArg swkbd_calc_arg;
SwkbdCalcArgCommon swkbd_calc_arg_common;
SwkbdCalcArgOld swkbd_calc_arg_old;
SwkbdCalcArgNew swkbd_calc_arg_new;
bool use_changed_string_v2{false};
bool use_moved_cursor_v2{false};
bool inline_use_utf8{false};


@@ -10,6 +10,7 @@
#include "common/common_funcs.h"
#include "common/common_types.h"
#include "common/swap.h"
#include "common/uuid.h"
namespace Service::AM::Applets {
@@ -216,7 +217,7 @@ struct SwkbdInitializeArg {
};
static_assert(sizeof(SwkbdInitializeArg) == 0x8, "SwkbdInitializeArg has incorrect size.");
struct SwkbdAppearArg {
struct SwkbdAppearArgOld {
SwkbdType type{};
std::array<char16_t, MAX_OK_TEXT_LENGTH + 1> ok_text{};
char16_t left_optional_symbol_key{};
@@ -229,19 +230,46 @@ struct SwkbdAppearArg {
bool enable_return_button{};
INSERT_PADDING_BYTES(3);
u32 flags{};
INSERT_PADDING_WORDS(6);
bool is_use_save_data{};
INSERT_PADDING_BYTES(7);
Common::UUID user_id{};
};
static_assert(sizeof(SwkbdAppearArg) == 0x48, "SwkbdAppearArg has incorrect size.");
static_assert(sizeof(SwkbdAppearArgOld) == 0x48, "SwkbdAppearArgOld has incorrect size.");
struct SwkbdCalcArg {
struct SwkbdAppearArgNew {
SwkbdType type{};
std::array<char16_t, MAX_OK_TEXT_LENGTH + 1> ok_text{};
char16_t left_optional_symbol_key{};
char16_t right_optional_symbol_key{};
bool use_prediction{};
bool disable_cancel_button{};
SwkbdKeyDisableFlags key_disable_flags{};
u32 max_text_length{};
u32 min_text_length{};
bool enable_return_button{};
INSERT_PADDING_BYTES(3);
u32 flags{};
bool is_use_save_data{};
INSERT_PADDING_BYTES(7);
Common::UUID user_id{};
u64 start_sampling_number{};
INSERT_PADDING_WORDS(8);
};
static_assert(sizeof(SwkbdAppearArgNew) == 0x70, "SwkbdAppearArgNew has incorrect size.");
struct SwkbdCalcArgCommon {
u32 unknown{};
u16 calc_arg_size{};
INSERT_PADDING_BYTES(2);
SwkbdCalcArgFlags flags{};
SwkbdInitializeArg initialize_arg{};
};
static_assert(sizeof(SwkbdCalcArgCommon) == 0x18, "SwkbdCalcArgCommon has incorrect size.");
struct SwkbdCalcArgOld {
f32 volume{};
s32 cursor_position{};
SwkbdAppearArg appear_arg{};
SwkbdAppearArgOld appear_arg{};
std::array<char16_t, 0x1FA> input_text{};
bool utf8_mode{};
INSERT_PADDING_BYTES(1);
@@ -265,7 +293,39 @@ struct SwkbdCalcArg {
u8 se_group{};
INSERT_PADDING_BYTES(3);
};
static_assert(sizeof(SwkbdCalcArg) == 0x4A0, "SwkbdCalcArg has incorrect size.");
static_assert(sizeof(SwkbdCalcArgOld) == 0x4A0 - sizeof(SwkbdCalcArgCommon),
"SwkbdCalcArgOld has incorrect size.");
struct SwkbdCalcArgNew {
SwkbdAppearArgNew appear_arg{};
f32 volume{};
s32 cursor_position{};
std::array<char16_t, 0x1FA> input_text{};
bool utf8_mode{};
INSERT_PADDING_BYTES(1);
bool enable_backspace_button{};
INSERT_PADDING_BYTES(3);
bool key_top_as_floating{};
bool footer_scalable{};
bool alpha_enabled_in_input_mode{};
u8 input_mode_fade_type{};
bool disable_touch{};
bool disable_hardware_keyboard{};
INSERT_PADDING_BYTES(8);
f32 key_top_scale_x{};
f32 key_top_scale_y{};
f32 key_top_translate_x{};
f32 key_top_translate_y{};
f32 key_top_bg_alpha{};
f32 footer_bg_alpha{};
f32 balloon_scale{};
INSERT_PADDING_WORDS(4);
u8 se_group{};
INSERT_PADDING_BYTES(3);
INSERT_PADDING_WORDS(8);
};
static_assert(sizeof(SwkbdCalcArgNew) == 0x4E8 - sizeof(SwkbdCalcArgCommon),
"SwkbdCalcArgNew has incorrect size.");
struct SwkbdChangedStringArg {
u32 text_length{};

Some files were not shown because too many files have changed in this diff.