Compare commits

...

196 Commits

Author SHA1 Message Date
yuzubot
db6a3d0326 "Merge Tagged PR 1340" 2021-01-18 13:02:55 +00:00
Rodrigo Locatti
2ef4591e58 Merge pull request #5746 from lioncash/sign-compare
texture_cache/util: Resolve -Wsign-compare warning
2021-01-18 03:49:58 -03:00
LC
f1b58f0cd9 Merge pull request #5754 from lat9nq/fix-disable-boxcat
configure_service: Only compile FormatEventStatusString when YUZU_ENABLE_BOXCAT is enabled
2021-01-17 23:52:47 -05:00
LC
dd0679d710 Merge pull request #5757 from Morph1984/npad-handheld
npad: Add check for HANDHELD_INDEX in UpdateControllerAt()
2021-01-17 23:51:30 -05:00
Morph
4a67a5b917 npad: Add check for HANDHELD_INDEX in UpdateControllerAt() 2021-01-17 22:36:17 -05:00
lat9nq
fb796843df configure_service: Only compile FormatEventStatusString when YUZU_ENABLE_BOXCAT is enabled
The function is unused if YUZU_ENABLE_BOXCAT is disabled, causing a
-Wunused-function error when compiled.

Wrapping it with `#ifdef YUZU_ENABLE_BOXCAT` to prevent compiling the
function when the variable is disabled. Opting to not use
[[maybe_unused]] in case the function is totally unused in the future.
2021-01-17 17:54:29 -05:00
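A minimal sketch of the pattern this commit describes (the real function lives in yuzu's configure_service module; the body here is invented for illustration):

    // Compile the helper only when the feature that calls it is enabled,
    // so -Wunused-function cannot fire on boxcat-less builds.
    #ifdef YUZU_ENABLE_BOXCAT
    namespace {
    const char* FormatEventStatusString(int status) {
        return status == 0 ? "No event" : "Event available"; // illustrative body
    }
    } // namespace
    #endif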
bunnei
e8401964b4 Merge pull request #5360 from ReinUsesLisp/enforce-memclass-access
core: Silence Wclass-memaccess warnings and enforce it
2021-01-17 00:55:10 -08:00
Rodrigo Locatti
132f2006af Merge pull request #5745 from lioncash/documentation
video_core: Resolve -Wdocumentation warnings
2021-01-17 05:37:17 -03:00
bunnei
e1ecf64701 Merge pull request #5744 from lioncash/header-guard
vulkan_debug_callback: Add missing header guard
2021-01-17 00:16:12 -08:00
Lioncash
5f4e7c77bd texture_cache/util: Resolve -Wsign-compare warning
Resolves a -Wsign-compare warning on Clang.
2021-01-17 02:47:48 -05:00
Lioncash
40acc2c079 video_core: Resolve -Wdocumentation warnings
Silences some -Wdocumentation warnings on Clang.
2021-01-17 02:44:21 -05:00
Lioncash
c61b973968 vulkan_debug_callback: Add missing header guard
Prevents inclusion issues from occurring.
2021-01-17 02:39:24 -05:00
Rodrigo Locatti
0e0fc07135 Merge pull request #5740 from lioncash/const-fn
input_interpreter: Mark two member functions as const
2021-01-16 20:02:02 -03:00
Rodrigo Locatti
fd873fd369 Merge pull request #5262 from ReinUsesLisp/buffer-base
buffer_cache/buffer_base: Add a range tracking buffer container and tests
2021-01-16 19:48:26 -03:00
Lioncash
ca9afa3293 input_interpreter: Mark two member functions as const
These aren't stateful functions, so we can make use of const.

While we're at, we can resolve some -Wdocumentation warnings.
2021-01-16 16:08:35 -05:00
bunnei
ff2b7cc0d3 Merge pull request #5366 from Morph1984/button-press
input_interpreter: Add method to check for a button press state
2021-01-16 13:00:54 -08:00
Morph
3c8f936b31 input_interpreter: Add method to check for a button press state
This allows checking for continuous input for the duration of a button press/hold.
2021-01-16 10:34:39 -05:00
bunnei
a7fd61fcce Merge pull request #5275 from FernandoS27/fast-native-clock
X86/NativeClock: Improve performance of clock calculations on hot path.
2021-01-15 23:01:42 -08:00
bunnei
8def504d73 Merge pull request #5336 from lioncash/tree
common/tree: Convert defines over to templates
2021-01-15 21:46:25 -08:00
Rodrigo Locatti
c17ee0da5d Merge pull request #5297 from ReinUsesLisp/vulkan-allocator-common
vulkan_memory_allocator: Improvements to the memory allocator
2021-01-15 21:50:05 -03:00
LC
8be9e5b48b Merge pull request #5358 from ReinUsesLisp/rename-insert-padding
common/common_funcs: Rename INSERT_UNION_PADDING_{BYTES,WORDS} to _NOINIT
2021-01-15 16:19:46 -05:00
ReinUsesLisp
5f517e3e16 core/cmake: Enforce Wclass-memaccess
Treat -Wclass-memaccess as an error.
2021-01-15 16:31:19 -03:00
ReinUsesLisp
f8650a9580 core: Silence Wclass-memaccess warnings
This requires making several types trivial and properly initializing
them wherever they are constructed.
2021-01-15 16:31:19 -03:00
ReinUsesLisp
3ff978aa4f common/common_funcs: Rename INSERT_UNION_PADDING_{BYTES,WORDS} to _NOINIT
INSERT_PADDING_BYTES_NOINIT is more descriptive of the underlying behavior.
2021-01-15 16:27:28 -03:00
ReinUsesLisp
301e2b5b7a vulkan_memory_allocator: Remove unnecessary 'device' memory from commits 2021-01-15 16:19:40 -03:00
ReinUsesLisp
432f045dba vk_texture_cache: Use Download memory types for texture flushes
Use the Download memory type where it matters.
2021-01-15 16:19:40 -03:00
ReinUsesLisp
8f22f5470c vulkan_memory_allocator: Add allocation support for download types
Implements the allocator logic to handle download memory types. This
will try to use HOST_CACHED_BIT when available.
2021-01-15 16:19:39 -03:00
ReinUsesLisp
72541af3bc vulkan_memory_allocator: Add "download" memory usage hint
Allow users of the allocator to hint memory usage for downloads. This
removes the non-descriptive boolean passed for "host visible" or not
host visible memory commits, and uses an enum to hint device local,
upload and download usages.
2021-01-15 16:19:39 -03:00
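A sketch of what such a hint can look like; the enum values and the call shape are assumptions for illustration, not the exact yuzu API:

    // Replaces a bare "host visible or not" boolean with a descriptive hint.
    enum class MemoryUsage {
        DeviceLocal, // GPU-only memory, fastest for rendering
        Upload,      // CPU-to-GPU streaming writes
        Download,    // GPU-to-CPU readbacks, prefers HOST_CACHED_BIT
    };

    // Hypothetical call site: commit = allocator.Commit(requirements, MemoryUsage::Download);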
ReinUsesLisp
fade63b58e vulkan_common: Move allocator to the common directory
Allow using the abstraction from the OpenGL backend.
2021-01-15 16:19:39 -03:00
ReinUsesLisp
c2b550987b renderer_vulkan: Rename Vulkan memory manager to memory allocator
"Memory manager" collides with the guest GPU memory manager, and a
memory allocator sounds closer to what the abstraction aims to be.
2021-01-15 16:19:39 -03:00
ReinUsesLisp
e996f1ad09 vk_memory_manager: Improve memory manager and its API
Fix a bug where the memory allocator could leave gaps between commits.
To fix this the allocation algorithm was reworked, although it's still
short in number of lines of code.

Rework the allocation API to self-contained movable objects instead of
naively using a unique_ptr to do the job for us. Remove the VK prefix.
2021-01-15 16:19:36 -03:00
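The "self-contained movable objects" idea, reduced to its core (illustrative names; assumes the allocator reclaims [begin, end) on release):

    #include <cstdint>
    #include <utility>

    class MemoryCommit {
    public:
        MemoryCommit() = default;
        MemoryCommit(const MemoryCommit&) = delete;
        MemoryCommit& operator=(const MemoryCommit&) = delete;
        MemoryCommit(MemoryCommit&& rhs) noexcept
            : begin{std::exchange(rhs.begin, 0)}, end{std::exchange(rhs.end, 0)} {}
        MemoryCommit& operator=(MemoryCommit&& rhs) noexcept {
            Release();
            begin = std::exchange(rhs.begin, 0);
            end = std::exchange(rhs.end, 0);
            return *this;
        }
        ~MemoryCommit() { Release(); }

    private:
        void Release() { /* return [begin, end) to the owning allocation */ }
        std::uint64_t begin = 0;
        std::uint64_t end = 0;
    };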
bunnei
f728a504aa Merge pull request #5355 from lioncash/timer
common/timer: Remove
2021-01-15 09:42:33 -08:00
LC
9754a8145c Merge pull request #5357 from ReinUsesLisp/alignment-log2
common/alignment: Rename AlignBits to AlignUpLog2 and use constraints
2021-01-15 03:12:36 -05:00
Rodrigo Locatti
5b9aedfc21 Merge pull request #5356 from lioncash/clz
common/bit_util: Replace CLZ/CTZ operations with standardized ones
2021-01-15 04:48:58 -03:00
Lioncash
8620de6b20 common/bit_util: Replace CLZ/CTZ operations with standardized ones
Makes for less code that we need to maintain.
2021-01-15 02:15:32 -05:00
ReinUsesLisp
89c15dd115 common/alignment: Upgrade to use constraints instead of static asserts 2021-01-15 04:13:39 -03:00
ReinUsesLisp
fe494a0ccd common/alignment: Rename AlignBits to AlignUpLog2
AlignUpLog2 describes what the function does better than AlignBits.
2021-01-15 04:13:33 -03:00
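The renamed function rounds up to a power-of-two boundary whose exponent is given directly; a quick usage check against the implementation shown in the alignment.h diff further down:

    #include <cstddef>
    #include <cstdint>

    template <typename T>
    constexpr T AlignUpLog2(T value, std::size_t align_log2) {
        return static_cast<T>((value + ((1ULL << align_log2) - 1)) >> align_log2 << align_log2);
    }

    static_assert(AlignUpLog2<std::uint32_t>(0x123, 12) == 0x1000);  // round up to a 4 KiB page
    static_assert(AlignUpLog2<std::uint32_t>(0x1000, 12) == 0x1000); // already aligned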
Lioncash
91084d9396 common/timer: Remove
This is a leftover from citra and dolphin that isn't used at all,
particularly given the <chrono> header exists.
2021-01-15 01:55:33 -05:00
LC
c8bf0caca0 Merge pull request #5354 from ReinUsesLisp/remove-common-color
common/color: Remove
2021-01-15 01:54:22 -05:00
LC
6676687694 Merge pull request #5352 from ReinUsesLisp/remove-tester
cmake: Remove yuzu_tester
2021-01-15 01:48:02 -05:00
ReinUsesLisp
95fa57f007 common/color: Remove
This is a leftover from Citra we no longer use.
2021-01-15 03:47:43 -03:00
LC
7f37822c74 Merge pull request #5353 from ReinUsesLisp/deduplicate-warning-flags
{video_,}core/cmake: Remove Werror flags already defined code-base wide
2021-01-15 01:45:01 -05:00
ReinUsesLisp
fb99446f24 core/cmake: Remove Werror flags already defined code-base wide 2021-01-15 03:39:24 -03:00
ReinUsesLisp
cc2c3e447f video_core/cmake: Remove Werror flags already defined code-base wide
These flags are already defined in src/cmake.
2021-01-15 03:37:34 -03:00
LC
28e78d81b2 Merge pull request #5351 from ReinUsesLisp/vc-unused-functions
cmake: Enforce -Wunused-function code-base wide
2021-01-15 01:36:51 -05:00
Rodrigo Locatti
185388f341 Merge pull request #5350 from ReinUsesLisp/vk-init-warns
vulkan_common: Silence missing initializer warnings
2021-01-15 03:32:01 -03:00
LC
76b465f3ef Merge pull request #5349 from ReinUsesLisp/anv-fix
vulkan_device: Enable shaderStorageImageMultisample conditionally
2021-01-15 01:17:00 -05:00
ReinUsesLisp
af540b0057 cmake: Remove yuzu_tester
We never ended up using yuzu_tester.
Removing it avoids code duplication with yuzu_cmd and reduces the
distribution size of prebuilt packages.

For unit testing, we can use catch2 from guest code and dump the results
to a file. Then execute yuzu from a script on ci if we want this to be
automated.
2021-01-15 03:14:44 -03:00
ReinUsesLisp
06e0506cb3 cmake: Enforce -Wunused-function code-base wide 2021-01-15 03:09:48 -03:00
ReinUsesLisp
71264ce9a7 video_core: Enforce -Wunused-function
Stops us from merging code with unused functions in the future.

If something is invoked behind conditionally evaluated code in
a way that the language can't see it (e.g. preprocessor macros), the
potentially unused function should use [[maybe_unused]].
2021-01-15 02:59:25 -03:00
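An illustration of that guidance (hypothetical helper; not from the yuzu tree):

    // Referenced only behind a preprocessor macro, which the compiler
    // cannot see through; [[maybe_unused]] keeps -Wunused-function
    // quiet on the configurations that never call it.
    [[maybe_unused]] static int DebugChecksum(const unsigned char* data, int size) {
        int sum = 0;
        for (int i = 0; i < size; ++i) {
            sum += data[i];
        }
        return sum;
    }

    #ifdef ENABLE_DEBUG_CHECKS
    #define CHECKSUM(ptr, size) DebugChecksum((ptr), (size))
    #else
    #define CHECKSUM(ptr, size) 0
    #endif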
LC
6dc1d48fd1 Merge pull request #5348 from ReinUsesLisp/astc-robustness
astc: Make the decoder more robust to invalid data
2021-01-15 00:59:10 -05:00
ReinUsesLisp
3e03391a49 vk_buffer_cache: Remove unused function 2021-01-15 02:58:55 -03:00
ReinUsesLisp
be8fd5490e vulkan_common: Silence missing initializer warnings
Silence warnings explicitly initializing all members on construction.
2021-01-15 02:55:11 -03:00
ReinUsesLisp
ba2ea7eeac vulkan_device: Enable shaderStorageImageMultisample conditionally
Fix Vulkan initialization on ANV.
2021-01-15 02:47:05 -03:00
ReinUsesLisp
22be115eb2 astc: Increase integer encoded vector size
Invalid ASTC textures seem to write more bytes here; increase the size
to something that can't make us push out of bounds.
2021-01-15 02:24:36 -03:00
ReinUsesLisp
0ec71b78fb astc: Return zero on out of bound bits
Avoid out of bound reads on invalid ASTC textures.
Games can bind invalid textures that make us read or write out of bounds.
2021-01-15 02:24:36 -03:00
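A sketch of the defensive pattern these two commits describe (hypothetical bit reader; the actual ASTC decoder is considerably more involved):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Yields zero instead of reading past the end of the stream, so a
    // malformed texture cannot drive reads out of bounds.
    std::uint32_t ReadBits(const std::vector<std::uint8_t>& data, std::size_t bit_offset,
                           std::size_t num_bits) {
        if (bit_offset + num_bits > data.size() * 8) {
            return 0;
        }
        std::uint32_t result = 0;
        for (std::size_t i = 0; i < num_bits; ++i) {
            const std::size_t bit = bit_offset + i;
            result |= static_cast<std::uint32_t>((data[bit / 8] >> (bit % 8)) & 1) << i;
        }
        return result;
    }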
LC
93f7719eed Merge pull request #5302 from lat9nq/appimage-update
ci/linux: Make Mainline AppImages updateable
2021-01-14 18:46:27 -05:00
bunnei
4038ca2e5d Merge pull request #5345 from lioncash/unused-var
yuzu: Remove unused variables in Qt code
2021-01-14 01:23:50 -08:00
Lioncash
e11e1dcf2d yuzu: Remove unused variables in Qt code
Removes two unused variables in our Qt code. In this case the removal of
these two results in fewer allocations, given std::map allocates on the
heap.
2021-01-14 03:05:41 -05:00
Morph
f1e278c30f Merge pull request #5343 from lioncash/qt6
configure_motion_touch: Migrate off QRegExp to QRegularExpression
2021-01-14 15:30:26 +08:00
Morph
980973d83e Merge pull request #5344 from lioncash/move
configure_motion_touch: Prevent use after move in ApplyConfiguration()
2021-01-14 15:24:03 +08:00
Lioncash
45aee996c1 configure_motion_touch: Prevent use after move in ApplyConfiguration()
touch_engine was being compared against after being moved into the
setter for the engine, so this comparison wouldn't behave properly.
2021-01-13 22:37:40 -05:00
Lioncash
a2952ac213 configure_motion_touch: Migrate off QRegExp to QRegularExpression
QRegularExpression was introduced in Qt 5 as a better replacement for
QRegExp. In Qt 6.0 QRegExp is removed entirely.

To remain forward compatible with Qt 6.0, we can transition over to
using QRegularExpression.
2021-01-13 22:25:52 -05:00
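The typical shape of this migration, sketched (note QRegularExpression patterns are unanchored by default, unlike QRegExp::exactMatch):

    #include <QRegularExpression>
    #include <QString>

    bool IsAllDigits(const QString& input) {
        // Qt 4/5 style, removed in Qt 6:
        //   return QRegExp(QStringLiteral("\\d+")).exactMatch(input);
        const QRegularExpression re(
            QRegularExpression::anchoredPattern(QStringLiteral("\\d+")));
        return re.match(input).hasMatch();
    }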
LC
5e35c69f35 Merge pull request #5330 from german77/regexerror
Fix IP validator error
2021-01-13 22:08:42 -05:00
bunnei
2c2ef9252f Merge pull request #5342 from lioncash/qt6
yuzu: Migrate off of setMargin() to setContentsMargins()
2021-01-13 10:44:21 -08:00
german
06cf705501 Fix IP validator error where the last octet produced an error if the value was higher than 199 2021-01-13 11:02:28 -06:00
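The class of bug: an octet pattern that only accepted values up to 199. An illustrative corrected pattern (not necessarily the exact expression the commit uses) covers 250-255, 200-249, and 0-199 as separate alternatives:

    #include <QRegularExpression>
    #include <QString>

    // Matches a single IPv4 octet in the range 0-255.
    const QRegularExpression octet(
        QStringLiteral("^(25[0-5]|2[0-4]\\d|[01]?\\d\\d?)$"));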
Lioncash
0d7de7c2db yuzu: Migrate off of setMargin() to setContentsMargins()
setMargin() has been deprecated since Qt 5, and replaced with
setContentsMargins(). We can move over to setContentsMargins() to stay
forward-compatible with Qt 6.0.
2021-01-13 07:29:59 -05:00
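Mechanically, the migration looks like this (illustrative widget code):

    #include <QVBoxLayout>

    void ConfigureLayout(QVBoxLayout* layout) {
        // Deprecated since Qt 5, removed in Qt 6:
        //   layout->setMargin(6);
        layout->setContentsMargins(6, 6, 6, 6); // left, top, right, bottom
    }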
Morph
baff865d7c Merge pull request #5341 from ReinUsesLisp/anv-storage
vulkan_device: Remove requirement on shaderStorageImageMultisample
2021-01-13 18:00:52 +08:00
ReinUsesLisp
d9a15a935b vulkan_device: Remove requirement on shaderStorageImageMultisample
yuzu doesn't currently emulate MS image stores. Requiring this makes no
sense for now. Fixes ANV not booting any games on Vulkan.
2021-01-13 06:21:33 -03:00
ReinUsesLisp
7bd603061c tests: Add unit tests for the GPU range tracking buffer container
Due to how error prone the container design is, this commit adds unit
tests for it.

Some tests taken from here are based on bugs from using this buffer
container in games, so if we ever break it in the future in a way that
might harm games, the tests should fail.
2021-01-13 04:31:40 -03:00
LC
c320da3f63 Merge pull request #5340 from Morph1984/gcc-warnings
cmake: Enforce -Werror=switch and -Werror=unused-variable
2021-01-13 02:30:31 -05:00
ReinUsesLisp
a4bfae1b55 buffer_cache/buffer_base: Add a range tracking buffer container
It keeps track of the modified CPU and GPU ranges on a CPU page
granularity, notifying the given rasterizer about state changes
in the tracking behavior of the buffer.

Use a small vector optimization to store buffers smaller than 256 KiB
locally instead of using free store memory allocations.
2021-01-13 04:14:58 -03:00
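A heavily reduced sketch of page-granularity modification tracking (illustrative; the real container also tracks GPU state, notifies the rasterizer, and keeps small buffers in local storage rather than a heap vector):

    #include <cstddef>
    #include <vector>

    class DirtyPages {
    public:
        static constexpr std::size_t PAGE_BITS = 12; // 4 KiB CPU pages

        explicit DirtyPages(std::size_t size_bytes)
            : pages((size_bytes >> PAGE_BITS) + 1, false) {}

        // Mark [offset, offset + size) as modified from the CPU; size > 0 assumed.
        void MarkCpuWrite(std::size_t offset, std::size_t size) {
            for (std::size_t page = offset >> PAGE_BITS;
                 page <= (offset + size - 1) >> PAGE_BITS; ++page) {
                pages[page] = true;
            }
        }

        bool IsRegionDirty(std::size_t offset, std::size_t size) const {
            for (std::size_t page = offset >> PAGE_BITS;
                 page <= (offset + size - 1) >> PAGE_BITS; ++page) {
                if (pages[page]) {
                    return true;
                }
            }
            return false;
        }

    private:
        std::vector<bool> pages;
    };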
Morph
2b98da2ed4 cmake: Enforce -Werror=switch and -Werror=unused-variable 2021-01-13 01:57:18 -05:00
bunnei
0fb19e9bef Merge pull request #5280 from FearlessTobi/port-5666
Port citra-emu/citra#5666: "Rotate previous log file to "citra_log.txt.old""
2021-01-12 22:16:57 -08:00
bunnei
de1a316369 Merge pull request #5311 from ReinUsesLisp/fence-wait
vk_fence_manager: Use timeline semaphores instead of spin waits
2021-01-12 21:00:05 -08:00
Lioncash
b15e1a3501 common/tree: Convert defines over to templates
Reworks the tree header to operate off of templates as opposed to a
series of defines.

This allows all tree facilities to obey namespacing rules, and also
allows this code to be used within modules once compiler support is in
place.

This also gets rid of the need to use a macro to define functions and
structs for the necessary data types. With templates, these will be
generated when they're actually used, eliminating the need for the
separate declarations.
2021-01-12 16:46:36 -05:00
Lioncash
197b5d19bc common/tree: Remove unused splay tree defines
Makes for less code to take care of.
2021-01-12 02:32:41 -05:00
bunnei
99d2d77062 Merge pull request #5333 from lioncash/define
common/parent_of_member: Replace TYPED_STORAGE define with template alias
2021-01-11 20:47:30 -08:00
Lioncash
703c57a119 common/parent_of_member: Replace TYPED_STORAGE define with template alias
Provides the same construct, but makes it obey namespacing.
2021-01-11 18:26:04 -05:00
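One way such an alias can look (a sketch; the definition actually used in parent_of_member.h may differ):

    #include <type_traits>

    namespace Common {
    // Uninitialized, suitably aligned storage for a T; usable wherever the
    // old TYPED_STORAGE(T) macro appeared, but it obeys namespacing.
    template <typename T>
    using TypedStorage = std::aligned_storage_t<sizeof(T), alignof(T)>;
    } // namespace Common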
bunnei
eb3cb54aa5 Merge pull request #5266 from bunnei/kernel-synch
Rewrite KSynchronizationObject, KConditionVariable, and KAddressArbiter
2021-01-11 14:36:26 -08:00
bunnei
03dfc8d8e7 hle: kernel: thread: Preserve thread wait reason for debugging only.
- This is decoupled from core functionality and used for debugging only.
2021-01-11 14:23:17 -08:00
bunnei
81c1bfafea yuzu: debugger: wait_tree: Handle unknown ThreadState. 2021-01-11 14:23:16 -08:00
bunnei
6b2f653143 hle: kernel: k_scheduler_lock: Fix shadowing errors. 2021-01-11 14:23:16 -08:00
bunnei
354130cd84 core: arm: arm_interface: Fix shadowing errors. 2021-01-11 14:23:16 -08:00
bunnei
82f6037ec2 core: hle: Add missing calls to MicroProfileOnThreadExit. 2021-01-11 14:23:16 -08:00
bunnei
912dd50146 core: hle: Integrate new KConditionVariable and KAddressArbiter implementations. 2021-01-11 14:23:16 -08:00
bunnei
952d1ac487 core: hle: kernel: Update KAddressArbiter. 2021-01-11 14:23:16 -08:00
bunnei
b4e6d6c385 core: hle: kernel: Update KConditionVariable. 2021-01-11 14:23:16 -08:00
bunnei
1212fa60b6 core: hle: kernel: Begin moving common SVC definitions to its own header. 2021-01-11 14:23:16 -08:00
bunnei
8a155c4058 hle: kernel: Remove unnecessary AddressArbiter definition. 2021-01-11 14:23:16 -08:00
bunnei
92d5c63f01 common: common_funcs: Add R_UNLESS macro. 2021-01-11 14:23:16 -08:00
bunnei
f12701b303 hle: kernel: k_scheduler: Cleanup OnThreadPriorityChanged. 2021-01-11 14:23:16 -08:00
bunnei
d1309fb275 hle: kernel: Rename thread "status" to "state". 2021-01-11 14:23:16 -08:00
bunnei
c3c43e32fc hle: kernel: thread: Replace ThreadStatus/ThreadSchedStatus with a single ThreadState.
- This is how the real kernel works, and is more accurate and simpler.
2021-01-11 14:23:16 -08:00
bunnei
7420a717e6 core: hle: kernel: Add some useful functions for checking kernel addresses. 2021-01-11 14:23:16 -08:00
bunnei
4bbf173fc1 core: hle: kernel: svc_types: Add type definitions for KAddressArbiter. 2021-01-11 14:23:16 -08:00
bunnei
fb43b8efd2 common: Introduce useful tree structures. 2021-01-11 14:23:16 -08:00
bunnei
35c3c078e3 core: hle: kernel: Update KSynchronizationObject. 2021-01-11 14:23:16 -08:00
bunnei
1ae883435d core: hle: kernel: Begin moving common SVC results to its own header. 2021-01-11 14:23:16 -08:00
bunnei
8fc6e92ef1 hle: service: nfp: Remove incorrect signaling behavior in GetDeviceState. 2021-01-11 14:23:16 -08:00
bunnei
46cd71d1c7 Merge pull request #5229 from Morph1984/fullscreen-opt
yuzu/main: Add basic command line arguments
2021-01-10 18:53:04 -08:00
LC
5e161b2531 Merge pull request #5324 from Morph1984/docked-default
config: Enable docked mode by default
2021-01-10 20:51:33 -05:00
bunnei
32df83e55d Merge pull request #5312 from german77/overclockenabled
apm: Stub IsCpuOverclockEnabled
2021-01-10 14:30:13 -08:00
Morph
05f58144c9 config: Enable docked mode by default 2021-01-10 09:37:38 -05:00
bunnei
fe9588f4a0 Merge pull request #5323 from Morph1984/enforce-c4101
cmake: Enforce C4101
2021-01-09 22:49:58 -08:00
Morph
25724898d0 cmake: Enforce C4101
This matches GCC's -Wunused-variable
2021-01-10 01:16:25 -05:00
Morph
e07540264d yuzu_cmd: Silence unreferenced local variable warning 2021-01-10 01:10:36 -05:00
LC
0f932d30f5 Merge pull request #5320 from ReinUsesLisp/div-ceil-type
common/div_ceil: Return numerator type
2021-01-09 16:45:29 -05:00
LC
64a24f3344 Merge pull request #5322 from Morph1984/resolve-c4062-msvc
general: Resolve C4062 warnings on MSVC
2021-01-09 16:43:47 -05:00
Morph
4aae21e1e4 general: Resolve C4062 warnings on MSVC 2021-01-09 14:46:35 -05:00
ReinUsesLisp
c190586597 common/div_ceil: Return numerator type
Fixes instances where DivCeil(u32, u64) would surprisingly return u64,
instead of the more natural u32.
2021-01-09 03:16:10 -03:00
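In essence, the return type now follows the numerator; a sketch consistent with the description:

    #include <cstdint>
    #include <type_traits>

    template <typename N, typename D>
    constexpr N DivCeil(N number, D divisor) {
        return static_cast<N>((static_cast<D>(number) + divisor - 1) / divisor);
    }

    static_assert(std::is_same_v<decltype(DivCeil(std::uint32_t{10}, std::uint64_t{4})),
                                 std::uint32_t>); // u32 in, u32 out
    static_assert(DivCeil(std::uint32_t{10}, std::uint64_t{4}) == 3);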
Rodrigo Locatti
7bad1974a6 Merge pull request #5319 from ReinUsesLisp/msvc-warnings
cmake: Enforce C4062, C4265, C4388, and C5038
2021-01-09 03:13:25 -03:00
ReinUsesLisp
d7128845c9 cmake: Enforce C4062, C4265, C4388, and C5038
This should match some warnings we treat as errors on gcc and clang,
catching bugs early and reducing the number of instances where we have to
edit commits to make CI happy when developing from Windows.
2021-01-09 02:19:17 -03:00
ReinUsesLisp
c68d0dc851 file_sys/registered_cache: Silence virtual functions without override warnings 2021-01-09 00:04:12 -03:00
ReinUsesLisp
b4451c5e81 core: Silence unhandled enum in switch warnings 2021-01-08 23:21:07 -03:00
ReinUsesLisp
613b3671b7 tests/ring_buffer: Silence signed/unsigned mismatch warnings 2021-01-08 23:14:38 -03:00
bunnei
8eea7c1176 Merge pull request #5231 from ReinUsesLisp/dyn-bindings
renderer_vulkan/fixed_pipeline_state: Move enabled bindings to static state
2021-01-08 12:24:46 -08:00
german
385a4555d5 Stub IsCpuOverclockEnabled 2021-01-08 09:44:56 -06:00
bunnei
61f707d708 Merge pull request #5300 from JeremyStarTM/patch-1
Removed MacOS build link
2021-01-08 00:02:12 -08:00
ReinUsesLisp
154a7653f9 vk_fence_manager: Use timeline semaphores instead of spin waits
With timeline semaphores we can avoid creating objects. Instead of
creating an event, grab the current tick from the scheduler and flush
the current command buffer. When the fence has to be queried/waited, we
can do so against the master semaphore instead of spinning on an event.

If Vulkan supported NVN like events or fences, we could signal from the
command buffer and wait for that without splitting things in two
separate command buffers.
2021-01-08 02:47:28 -03:00
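With timeline semaphores (core in Vulkan 1.2, previously VK_KHR_timeline_semaphore), querying and waiting on a fence tick can look roughly like this (sketch; error handling omitted):

    #include <cstdint>
    #include <vulkan/vulkan.h>

    // Has the work submitted with signal value `tick` completed?
    bool IsSignaled(VkDevice device, VkSemaphore timeline, std::uint64_t tick) {
        std::uint64_t counter = 0;
        vkGetSemaphoreCounterValue(device, timeline, &counter);
        return counter >= tick;
    }

    // Block until it completes, instead of spinning on an event.
    void Wait(VkDevice device, VkSemaphore timeline, std::uint64_t tick) {
        VkSemaphoreWaitInfo wait_info{};
        wait_info.sType = VK_STRUCTURE_TYPE_SEMAPHORE_WAIT_INFO;
        wait_info.semaphoreCount = 1;
        wait_info.pSemaphores = &timeline;
        wait_info.pValues = &tick;
        vkWaitSemaphores(device, &wait_info, UINT64_MAX);
    }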
bunnei
c72571055b Merge pull request #5310 from lat9nq/fix-disable-web-service
CMakeLists: Disable YUZU_ENABLE_BOXCAT if ENABLE_WEB_SERVICE is disabled
2021-01-07 17:10:34 -08:00
lat9nq
78be397723 CMakeLists: Disable YUZU_ENABLE_BOXCAT if ENABLE_WEB_SERVICE is disabled
Boxcat is a web service but is still enabled if ENABLE_WEB_SERVICE is
disabled during the CMake stage, which causes compilation issues with
either missing headers or missing libraries.

This disables YUZU_ENABLE_BOXCAT regardless of the input if
ENABLE_WEB_SERVICE is disabled.
2021-01-07 17:28:15 -05:00
bunnei
aaf9e39f56 Merge pull request #5237 from ameerj/nvdec-syncpt
nvdec: Incorporate syncpoint manager
2021-01-07 12:42:28 -08:00
Ameer J
16392a23cc remove inaccurate reference
Co-authored-by: LC <mathew1800@gmail.com>
2021-01-07 14:33:45 -05:00
ameerj
06cef3355e fix for nvdec disabled, cleanup host1x 2021-01-07 14:33:45 -05:00
ameerj
2c27127d04 nvdec syncpt incorporation
Lays the groundwork for async GPU, although this does not fully implement async nvdec operations.
2021-01-07 14:33:45 -05:00
Morph
bcb702fa3e Merge pull request #5306 from MerryMage/ignore-library-Open
vulkan_library: Common::DynamicLibrary::Open is [[nodiscard]]
2021-01-08 01:44:18 +08:00
MerryMage
21199cb965 vulkan_library: Common::DynamicLibrary::Open is [[nodiscard]]
Ignore the return value on __APPLE__ systems as well
2021-01-07 17:37:47 +00:00
Morph
123568ef80 Merge pull request #5305 from MerryMage/page_shift
texture_cache: Replace PAGE_SHIFT with PAGE_BITS
2021-01-08 00:55:34 +08:00
MerryMage
aace20afc7 texture_cache: Replace PAGE_SHIFT with PAGE_BITS
PAGE_SHIFT is a #define in system headers that leaks into user code on some systems
2021-01-07 16:51:34 +00:00
lat9nq
0d24b1a31b ci/linux: Make Mainline AppImages updateable
Moves the final step for building the AppImage to the upload script.
Instructs appimagetool to embed update information into the AppImage if
the release target is Mainline. Also tells it to create a zsync file to
enable partial-downloads when updating the AppImage.

Also renames the AppImage from `yuzu-{version info}-x86_64.AppImage` to
`yuzu-{version info}.AppImage` to avoid a bug in the downloads page at
yuzu-emu.org/downloads.
2021-01-06 13:23:56 -05:00
JeremyStarTM
5b60899fbc Removed MacOS build link
The macOS build link was removed from README.md because it no longer exists.
2021-01-06 11:39:27 +01:00
Morph
e8d40559d5 Merge pull request #5288 from ReinUsesLisp/workaround-garbage
gl_texture_cache: Avoid format views on Intel and AMD
2021-01-06 15:39:51 +08:00
bunnei
e112d0a52f Merge pull request #5250 from lat9nq/appimage
ci/linux: Build an AppImage
2021-01-05 21:34:08 -08:00
bunnei
dc02b03c4a Merge pull request #5293 from ReinUsesLisp/return-values
core: Enforce C4715 (not all control paths return a value)
2021-01-05 19:04:15 -08:00
bunnei
275b96a0e2 Merge pull request #5289 from ReinUsesLisp/vulkan-device
vulkan_common: Move device abstraction to the common directory and allow surfaceless devices
2021-01-05 17:44:56 -08:00
ReinUsesLisp
43d9f417ae core: Enforce C4715 (not all control paths return a value) 2021-01-05 04:18:40 -03:00
ReinUsesLisp
4f13e270c8 core: Silence warnings when compiling without asserts 2021-01-05 04:18:16 -03:00
LC
2a6e6306d8 Merge pull request #5292 from ReinUsesLisp/empty-set
vk_rasterizer: Skip binding empty descriptor sets on compute
2021-01-04 21:32:57 -05:00
bunnei
4e6aa1cfdd Merge pull request #5261 from gal20/hide_mouse_patch
yuzu/main: Fix 'Hide mouse on inactivity' and port citra-emu/citra#5476
2021-01-04 17:19:04 -08:00
ReinUsesLisp
1ccf805367 vk_rasterizer: Skip binding empty descriptor sets on compute
Fixes unit tests where compute shaders had no descriptors in the set,
making Vulkan drivers crash when binding an empty set.
2021-01-04 17:56:39 -03:00
Morph
ace8a8e86e Merge pull request #5284 from ameerj/bufferq-oor-fix
buffer_queue: Fix data race by protecting queue_sequence access
2021-01-04 15:42:40 +08:00
ameerj
6b354ccaee buffer_queue: Protect queue_sequence list access with a mutex
Fixes a data race, as this was an unprotected variable manipulated by multiple threads.
2021-01-04 01:36:41 -05:00
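The fix pattern in reduced form (illustrative; member names mirror the commit message):

    #include <cstdint>
    #include <list>
    #include <mutex>

    class BufferQueue {
    public:
        void Push(std::uint32_t sequence) {
            std::scoped_lock lock{queue_sequence_mutex};
            queue_sequence.push_back(sequence);
        }

        bool Pop(std::uint32_t& sequence) {
            std::scoped_lock lock{queue_sequence_mutex};
            if (queue_sequence.empty()) {
                return false;
            }
            sequence = queue_sequence.front();
            queue_sequence.pop_front();
            return true;
        }

    private:
        std::list<std::uint32_t> queue_sequence; // touched by multiple threads
        std::mutex queue_sequence_mutex;         // guards every access above
    };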
ReinUsesLisp
ac1e4734c2 vulkan_device: Allow creating a device without surface 2021-01-04 02:22:22 -03:00
ReinUsesLisp
d235cf3933 renderer_vulkan/nsight_aftermath_tracker: Move to vulkan_common 2021-01-04 02:22:22 -03:00
ReinUsesLisp
3753553b6a renderer_vulkan: Move device abstraction to vulkan_common 2021-01-04 02:22:22 -03:00
Rodrigo Locatti
4801f4250d Merge pull request #5286 from ReinUsesLisp/rename-vk-device
renderer_vulkan: Rename VKDevice to Device
2021-01-04 02:22:02 -03:00
ReinUsesLisp
7d904fef2e gl_texture_cache: Avoid format views on Intel and AMD
Intel and AMD proprietary drivers are incapable of rendering to texture
views with formats different from the original texture. Avoid creating
these at the cache level; emulating them with copies instead will
consume more memory.
2021-01-04 02:06:40 -03:00
ReinUsesLisp
3a49c1a691 gl_texture_cache: Create base images with sRGB
This breaks accelerated decoders trying to imageStore into images with
sRGB. The decoders are currently disabled so this won't cause issues at
runtime.
2021-01-04 01:54:54 -03:00
FearlessTobi
beb951770a Address review comments 2021-01-04 04:36:50 +01:00
xperia64
fd5776aac2 Delete the old log file before rotating (#5675) 2021-01-04 04:33:34 +01:00
Rodrigo Locatti
87a8925523 Merge pull request #5285 from lioncash/error-str
main: Resolve error string not displaying
2021-01-03 19:56:15 -03:00
ReinUsesLisp
974d731926 renderer_vulkan: Rename VKDevice to Device
The "VK" prefix predates the "Vulkan" namespace. It was carried around
the codebase for consistency. "VKDevice" is currently a bad alias of
"VkDevice" (differing only in the case of one character) that can cause
confusion. Rename all instances of it.
2021-01-03 17:51:48 -03:00
Rodrigo Locatti
7265e80c12 Merge pull request #5230 from ReinUsesLisp/vulkan-common
vulkan_common: Move reusable Vulkan abstractions to a separate directory
2021-01-03 17:38:29 -03:00
Lioncash
86592b274e main: Resolve error string not displaying
During the transition to make the error dialog translatable, I
accidentally got rid of the conversion to ResultStatus, which prevented
operator<< from being invoked during formatting.

This adds a function to directly retrieve the result status string
instead so that it displays again.
2021-01-03 13:18:04 -05:00
bunnei
71e18dddbe Merge pull request #5278 from MerryMage/cpuopt_unsafe_inaccurate_nan
dynarmic: Add Unsafe_InaccurateNaN optimization
2021-01-03 03:27:29 -08:00
bunnei
f64456c7e2 Merge pull request #5279 from bunnei/buffer-queue-connect
hle: service: nvflinger: buffer_queue: Do not reset id/layer_id on Connect.
2021-01-03 01:01:38 -08:00
Morph
ec58aabb26 Merge pull request #5281 from FearlessTobi/port-5668
Port citra-emu/citra#5668: "Update zstd to v1.4.8"
2021-01-03 12:25:21 +08:00
FearlessTobi
c90268127b Update zstd to v1.4.8
Co-Authored-By: Vitor K <29167336+vitor-k@users.noreply.github.com>
2021-01-03 01:58:14 +01:00
bunnei
bf8bd60ab3 Fix the old log file to work with the log parser. 2021-01-03 01:44:52 +01:00
xperia64
f478a57737 Rotate previous log file to '.old' if it exists 2021-01-03 01:44:42 +01:00
bunnei
235b5d27ae Merge pull request #5267 from lioncash/localize
main: Make the loader error dialog fully translatable
2021-01-02 15:44:32 -08:00
bunnei
beaa25d777 hle: service: nvflinger: buffer_queue: Do not reset id/layer_id on Connect.
- This behavior is a mistake, fixes Katana Zero.
2021-01-02 15:42:16 -08:00
MerryMage
8a5356357f externals: Update dynarmic to 3806284cb 2021-01-02 20:42:11 +00:00
bunnei
62f67df6d7 Merge pull request #5277 from Morph1984/fix-comments
general: Fix various spelling errors
2021-01-02 12:33:48 -08:00
bunnei
55fb8e7bdd Merge pull request #5273 from timleg002/patch-1
typo fix
2021-01-02 12:31:19 -08:00
MerryMage
57c9da1b39 dynarmic: Add Unsafe_InaccurateNaN optimization 2021-01-02 20:13:21 +00:00
Morph
a745d87971 general: Fix various spelling errors 2021-01-02 10:23:41 -05:00
Fernando Sahmkow
53d92318b8 X86/NativeClock: Reimplement RDTSC access to be lock free. 2021-01-02 04:00:27 +01:00
Fernando Sahmkow
d4f871cb6a X86/NativeClock: Improve performance of clock calculations on hot path. 2021-01-02 00:43:47 +01:00
bunnei
1ff341f3dc Merge pull request #5209 from Morph1984/refactor-controller-connect
configure_input: Modify controller connection delay
2021-01-01 13:10:34 -08:00
Timotej Leginus
0d47c1d527 typo fix
typo fix
2021-01-01 21:29:53 +01:00
LC
9e109849ff Merge pull request #5271 from MerryMage/rm-mem-Special
memory: Remove MemoryHook
2021-01-01 11:02:14 -05:00
Morph
904ac1daec configure_input: Modify controller connection delay
Increases the controller connection delay to 60 ms and refactors it to attempt to disconnect all controllers prior to connecting them in HID.
2021-01-01 06:39:24 -05:00
MerryMage
6d30745d77 memory: Remove MemoryHook 2021-01-01 11:34:38 +00:00
bunnei
eb318ffffc Merge pull request #5249 from ReinUsesLisp/lock-free-pages
core/memory: Read and write page table atomically
2021-01-01 02:54:01 -08:00
bunnei
0bddb794b0 Merge pull request #5239 from FearlessTobi/enable-translation
.ci/templates: Enable QT translation for MSVC CI
2020-12-31 23:31:23 -08:00
gal20
5dfb8743cb yuzu/main: fix mouse not showing on move and port citra-emu/citra#5476 2020-12-31 21:16:09 +02:00
Lioncash
8c27a74132 main: Make the loader error dialog fully translatable
Makes the dialog fully localizable and also adds disambiguation comments
to help translators understand what the formatting specifiers indicate.
2020-12-31 12:44:31 -05:00
Lioncash
803ac4ca59 main: Tidy up enum comparison
enum classes are comparable with one another, so these casts aren't
necessary.
2020-12-31 10:21:15 -05:00
ReinUsesLisp
cdbee27692 vulkan_instance: Allow different Vulkan versions and enforce 1.1
For listing the available physical devices we can use Vulkan 1.0.
Now that MoltenVK supports 1.1 we can require it for running games.

Add missing documentation.
2020-12-31 02:07:34 -03:00
ReinUsesLisp
7344a7c447 vk_device: Use an array to report lacking device limits
This makes it easier to add and tune the required device limits.
2020-12-31 02:07:34 -03:00
ReinUsesLisp
f687392e6f vk_device: Stop initialization when device is not suitable
VKDevice::IsSuitable was not being called. To address this issue, check
suitability before initialization and throw an exception if it fails.

By doing this, we can deduplicate some code on queue searches.
Previously we would first search if a present and graphics queue
existed, then on initialization we would search again to find the index.
2020-12-31 02:07:33 -03:00
ReinUsesLisp
53ea06dc17 renderer_vulkan: Remove two step initialization on VKDevice
The Vulkan device abstraction either initializes successfully on the
constructor or throws a Vulkan exception.
2020-12-31 02:07:33 -03:00
ReinUsesLisp
085adfea00 renderer_vulkan: Throw when enumerating devices fails
Report device enumeration errors with exceptions to be consistent with
other initialization related function calls. Reduces the amount of code
to maintain.
2020-12-31 02:07:33 -03:00
ReinUsesLisp
11f0f7598d renderer_vulkan: Initialize surface in separate file
Move surface initialization code to a separate file. It's unlikely that
this code will be used outside of Vulkan, but keeping platform-specific code
(Win32, Xlib, Wayland) in its own translation unit keeps things cleaner.
2020-12-31 02:07:33 -03:00
ReinUsesLisp
dce8720780 renderer_vulkan: Catch and report exceptions
Move more Vulkan code to report errors with exceptions and report them
through a log before notifying it with an error boolean for backwards
compatibility. In the future we can replace the rasterizer two-step
initialization to always use exceptions.
2020-12-31 02:07:33 -03:00
ReinUsesLisp
47843b4f09 renderer_vulkan: Create debug callback on separate file and throw
Initialize debug callbacks (messenger) from a separate file. This allows
sharing code with different backends.

Change our Vulkan error handling to use exceptions instead of error
codes, simplifying the initialization process.
2020-12-31 02:07:33 -03:00
ReinUsesLisp
25f88d99ce renderer_vulkan: Move instance initialization to a separate file
Simplify Vulkan's backend initialization code by moving it to a separate
file, allowing us to initialize a Vulkan instance from different
backends.
2020-12-31 02:07:33 -03:00
ReinUsesLisp
d1435009ed vulkan_common: Rename renderer_vulkan/wrapper.h to vulkan_common/vulkan_wrapper.h
Allows sharing Vulkan wrapper code between different rendering backends.
2020-12-31 02:07:14 -03:00
ReinUsesLisp
d937421422 vulkan_common: Move dynamic library load to a separate file
Allows us to initialize a Vulkan dynamic library from different backends
without duplicating code.
2020-12-31 02:02:48 -03:00
lat9nq
43cad754d5 ci: Build an AppImage
This builds yuzu in an AppImage alongside the other archives during
release. Required to allow distributing yuzu in the future with upgraded
dependencies, such as Qt.
2020-12-30 16:05:15 -05:00
ReinUsesLisp
b3587102d1 core/memory: Read and write page table atomically
Squash attributes into the pointer's integer, making them an uintptr_t
pair containing 2 bits at the bottom and then the pointer. These bits
are currently unused thanks to alignment requirements.

Configure Dynarmic to mask out these bits on pointer reads.

While we are at it, remove some unused attributes carried over from
Citra.

Read/Write and other hot functions use a two step unpacking process that
is less readable to stop MSVC from emitting an extra AND instruction in
the hot path:

 mov         rdi,rcx
 shr         rdx,0Ch
 mov         r8,qword ptr [rax+8]
 mov         rax,qword ptr [r8+rdx*8]
 mov         rdx,rax
-and         al,3
 and         rdx,0FFFFFFFFFFFFFFFCh
 je          Core::Memory::Memory::Impl::Read<unsigned char>
 mov         rax,qword ptr [vaddr]
 movzx       eax,byte ptr [rdx+rax]
2020-12-29 21:54:49 -03:00
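The packing trick in isolation (a sketch; the real table applies this per page table entry, and Dynarmic masks the bits on its side too):

    #include <cstdint>

    // Page alignment guarantees the low 2 bits of a page pointer are
    // zero, so they can carry the attribute without growing the table.
    std::uintptr_t Pack(const void* pointer, std::uintptr_t attribute) {
        return reinterpret_cast<std::uintptr_t>(pointer) | (attribute & 3);
    }

    std::uintptr_t Attribute(std::uintptr_t entry) {
        return entry & 3;
    }

    void* Pointer(std::uintptr_t entry) {
        // Mask first, then cast: the two-step unpack mentioned above.
        return reinterpret_cast<void*>(entry & ~std::uintptr_t{3});
    }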
FearlessTobi
368b3ee227 .ci/templates: Enable QT translation for MSVC CI
Previously this flag was missing, causing translation files not to be shipped with CI builds of yuzu.
2020-12-28 15:54:02 +01:00
ReinUsesLisp
661483f313 renderer_vulkan/fixed_pipeline_state: Move enabled bindings to static state
Without using VK_EXT_robustness2, we can't consider the 'enabled' (not
null) vertex buffers as dynamic state, as this leads to invalid Vulkan
state. Move this to static state that is always hashed and compared in
the pipeline key.

The bits for enabled vertex buffers are moved into the attribute state
bitfield. This is not 'correct' as it's not an attribute state, but that
struct has bits to spare, and it's used in an array of 32 elements (the
exact same number of vertex buffer bindings).
2020-12-25 23:34:38 -03:00
Morph
ff3aa5d380 yuzu/main: Add basic command line arguments
The following command line arguments are supported:

yuzu.exe "path_to_game" - Launches a game at "path_to_game"
yuzu.exe -f - Launches the next game in fullscreen
yuzu.exe -g "path_to_game" - Launches a game at "path_to_game"
yuzu.exe -f -g "path_to_game" - Launches a game at "path_to_game" in fullscreen
2020-12-25 15:41:00 -05:00
275 changed files with 7336 additions and 6512 deletions

View File

@@ -15,5 +15,5 @@ mv "${REV_NAME}-source.tar.xz" $RELEASE_NAME
 7z a "$REV_NAME.7z" $RELEASE_NAME
 # move the compiled archive into the artifacts directory to be uploaded by travis releases
-mv "$ARCHIVE_NAME" artifacts/
-mv "$REV_NAME.7z" artifacts/
+mv "$ARCHIVE_NAME" "${ARTIFACTS_DIR}/"
+mv "$REV_NAME.7z" "${ARTIFACTS_DIR}/"

View File

@@ -2,5 +2,6 @@
 GITDATE="`git show -s --date=short --format='%ad' | sed 's/-//g'`"
 GITREV="`git show -s --format='%h'`"
-mkdir -p artifacts
+ARTIFACTS_DIR="artifacts"
+mkdir -p "${ARTIFACTS_DIR}/"

View File

@@ -1,14 +1,49 @@
 #!/bin/bash -ex
+# Exit on error, rather than continuing with the rest of the script.
+set -e
 cd /yuzu
 ccache -s
 mkdir build || true && cd build
-cmake .. -G Ninja -DDISPLAY_VERSION=$1 -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=/usr/lib/ccache/gcc -DCMAKE_CXX_COMPILER=/usr/lib/ccache/g++ -DYUZU_ENABLE_COMPATIBILITY_REPORTING=${ENABLE_COMPATIBILITY_REPORTING:-"OFF"} -DENABLE_COMPATIBILITY_LIST_DOWNLOAD=ON -DUSE_DISCORD_PRESENCE=ON -DENABLE_QT_TRANSLATION=ON
+cmake .. -DDISPLAY_VERSION=$1 -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=/usr/lib/ccache/gcc -DCMAKE_CXX_COMPILER=/usr/lib/ccache/g++ -DYUZU_ENABLE_COMPATIBILITY_REPORTING=${ENABLE_COMPATIBILITY_REPORTING:-"OFF"} -DENABLE_COMPATIBILITY_LIST_DOWNLOAD=ON -DUSE_DISCORD_PRESENCE=ON -DENABLE_QT_TRANSLATION=ON -DCMAKE_INSTALL_PREFIX="/usr"
-ninja
+make -j$(nproc)
 ccache -s
 ctest -VV -C Release
+make install DESTDIR=AppDir
+rm -vf AppDir/usr/bin/yuzu-cmd AppDir/usr/bin/yuzu-tester
+# Download tools needed to build an AppImage
+wget -nc https://github.com/linuxdeploy/linuxdeploy/releases/download/continuous/linuxdeploy-x86_64.AppImage
+wget -nc https://github.com/linuxdeploy/linuxdeploy-plugin-qt/releases/download/continuous/linuxdeploy-plugin-qt-x86_64.AppImage
+wget -nc https://github.com/darealshinji/AppImageKit-checkrt/releases/download/continuous/AppRun-patched-x86_64
+wget -nc https://github.com/darealshinji/AppImageKit-checkrt/releases/download/continuous/exec-x86_64.so
+# Set executable bit
+chmod 755 \
+    AppRun-patched-x86_64 \
+    exec-x86_64.so \
+    linuxdeploy-x86_64.AppImage \
+    linuxdeploy-plugin-qt-x86_64.AppImage
+# Workaround for https://github.com/AppImage/AppImageKit/issues/828
+export APPIMAGE_EXTRACT_AND_RUN=1
+mkdir -p AppDir/usr/optional
+mkdir -p AppDir/usr/optional/libstdc++
+mkdir -p AppDir/usr/optional/libgcc_s
+# Deploy yuzu's needed dependencies
+./linuxdeploy-x86_64.AppImage --appdir AppDir --plugin qt
+# Workaround for building yuzu with GCC 10 but also trying to distribute it to Ubuntu 18.04 et al.
+# See https://github.com/darealshinji/AppImageKit-checkrt
+cp exec-x86_64.so AppDir/usr/optional/exec.so
+cp AppRun-patched-x86_64 AppDir/AppRun
+cp --dereference /usr/lib/x86_64-linux-gnu/libstdc++.so.6 AppDir/usr/optional/libstdc++/libstdc++.so.6
+cp --dereference /lib/x86_64-linux-gnu/libgcc_s.so.1 AppDir/usr/optional/libgcc_s/libgcc_s.so.1

View File

@@ -2,6 +2,7 @@
 . .ci/scripts/common/pre-upload.sh
+APPIMAGE_NAME="yuzu-${GITDATE}-${GITREV}.AppImage"
 REV_NAME="yuzu-linux-${GITDATE}-${GITREV}"
 ARCHIVE_NAME="${REV_NAME}.tar.xz"
 COMPRESSION_FLAGS="-cJvf"
@@ -17,4 +18,24 @@ mkdir "$DIR_NAME"
 cp build/bin/yuzu-cmd "$DIR_NAME"
 cp build/bin/yuzu "$DIR_NAME"
+# Build an AppImage
+cd build
+wget -nc https://github.com/AppImage/AppImageKit/releases/download/continuous/appimagetool-x86_64.AppImage
+chmod 755 appimagetool-x86_64.AppImage
+if [ "${RELEASE_NAME}" = "mainline" ]; then
+    # Generate update information if releasing to mainline
+    ./appimagetool-x86_64.AppImage -u "gh-releases-zsync|yuzu-emu|yuzu-${RELEASE_NAME}|latest|yuzu-*.AppImage.zsync" AppDir "${APPIMAGE_NAME}"
+else
+    ./appimagetool-x86_64.AppImage AppDir "${APPIMAGE_NAME}"
+fi
+cd ..
+# Copy the AppImage and update info to the artifacts directory and avoid compressing it
+cp "build/${APPIMAGE_NAME}" "${ARTIFACTS_DIR}/"
+if [ -f "build/${APPIMAGE_NAME}.zsync" ]; then
+    cp "build/${APPIMAGE_NAME}.zsync" "${ARTIFACTS_DIR}/"
+fi
 . .ci/scripts/common/post-upload.sh

View File

@@ -8,7 +8,7 @@ steps:
   displayName: 'Install vulkan-sdk'
 - script: python -m pip install --upgrade pip conan
   displayName: 'Install conan'
-- script: refreshenv && mkdir build && cd build && cmake -G "Visual Studio 16 2019" -A x64 --config Release -DYUZU_USE_BUNDLED_QT=1 -DYUZU_USE_QT_WEB_ENGINE=ON -DENABLE_COMPATIBILITY_LIST_DOWNLOAD=ON -DYUZU_ENABLE_COMPATIBILITY_REPORTING=${COMPAT} -DUSE_DISCORD_PRESENCE=ON -DDISPLAY_VERSION=${{ parameters['version'] }} .. && cd ..
+- script: refreshenv && mkdir build && cd build && cmake -G "Visual Studio 16 2019" -A x64 --config Release -DYUZU_USE_BUNDLED_QT=1 -DYUZU_USE_QT_WEB_ENGINE=ON -DENABLE_COMPATIBILITY_LIST_DOWNLOAD=ON -DYUZU_ENABLE_COMPATIBILITY_REPORTING=${COMPAT} -DUSE_DISCORD_PRESENCE=ON -DENABLE_QT_TRANSLATION=ON -DDISPLAY_VERSION=${{ parameters['version'] }} .. && cd ..
   displayName: 'Configure CMake'
 - task: MSBuild@1
   displayName: 'Build'

View File

@@ -26,6 +26,10 @@ option(ENABLE_CUBEB "Enables the cubeb audio backend" ON)
 option(USE_DISCORD_PRESENCE "Enables Discord Rich Presence" OFF)
+if (NOT ENABLE_WEB_SERVICE)
+    set(YUZU_ENABLE_BOXCAT OFF)
+endif()
 # Default to a Release build
 get_property(IS_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
 if (NOT IS_MULTI_CONFIG AND NOT CMAKE_BUILD_TYPE)
@@ -165,7 +169,7 @@ macro(yuzu_find_packages)
"lz4 1.8 lz4/1.9.2"
"nlohmann_json 3.8 nlohmann_json/3.8.0"
"ZLIB 1.2 zlib/1.2.11"
"zstd 1.4 zstd/1.4.5"
"zstd 1.4 zstd/1.4.8"
)
foreach(PACKAGE ${REQUIRED_LIBS})
@@ -239,7 +243,7 @@ if(ENABLE_QT)
     if (YUZU_USE_QT_WEB_ENGINE)
         find_package(Qt5 COMPONENTS WebEngineCore WebEngineWidgets)
     endif()
     if (ENABLE_QT_TRANSLATION)
         find_package(Qt5 REQUIRED COMPONENTS LinguistTools ${QT_PREFIX_HINT})
     endif()
@@ -322,7 +326,7 @@ if (CONAN_REQUIRED_LIBS)
         list(APPEND Boost_LIBRARIES Boost::context)
     endif()
 endif()
 # Due to issues with variable scopes in functions, we need to also find_package(qt5) outside of the function
 if(ENABLE_QT)
     list(APPEND CMAKE_MODULE_PATH "${CONAN_QT_ROOT_RELEASE}")

View File

@@ -30,7 +30,6 @@ If you want to contribute to the user interface translation, please check out th
 * __Windows__: [Windows Build](https://github.com/yuzu-emu/yuzu/wiki/Building-For-Windows)
 * __Linux__: [Linux Build](https://github.com/yuzu-emu/yuzu/wiki/Building-For-Linux)
-* __macOS__: [macOS Build](https://github.com/yuzu-emu/yuzu/wiki/Building-for-macOS)
 ### Support

View File

@@ -45,10 +45,15 @@ if (MSVC)
     # Warnings
     /W3
+    /we4062 # enumerator 'identifier' in a switch of enum 'enumeration' is not handled
+    /we4101 # 'identifier': unreferenced local variable
+    /we4265 # 'class': class has virtual functions, but destructor is not virtual
+    /we4388 # signed/unsigned mismatch
     /we4547 # 'operator' : operator before comma has no effect; expected operator with side-effect
     /we4549 # 'operator1': operator before comma has no effect; did you intend 'operator2'?
     /we4555 # Expression has no effect; expected expression with side-effect
     /we4834 # Discarding return value of function with 'nodiscard' attribute
+    /we5038 # data member 'member1' will be initialized after data member 'member2'
 )
 # /GS- - No stack buffer overflow checks
@@ -62,8 +67,11 @@ else()
     -Werror=implicit-fallthrough
     -Werror=missing-declarations
     -Werror=reorder
+    -Werror=switch
     -Werror=uninitialized
+    -Werror=unused-function
     -Werror=unused-result
+    -Werror=unused-variable
     -Wextra
     -Wmissing-declarations
     -Wno-attributes
@@ -122,7 +130,6 @@ add_subdirectory(tests)
 if (ENABLE_SDL2)
     add_subdirectory(yuzu_cmd)
-    add_subdirectory(yuzu_tester)
 endif()
 if (ENABLE_QT)

View File

@@ -40,17 +40,17 @@ public:
     SinkSampleFormat sample_format;
     std::array<u8, AudioCommon::MAX_CHANNEL_COUNT> input;
     bool in_use;
-    INSERT_UNION_PADDING_BYTES(5);
+    INSERT_PADDING_BYTES_NOINIT(5);
 };
 static_assert(sizeof(CircularBufferIn) == 0x28,
               "SinkInfo::CircularBufferIn is in invalid size");
 struct DeviceIn {
     std::array<u8, 255> device_name;
-    INSERT_UNION_PADDING_BYTES(1);
+    INSERT_PADDING_BYTES_NOINIT(1);
     s32_le input_count;
     std::array<u8, AudioCommon::MAX_CHANNEL_COUNT> input;
-    INSERT_UNION_PADDING_BYTES(1);
+    INSERT_PADDING_BYTES_NOINIT(1);
     bool down_matrix_enabled;
     DownmixCoefficients down_matrix_coef;
 };

View File

@@ -86,28 +86,28 @@ struct BehaviorFlags {
 static_assert(sizeof(BehaviorFlags) == 0x4, "BehaviorFlags is an invalid size");
 struct ADPCMContext {
-    u16 header{};
-    s16 yn1{};
-    s16 yn2{};
+    u16 header;
+    s16 yn1;
+    s16 yn2;
 };
 static_assert(sizeof(ADPCMContext) == 0x6, "ADPCMContext is an invalid size");
 struct VoiceState {
-    s64 played_sample_count{};
-    s32 offset{};
-    s32 wave_buffer_index{};
-    std::array<bool, AudioCommon::MAX_WAVE_BUFFERS> is_wave_buffer_valid{};
-    s32 wave_buffer_consumed{};
-    std::array<s32, AudioCommon::MAX_SAMPLE_HISTORY> sample_history{};
-    s32 fraction{};
-    VAddr context_address{};
-    Codec::ADPCM_Coeff coeff{};
-    ADPCMContext context{};
-    std::array<s64, 2> biquad_filter_state{};
-    std::array<s32, AudioCommon::MAX_MIX_BUFFERS> previous_samples{};
-    u32 external_context_size{};
-    bool is_external_context_used{};
-    bool voice_dropped{};
+    s64 played_sample_count;
+    s32 offset;
+    s32 wave_buffer_index;
+    std::array<bool, AudioCommon::MAX_WAVE_BUFFERS> is_wave_buffer_valid;
+    s32 wave_buffer_consumed;
+    std::array<s32, AudioCommon::MAX_SAMPLE_HISTORY> sample_history;
+    s32 fraction;
+    VAddr context_address;
+    Codec::ADPCM_Coeff coeff;
+    ADPCMContext context;
+    std::array<s64, 2> biquad_filter_state;
+    std::array<s32, AudioCommon::MAX_MIX_BUFFERS> previous_samples;
+    u32 external_context_size;
+    bool is_external_context_used;
+    bool voice_dropped;
 };
 class VoiceChannelResource {

View File

@@ -98,7 +98,6 @@ add_library(common STATIC
     algorithm.h
     alignment.h
     assert.h
-    atomic_ops.cpp
     atomic_ops.h
     detached_tasks.cpp
     detached_tasks.h
@@ -108,7 +107,6 @@ add_library(common STATIC
     bit_util.h
     cityhash.cpp
     cityhash.h
-    color.h
     common_funcs.h
     common_paths.h
     common_types.h
@@ -123,6 +121,7 @@ add_library(common STATIC
     hash.h
     hex_util.cpp
     hex_util.h
+    intrusive_red_black_tree.h
     logging/backend.cpp
     logging/backend.h
     logging/filter.cpp
@@ -135,8 +134,6 @@ add_library(common STATIC
     math_util.h
     memory_detect.cpp
     memory_detect.h
-    memory_hook.cpp
-    memory_hook.h
     microprofile.cpp
     microprofile.h
     microprofileui.h
@@ -145,6 +142,7 @@ add_library(common STATIC
     page_table.h
     param_package.cpp
     param_package.h
+    parent_of_member.h
     quaternion.h
     ring_buffer.h
     scm_rev.cpp
@@ -167,8 +165,7 @@ add_library(common STATIC
     threadsafe_queue.h
     time_zone.cpp
     time_zone.h
-    timer.cpp
-    timer.h
+    tree.h
     uint128.cpp
     uint128.h
     uuid.cpp

View File

@@ -9,50 +9,45 @@
 namespace Common {
 template <typename T>
-[[nodiscard]] constexpr T AlignUp(T value, std::size_t size) {
-    static_assert(std::is_unsigned_v<T>, "T must be an unsigned value.");
+requires std::is_unsigned_v<T>[[nodiscard]] constexpr T AlignUp(T value, size_t size) {
     auto mod{static_cast<T>(value % size)};
     value -= mod;
     return static_cast<T>(mod == T{0} ? value : value + size);
 }
 template <typename T>
-[[nodiscard]] constexpr T AlignDown(T value, std::size_t size) {
-    static_assert(std::is_unsigned_v<T>, "T must be an unsigned value.");
+requires std::is_unsigned_v<T>[[nodiscard]] constexpr T AlignUpLog2(T value, size_t align_log2) {
+    return static_cast<T>((value + ((1ULL << align_log2) - 1)) >> align_log2 << align_log2);
+}
+template <typename T>
+requires std::is_unsigned_v<T>[[nodiscard]] constexpr T AlignDown(T value, size_t size) {
     return static_cast<T>(value - value % size);
 }
 template <typename T>
-[[nodiscard]] constexpr T AlignBits(T value, std::size_t align) {
-    static_assert(std::is_unsigned_v<T>, "T must be an unsigned value.");
-    return static_cast<T>((value + ((1ULL << align) - 1)) >> align << align);
-}
-template <typename T>
-[[nodiscard]] constexpr bool Is4KBAligned(T value) {
-    static_assert(std::is_unsigned_v<T>, "T must be an unsigned value.");
+requires std::is_unsigned_v<T>[[nodiscard]] constexpr bool Is4KBAligned(T value) {
     return (value & 0xFFF) == 0;
 }
 template <typename T>
-[[nodiscard]] constexpr bool IsWordAligned(T value) {
-    static_assert(std::is_unsigned_v<T>, "T must be an unsigned value.");
+requires std::is_unsigned_v<T>[[nodiscard]] constexpr bool IsWordAligned(T value) {
     return (value & 0b11) == 0;
 }
 template <typename T>
-[[nodiscard]] constexpr bool IsAligned(T value, std::size_t alignment) {
-    using U = typename std::make_unsigned<T>::type;
+requires std::is_integral_v<T>[[nodiscard]] constexpr bool IsAligned(T value, size_t alignment) {
+    using U = typename std::make_unsigned_t<T>;
     const U mask = static_cast<U>(alignment - 1);
     return (value & mask) == 0;
 }
-template <typename T, std::size_t Align = 16>
+template <typename T, size_t Align = 16>
 class AlignmentAllocator {
 public:
     using value_type = T;
-    using size_type = std::size_t;
-    using difference_type = std::ptrdiff_t;
+    using size_type = size_t;
+    using difference_type = ptrdiff_t;
     using propagate_on_container_copy_assignment = std::true_type;
     using propagate_on_container_move_assignment = std::true_type;

View File

@@ -29,22 +29,19 @@ assert_noinline_call(const Fn& fn) {
 }
 #define ASSERT(_a_) \
-    do \
-        if (!(_a_)) { \
-            assert_noinline_call([] { LOG_CRITICAL(Debug, "Assertion Failed!"); }); \
-        } \
-    while (0)
+    if (!(_a_)) { \
+        LOG_CRITICAL(Debug, "Assertion Failed!"); \
+    }
 #define ASSERT_MSG(_a_, ...) \
-    do \
-        if (!(_a_)) { \
-            assert_noinline_call([&] { LOG_CRITICAL(Debug, "Assertion Failed!\n" __VA_ARGS__); }); \
-        } \
-    while (0)
+    if (!(_a_)) { \
+        LOG_CRITICAL(Debug, "Assertion Failed! " __VA_ARGS__); \
+    }
-#define UNREACHABLE() assert_noinline_call([] { LOG_CRITICAL(Debug, "Unreachable code!"); })
+#define UNREACHABLE() \
+    { LOG_CRITICAL(Debug, "Unreachable code!"); }
 #define UNREACHABLE_MSG(...) \
-    assert_noinline_call([&] { LOG_CRITICAL(Debug, "Unreachable code!\n" __VA_ARGS__); })
+    { LOG_CRITICAL(Debug, "Unreachable code!\n" __VA_ARGS__); }
 #ifdef _DEBUG
 #define DEBUG_ASSERT(_a_) ASSERT(_a_)
#ifdef _DEBUG
#define DEBUG_ASSERT(_a_) ASSERT(_a_)

View File

@@ -1,75 +0,0 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <cstring>
#include "common/atomic_ops.h"
#if _MSC_VER
#include <intrin.h>
#endif
namespace Common {
#if _MSC_VER
bool AtomicCompareAndSwap(volatile u8* pointer, u8 value, u8 expected) {
const u8 result =
_InterlockedCompareExchange8(reinterpret_cast<volatile char*>(pointer), value, expected);
return result == expected;
}
bool AtomicCompareAndSwap(volatile u16* pointer, u16 value, u16 expected) {
const u16 result =
_InterlockedCompareExchange16(reinterpret_cast<volatile short*>(pointer), value, expected);
return result == expected;
}
bool AtomicCompareAndSwap(volatile u32* pointer, u32 value, u32 expected) {
const u32 result =
_InterlockedCompareExchange(reinterpret_cast<volatile long*>(pointer), value, expected);
return result == expected;
}
bool AtomicCompareAndSwap(volatile u64* pointer, u64 value, u64 expected) {
const u64 result = _InterlockedCompareExchange64(reinterpret_cast<volatile __int64*>(pointer),
value, expected);
return result == expected;
}
bool AtomicCompareAndSwap(volatile u64* pointer, u128 value, u128 expected) {
return _InterlockedCompareExchange128(reinterpret_cast<volatile __int64*>(pointer), value[1],
value[0],
reinterpret_cast<__int64*>(expected.data())) != 0;
}
#else
bool AtomicCompareAndSwap(volatile u8* pointer, u8 value, u8 expected) {
return __sync_bool_compare_and_swap(pointer, expected, value);
}
bool AtomicCompareAndSwap(volatile u16* pointer, u16 value, u16 expected) {
return __sync_bool_compare_and_swap(pointer, expected, value);
}
bool AtomicCompareAndSwap(volatile u32* pointer, u32 value, u32 expected) {
return __sync_bool_compare_and_swap(pointer, expected, value);
}
bool AtomicCompareAndSwap(volatile u64* pointer, u64 value, u64 expected) {
return __sync_bool_compare_and_swap(pointer, expected, value);
}
bool AtomicCompareAndSwap(volatile u64* pointer, u128 value, u128 expected) {
unsigned __int128 value_a;
unsigned __int128 expected_a;
std::memcpy(&value_a, value.data(), sizeof(u128));
std::memcpy(&expected_a, expected.data(), sizeof(u128));
return __sync_bool_compare_and_swap((unsigned __int128*)pointer, expected_a, value_a);
}
#endif
} // namespace Common

View File

@@ -4,14 +4,75 @@
 #pragma once
+#include <cstring>
+#include <memory>
 #include "common/common_types.h"
+#if _MSC_VER
+#include <intrin.h>
+#endif
 namespace Common {
-[[nodiscard]] bool AtomicCompareAndSwap(volatile u8* pointer, u8 value, u8 expected);
-[[nodiscard]] bool AtomicCompareAndSwap(volatile u16* pointer, u16 value, u16 expected);
-[[nodiscard]] bool AtomicCompareAndSwap(volatile u32* pointer, u32 value, u32 expected);
-[[nodiscard]] bool AtomicCompareAndSwap(volatile u64* pointer, u64 value, u64 expected);
-[[nodiscard]] bool AtomicCompareAndSwap(volatile u64* pointer, u128 value, u128 expected);
+#if _MSC_VER
+[[nodiscard]] inline bool AtomicCompareAndSwap(volatile u8* pointer, u8 value, u8 expected) {
+    const u8 result =
+        _InterlockedCompareExchange8(reinterpret_cast<volatile char*>(pointer), value, expected);
+    return result == expected;
+}
+[[nodiscard]] inline bool AtomicCompareAndSwap(volatile u16* pointer, u16 value, u16 expected) {
+    const u16 result =
+        _InterlockedCompareExchange16(reinterpret_cast<volatile short*>(pointer), value, expected);
+    return result == expected;
+}
+[[nodiscard]] inline bool AtomicCompareAndSwap(volatile u32* pointer, u32 value, u32 expected) {
+    const u32 result =
+        _InterlockedCompareExchange(reinterpret_cast<volatile long*>(pointer), value, expected);
+    return result == expected;
+}
+[[nodiscard]] inline bool AtomicCompareAndSwap(volatile u64* pointer, u64 value, u64 expected) {
+    const u64 result = _InterlockedCompareExchange64(reinterpret_cast<volatile __int64*>(pointer),
+                                                     value, expected);
+    return result == expected;
+}
+[[nodiscard]] inline bool AtomicCompareAndSwap(volatile u64* pointer, u128 value, u128 expected) {
+    return _InterlockedCompareExchange128(reinterpret_cast<volatile __int64*>(pointer), value[1],
+                                          value[0],
+                                          reinterpret_cast<__int64*>(expected.data())) != 0;
+}
+#else
+[[nodiscard]] inline bool AtomicCompareAndSwap(volatile u8* pointer, u8 value, u8 expected) {
+    return __sync_bool_compare_and_swap(pointer, expected, value);
+}
+[[nodiscard]] inline bool AtomicCompareAndSwap(volatile u16* pointer, u16 value, u16 expected) {
+    return __sync_bool_compare_and_swap(pointer, expected, value);
+}
+[[nodiscard]] inline bool AtomicCompareAndSwap(volatile u32* pointer, u32 value, u32 expected) {
+    return __sync_bool_compare_and_swap(pointer, expected, value);
+}
+[[nodiscard]] inline bool AtomicCompareAndSwap(volatile u64* pointer, u64 value, u64 expected) {
+    return __sync_bool_compare_and_swap(pointer, expected, value);
+}
+[[nodiscard]] inline bool AtomicCompareAndSwap(volatile u64* pointer, u128 value, u128 expected) {
+    unsigned __int128 value_a;
+    unsigned __int128 expected_a;
+    std::memcpy(&value_a, value.data(), sizeof(u128));
+    std::memcpy(&expected_a, expected.data(), sizeof(u128));
+    return __sync_bool_compare_and_swap((unsigned __int128*)pointer, expected_a, value_a);
+}
+#endif
 } // namespace Common

View File

@@ -21,82 +21,6 @@ template <typename T>
     return sizeof(T) * CHAR_BIT;
 }
-#ifdef _MSC_VER
-[[nodiscard]] inline u32 CountLeadingZeroes32(u32 value) {
-    unsigned long leading_zero = 0;
-    if (_BitScanReverse(&leading_zero, value) != 0) {
-        return 31 - leading_zero;
-    }
-    return 32;
-}
-[[nodiscard]] inline u32 CountLeadingZeroes64(u64 value) {
-    unsigned long leading_zero = 0;
-    if (_BitScanReverse64(&leading_zero, value) != 0) {
-        return 63 - leading_zero;
-    }
-    return 64;
-}
-#else
-[[nodiscard]] inline u32 CountLeadingZeroes32(u32 value) {
-    if (value == 0) {
-        return 32;
-    }
-    return static_cast<u32>(__builtin_clz(value));
-}
-[[nodiscard]] inline u32 CountLeadingZeroes64(u64 value) {
-    if (value == 0) {
-        return 64;
-    }
-    return static_cast<u32>(__builtin_clzll(value));
-}
-#endif
-#ifdef _MSC_VER
-[[nodiscard]] inline u32 CountTrailingZeroes32(u32 value) {
-    unsigned long trailing_zero = 0;
-    if (_BitScanForward(&trailing_zero, value) != 0) {
-        return trailing_zero;
-    }
-    return 32;
-}
-[[nodiscard]] inline u32 CountTrailingZeroes64(u64 value) {
-    unsigned long trailing_zero = 0;
-    if (_BitScanForward64(&trailing_zero, value) != 0) {
-        return trailing_zero;
-    }
-    return 64;
-}
-#else
-[[nodiscard]] inline u32 CountTrailingZeroes32(u32 value) {
-    if (value == 0) {
-        return 32;
-    }
-    return static_cast<u32>(__builtin_ctz(value));
-}
-[[nodiscard]] inline u32 CountTrailingZeroes64(u64 value) {
-    if (value == 0) {
-        return 64;
-    }
-    return static_cast<u32>(__builtin_ctzll(value));
-}
-#endif
 #ifdef _MSC_VER
 [[nodiscard]] inline u32 MostSignificantBit32(const u32 value) {
View File

@@ -1,271 +0,0 @@
// Copyright 2014 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <cstring>
#include "common/common_types.h"
#include "common/swap.h"
#include "common/vector_math.h"
namespace Common::Color {
/// Convert a 1-bit color component to 8 bit
[[nodiscard]] constexpr u8 Convert1To8(u8 value) {
return value * 255;
}
/// Convert a 4-bit color component to 8 bit
[[nodiscard]] constexpr u8 Convert4To8(u8 value) {
return (value << 4) | value;
}
/// Convert a 5-bit color component to 8 bit
[[nodiscard]] constexpr u8 Convert5To8(u8 value) {
return (value << 3) | (value >> 2);
}
/// Convert a 6-bit color component to 8 bit
[[nodiscard]] constexpr u8 Convert6To8(u8 value) {
return (value << 2) | (value >> 4);
}
/// Convert an 8-bit color component to 1 bit
[[nodiscard]] constexpr u8 Convert8To1(u8 value) {
return value >> 7;
}
/// Convert an 8-bit color component to 4 bit
[[nodiscard]] constexpr u8 Convert8To4(u8 value) {
return value >> 4;
}
/// Convert an 8-bit color component to 5 bit
[[nodiscard]] constexpr u8 Convert8To5(u8 value) {
return value >> 3;
}
/// Convert an 8-bit color component to 6 bit
[[nodiscard]] constexpr u8 Convert8To6(u8 value) {
return value >> 2;
}
/**
* Decode a color stored in RGBA8 format
* @param bytes Pointer to encoded source color
* @return Result color decoded as Common::Vec4<u8>
*/
[[nodiscard]] inline Common::Vec4<u8> DecodeRGBA8(const u8* bytes) {
return {bytes[3], bytes[2], bytes[1], bytes[0]};
}
/**
* Decode a color stored in RGB8 format
* @param bytes Pointer to encoded source color
* @return Result color decoded as Common::Vec4<u8>
*/
[[nodiscard]] inline Common::Vec4<u8> DecodeRGB8(const u8* bytes) {
return {bytes[2], bytes[1], bytes[0], 255};
}
/**
* Decode a color stored in RG8 (aka HILO8) format
* @param bytes Pointer to encoded source color
* @return Result color decoded as Common::Vec4<u8>
*/
[[nodiscard]] inline Common::Vec4<u8> DecodeRG8(const u8* bytes) {
return {bytes[1], bytes[0], 0, 255};
}
/**
* Decode a color stored in RGB565 format
* @param bytes Pointer to encoded source color
* @return Result color decoded as Common::Vec4<u8>
*/
[[nodiscard]] inline Common::Vec4<u8> DecodeRGB565(const u8* bytes) {
u16_le pixel;
std::memcpy(&pixel, bytes, sizeof(pixel));
return {Convert5To8((pixel >> 11) & 0x1F), Convert6To8((pixel >> 5) & 0x3F),
Convert5To8(pixel & 0x1F), 255};
}
/**
* Decode a color stored in RGB5A1 format
* @param bytes Pointer to encoded source color
* @return Result color decoded as Common::Vec4<u8>
*/
[[nodiscard]] inline Common::Vec4<u8> DecodeRGB5A1(const u8* bytes) {
u16_le pixel;
std::memcpy(&pixel, bytes, sizeof(pixel));
return {Convert5To8((pixel >> 11) & 0x1F), Convert5To8((pixel >> 6) & 0x1F),
Convert5To8((pixel >> 1) & 0x1F), Convert1To8(pixel & 0x1)};
}
/**
* Decode a color stored in RGBA4 format
* @param bytes Pointer to encoded source color
* @return Result color decoded as Common::Vec4<u8>
*/
[[nodiscard]] inline Common::Vec4<u8> DecodeRGBA4(const u8* bytes) {
u16_le pixel;
std::memcpy(&pixel, bytes, sizeof(pixel));
return {Convert4To8((pixel >> 12) & 0xF), Convert4To8((pixel >> 8) & 0xF),
Convert4To8((pixel >> 4) & 0xF), Convert4To8(pixel & 0xF)};
}
/**
* Decode a depth value stored in D16 format
* @param bytes Pointer to encoded source value
* @return Depth value as an u32
*/
[[nodiscard]] inline u32 DecodeD16(const u8* bytes) {
u16_le data;
std::memcpy(&data, bytes, sizeof(data));
return data;
}
/**
* Decode a depth value stored in D24 format
* @param bytes Pointer to encoded source value
* @return Depth value as an u32
*/
[[nodiscard]] inline u32 DecodeD24(const u8* bytes) {
return (bytes[2] << 16) | (bytes[1] << 8) | bytes[0];
}
/**
* Decode a depth value and a stencil value stored in D24S8 format
* @param bytes Pointer to encoded source values
* @return Resulting values stored as a Common::Vec2
*/
[[nodiscard]] inline Common::Vec2<u32> DecodeD24S8(const u8* bytes) {
return {static_cast<u32>((bytes[2] << 16) | (bytes[1] << 8) | bytes[0]), bytes[3]};
}
/**
* Encode a color as RGBA8 format
* @param color Source color to encode
* @param bytes Destination pointer to store encoded color
*/
inline void EncodeRGBA8(const Common::Vec4<u8>& color, u8* bytes) {
bytes[3] = color.r();
bytes[2] = color.g();
bytes[1] = color.b();
bytes[0] = color.a();
}
/**
* Encode a color as RGB8 format
* @param color Source color to encode
* @param bytes Destination pointer to store encoded color
*/
inline void EncodeRGB8(const Common::Vec4<u8>& color, u8* bytes) {
bytes[2] = color.r();
bytes[1] = color.g();
bytes[0] = color.b();
}
/**
* Encode a color as RG8 (aka HILO8) format
* @param color Source color to encode
* @param bytes Destination pointer to store encoded color
*/
inline void EncodeRG8(const Common::Vec4<u8>& color, u8* bytes) {
bytes[1] = color.r();
bytes[0] = color.g();
}
/**
* Encode a color as RGB565 format
* @param color Source color to encode
* @param bytes Destination pointer to store encoded color
*/
inline void EncodeRGB565(const Common::Vec4<u8>& color, u8* bytes) {
const u16_le data =
(Convert8To5(color.r()) << 11) | (Convert8To6(color.g()) << 5) | Convert8To5(color.b());
std::memcpy(bytes, &data, sizeof(data));
}
/**
* Encode a color as RGB5A1 format
* @param color Source color to encode
* @param bytes Destination pointer to store encoded color
*/
inline void EncodeRGB5A1(const Common::Vec4<u8>& color, u8* bytes) {
const u16_le data = (Convert8To5(color.r()) << 11) | (Convert8To5(color.g()) << 6) |
(Convert8To5(color.b()) << 1) | Convert8To1(color.a());
std::memcpy(bytes, &data, sizeof(data));
}
/**
* Encode a color as RGBA4 format
* @param color Source color to encode
* @param bytes Destination pointer to store encoded color
*/
inline void EncodeRGBA4(const Common::Vec4<u8>& color, u8* bytes) {
const u16 data = (Convert8To4(color.r()) << 12) | (Convert8To4(color.g()) << 8) |
(Convert8To4(color.b()) << 4) | Convert8To4(color.a());
std::memcpy(bytes, &data, sizeof(data));
}
/**
* Encode a 16 bit depth value as D16 format
* @param value 16 bit source depth value to encode
* @param bytes Pointer where to store the encoded value
*/
inline void EncodeD16(u32 value, u8* bytes) {
const u16_le data = static_cast<u16>(value);
std::memcpy(bytes, &data, sizeof(data));
}
/**
* Encode a 24 bit depth value as D24 format
* @param value 24 bit source depth value to encode
* @param bytes Pointer where to store the encoded value
*/
inline void EncodeD24(u32 value, u8* bytes) {
bytes[0] = value & 0xFF;
bytes[1] = (value >> 8) & 0xFF;
bytes[2] = (value >> 16) & 0xFF;
}
/**
* Encode a 24 bit depth and 8 bit stencil values as D24S8 format
* @param depth 24 bit source depth value to encode
* @param stencil 8 bit source stencil value to encode
* @param bytes Pointer where to store the encoded value
*/
inline void EncodeD24S8(u32 depth, u8 stencil, u8* bytes) {
bytes[0] = depth & 0xFF;
bytes[1] = (depth >> 8) & 0xFF;
bytes[2] = (depth >> 16) & 0xFF;
bytes[3] = stencil;
}
/**
* Encode a 24 bit depth value as D24X8 format (32 bits per pixel with 8 bits unused)
* @param depth 24 bit source depth value to encode
* @param bytes Pointer where to store the encoded value
* @note unused bits will not be modified
*/
inline void EncodeD24X8(u32 depth, u8* bytes) {
bytes[0] = depth & 0xFF;
bytes[1] = (depth >> 8) & 0xFF;
bytes[2] = (depth >> 16) & 0xFF;
}
/**
* Encode an 8 bit stencil value as X24S8 format (32 bits per pixel with 24 bits unused)
* @param stencil 8 bit source stencil value to encode
* @param bytes Pointer where to store the encoded value
* @note unused bits will not be modified
*/
inline void EncodeX24S8(u8 stencil, u8* bytes) {
bytes[3] = stencil;
}
} // namespace Common::Color
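// Hedged round-trip sketch (assumes Common::Vec4 takes components in r, g, b, a
// order, as the accessors above suggest): encode to RGB565 and decode back.
// The low-order component bits are lost by design (5/6/5 quantization).
inline void Rgb565RoundTripExample() {
    const Common::Vec4<u8> color{200, 100, 50, 255};
    u8 buffer[2]{};
    Common::Color::EncodeRGB565(color, buffer);
    const Common::Vec4<u8> decoded = Common::Color::DecodeRGB565(buffer);
    // decoded is {206, 101, 49, 255}: each component was quantized to 5 or 6
    // bits and re-expanded to 8.
}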

View File

@@ -24,10 +24,10 @@
#define INSERT_PADDING_WORDS(num_words) \
std::array<u32, num_words> CONCAT2(pad, __LINE__) {}
/// These are similar to the INSERT_PADDING_* macros, but are needed for padding unions. This is
/// because unions can only be initialized by one member.
#define INSERT_UNION_PADDING_BYTES(num_bytes) std::array<u8, num_bytes> CONCAT2(pad, __LINE__)
#define INSERT_UNION_PADDING_WORDS(num_words) std::array<u32, num_words> CONCAT2(pad, __LINE__)
/// These are similar to the INSERT_PADDING_* macros but do not zero-initialize the contents.
/// This keeps the structure trivial to construct.
#define INSERT_PADDING_BYTES_NOINIT(num_bytes) std::array<u8, num_bytes> CONCAT2(pad, __LINE__)
#define INSERT_PADDING_WORDS_NOINIT(num_words) std::array<u32, num_words> CONCAT2(pad, __LINE__)
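// Hedged usage sketch (field names illustrative): NOINIT padding inside a
// register-layout struct keeps the struct trivially constructible, since the
// padding array carries no initializer.
struct ExamplePaddedRegs {
    u32 control;
    INSERT_PADDING_WORDS_NOINIT(3); // 12 unused bytes, deliberately uninitialized
    u32 status;
};
static_assert(sizeof(ExamplePaddedRegs) == 20);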
#ifndef _MSC_VER
@@ -93,6 +93,14 @@ __declspec(dllimport) void __stdcall DebugBreak(void);
return static_cast<T>(key) == 0; \
}
/// Evaluates a boolean expression, and returns a result unless that expression is true.
#define R_UNLESS(expr, res) \
{ \
if (!(expr)) { \
return res; \
} \
}
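// Hedged usage sketch: R_UNLESS early-returns the given result when the
// condition fails (the strings here stand in for real result codes).
inline const char* DescribeExample(const int* value) {
    R_UNLESS(value != nullptr, "null input"); // early return with the result
    R_UNLESS(*value >= 0, "negative input");
    return "ok";
}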
namespace Common {
[[nodiscard]] constexpr u32 MakeMagic(char a, char b, char c, char d) {

View File

@@ -11,16 +11,16 @@ namespace Common {
/// Ceiled integer division.
template <typename N, typename D>
requires std::is_integral_v<N>&& std::is_unsigned_v<D>[[nodiscard]] constexpr auto DivCeil(
N number, D divisor) {
return (static_cast<D>(number) + divisor - 1) / divisor;
requires std::is_integral_v<N>&& std::is_unsigned_v<D>[[nodiscard]] constexpr N DivCeil(N number,
D divisor) {
return static_cast<N>((static_cast<D>(number) + divisor - 1) / divisor);
}
/// Ceiled integer division with logarithmic divisor in base 2
template <typename N, typename D>
requires std::is_integral_v<N>&& std::is_unsigned_v<D>[[nodiscard]] constexpr auto DivCeilLog2(
requires std::is_integral_v<N>&& std::is_unsigned_v<D>[[nodiscard]] constexpr N DivCeilLog2(
N value, D alignment_log2) {
return (static_cast<D>(value) + (D(1) << alignment_log2) - 1) >> alignment_log2;
return static_cast<N>((static_cast<D>(value) + (D(1) << alignment_log2) - 1) >> alignment_log2);
}
} // namespace Common
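// Hedged usage sketch: both forms round the quotient up, and the result is
// returned in the numerator's type.
static_assert(Common::DivCeil(10, 4U) == 3);     // ceil(10 / 4)
static_assert(Common::DivCeilLog2(10, 2U) == 3); // ceil(10 / 2^2)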

View File

@@ -0,0 +1,602 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "common/parent_of_member.h"
#include "common/tree.h"
namespace Common {
namespace impl {
class IntrusiveRedBlackTreeImpl;
}
struct IntrusiveRedBlackTreeNode {
public:
using EntryType = RBEntry<IntrusiveRedBlackTreeNode>;
constexpr IntrusiveRedBlackTreeNode() = default;
void SetEntry(const EntryType& new_entry) {
entry = new_entry;
}
[[nodiscard]] EntryType& GetEntry() {
return entry;
}
[[nodiscard]] const EntryType& GetEntry() const {
return entry;
}
private:
EntryType entry{};
friend class impl::IntrusiveRedBlackTreeImpl;
template <class, class, class>
friend class IntrusiveRedBlackTree;
};
template <class T, class Traits, class Comparator>
class IntrusiveRedBlackTree;
namespace impl {
class IntrusiveRedBlackTreeImpl {
private:
template <class, class, class>
friend class ::Common::IntrusiveRedBlackTree;
using RootType = RBHead<IntrusiveRedBlackTreeNode>;
RootType root;
public:
template <bool Const>
class Iterator;
using value_type = IntrusiveRedBlackTreeNode;
using size_type = size_t;
using difference_type = ptrdiff_t;
using pointer = value_type*;
using const_pointer = const value_type*;
using reference = value_type&;
using const_reference = const value_type&;
using iterator = Iterator<false>;
using const_iterator = Iterator<true>;
template <bool Const>
class Iterator {
public:
using iterator_category = std::bidirectional_iterator_tag;
using value_type = typename IntrusiveRedBlackTreeImpl::value_type;
using difference_type = typename IntrusiveRedBlackTreeImpl::difference_type;
using pointer = std::conditional_t<Const, IntrusiveRedBlackTreeImpl::const_pointer,
IntrusiveRedBlackTreeImpl::pointer>;
using reference = std::conditional_t<Const, IntrusiveRedBlackTreeImpl::const_reference,
IntrusiveRedBlackTreeImpl::reference>;
private:
pointer node;
public:
explicit Iterator(pointer n) : node(n) {}
bool operator==(const Iterator& rhs) const {
return this->node == rhs.node;
}
bool operator!=(const Iterator& rhs) const {
return !(*this == rhs);
}
pointer operator->() const {
return this->node;
}
reference operator*() const {
return *this->node;
}
Iterator& operator++() {
this->node = GetNext(this->node);
return *this;
}
Iterator& operator--() {
this->node = GetPrev(this->node);
return *this;
}
Iterator operator++(int) {
const Iterator it{*this};
++(*this);
return it;
}
Iterator operator--(int) {
const Iterator it{*this};
--(*this);
return it;
}
operator Iterator<true>() const {
return Iterator<true>(this->node);
}
};
private:
// Define accessors using RB_* functions.
bool EmptyImpl() const {
return root.IsEmpty();
}
IntrusiveRedBlackTreeNode* GetMinImpl() const {
return RB_MIN(const_cast<RootType*>(&root));
}
IntrusiveRedBlackTreeNode* GetMaxImpl() const {
return RB_MAX(const_cast<RootType*>(&root));
}
IntrusiveRedBlackTreeNode* RemoveImpl(IntrusiveRedBlackTreeNode* node) {
return RB_REMOVE(&root, node);
}
public:
static IntrusiveRedBlackTreeNode* GetNext(IntrusiveRedBlackTreeNode* node) {
return RB_NEXT(node);
}
static IntrusiveRedBlackTreeNode* GetPrev(IntrusiveRedBlackTreeNode* node) {
return RB_PREV(node);
}
static const IntrusiveRedBlackTreeNode* GetNext(const IntrusiveRedBlackTreeNode* node) {
return static_cast<const IntrusiveRedBlackTreeNode*>(
GetNext(const_cast<IntrusiveRedBlackTreeNode*>(node)));
}
static const IntrusiveRedBlackTreeNode* GetPrev(const IntrusiveRedBlackTreeNode* node) {
return static_cast<const IntrusiveRedBlackTreeNode*>(
GetPrev(const_cast<IntrusiveRedBlackTreeNode*>(node)));
}
public:
constexpr IntrusiveRedBlackTreeImpl() {}
// Iterator accessors.
iterator begin() {
return iterator(this->GetMinImpl());
}
const_iterator begin() const {
return const_iterator(this->GetMinImpl());
}
iterator end() {
return iterator(static_cast<IntrusiveRedBlackTreeNode*>(nullptr));
}
const_iterator end() const {
return const_iterator(static_cast<const IntrusiveRedBlackTreeNode*>(nullptr));
}
const_iterator cbegin() const {
return this->begin();
}
const_iterator cend() const {
return this->end();
}
iterator iterator_to(reference ref) {
return iterator(&ref);
}
const_iterator iterator_to(const_reference ref) const {
return const_iterator(&ref);
}
// Content management.
bool empty() const {
return this->EmptyImpl();
}
reference back() {
return *this->GetMaxImpl();
}
const_reference back() const {
return *this->GetMaxImpl();
}
reference front() {
return *this->GetMinImpl();
}
const_reference front() const {
return *this->GetMinImpl();
}
iterator erase(iterator it) {
auto cur = std::addressof(*it);
auto next = GetNext(cur);
this->RemoveImpl(cur);
return iterator(next);
}
};
} // namespace impl
template <typename T>
concept HasLightCompareType = requires {
{ std::is_same<typename T::LightCompareType, void>::value }
->std::convertible_to<bool>;
};
namespace impl {
template <typename T, typename Default>
consteval auto* GetLightCompareType() {
if constexpr (HasLightCompareType<T>) {
return static_cast<typename T::LightCompareType*>(nullptr);
} else {
return static_cast<Default*>(nullptr);
}
}
} // namespace impl
template <typename T, typename Default>
using LightCompareType = std::remove_pointer_t<decltype(impl::GetLightCompareType<T, Default>())>;
template <class T, class Traits, class Comparator>
class IntrusiveRedBlackTree {
public:
using ImplType = impl::IntrusiveRedBlackTreeImpl;
private:
ImplType impl{};
public:
template <bool Const>
class Iterator;
using value_type = T;
using size_type = size_t;
using difference_type = ptrdiff_t;
using pointer = T*;
using const_pointer = const T*;
using reference = T&;
using const_reference = const T&;
using iterator = Iterator<false>;
using const_iterator = Iterator<true>;
using light_value_type = LightCompareType<Comparator, value_type>;
using const_light_pointer = const light_value_type*;
using const_light_reference = const light_value_type&;
template <bool Const>
class Iterator {
public:
friend class IntrusiveRedBlackTree<T, Traits, Comparator>;
using ImplIterator =
std::conditional_t<Const, ImplType::const_iterator, ImplType::iterator>;
using iterator_category = std::bidirectional_iterator_tag;
using value_type = typename IntrusiveRedBlackTree::value_type;
using difference_type = typename IntrusiveRedBlackTree::difference_type;
using pointer = std::conditional_t<Const, IntrusiveRedBlackTree::const_pointer,
IntrusiveRedBlackTree::pointer>;
using reference = std::conditional_t<Const, IntrusiveRedBlackTree::const_reference,
IntrusiveRedBlackTree::reference>;
private:
ImplIterator iterator;
private:
explicit Iterator(ImplIterator it) : iterator(it) {}
explicit Iterator(typename std::conditional<Const, ImplType::const_iterator,
ImplType::iterator>::type::pointer ptr)
: iterator(ptr) {}
ImplIterator GetImplIterator() const {
return this->iterator;
}
public:
bool operator==(const Iterator& rhs) const {
return this->iterator == rhs.iterator;
}
bool operator!=(const Iterator& rhs) const {
return !(*this == rhs);
}
pointer operator->() const {
return Traits::GetParent(std::addressof(*this->iterator));
}
reference operator*() const {
return *Traits::GetParent(std::addressof(*this->iterator));
}
Iterator& operator++() {
++this->iterator;
return *this;
}
Iterator& operator--() {
--this->iterator;
return *this;
}
Iterator operator++(int) {
const Iterator it{*this};
++this->iterator;
return it;
}
Iterator operator--(int) {
const Iterator it{*this};
--this->iterator;
return it;
}
operator Iterator<true>() const {
return Iterator<true>(this->iterator);
}
};
private:
static int CompareImpl(const IntrusiveRedBlackTreeNode* lhs,
const IntrusiveRedBlackTreeNode* rhs) {
return Comparator::Compare(*Traits::GetParent(lhs), *Traits::GetParent(rhs));
}
static int LightCompareImpl(const void* elm, const IntrusiveRedBlackTreeNode* rhs) {
return Comparator::Compare(*static_cast<const_light_pointer>(elm), *Traits::GetParent(rhs));
}
// Define accessors using RB_* functions.
IntrusiveRedBlackTreeNode* InsertImpl(IntrusiveRedBlackTreeNode* node) {
return RB_INSERT(&impl.root, node, CompareImpl);
}
IntrusiveRedBlackTreeNode* FindImpl(const IntrusiveRedBlackTreeNode* node) const {
return RB_FIND(const_cast<ImplType::RootType*>(&impl.root),
const_cast<IntrusiveRedBlackTreeNode*>(node), CompareImpl);
}
IntrusiveRedBlackTreeNode* NFindImpl(const IntrusiveRedBlackTreeNode* node) const {
return RB_NFIND(const_cast<ImplType::RootType*>(&impl.root),
const_cast<IntrusiveRedBlackTreeNode*>(node), CompareImpl);
}
IntrusiveRedBlackTreeNode* FindLightImpl(const_light_pointer lelm) const {
return RB_FIND_LIGHT(const_cast<ImplType::RootType*>(&impl.root),
static_cast<const void*>(lelm), LightCompareImpl);
}
IntrusiveRedBlackTreeNode* NFindLightImpl(const_light_pointer lelm) const {
return RB_NFIND_LIGHT(const_cast<ImplType::RootType*>(&impl.root),
static_cast<const void*>(lelm), LightCompareImpl);
}
public:
constexpr IntrusiveRedBlackTree() = default;
// Iterator accessors.
iterator begin() {
return iterator(this->impl.begin());
}
const_iterator begin() const {
return const_iterator(this->impl.begin());
}
iterator end() {
return iterator(this->impl.end());
}
const_iterator end() const {
return const_iterator(this->impl.end());
}
const_iterator cbegin() const {
return this->begin();
}
const_iterator cend() const {
return this->end();
}
iterator iterator_to(reference ref) {
return iterator(this->impl.iterator_to(*Traits::GetNode(std::addressof(ref))));
}
const_iterator iterator_to(const_reference ref) const {
return const_iterator(this->impl.iterator_to(*Traits::GetNode(std::addressof(ref))));
}
// Content management.
bool empty() const {
return this->impl.empty();
}
reference back() {
return *Traits::GetParent(std::addressof(this->impl.back()));
}
const_reference back() const {
return *Traits::GetParent(std::addressof(this->impl.back()));
}
reference front() {
return *Traits::GetParent(std::addressof(this->impl.front()));
}
const_reference front() const {
return *Traits::GetParent(std::addressof(this->impl.front()));
}
iterator erase(iterator it) {
return iterator(this->impl.erase(it.GetImplIterator()));
}
iterator insert(reference ref) {
ImplType::pointer node = Traits::GetNode(std::addressof(ref));
this->InsertImpl(node);
return iterator(node);
}
iterator find(const_reference ref) const {
return iterator(this->FindImpl(Traits::GetNode(std::addressof(ref))));
}
iterator nfind(const_reference ref) const {
return iterator(this->NFindImpl(Traits::GetNode(std::addressof(ref))));
}
iterator find_light(const_light_reference ref) const {
return iterator(this->FindLightImpl(std::addressof(ref)));
}
iterator nfind_light(const_light_reference ref) const {
return iterator(this->NFindLightImpl(std::addressof(ref)));
}
};
template <auto T, class Derived = impl::GetParentType<T>>
class IntrusiveRedBlackTreeMemberTraits;
template <class Parent, IntrusiveRedBlackTreeNode Parent::*Member, class Derived>
class IntrusiveRedBlackTreeMemberTraits<Member, Derived> {
public:
template <class Comparator>
using TreeType = IntrusiveRedBlackTree<Derived, IntrusiveRedBlackTreeMemberTraits, Comparator>;
using TreeTypeImpl = impl::IntrusiveRedBlackTreeImpl;
private:
template <class, class, class>
friend class IntrusiveRedBlackTree;
friend class impl::IntrusiveRedBlackTreeImpl;
static constexpr IntrusiveRedBlackTreeNode* GetNode(Derived* parent) {
return std::addressof(parent->*Member);
}
static constexpr IntrusiveRedBlackTreeNode const* GetNode(Derived const* parent) {
return std::addressof(parent->*Member);
}
static constexpr Derived* GetParent(IntrusiveRedBlackTreeNode* node) {
return GetParentPointer<Member, Derived>(node);
}
static constexpr Derived const* GetParent(const IntrusiveRedBlackTreeNode* node) {
return GetParentPointer<Member, Derived>(node);
}
private:
static constexpr TypedStorage<Derived> DerivedStorage = {};
static_assert(GetParent(GetNode(GetPointer(DerivedStorage))) == GetPointer(DerivedStorage));
};
template <auto T, class Derived = impl::GetParentType<T>>
class IntrusiveRedBlackTreeMemberTraitsDeferredAssert;
template <class Parent, IntrusiveRedBlackTreeNode Parent::*Member, class Derived>
class IntrusiveRedBlackTreeMemberTraitsDeferredAssert<Member, Derived> {
public:
template <class Comparator>
using TreeType =
IntrusiveRedBlackTree<Derived, IntrusiveRedBlackTreeMemberTraitsDeferredAssert, Comparator>;
using TreeTypeImpl = impl::IntrusiveRedBlackTreeImpl;
static constexpr bool IsValid() {
TypedStorage<Derived> DerivedStorage = {};
return GetParent(GetNode(GetPointer(DerivedStorage))) == GetPointer(DerivedStorage);
}
private:
template <class, class, class>
friend class IntrusiveRedBlackTree;
friend class impl::IntrusiveRedBlackTreeImpl;
static constexpr IntrusiveRedBlackTreeNode* GetNode(Derived* parent) {
return std::addressof(parent->*Member);
}
static constexpr IntrusiveRedBlackTreeNode const* GetNode(Derived const* parent) {
return std::addressof(parent->*Member);
}
static constexpr Derived* GetParent(IntrusiveRedBlackTreeNode* node) {
return GetParentPointer<Member, Derived>(node);
}
static constexpr Derived const* GetParent(const IntrusiveRedBlackTreeNode* node) {
return GetParentPointer<Member, Derived>(node);
}
};
template <class Derived>
class IntrusiveRedBlackTreeBaseNode : public IntrusiveRedBlackTreeNode {
public:
constexpr Derived* GetPrev() {
return static_cast<Derived*>(impl::IntrusiveRedBlackTreeImpl::GetPrev(this));
}
constexpr const Derived* GetPrev() const {
return static_cast<const Derived*>(impl::IntrusiveRedBlackTreeImpl::GetPrev(this));
}
constexpr Derived* GetNext() {
return static_cast<Derived*>(impl::IntrusiveRedBlackTreeImpl::GetNext(this));
}
constexpr const Derived* GetNext() const {
return static_cast<const Derived*>(impl::IntrusiveRedBlackTreeImpl::GetNext(this));
}
};
template <class Derived>
class IntrusiveRedBlackTreeBaseTraits {
public:
template <class Comparator>
using TreeType = IntrusiveRedBlackTree<Derived, IntrusiveRedBlackTreeBaseTraits, Comparator>;
using TreeTypeImpl = impl::IntrusiveRedBlackTreeImpl;
private:
template <class, class, class>
friend class IntrusiveRedBlackTree;
friend class impl::IntrusiveRedBlackTreeImpl;
static constexpr IntrusiveRedBlackTreeNode* GetNode(Derived* parent) {
return static_cast<IntrusiveRedBlackTreeNode*>(parent);
}
static constexpr IntrusiveRedBlackTreeNode const* GetNode(Derived const* parent) {
return static_cast<const IntrusiveRedBlackTreeNode*>(parent);
}
static constexpr Derived* GetParent(IntrusiveRedBlackTreeNode* node) {
return static_cast<Derived*>(node);
}
static constexpr Derived const* GetParent(const IntrusiveRedBlackTreeNode* node) {
return static_cast<const Derived*>(node);
}
};
} // namespace Common
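// Hedged usage sketch (type and names illustrative): an intrusive tree keyed by
// an integer. The container never allocates; elements embed the node and must
// outlive their membership in the tree.
struct MyItem {
    int key{};
    Common::IntrusiveRedBlackTreeNode node{};
};

struct MyItemComparator {
    static int Compare(const MyItem& lhs, const MyItem& rhs) {
        if (lhs.key < rhs.key) {
            return -1;
        }
        if (lhs.key > rhs.key) {
            return 1;
        }
        return 0;
    }
};

using MyItemTree =
    Common::IntrusiveRedBlackTreeMemberTraits<&MyItem::node>::TreeType<MyItemComparator>;

inline void IntrusiveTreeExample() {
    MyItem a{.key = 2};
    MyItem b{.key = 1};
    MyItemTree tree;
    tree.insert(a);
    tree.insert(b);
    // Iterates in ascending key order: b (1), then a (2).
    for (MyItem& item : tree) {
        (void)item;
    }
}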

View File

@@ -145,10 +145,18 @@ void ColorConsoleBackend::Write(const Entry& entry) {
PrintColoredMessage(entry);
}
// _SH_DENYWR allows read only access to the file for other programs.
// It is #defined to 0 on other platforms
FileBackend::FileBackend(const std::string& filename)
: file(filename, "w", _SH_DENYWR), bytes_written(0) {}
FileBackend::FileBackend(const std::string& filename) : bytes_written(0) {
if (Common::FS::Exists(filename + ".old.txt")) {
Common::FS::Delete(filename + ".old.txt");
}
if (Common::FS::Exists(filename)) {
Common::FS::Rename(filename, filename + ".old.txt");
}
// _SH_DENYWR allows read only access to the file for other programs.
// It is #defined to 0 on other platforms
file = Common::FS::IOFile(filename, "w", _SH_DENYWR);
}
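// Hedged illustration of the rotation above (filename illustrative): on a
// second run, constructing FileBackend("yuzu_log.txt") first deletes any stale
// "yuzu_log.txt.old.txt", then renames the previous log to that name, so at
// most one prior log is retained alongside the fresh file.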
void FileBackend::Write(const Entry& entry) {
// prevent logs from going over the maximum size (in case it's spamming and the user doesn't

View File

@@ -1,11 +0,0 @@
// Copyright 2018 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/memory_hook.h"
namespace Common {
MemoryHook::~MemoryHook() = default;
} // namespace Common

View File

@@ -1,47 +0,0 @@
// Copyright 2016 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <memory>
#include <optional>
#include "common/common_types.h"
namespace Common {
/**
* Memory hooks have two purposes:
* 1. To allow reads and writes to a region of memory to be intercepted. This is used to implement
* texture forwarding and memory breakpoints for debugging.
* 2. To allow for the implementation of MMIO devices.
*
* A hook may be mapped to multiple regions of memory.
*
* If std::nullopt or false is returned from a function, the read/write request is passed through
* to the underlying memory region.
*/
class MemoryHook {
public:
virtual ~MemoryHook();
virtual std::optional<bool> IsValidAddress(VAddr addr) = 0;
virtual std::optional<u8> Read8(VAddr addr) = 0;
virtual std::optional<u16> Read16(VAddr addr) = 0;
virtual std::optional<u32> Read32(VAddr addr) = 0;
virtual std::optional<u64> Read64(VAddr addr) = 0;
virtual bool ReadBlock(VAddr src_addr, void* dest_buffer, std::size_t size) = 0;
virtual bool Write8(VAddr addr, u8 data) = 0;
virtual bool Write16(VAddr addr, u16 data) = 0;
virtual bool Write32(VAddr addr, u32 data) = 0;
virtual bool Write64(VAddr addr, u64 data) = 0;
virtual bool WriteBlock(VAddr dest_addr, const void* src_buffer, std::size_t size) = 0;
};
using MemoryHookPointer = std::shared_ptr<MemoryHook>;
} // namespace Common

View File

@@ -10,16 +10,10 @@ PageTable::PageTable() = default;
PageTable::~PageTable() noexcept = default;
void PageTable::Resize(std::size_t address_space_width_in_bits, std::size_t page_size_in_bits,
bool has_attribute) {
const std::size_t num_page_table_entries{1ULL
<< (address_space_width_in_bits - page_size_in_bits)};
void PageTable::Resize(size_t address_space_width_in_bits, size_t page_size_in_bits) {
const size_t num_page_table_entries{1ULL << (address_space_width_in_bits - page_size_in_bits)};
pointers.resize(num_page_table_entries);
backing_addr.resize(num_page_table_entries);
if (has_attribute) {
attributes.resize(num_page_table_entries);
}
}
} // namespace Common

View File

@@ -4,10 +4,10 @@
#pragma once
#include <atomic>
#include <tuple>
#include "common/common_types.h"
#include "common/memory_hook.h"
#include "common/virtual_buffer.h"
namespace Common {
@@ -20,27 +20,6 @@ enum class PageType : u8 {
/// Page is mapped to regular memory, but also needs to check for rasterizer cache flushing and
/// invalidation
RasterizerCachedMemory,
/// Page is mapped to an I/O region. Reading from and writing to this page is handled by functions.
Special,
/// Page is allocated for use.
Allocated,
};
struct SpecialRegion {
enum class Type {
DebugHook,
IODevice,
} type;
MemoryHookPointer handler;
[[nodiscard]] bool operator<(const SpecialRegion& other) const {
return std::tie(type, handler) < std::tie(other.type, other.handler);
}
[[nodiscard]] bool operator==(const SpecialRegion& other) const {
return std::tie(type, handler) == std::tie(other.type, other.handler);
}
};
/**
@@ -48,6 +27,59 @@ struct SpecialRegion {
* mimics the way a real CPU page table works.
*/
struct PageTable {
/// Number of bits reserved for attribute tagging.
/// This can be at most the guaranteed alignment of the pointers in the page table.
static constexpr int ATTRIBUTE_BITS = 2;
/**
* Pair of host pointer and page type attribute.
* This uses the lower bits of a given pointer to store the attribute tag.
* Writing and reading the pointer attribute pair is guaranteed to be atomic for the same method
* call. In other words, they are guaranteed to be synchronized at all times.
*/
class PageInfo {
public:
/// Returns the page pointer
[[nodiscard]] u8* Pointer() const noexcept {
return ExtractPointer(raw.load(std::memory_order_relaxed));
}
/// Returns the page type attribute
[[nodiscard]] PageType Type() const noexcept {
return ExtractType(raw.load(std::memory_order_relaxed));
}
/// Returns the page pointer and attribute pair, extracted from the same atomic read
[[nodiscard]] std::pair<u8*, PageType> PointerType() const noexcept {
const uintptr_t non_atomic_raw = raw.load(std::memory_order_relaxed);
return {ExtractPointer(non_atomic_raw), ExtractType(non_atomic_raw)};
}
/// Returns the raw representation of the page information.
/// Use ExtractPointer and ExtractType to unpack the value.
[[nodiscard]] uintptr_t Raw() const noexcept {
return raw.load(std::memory_order_relaxed);
}
/// Write a page pointer and type pair atomically
void Store(u8* pointer, PageType type) noexcept {
raw.store(reinterpret_cast<uintptr_t>(pointer) | static_cast<uintptr_t>(type));
}
/// Unpack a pointer from a page info raw representation
[[nodiscard]] static u8* ExtractPointer(uintptr_t raw) noexcept {
return reinterpret_cast<u8*>(raw & (~uintptr_t{0} << ATTRIBUTE_BITS));
}
/// Unpack a page type from a page info raw representation
[[nodiscard]] static PageType ExtractType(uintptr_t raw) noexcept {
return static_cast<PageType>(raw & ((uintptr_t{1} << ATTRIBUTE_BITS) - 1));
}
private:
std::atomic<uintptr_t> raw;
};
PageTable();
~PageTable() noexcept;
@@ -58,25 +90,21 @@ struct PageTable {
PageTable& operator=(PageTable&&) noexcept = default;
/**
* Resizes the page table to be able to accomodate enough pages within
* Resizes the page table to be able to accommodate enough pages within
* a given address space.
*
* @param address_space_width_in_bits The address size width in bits.
* @param page_size_in_bits The page size in bits.
* @param has_attribute Whether or not this page has any backing attributes.
*/
void Resize(std::size_t address_space_width_in_bits, std::size_t page_size_in_bits,
bool has_attribute);
void Resize(size_t address_space_width_in_bits, size_t page_size_in_bits);
/**
* Vector of memory pointers backing each page. An entry can only be non-null if the
* corresponding entry in the `attributes` vector is of type `Memory`.
* corresponding attribute element is of type `Memory`.
*/
VirtualBuffer<u8*> pointers;
VirtualBuffer<PageInfo> pointers;
VirtualBuffer<u64> backing_addr;
VirtualBuffer<PageType> attributes;
};
} // namespace Common
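// Hedged usage sketch: PageInfo packs a host pointer and its PageType into one
// word, tagging the pointer's low ATTRIBUTE_BITS, so both can be recovered from
// a single atomic load.
inline void PageInfoExample(u8* host_pointer) {
    Common::PageTable::PageInfo info;
    info.Store(host_pointer, Common::PageType::Allocated);
    const auto [pointer, type] = info.PointerType(); // one atomic read for both
    (void)pointer;
    (void)type;
}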

View File

@@ -0,0 +1,191 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <type_traits>
#include "common/assert.h"
#include "common/common_types.h"
namespace Common {
namespace detail {
template <typename T, size_t Size, size_t Align>
struct TypedStorageImpl {
std::aligned_storage_t<Size, Align> storage_;
};
} // namespace detail
template <typename T>
using TypedStorage = detail::TypedStorageImpl<T, sizeof(T), alignof(T)>;
template <typename T>
static constexpr T* GetPointer(TypedStorage<T>& ts) {
return static_cast<T*>(static_cast<void*>(std::addressof(ts.storage_)));
}
template <typename T>
static constexpr const T* GetPointer(const TypedStorage<T>& ts) {
return static_cast<const T*>(static_cast<const void*>(std::addressof(ts.storage_)));
}
namespace impl {
template <size_t MaxDepth>
struct OffsetOfUnionHolder {
template <typename ParentType, typename MemberType, size_t Offset>
union UnionImpl {
using PaddingMember = char;
static constexpr size_t GetOffset() {
return Offset;
}
#pragma pack(push, 1)
struct {
PaddingMember padding[Offset];
MemberType members[(sizeof(ParentType) / sizeof(MemberType)) + 1];
} data;
#pragma pack(pop)
UnionImpl<ParentType, MemberType, Offset + 1> next_union;
};
template <typename ParentType, typename MemberType>
union UnionImpl<ParentType, MemberType, 0> {
static constexpr size_t GetOffset() {
return 0;
}
struct {
MemberType members[(sizeof(ParentType) / sizeof(MemberType)) + 1];
} data;
UnionImpl<ParentType, MemberType, 1> next_union;
};
template <typename ParentType, typename MemberType>
union UnionImpl<ParentType, MemberType, MaxDepth> {};
};
template <typename ParentType, typename MemberType>
struct OffsetOfCalculator {
using UnionHolder =
typename OffsetOfUnionHolder<sizeof(MemberType)>::template UnionImpl<ParentType, MemberType,
0>;
union Union {
char c{};
UnionHolder first_union;
TypedStorage<ParentType> parent;
constexpr Union() : c() {}
};
static constexpr Union U = {};
static constexpr const MemberType* GetNextAddress(const MemberType* start,
const MemberType* target) {
while (start < target) {
start++;
}
return start;
}
static constexpr std::ptrdiff_t GetDifference(const MemberType* start,
const MemberType* target) {
return (target - start) * sizeof(MemberType);
}
template <typename CurUnion>
static constexpr std::ptrdiff_t OffsetOfImpl(MemberType ParentType::*member,
CurUnion& cur_union) {
constexpr size_t Offset = CurUnion::GetOffset();
const auto target = std::addressof(GetPointer(U.parent)->*member);
const auto start = std::addressof(cur_union.data.members[0]);
const auto next = GetNextAddress(start, target);
if (next != target) {
if constexpr (Offset < sizeof(MemberType) - 1) {
return OffsetOfImpl(member, cur_union.next_union);
} else {
UNREACHABLE();
}
}
return (next - start) * sizeof(MemberType) + Offset;
}
static constexpr std::ptrdiff_t OffsetOf(MemberType ParentType::*member) {
return OffsetOfImpl(member, U.first_union);
}
};
template <typename T>
struct GetMemberPointerTraits;
template <typename P, typename M>
struct GetMemberPointerTraits<M P::*> {
using Parent = P;
using Member = M;
};
template <auto MemberPtr>
using GetParentType = typename GetMemberPointerTraits<decltype(MemberPtr)>::Parent;
template <auto MemberPtr>
using GetMemberType = typename GetMemberPointerTraits<decltype(MemberPtr)>::Member;
template <auto MemberPtr, typename RealParentType = GetParentType<MemberPtr>>
static inline std::ptrdiff_t OffsetOf = [] {
using DeducedParentType = GetParentType<MemberPtr>;
using MemberType = GetMemberType<MemberPtr>;
static_assert(std::is_base_of<DeducedParentType, RealParentType>::value ||
std::is_same<RealParentType, DeducedParentType>::value);
return OffsetOfCalculator<RealParentType, MemberType>::OffsetOf(MemberPtr);
}();
} // namespace impl
template <auto MemberPtr, typename RealParentType = impl::GetParentType<MemberPtr>>
constexpr RealParentType& GetParentReference(impl::GetMemberType<MemberPtr>* member) {
std::ptrdiff_t Offset = impl::OffsetOf<MemberPtr, RealParentType>;
return *static_cast<RealParentType*>(
static_cast<void*>(static_cast<uint8_t*>(static_cast<void*>(member)) - Offset));
}
template <auto MemberPtr, typename RealParentType = impl::GetParentType<MemberPtr>>
constexpr RealParentType const& GetParentReference(impl::GetMemberType<MemberPtr> const* member) {
std::ptrdiff_t Offset = impl::OffsetOf<MemberPtr, RealParentType>;
return *static_cast<const RealParentType*>(static_cast<const void*>(
static_cast<const uint8_t*>(static_cast<const void*>(member)) - Offset));
}
template <auto MemberPtr, typename RealParentType = impl::GetParentType<MemberPtr>>
constexpr RealParentType* GetParentPointer(impl::GetMemberType<MemberPtr>* member) {
return std::addressof(GetParentReference<MemberPtr, RealParentType>(member));
}
template <auto MemberPtr, typename RealParentType = impl::GetParentType<MemberPtr>>
constexpr RealParentType const* GetParentPointer(impl::GetMemberType<MemberPtr> const* member) {
return std::addressof(GetParentReference<MemberPtr, RealParentType>(member));
}
template <auto MemberPtr, typename RealParentType = impl::GetParentType<MemberPtr>>
constexpr RealParentType& GetParentReference(impl::GetMemberType<MemberPtr>& member) {
return GetParentReference<MemberPtr, RealParentType>(std::addressof(member));
}
template <auto MemberPtr, typename RealParentType = impl::GetParentType<MemberPtr>>
constexpr RealParentType const& GetParentReference(impl::GetMemberType<MemberPtr> const& member) {
return GetParentReference<MemberPtr, RealParentType>(std::addressof(member));
}
template <auto MemberPtr, typename RealParentType = impl::GetParentType<MemberPtr>>
constexpr RealParentType* GetParentPointer(impl::GetMemberType<MemberPtr>& member) {
return std::addressof(GetParentReference<MemberPtr, RealParentType>(member));
}
template <auto MemberPtr, typename RealParentType = impl::GetParentType<MemberPtr>>
constexpr RealParentType const* GetParentPointer(impl::GetMemberType<MemberPtr> const& member) {
return std::addressof(GetParentReference<MemberPtr, RealParentType>(member));
}
} // namespace Common
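// Hedged usage sketch (type illustrative): recovering the owning object from a
// pointer to one of its members, the problem the OffsetOf machinery above
// solves without offsetof on non-standard-layout types.
struct ExampleHolder {
    int first{};
    int second{};
};

inline ExampleHolder* ParentOfMemberExample(int* member_pointer) {
    // member_pointer is assumed to point at some ExampleHolder's `second` member.
    return Common::GetParentPointer<&ExampleHolder::second>(member_pointer);
}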

View File

@@ -394,7 +394,7 @@ public:
template <typename S, typename T2, typename F2>
friend S operator%(const S& p, const swapped_t v);
// Arithmetics + assignements
// Arithmetics + assignments
template <typename S, typename T2, typename F2>
friend S operator+=(const S& p, const swapped_t v);
@@ -451,7 +451,7 @@ S operator%(const S& i, const swap_struct_t<T, F> v) {
return i % v.swap();
}
// Arithmetics + assignements
// Arithmetics + assignments
template <typename S, typename T, typename F>
S& operator+=(S& i, const swap_struct_t<T, F> v) {
i += v.swap();

View File

@@ -1,159 +0,0 @@
// Copyright 2013 Dolphin Emulator Project / 2014 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <ctime>
#include <fmt/format.h>
#include "common/common_types.h"
#include "common/string_util.h"
#include "common/timer.h"
namespace Common {
std::chrono::milliseconds Timer::GetTimeMs() {
return std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::system_clock::now().time_since_epoch());
}
// --------------------------------------------
// Initiate, Start, Stop, and Update the time
// --------------------------------------------
// Set initial values for the class
Timer::Timer() : m_LastTime(0), m_StartTime(0), m_Running(false) {
Update();
}
// Write the starting time
void Timer::Start() {
m_StartTime = GetTimeMs();
m_Running = true;
}
// Stop the timer
void Timer::Stop() {
// Write the final time
m_LastTime = GetTimeMs();
m_Running = false;
}
// Update the last time variable
void Timer::Update() {
m_LastTime = GetTimeMs();
// TODO(ector) - QPF
}
// -------------------------------------
// Get time difference and elapsed time
// -------------------------------------
// Get the number of milliseconds since the last Update()
std::chrono::milliseconds Timer::GetTimeDifference() {
return GetTimeMs() - m_LastTime;
}
// Add the time difference since the last Update() to the starting time.
// This is used to compensate for a paused game.
void Timer::AddTimeDifference() {
m_StartTime += GetTimeDifference();
}
// Get the time elapsed since the Start()
std::chrono::milliseconds Timer::GetTimeElapsed() {
// If we have not started yet, return 1 (because then I don't
// have to change the FPS calculation in CoreRerecording.cpp).
if (m_StartTime.count() == 0)
return std::chrono::milliseconds(1);
// Return the final timer time if the timer is stopped
if (!m_Running)
return (m_LastTime - m_StartTime);
return (GetTimeMs() - m_StartTime);
}
// Get the formatted time elapsed since the Start()
std::string Timer::GetTimeElapsedFormatted() const {
// If we have not started yet, return zero
if (m_StartTime.count() == 0)
return "00:00:00:000";
// The number of milliseconds since the start.
// Use a different value if the timer is stopped.
std::chrono::milliseconds Milliseconds;
if (m_Running)
Milliseconds = GetTimeMs() - m_StartTime;
else
Milliseconds = m_LastTime - m_StartTime;
// Seconds
std::chrono::seconds Seconds = std::chrono::duration_cast<std::chrono::seconds>(Milliseconds);
// Minutes
std::chrono::minutes Minutes = std::chrono::duration_cast<std::chrono::minutes>(Milliseconds);
// Hours
std::chrono::hours Hours = std::chrono::duration_cast<std::chrono::hours>(Milliseconds);
std::string TmpStr = fmt::format("{:02}:{:02}:{:02}:{:03}", Hours.count(), Minutes.count() % 60,
Seconds.count() % 60, Milliseconds.count() % 1000);
return TmpStr;
}
// Get the number of seconds since January 1 1970
std::chrono::seconds Timer::GetTimeSinceJan1970() {
return std::chrono::duration_cast<std::chrono::seconds>(GetTimeMs());
}
std::chrono::seconds Timer::GetLocalTimeSinceJan1970() {
time_t sysTime, tzDiff, tzDST;
struct tm* gmTime;
time(&sysTime);
// Account for DST where needed
gmTime = localtime(&sysTime);
if (gmTime->tm_isdst == 1)
tzDST = 3600;
else
tzDST = 0;
// Lazy way to get local time in sec
gmTime = gmtime(&sysTime);
tzDiff = sysTime - mktime(gmTime);
return std::chrono::seconds(sysTime + tzDiff + tzDST);
}
// Return the current time formatted as Minutes:Seconds:Milliseconds
// in the form 00:00:000.
std::string Timer::GetTimeFormatted() {
time_t sysTime;
struct tm* gmTime;
char tmp[13];
time(&sysTime);
gmTime = localtime(&sysTime);
strftime(tmp, 6, "%M:%S", gmTime);
u64 milliseconds = static_cast<u64>(GetTimeMs().count()) % 1000;
return fmt::format("{}:{:03}", tmp, milliseconds);
}
// Returns a timestamp with decimals for precise time comparisons
// ----------------
double Timer::GetDoubleTime() {
// Get continuous timestamp
auto tmp_seconds = static_cast<u64>(GetTimeSinceJan1970().count());
const auto ms = static_cast<double>(static_cast<u64>(GetTimeMs().count()) % 1000);
// Remove a few years. We only really want enough seconds to make
// sure that we are detecting actual actions, perhaps 60 seconds is
// enough really, but I leave a year of seconds anyway, in case the
// user's clock is incorrect or something like that.
tmp_seconds = tmp_seconds - (38 * 365 * 24 * 60 * 60);
// Make a smaller integer that fits in the double
const auto seconds = static_cast<u32>(tmp_seconds);
return seconds + ms;
}
} // namespace Common

View File

@@ -1,41 +0,0 @@
// Copyright 2013 Dolphin Emulator Project / 2014 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <chrono>
#include <string>
#include "common/common_types.h"
namespace Common {
class Timer {
public:
Timer();
void Start();
void Stop();
void Update();
// The time difference is always returned in milliseconds, regardless of alternative internal
// representation
[[nodiscard]] std::chrono::milliseconds GetTimeDifference();
void AddTimeDifference();
[[nodiscard]] static std::chrono::seconds GetTimeSinceJan1970();
[[nodiscard]] static std::chrono::seconds GetLocalTimeSinceJan1970();
[[nodiscard]] static double GetDoubleTime();
[[nodiscard]] static std::string GetTimeFormatted();
[[nodiscard]] std::string GetTimeElapsedFormatted() const;
[[nodiscard]] std::chrono::milliseconds GetTimeElapsed();
[[nodiscard]] static std::chrono::milliseconds GetTimeMs();
private:
std::chrono::milliseconds m_LastTime;
std::chrono::milliseconds m_StartTime;
bool m_Running;
};
} // namespace Common

src/common/tree.h Normal file
View File

@@ -0,0 +1,674 @@
/* $NetBSD: tree.h,v 1.8 2004/03/28 19:38:30 provos Exp $ */
/* $OpenBSD: tree.h,v 1.7 2002/10/17 21:51:54 art Exp $ */
/* $FreeBSD$ */
/*-
* Copyright 2002 Niels Provos <provos@citi.umich.edu>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#pragma once
/*
* This file defines data structures for red-black trees.
*
* A red-black tree is a binary search tree with the node color as an
* extra attribute. It fulfills a set of conditions:
* - every search path from the root to a leaf consists of the
* same number of black nodes,
* - each red node (except for the root) has a black parent,
* - each leaf node is black.
*
* Every operation on a red-black tree is bounded as O(lg n).
* The maximum height of a red-black tree is 2lg (n+1).
*/
namespace Common {
template <typename T>
class RBHead {
public:
[[nodiscard]] T* Root() {
return rbh_root;
}
[[nodiscard]] const T* Root() const {
return rbh_root;
}
void SetRoot(T* root) {
rbh_root = root;
}
[[nodiscard]] bool IsEmpty() const {
return Root() == nullptr;
}
private:
T* rbh_root = nullptr;
};
enum class EntryColor {
Black,
Red,
};
template <typename T>
class RBEntry {
public:
[[nodiscard]] T* Left() {
return rbe_left;
}
[[nodiscard]] const T* Left() const {
return rbe_left;
}
void SetLeft(T* left) {
rbe_left = left;
}
[[nodiscard]] T* Right() {
return rbe_right;
}
[[nodiscard]] const T* Right() const {
return rbe_right;
}
void SetRight(T* right) {
rbe_right = right;
}
[[nodiscard]] T* Parent() {
return rbe_parent;
}
[[nodiscard]] const T* Parent() const {
return rbe_parent;
}
void SetParent(T* parent) {
rbe_parent = parent;
}
[[nodiscard]] bool IsBlack() const {
return rbe_color == EntryColor::Black;
}
[[nodiscard]] bool IsRed() const {
return rbe_color == EntryColor::Red;
}
[[nodiscard]] EntryColor Color() const {
return rbe_color;
}
void SetColor(EntryColor color) {
rbe_color = color;
}
private:
T* rbe_left = nullptr;
T* rbe_right = nullptr;
T* rbe_parent = nullptr;
EntryColor rbe_color{};
};
template <typename Node>
[[nodiscard]] RBEntry<Node>& RB_ENTRY(Node* node) {
return node->GetEntry();
}
template <typename Node>
[[nodiscard]] const RBEntry<Node>& RB_ENTRY(const Node* node) {
return node->GetEntry();
}
template <typename Node>
[[nodiscard]] Node* RB_PARENT(Node* node) {
return RB_ENTRY(node).Parent();
}
template <typename Node>
[[nodiscard]] const Node* RB_PARENT(const Node* node) {
return RB_ENTRY(node).Parent();
}
template <typename Node>
void RB_SET_PARENT(Node* node, Node* parent) {
return RB_ENTRY(node).SetParent(parent);
}
template <typename Node>
[[nodiscard]] Node* RB_LEFT(Node* node) {
return RB_ENTRY(node).Left();
}
template <typename Node>
[[nodiscard]] const Node* RB_LEFT(const Node* node) {
return RB_ENTRY(node).Left();
}
template <typename Node>
void RB_SET_LEFT(Node* node, Node* left) {
return RB_ENTRY(node).SetLeft(left);
}
template <typename Node>
[[nodiscard]] Node* RB_RIGHT(Node* node) {
return RB_ENTRY(node).Right();
}
template <typename Node>
[[nodiscard]] const Node* RB_RIGHT(const Node* node) {
return RB_ENTRY(node).Right();
}
template <typename Node>
void RB_SET_RIGHT(Node* node, Node* right) {
return RB_ENTRY(node).SetRight(right);
}
template <typename Node>
[[nodiscard]] bool RB_IS_BLACK(const Node* node) {
return RB_ENTRY(node).IsBlack();
}
template <typename Node>
[[nodiscard]] bool RB_IS_RED(const Node* node) {
return RB_ENTRY(node).IsRed();
}
template <typename Node>
[[nodiscard]] EntryColor RB_COLOR(const Node* node) {
return RB_ENTRY(node).Color();
}
template <typename Node>
void RB_SET_COLOR(Node* node, EntryColor color) {
return RB_ENTRY(node).SetColor(color);
}
template <typename Node>
void RB_SET(Node* node, Node* parent) {
auto& entry = RB_ENTRY(node);
entry.SetParent(parent);
entry.SetLeft(nullptr);
entry.SetRight(nullptr);
entry.SetColor(EntryColor::Red);
}
template <typename Node>
void RB_SET_BLACKRED(Node* black, Node* red) {
RB_SET_COLOR(black, EntryColor::Black);
RB_SET_COLOR(red, EntryColor::Red);
}
template <typename Node>
void RB_ROTATE_LEFT(RBHead<Node>* head, Node* elm, Node*& tmp) {
tmp = RB_RIGHT(elm);
RB_SET_RIGHT(elm, RB_LEFT(tmp));
if (RB_RIGHT(elm) != nullptr) {
RB_SET_PARENT(RB_LEFT(tmp), elm);
}
RB_SET_PARENT(tmp, RB_PARENT(elm));
if (RB_PARENT(tmp) != nullptr) {
if (elm == RB_LEFT(RB_PARENT(elm))) {
RB_SET_LEFT(RB_PARENT(elm), tmp);
} else {
RB_SET_RIGHT(RB_PARENT(elm), tmp);
}
} else {
head->SetRoot(tmp);
}
RB_SET_LEFT(tmp, elm);
RB_SET_PARENT(elm, tmp);
}
template <typename Node>
void RB_ROTATE_RIGHT(RBHead<Node>* head, Node* elm, Node*& tmp) {
tmp = RB_LEFT(elm);
RB_SET_LEFT(elm, RB_RIGHT(tmp));
if (RB_LEFT(elm) != nullptr) {
RB_SET_PARENT(RB_RIGHT(tmp), elm);
}
RB_SET_PARENT(tmp, RB_PARENT(elm));
if (RB_PARENT(tmp) != nullptr) {
if (elm == RB_LEFT(RB_PARENT(elm))) {
RB_SET_LEFT(RB_PARENT(elm), tmp);
} else {
RB_SET_RIGHT(RB_PARENT(elm), tmp);
}
} else {
head->SetRoot(tmp);
}
RB_SET_RIGHT(tmp, elm);
RB_SET_PARENT(elm, tmp);
}
template <typename Node>
void RB_INSERT_COLOR(RBHead<Node>* head, Node* elm) {
Node* parent = nullptr;
Node* tmp = nullptr;
while ((parent = RB_PARENT(elm)) != nullptr && RB_IS_RED(parent)) {
Node* gparent = RB_PARENT(parent);
if (parent == RB_LEFT(gparent)) {
tmp = RB_RIGHT(gparent);
if (tmp && RB_IS_RED(tmp)) {
RB_SET_COLOR(tmp, EntryColor::Black);
RB_SET_BLACKRED(parent, gparent);
elm = gparent;
continue;
}
if (RB_RIGHT(parent) == elm) {
RB_ROTATE_LEFT(head, parent, tmp);
tmp = parent;
parent = elm;
elm = tmp;
}
RB_SET_BLACKRED(parent, gparent);
RB_ROTATE_RIGHT(head, gparent, tmp);
} else {
tmp = RB_LEFT(gparent);
if (tmp && RB_IS_RED(tmp)) {
RB_SET_COLOR(tmp, EntryColor::Black);
RB_SET_BLACKRED(parent, gparent);
elm = gparent;
continue;
}
if (RB_LEFT(parent) == elm) {
RB_ROTATE_RIGHT(head, parent, tmp);
tmp = parent;
parent = elm;
elm = tmp;
}
RB_SET_BLACKRED(parent, gparent);
RB_ROTATE_LEFT(head, gparent, tmp);
}
}
RB_SET_COLOR(head->Root(), EntryColor::Black);
}
template <typename Node>
void RB_REMOVE_COLOR(RBHead<Node>* head, Node* parent, Node* elm) {
Node* tmp;
while ((elm == nullptr || RB_IS_BLACK(elm)) && elm != head->Root()) {
if (RB_LEFT(parent) == elm) {
tmp = RB_RIGHT(parent);
if (RB_IS_RED(tmp)) {
RB_SET_BLACKRED(tmp, parent);
RB_ROTATE_LEFT(head, parent, tmp);
tmp = RB_RIGHT(parent);
}
if ((RB_LEFT(tmp) == nullptr || RB_IS_BLACK(RB_LEFT(tmp))) &&
(RB_RIGHT(tmp) == nullptr || RB_IS_BLACK(RB_RIGHT(tmp)))) {
RB_SET_COLOR(tmp, EntryColor::Red);
elm = parent;
parent = RB_PARENT(elm);
} else {
if (RB_RIGHT(tmp) == nullptr || RB_IS_BLACK(RB_RIGHT(tmp))) {
Node* oleft;
if ((oleft = RB_LEFT(tmp)) != nullptr) {
RB_SET_COLOR(oleft, EntryColor::Black);
}
RB_SET_COLOR(tmp, EntryColor::Red);
RB_ROTATE_RIGHT(head, tmp, oleft);
tmp = RB_RIGHT(parent);
}
RB_SET_COLOR(tmp, RB_COLOR(parent));
RB_SET_COLOR(parent, EntryColor::Black);
if (RB_RIGHT(tmp)) {
RB_SET_COLOR(RB_RIGHT(tmp), EntryColor::Black);
}
RB_ROTATE_LEFT(head, parent, tmp);
elm = head->Root();
break;
}
} else {
tmp = RB_LEFT(parent);
if (RB_IS_RED(tmp)) {
RB_SET_BLACKRED(tmp, parent);
RB_ROTATE_RIGHT(head, parent, tmp);
tmp = RB_LEFT(parent);
}
if ((RB_LEFT(tmp) == nullptr || RB_IS_BLACK(RB_LEFT(tmp))) &&
(RB_RIGHT(tmp) == nullptr || RB_IS_BLACK(RB_RIGHT(tmp)))) {
RB_SET_COLOR(tmp, EntryColor::Red);
elm = parent;
parent = RB_PARENT(elm);
} else {
if (RB_LEFT(tmp) == nullptr || RB_IS_BLACK(RB_LEFT(tmp))) {
Node* oright;
if ((oright = RB_RIGHT(tmp)) != nullptr) {
RB_SET_COLOR(oright, EntryColor::Black);
}
RB_SET_COLOR(tmp, EntryColor::Red);
RB_ROTATE_LEFT(head, tmp, oright);
tmp = RB_LEFT(parent);
}
RB_SET_COLOR(tmp, RB_COLOR(parent));
RB_SET_COLOR(parent, EntryColor::Black);
if (RB_LEFT(tmp)) {
RB_SET_COLOR(RB_LEFT(tmp), EntryColor::Black);
}
RB_ROTATE_RIGHT(head, parent, tmp);
elm = head->Root();
break;
}
}
}
if (elm) {
RB_SET_COLOR(elm, EntryColor::Black);
}
}
template <typename Node>
Node* RB_REMOVE(RBHead<Node>* head, Node* elm) {
Node* child = nullptr;
Node* parent = nullptr;
Node* old = elm;
EntryColor color{};
const auto finalize = [&] {
if (color == EntryColor::Black) {
RB_REMOVE_COLOR(head, parent, child);
}
return old;
};
if (RB_LEFT(elm) == nullptr) {
child = RB_RIGHT(elm);
} else if (RB_RIGHT(elm) == nullptr) {
child = RB_LEFT(elm);
} else {
Node* left;
elm = RB_RIGHT(elm);
while ((left = RB_LEFT(elm)) != nullptr) {
elm = left;
}
child = RB_RIGHT(elm);
parent = RB_PARENT(elm);
color = RB_COLOR(elm);
if (child) {
RB_SET_PARENT(child, parent);
}
if (parent) {
if (RB_LEFT(parent) == elm) {
RB_SET_LEFT(parent, child);
} else {
RB_SET_RIGHT(parent, child);
}
} else {
head->SetRoot(child);
}
if (RB_PARENT(elm) == old) {
parent = elm;
}
elm->SetEntry(old->GetEntry());
if (RB_PARENT(old)) {
if (RB_LEFT(RB_PARENT(old)) == old) {
RB_SET_LEFT(RB_PARENT(old), elm);
} else {
RB_SET_RIGHT(RB_PARENT(old), elm);
}
} else {
head->SetRoot(elm);
}
RB_SET_PARENT(RB_LEFT(old), elm);
if (RB_RIGHT(old)) {
RB_SET_PARENT(RB_RIGHT(old), elm);
}
if (parent) {
left = parent;
}
return finalize();
}
parent = RB_PARENT(elm);
color = RB_COLOR(elm);
if (child) {
RB_SET_PARENT(child, parent);
}
if (parent) {
if (RB_LEFT(parent) == elm) {
RB_SET_LEFT(parent, child);
} else {
RB_SET_RIGHT(parent, child);
}
} else {
head->SetRoot(child);
}
return finalize();
}
// Inserts a node into the RB tree
template <typename Node, typename CompareFunction>
Node* RB_INSERT(RBHead<Node>* head, Node* elm, CompareFunction cmp) {
Node* parent = nullptr;
Node* tmp = head->Root();
int comp = 0;
while (tmp) {
parent = tmp;
comp = cmp(elm, parent);
if (comp < 0) {
tmp = RB_LEFT(tmp);
} else if (comp > 0) {
tmp = RB_RIGHT(tmp);
} else {
return tmp;
}
}
RB_SET(elm, parent);
if (parent != nullptr) {
if (comp < 0) {
RB_SET_LEFT(parent, elm);
} else {
RB_SET_RIGHT(parent, elm);
}
} else {
head->SetRoot(elm);
}
RB_INSERT_COLOR(head, elm);
return nullptr;
}
// Finds the node with the same key as elm
template <typename Node, typename CompareFunction>
Node* RB_FIND(RBHead<Node>* head, Node* elm, CompareFunction cmp) {
Node* tmp = head->Root();
while (tmp) {
const int comp = cmp(elm, tmp);
if (comp < 0) {
tmp = RB_LEFT(tmp);
} else if (comp > 0) {
tmp = RB_RIGHT(tmp);
} else {
return tmp;
}
}
return nullptr;
}
// Finds the first node greater than or equal to the search key
template <typename Node, typename CompareFunction>
Node* RB_NFIND(RBHead<Node>* head, Node* elm, CompareFunction cmp) {
Node* tmp = head->Root();
Node* res = nullptr;
while (tmp) {
const int comp = cmp(elm, tmp);
if (comp < 0) {
res = tmp;
tmp = RB_LEFT(tmp);
} else if (comp > 0) {
tmp = RB_RIGHT(tmp);
} else {
return tmp;
}
}
return res;
}
// Finds the node with the same key as lelm
template <typename Node, typename CompareFunction>
Node* RB_FIND_LIGHT(RBHead<Node>* head, const void* lelm, CompareFunction lcmp) {
Node* tmp = head->Root();
while (tmp) {
const int comp = lcmp(lelm, tmp);
if (comp < 0) {
tmp = RB_LEFT(tmp);
} else if (comp > 0) {
tmp = RB_RIGHT(tmp);
} else {
return tmp;
}
}
return nullptr;
}
// Finds the first node greater than or equal to the search key
template <typename Node, typename CompareFunction>
Node* RB_NFIND_LIGHT(RBHead<Node>* head, const void* lelm, CompareFunction lcmp) {
Node* tmp = head->Root();
Node* res = nullptr;
while (tmp) {
const int comp = lcmp(lelm, tmp);
if (comp < 0) {
res = tmp;
tmp = RB_LEFT(tmp);
} else if (comp > 0) {
tmp = RB_RIGHT(tmp);
} else {
return tmp;
}
}
return res;
}
template <typename Node>
Node* RB_NEXT(Node* elm) {
if (RB_RIGHT(elm)) {
elm = RB_RIGHT(elm);
while (RB_LEFT(elm)) {
elm = RB_LEFT(elm);
}
} else {
if (RB_PARENT(elm) && (elm == RB_LEFT(RB_PARENT(elm)))) {
elm = RB_PARENT(elm);
} else {
while (RB_PARENT(elm) && (elm == RB_RIGHT(RB_PARENT(elm)))) {
elm = RB_PARENT(elm);
}
elm = RB_PARENT(elm);
}
}
return elm;
}
template <typename Node>
Node* RB_PREV(Node* elm) {
if (RB_LEFT(elm)) {
elm = RB_LEFT(elm);
while (RB_RIGHT(elm)) {
elm = RB_RIGHT(elm);
}
} else {
if (RB_PARENT(elm) && (elm == RB_RIGHT(RB_PARENT(elm)))) {
elm = RB_PARENT(elm);
} else {
while (RB_PARENT(elm) && (elm == RB_LEFT(RB_PARENT(elm)))) {
elm = RB_PARENT(elm);
}
elm = RB_PARENT(elm);
}
}
return elm;
}
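// Returns the minimum (is_min == true) or maximum (is_min == false) node of the tree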
template <typename Node>
Node* RB_MINMAX(RBHead<Node>* head, bool is_min) {
Node* tmp = head->Root();
Node* parent = nullptr;
while (tmp) {
parent = tmp;
if (is_min) {
tmp = RB_LEFT(tmp);
} else {
tmp = RB_RIGHT(tmp);
}
}
return parent;
}
template <typename Node>
Node* RB_MIN(RBHead<Node>* head) {
return RB_MINMAX(head, true);
}
template <typename Node>
Node* RB_MAX(RBHead<Node>* head) {
return RB_MINMAX(head, false);
}
} // namespace Common
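A minimal usage sketch of the templated helpers above (illustrative only: it assumes the RBEntry<Node> member type defined earlier in this header, exposed through GetEntry()/SetEntry() as the helpers require, and that RBHead default-constructs empty; IntNode, Compare, and Example are hypothetical names):
struct IntNode {
    int key{};
    Common::RBEntry<IntNode> entry;
    Common::RBEntry<IntNode>& GetEntry() { return entry; }
    const Common::RBEntry<IntNode>& GetEntry() const { return entry; }
    void SetEntry(const Common::RBEntry<IntNode>& new_entry) { entry = new_entry; }
};
int Compare(const IntNode* lhs, const IntNode* rhs) {
    return (lhs->key < rhs->key) ? -1 : (lhs->key > rhs->key ? 1 : 0);
}
void Example() {
    Common::RBHead<IntNode> head;
    IntNode a, b;
    a.key = 1;
    b.key = 2;
    Common::RB_INSERT(&head, &a, Compare); // returns nullptr: inserted
    Common::RB_INSERT(&head, &b, Compare); // returns nullptr: inserted
    IntNode probe;
    probe.key = 2;
    IntNode* found = Common::RB_FIND(&head, &probe, Compare); // &b
    IntNode* first = Common::RB_MIN(&head);                   // &a
    IntNode* next = Common::RB_NEXT(first);                   // &b
}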

View File

@@ -14,8 +14,8 @@ constexpr u128 INVALID_UUID{{0, 0}};
struct UUID {
// UUIDs which are 0 are considered invalid!
u128 uuid = INVALID_UUID;
constexpr UUID() = default;
u128 uuid;
UUID() = default;
constexpr explicit UUID(const u128& id) : uuid{id} {}
constexpr explicit UUID(const u64 lo, const u64 hi) : uuid{{lo, hi}} {}

View File

@@ -15,10 +15,12 @@ void FreeMemoryPages(void* base, std::size_t size) noexcept;
template <typename T>
class VirtualBuffer final {
public:
static_assert(
std::is_trivially_constructible_v<T>,
"T must be trivially constructible, as non-trivial constructors will not be executed "
"with the current allocator");
// TODO: Uncomment this and change Common::PageTable::PageInfo to be trivially constructible
// using std::atomic_ref once libc++ has support for it
// static_assert(
// std::is_trivially_constructible_v<T>,
// "T must be trivially constructible, as non-trivial constructors will not be executed "
// "with the current allocator");
constexpr VirtualBuffer() = default;
explicit VirtualBuffer(std::size_t count) : alloc_size{count * sizeof(T)} {

View File

@@ -2,19 +2,74 @@
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <array>
#include <chrono>
#include <limits>
#include <mutex>
#include <thread>
#ifdef _MSC_VER
#include <intrin.h>
#pragma intrinsic(__umulh)
#pragma intrinsic(_udiv128)
#else
#include <x86intrin.h>
#endif
#include "common/atomic_ops.h"
#include "common/uint128.h"
#include "common/x64/native_clock.h"
namespace {
[[nodiscard]] u64 GetFixedPoint64Factor(u64 numerator, u64 divisor) {
#ifdef __SIZEOF_INT128__
const auto base = static_cast<unsigned __int128>(numerator) << 64ULL;
return static_cast<u64>(base / divisor);
#elif defined(_M_X64) || defined(_M_ARM64)
std::array<u64, 2> r = {0, numerator};
u64 remainder;
#if _MSC_VER < 1923
return udiv128(r[1], r[0], divisor, &remainder);
#else
return _udiv128(r[1], r[0], divisor, &remainder);
#endif
#else
// This one is a bit more inaccurate.
return MultiplyAndDivide64(std::numeric_limits<u64>::max(), numerator, divisor);
#endif
}
[[nodiscard]] u64 MultiplyHigh(u64 a, u64 b) {
#ifdef __SIZEOF_INT128__
return (static_cast<unsigned __int128>(a) * static_cast<unsigned __int128>(b)) >> 64;
#elif defined(_M_X64) || defined(_M_ARM64)
return __umulh(a, b); // MSVC
#else
// Generic fallback
const u64 a_lo = u32(a);
const u64 a_hi = a >> 32;
const u64 b_lo = u32(b);
const u64 b_hi = b >> 32;
const u64 a_x_b_hi = a_hi * b_hi;
const u64 a_x_b_mid = a_hi * b_lo;
const u64 b_x_a_mid = b_hi * a_lo;
const u64 a_x_b_lo = a_lo * b_lo;
const u64 carry_bit = (static_cast<u64>(static_cast<u32>(a_x_b_mid)) +
static_cast<u64>(static_cast<u32>(b_x_a_mid)) + (a_x_b_lo >> 32)) >>
32;
const u64 multhi = a_x_b_hi + (a_x_b_mid >> 32) + (b_x_a_mid >> 32) + carry_bit;
return multhi;
#endif
}
} // namespace
namespace Common {
u64 EstimateRDTSCFrequency() {
@@ -48,54 +103,71 @@ NativeClock::NativeClock(u64 emulated_cpu_frequency_, u64 emulated_clock_frequen
: WallClock(emulated_cpu_frequency_, emulated_clock_frequency_, true), rtsc_frequency{
rtsc_frequency_} {
_mm_mfence();
last_measure = __rdtsc();
accumulated_ticks = 0U;
time_point.inner.last_measure = __rdtsc();
time_point.inner.accumulated_ticks = 0U;
ns_rtsc_factor = GetFixedPoint64Factor(1000000000, rtsc_frequency);
us_rtsc_factor = GetFixedPoint64Factor(1000000, rtsc_frequency);
ms_rtsc_factor = GetFixedPoint64Factor(1000, rtsc_frequency);
clock_rtsc_factor = GetFixedPoint64Factor(emulated_clock_frequency, rtsc_frequency);
cpu_rtsc_factor = GetFixedPoint64Factor(emulated_cpu_frequency, rtsc_frequency);
}
u64 NativeClock::GetRTSC() {
std::scoped_lock scope{rtsc_serialize};
_mm_mfence();
const u64 current_measure = __rdtsc();
u64 diff = current_measure - last_measure;
diff = diff & ~static_cast<u64>(static_cast<s64>(diff) >> 63); // max(diff, 0)
if (current_measure > last_measure) {
last_measure = current_measure;
}
accumulated_ticks += diff;
TimePoint new_time_point{};
TimePoint current_time_point{};
do {
current_time_point.pack = time_point.pack;
_mm_mfence();
const u64 current_measure = __rdtsc();
u64 diff = current_measure - current_time_point.inner.last_measure;
diff = diff & ~static_cast<u64>(static_cast<s64>(diff) >> 63); // max(diff, 0)
new_time_point.inner.last_measure = current_measure > current_time_point.inner.last_measure
? current_measure
: current_time_point.inner.last_measure;
new_time_point.inner.accumulated_ticks = current_time_point.inner.accumulated_ticks + diff;
} while (!Common::AtomicCompareAndSwap(time_point.pack.data(), new_time_point.pack,
current_time_point.pack));
/// The clock cannot be more precise than the guest timer; drop the lower bits
return accumulated_ticks & inaccuracy_mask;
return new_time_point.inner.accumulated_ticks & inaccuracy_mask;
}
void NativeClock::Pause(bool is_paused) {
if (!is_paused) {
_mm_mfence();
last_measure = __rdtsc();
TimePoint current_time_point{};
TimePoint new_time_point{};
do {
current_time_point.pack = time_point.pack;
new_time_point.pack = current_time_point.pack;
_mm_mfence();
new_time_point.inner.last_measure = __rdtsc();
} while (!Common::AtomicCompareAndSwap(time_point.pack.data(), new_time_point.pack,
current_time_point.pack));
}
}
std::chrono::nanoseconds NativeClock::GetTimeNS() {
const u64 rtsc_value = GetRTSC();
return std::chrono::nanoseconds{MultiplyAndDivide64(rtsc_value, 1000000000, rtsc_frequency)};
return std::chrono::nanoseconds{MultiplyHigh(rtsc_value, ns_rtsc_factor)};
}
std::chrono::microseconds NativeClock::GetTimeUS() {
const u64 rtsc_value = GetRTSC();
return std::chrono::microseconds{MultiplyAndDivide64(rtsc_value, 1000000, rtsc_frequency)};
return std::chrono::microseconds{MultiplyHigh(rtsc_value, us_rtsc_factor)};
}
std::chrono::milliseconds NativeClock::GetTimeMS() {
const u64 rtsc_value = GetRTSC();
return std::chrono::milliseconds{MultiplyAndDivide64(rtsc_value, 1000, rtsc_frequency)};
return std::chrono::milliseconds{MultiplyHigh(rtsc_value, ms_rtsc_factor)};
}
u64 NativeClock::GetClockCycles() {
const u64 rtsc_value = GetRTSC();
return MultiplyAndDivide64(rtsc_value, emulated_clock_frequency, rtsc_frequency);
return MultiplyHigh(rtsc_value, clock_rtsc_factor);
}
u64 NativeClock::GetCPUCycles() {
const u64 rtsc_value = GetRTSC();
return MultiplyAndDivide64(rtsc_value, emulated_cpu_frequency, rtsc_frequency);
return MultiplyHigh(rtsc_value, cpu_rtsc_factor);
}
} // namespace X64
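The getters above replace a per-call 128-bit division with a single multiply-high: the constructor precomputes factor = (unit << 64) / rtsc_frequency as a 64.64 fixed-point ratio, and each query then keeps the upper 64 bits of ticks * factor. GetRTSC likewise publishes the packed {last_measure, accumulated_ticks} pair through a 128-bit compare-and-swap loop, retrying until no concurrent update slipped in between. A standalone sketch of the fixed-point conversion, reusing the helpers above (TicksToNs is an illustrative name):
u64 TicksToNs(u64 ticks, u64 tsc_frequency) {
    // Computed once in practice (in the constructor); shown inline for clarity.
    const u64 ns_factor = GetFixedPoint64Factor(1000000000, tsc_frequency);
    // Equivalent to taking (ticks * ns_factor) >> 64 over the 128-bit product.
    return MultiplyHigh(ticks, ns_factor);
}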

View File

@@ -6,7 +6,6 @@
#include <optional>
#include "common/spin_lock.h"
#include "common/wall_clock.h"
namespace Common {
@@ -32,14 +31,28 @@ public:
private:
u64 GetRTSC();
union alignas(16) TimePoint {
TimePoint() : pack{} {}
u128 pack{};
struct Inner {
u64 last_measure{};
u64 accumulated_ticks{};
} inner;
};
/// Value used to reduce the native clock's accuracy, as some apps rely on
/// undefined behavior where the clock should not have a higher level of
/// accuracy than this.
static constexpr u64 inaccuracy_mask = ~(UINT64_C(0x400) - 1);
SpinLock rtsc_serialize{};
u64 last_measure{};
u64 accumulated_ticks{};
TimePoint time_point;
// factors
u64 clock_rtsc_factor{};
u64 cpu_rtsc_factor{};
u64 ns_rtsc_factor{};
u64 us_rtsc_factor{};
u64 ms_rtsc_factor{};
u64 rtsc_frequency;
};
} // namespace X64
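For illustration, the mask rounds the accumulated tick count down to a multiple of 0x400 host ticks:
// inaccuracy_mask == ~(0x400 - 1) == 0xFFFF'FFFF'FFFF'FC00
// 0x1234 & inaccuracy_mask == 0x1000 (the low 10 bits are discarded)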

View File

@@ -142,8 +142,6 @@ add_library(core STATIC
hardware_interrupt_manager.h
hle/ipc.h
hle/ipc_helpers.h
hle/kernel/address_arbiter.cpp
hle/kernel/address_arbiter.h
hle/kernel/client_port.cpp
hle/kernel/client_port.h
hle/kernel/client_session.cpp
@@ -157,13 +155,19 @@ add_library(core STATIC
hle/kernel/handle_table.h
hle/kernel/hle_ipc.cpp
hle/kernel/hle_ipc.h
hle/kernel/k_address_arbiter.cpp
hle/kernel/k_address_arbiter.h
hle/kernel/k_affinity_mask.h
hle/kernel/k_condition_variable.cpp
hle/kernel/k_condition_variable.h
hle/kernel/k_priority_queue.h
hle/kernel/k_scheduler.cpp
hle/kernel/k_scheduler.h
hle/kernel/k_scheduler_lock.h
hle/kernel/k_scoped_lock.h
hle/kernel/k_scoped_scheduler_lock_and_sleep.h
hle/kernel/k_synchronization_object.cpp
hle/kernel/k_synchronization_object.h
hle/kernel/kernel.cpp
hle/kernel/kernel.h
hle/kernel/memory/address_space_info.cpp
@@ -183,8 +187,6 @@ add_library(core STATIC
hle/kernel/memory/slab_heap.h
hle/kernel/memory/system_control.cpp
hle/kernel/memory/system_control.h
hle/kernel/mutex.cpp
hle/kernel/mutex.h
hle/kernel/object.cpp
hle/kernel/object.h
hle/kernel/physical_core.cpp
@@ -210,12 +212,10 @@ add_library(core STATIC
hle/kernel/shared_memory.h
hle/kernel/svc.cpp
hle/kernel/svc.h
hle/kernel/svc_common.h
hle/kernel/svc_results.h
hle/kernel/svc_types.h
hle/kernel/svc_wrap.h
hle/kernel/synchronization_object.cpp
hle/kernel/synchronization_object.h
hle/kernel/synchronization.cpp
hle/kernel/synchronization.h
hle/kernel/thread.cpp
hle/kernel/thread.h
hle/kernel/time_manager.cpp
@@ -635,16 +635,17 @@ if (MSVC)
/we4267
# 'context' : truncation from 'type1' to 'type2'
/we4305
# 'function' : not all control paths return a value
/we4715
)
else()
target_compile_options(core PRIVATE
-Werror=conversion
-Werror=ignored-qualifiers
-Werror=implicit-fallthrough
-Werror=reorder
-Werror=sign-compare
-Werror=unused-variable
$<$<CXX_COMPILER_ID:GNU>:-Werror=class-memaccess>
$<$<CXX_COMPILER_ID:GNU>:-Werror=unused-but-set-parameter>
$<$<CXX_COMPILER_ID:GNU>:-Werror=unused-but-set-variable>

View File

@@ -26,9 +26,10 @@ using CPUInterrupts = std::array<CPUInterruptHandler, Core::Hardware::NUM_CPU_CO
/// Generic ARMv8 CPU interface
class ARM_Interface : NonCopyable {
public:
explicit ARM_Interface(System& system_, CPUInterrupts& interrupt_handlers, bool uses_wall_clock)
: system{system_}, interrupt_handlers{interrupt_handlers}, uses_wall_clock{
uses_wall_clock} {}
explicit ARM_Interface(System& system_, CPUInterrupts& interrupt_handlers_,
bool uses_wall_clock_)
: system{system_}, interrupt_handlers{interrupt_handlers_}, uses_wall_clock{
uses_wall_clock_} {}
virtual ~ARM_Interface() = default;
struct ThreadContext32 {

View File

@@ -71,15 +71,8 @@ public:
}
void ExceptionRaised(u32 pc, Dynarmic::A32::Exception exception) override {
switch (exception) {
case Dynarmic::A32::Exception::UndefinedInstruction:
case Dynarmic::A32::Exception::UnpredictableInstruction:
break;
case Dynarmic::A32::Exception::Breakpoint:
break;
}
LOG_CRITICAL(Core_ARM, "ExceptionRaised(exception = {}, pc = {:08X}, code = {:08X})",
static_cast<std::size_t>(exception), pc, MemoryReadCode(pc));
exception, pc, MemoryReadCode(pc));
UNIMPLEMENTED();
}
@@ -133,6 +126,7 @@ std::shared_ptr<Dynarmic::A32::Jit> ARM_Dynarmic_32::MakeJit(Common::PageTable&
config.page_table = reinterpret_cast<std::array<std::uint8_t*, NUM_PAGE_TABLE_ENTRIES>*>(
page_table.pointers.data());
config.absolute_offset_page_table = true;
config.page_table_pointer_mask_bits = Common::PageTable::ATTRIBUTE_BITS;
config.detect_misaligned_access_via_page_table = 16 | 32 | 64 | 128;
config.only_detect_misalignment_via_page_table_on_page_boundary = true;
@@ -180,6 +174,9 @@ std::shared_ptr<Dynarmic::A32::Jit> ARM_Dynarmic_32::MakeJit(Common::PageTable&
if (Settings::values.cpuopt_unsafe_reduce_fp_error) {
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_ReducedErrorFP;
}
if (Settings::values.cpuopt_unsafe_inaccurate_nan) {
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_InaccurateNaN;
}
}
return std::make_unique<Dynarmic::A32::Jit>(config);

View File

@@ -152,6 +152,7 @@ std::shared_ptr<Dynarmic::A64::Jit> ARM_Dynarmic_64::MakeJit(Common::PageTable&
// Memory
config.page_table = reinterpret_cast<void**>(page_table.pointers.data());
config.page_table_address_space_bits = address_space_bits;
config.page_table_pointer_mask_bits = Common::PageTable::ATTRIBUTE_BITS;
config.silently_mirror_page_table = false;
config.absolute_offset_page_table = true;
config.detect_misaligned_access_via_page_table = 16 | 32 | 64 | 128;
@@ -211,6 +212,9 @@ std::shared_ptr<Dynarmic::A64::Jit> ARM_Dynarmic_64::MakeJit(Common::PageTable&
if (Settings::values.cpuopt_unsafe_reduce_fp_error) {
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_ReducedErrorFP;
}
if (Settings::values.cpuopt_unsafe_inaccurate_nan) {
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_InaccurateNaN;
}
}
return std::make_shared<Dynarmic::A64::Jit>(config);

View File

@@ -49,6 +49,7 @@ void CoreTiming::ThreadEntry(CoreTiming& instance) {
Common::SetCurrentThreadPriority(Common::ThreadPriority::VeryHigh);
instance.on_thread_init();
instance.ThreadLoop();
MicroProfileOnThreadExit();
}
void CoreTiming::Initialize(std::function<void()>&& on_thread_init_) {

View File

@@ -143,6 +143,7 @@ u64 GetSignatureTypeDataSize(SignatureType type) {
return 0x3C;
}
UNREACHABLE();
return 0;
}
u64 GetSignatureTypePaddingSize(SignatureType type) {
@@ -157,6 +158,7 @@ u64 GetSignatureTypePaddingSize(SignatureType type) {
return 0x40;
}
UNREACHABLE();
return 0;
}
SignatureType Ticket::GetSignatureType() const {
@@ -169,8 +171,7 @@ SignatureType Ticket::GetSignatureType() const {
if (const auto* ticket = std::get_if<ECDSATicket>(&data)) {
return ticket->sig_type;
}
UNREACHABLE();
throw std::bad_variant_access{};
}
TicketData& Ticket::GetData() {
@@ -183,8 +184,7 @@ TicketData& Ticket::GetData() {
if (auto* ticket = std::get_if<ECDSATicket>(&data)) {
return ticket->data;
}
UNREACHABLE();
throw std::bad_variant_access{};
}
const TicketData& Ticket::GetData() const {
@@ -197,8 +197,7 @@ const TicketData& Ticket::GetData() const {
if (const auto* ticket = std::get_if<ECDSATicket>(&data)) {
return ticket->data;
}
UNREACHABLE();
throw std::bad_variant_access{};
}
u64 Ticket::GetSize() const {

View File

@@ -43,17 +43,17 @@ static_assert(sizeof(IVFCLevel) == 0x18, "IVFCLevel has incorrect size.");
struct IVFCHeader {
u32_le magic;
u32_le magic_number;
INSERT_UNION_PADDING_BYTES(8);
INSERT_PADDING_BYTES_NOINIT(8);
std::array<IVFCLevel, 6> levels;
INSERT_UNION_PADDING_BYTES(64);
INSERT_PADDING_BYTES_NOINIT(64);
};
static_assert(sizeof(IVFCHeader) == 0xE0, "IVFCHeader has incorrect size.");
struct NCASectionHeaderBlock {
INSERT_UNION_PADDING_BYTES(3);
INSERT_PADDING_BYTES_NOINIT(3);
NCASectionFilesystemType filesystem_type;
NCASectionCryptoType crypto_type;
INSERT_UNION_PADDING_BYTES(3);
INSERT_PADDING_BYTES_NOINIT(3);
};
static_assert(sizeof(NCASectionHeaderBlock) == 0x8, "NCASectionHeaderBlock has incorrect size.");
@@ -61,7 +61,7 @@ struct NCASectionRaw {
NCASectionHeaderBlock header;
std::array<u8, 0x138> block_data;
std::array<u8, 0x8> section_ctr;
INSERT_UNION_PADDING_BYTES(0xB8);
INSERT_PADDING_BYTES_NOINIT(0xB8);
};
static_assert(sizeof(NCASectionRaw) == 0x200, "NCASectionRaw has incorrect size.");
@@ -69,19 +69,19 @@ struct PFS0Superblock {
NCASectionHeaderBlock header_block;
std::array<u8, 0x20> hash;
u32_le size;
INSERT_UNION_PADDING_BYTES(4);
INSERT_PADDING_BYTES_NOINIT(4);
u64_le hash_table_offset;
u64_le hash_table_size;
u64_le pfs0_header_offset;
u64_le pfs0_size;
INSERT_UNION_PADDING_BYTES(0x1B0);
INSERT_PADDING_BYTES_NOINIT(0x1B0);
};
static_assert(sizeof(PFS0Superblock) == 0x200, "PFS0Superblock has incorrect size.");
struct RomFSSuperblock {
NCASectionHeaderBlock header_block;
IVFCHeader ivfc;
INSERT_UNION_PADDING_BYTES(0x118);
INSERT_PADDING_BYTES_NOINIT(0x118);
};
static_assert(sizeof(RomFSSuperblock) == 0x200, "RomFSSuperblock has incorrect size.");
@@ -89,19 +89,19 @@ struct BKTRHeader {
u64_le offset;
u64_le size;
u32_le magic;
INSERT_UNION_PADDING_BYTES(0x4);
INSERT_PADDING_BYTES_NOINIT(0x4);
u32_le number_entries;
INSERT_UNION_PADDING_BYTES(0x4);
INSERT_PADDING_BYTES_NOINIT(0x4);
};
static_assert(sizeof(BKTRHeader) == 0x20, "BKTRHeader has incorrect size.");
struct BKTRSuperblock {
NCASectionHeaderBlock header_block;
IVFCHeader ivfc;
INSERT_UNION_PADDING_BYTES(0x18);
INSERT_PADDING_BYTES_NOINIT(0x18);
BKTRHeader relocation;
BKTRHeader subsection;
INSERT_UNION_PADDING_BYTES(0xC0);
INSERT_PADDING_BYTES_NOINIT(0xC0);
};
static_assert(sizeof(BKTRSuperblock) == 0x200, "BKTRSuperblock has incorrect size.");
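A minimal sketch of the pattern these headers follow (Example is a hypothetical struct, not part of the diff): the _NOINIT variants reserve layout bytes without value-initializing them, which keeps these types usable where zero-initialization is unwanted or impossible.
struct Example {
    u32_le magic;
    INSERT_PADDING_BYTES_NOINIT(4); // layout-only padding, intentionally uninitialized
    u64_le offset;
};
static_assert(sizeof(Example) == 0x10, "Example has incorrect size.");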

View File

@@ -51,8 +51,8 @@ std::pair<std::size_t, std::size_t> SearchBucketEntry(u64 offset, const BlockTyp
low = mid + 1;
}
}
UNREACHABLE_MSG("Offset could not be found in BKTR block.");
return {0, 0};
}
} // Anonymous namespace

View File

@@ -105,7 +105,8 @@ ContentRecordType GetCRTypeFromNCAType(NCAContentType type) {
// TODO(DarkLordZach): Peek at NCA contents to differentiate Manual and Legal.
return ContentRecordType::HtmlDocument;
default:
UNREACHABLE_MSG("Invalid NCAContentType={:02X}", static_cast<u8>(type));
UNREACHABLE_MSG("Invalid NCAContentType={:02X}", type);
return ContentRecordType{};
}
}

View File

@@ -67,18 +67,18 @@ public:
virtual void Refresh() = 0;
virtual bool HasEntry(u64 title_id, ContentRecordType type) const = 0;
virtual bool HasEntry(ContentProviderEntry entry) const;
bool HasEntry(ContentProviderEntry entry) const;
virtual std::optional<u32> GetEntryVersion(u64 title_id) const = 0;
virtual VirtualFile GetEntryUnparsed(u64 title_id, ContentRecordType type) const = 0;
virtual VirtualFile GetEntryUnparsed(ContentProviderEntry entry) const;
VirtualFile GetEntryUnparsed(ContentProviderEntry entry) const;
virtual VirtualFile GetEntryRaw(u64 title_id, ContentRecordType type) const = 0;
virtual VirtualFile GetEntryRaw(ContentProviderEntry entry) const;
VirtualFile GetEntryRaw(ContentProviderEntry entry) const;
virtual std::unique_ptr<NCA> GetEntry(u64 title_id, ContentRecordType type) const = 0;
virtual std::unique_ptr<NCA> GetEntry(ContentProviderEntry entry) const;
std::unique_ptr<NCA> GetEntry(ContentProviderEntry entry) const;
virtual std::vector<ContentProviderEntry> ListEntries() const;

View File

@@ -58,7 +58,7 @@ struct SaveDataAttribute {
SaveDataType type;
SaveDataRank rank;
u16 index;
INSERT_PADDING_BYTES(4);
INSERT_PADDING_BYTES_NOINIT(4);
u64 zero_1;
u64 zero_2;
u64 zero_3;
@@ -72,7 +72,7 @@ struct SaveDataExtraData {
u64 owner_id;
s64 timestamp;
SaveDataFlags flags;
INSERT_PADDING_BYTES(4);
INSERT_PADDING_BYTES_NOINIT(4);
s64 available_size;
s64 journal_size;
s64 commit_id;

View File

@@ -25,6 +25,10 @@ void InputInterpreter::PollInput() {
button_states[current_index] = button_state;
}
bool InputInterpreter::IsButtonPressed(HIDButton button) const {
return (button_states[current_index] & (1U << static_cast<u8>(button))) != 0;
}
bool InputInterpreter::IsButtonPressedOnce(HIDButton button) const {
const bool current_press =
(button_states[current_index] & (1U << static_cast<u8>(button))) != 0;

View File

@@ -66,6 +66,27 @@ public:
/// Gets a button state from HID and inserts it into the array of button states.
void PollInput();
/**
* Checks whether the button is pressed.
*
* @param button The button to check.
*
* @returns True when the button is pressed.
*/
[[nodiscard]] bool IsButtonPressed(HIDButton button) const;
/**
* Checks whether any of the buttons in the parameter list is pressed.
*
* @tparam T The buttons to check.
*
* @returns True when at least one of the buttons is pressed.
*/
template <HIDButton... T>
[[nodiscard]] bool IsAnyButtonPressed() const {
return (IsButtonPressed(T) || ...);
}
/**
* The specified button is considered to be pressed once
* if it is currently pressed and was not pressed during the previous poll.
@@ -79,12 +100,12 @@ public:
/**
* Checks whether any of the buttons in the parameter list is pressed once.
*
* @tparam HIDButton The buttons to check.
* @tparam T The buttons to check.
*
* @returns True when at least one of the buttons is pressed once.
*/
template <HIDButton... T>
[[nodiscard]] bool IsAnyButtonPressedOnce() {
[[nodiscard]] bool IsAnyButtonPressedOnce() const {
return (IsButtonPressedOnce(T) || ...);
}
@@ -100,12 +121,12 @@ public:
/**
* Checks whether any of the buttons in the parameter list is held down.
*
* @tparam HIDButton The buttons to check.
* @tparam T The buttons to check.
*
* @returns True when at least one of the buttons is held down.
*/
template <HIDButton... T>
[[nodiscard]] bool IsAnyButtonHeld() {
[[nodiscard]] bool IsAnyButtonHeld() const {
return (IsButtonHeld(T) || ...);
}
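A short call-site sketch for the variadic checks above (hypothetical handler; the fold expression short-circuits on the first matching button):
if (interpreter.IsAnyButtonPressed<HIDButton::A, HIDButton::B>()) {
    ConfirmSelection(); // illustrative: either A or B is down this poll
}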

View File

@@ -146,7 +146,7 @@ static_assert(sizeof(BufferDescriptorC) == 8, "BufferDescriptorC size is incorre
struct DataPayloadHeader {
u32_le magic;
INSERT_PADDING_WORDS(1);
INSERT_PADDING_WORDS_NOINIT(1);
};
static_assert(sizeof(DataPayloadHeader) == 8, "DataPayloadHeader size is incorrect");
@@ -160,7 +160,7 @@ struct DomainMessageHeader {
// Used when responding to an IPC request, Server -> Client.
struct {
u32_le num_objects;
INSERT_UNION_PADDING_WORDS(3);
INSERT_PADDING_WORDS_NOINIT(3);
};
// Used when performing an IPC request, Client -> Server.
@@ -171,10 +171,10 @@ struct DomainMessageHeader {
BitField<16, 16, u32> size;
};
u32_le object_id;
INSERT_UNION_PADDING_WORDS(2);
INSERT_PADDING_WORDS_NOINIT(2);
};
std::array<u32, 4> raw{};
std::array<u32, 4> raw;
};
};
static_assert(sizeof(DomainMessageHeader) == 16, "DomainMessageHeader size is incorrect");

View File

@@ -1,317 +0,0 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <algorithm>
#include <vector>
#include "common/assert.h"
#include "common/common_types.h"
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/hle/kernel/address_arbiter.h"
#include "core/hle/kernel/errors.h"
#include "core/hle/kernel/handle_table.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/k_scoped_scheduler_lock_and_sleep.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/kernel/time_manager.h"
#include "core/hle/result.h"
#include "core/memory.h"
namespace Kernel {
// Wake up num_to_wake (or all) threads in a vector.
void AddressArbiter::WakeThreads(const std::vector<std::shared_ptr<Thread>>& waiting_threads,
s32 num_to_wake) {
// Only process up to num_to_wake threads, unless num_to_wake is <= 0, in which case
// process them all.
std::size_t last = waiting_threads.size();
if (num_to_wake > 0) {
last = std::min(last, static_cast<std::size_t>(num_to_wake));
}
// Signal the waiting threads.
for (std::size_t i = 0; i < last; i++) {
waiting_threads[i]->SetSynchronizationResults(nullptr, RESULT_SUCCESS);
RemoveThread(waiting_threads[i]);
waiting_threads[i]->WaitForArbitration(false);
waiting_threads[i]->ResumeFromWait();
}
}
AddressArbiter::AddressArbiter(Core::System& system) : system{system} {}
AddressArbiter::~AddressArbiter() = default;
ResultCode AddressArbiter::SignalToAddress(VAddr address, SignalType type, s32 value,
s32 num_to_wake) {
switch (type) {
case SignalType::Signal:
return SignalToAddressOnly(address, num_to_wake);
case SignalType::IncrementAndSignalIfEqual:
return IncrementAndSignalToAddressIfEqual(address, value, num_to_wake);
case SignalType::ModifyByWaitingCountAndSignalIfEqual:
return ModifyByWaitingCountAndSignalToAddressIfEqual(address, value, num_to_wake);
default:
return ERR_INVALID_ENUM_VALUE;
}
}
ResultCode AddressArbiter::SignalToAddressOnly(VAddr address, s32 num_to_wake) {
KScopedSchedulerLock lock(system.Kernel());
const std::vector<std::shared_ptr<Thread>> waiting_threads =
GetThreadsWaitingOnAddress(address);
WakeThreads(waiting_threads, num_to_wake);
return RESULT_SUCCESS;
}
ResultCode AddressArbiter::IncrementAndSignalToAddressIfEqual(VAddr address, s32 value,
s32 num_to_wake) {
KScopedSchedulerLock lock(system.Kernel());
auto& memory = system.Memory();
// Ensure that we can write to the address.
if (!memory.IsValidVirtualAddress(address)) {
return ERR_INVALID_ADDRESS_STATE;
}
const std::size_t current_core = system.CurrentCoreIndex();
auto& monitor = system.Monitor();
u32 current_value;
do {
current_value = monitor.ExclusiveRead32(current_core, address);
if (current_value != static_cast<u32>(value)) {
return ERR_INVALID_STATE;
}
current_value++;
} while (!monitor.ExclusiveWrite32(current_core, address, current_value));
return SignalToAddressOnly(address, num_to_wake);
}
ResultCode AddressArbiter::ModifyByWaitingCountAndSignalToAddressIfEqual(VAddr address, s32 value,
s32 num_to_wake) {
KScopedSchedulerLock lock(system.Kernel());
auto& memory = system.Memory();
// Ensure that we can write to the address.
if (!memory.IsValidVirtualAddress(address)) {
return ERR_INVALID_ADDRESS_STATE;
}
// Get threads waiting on the address.
const std::vector<std::shared_ptr<Thread>> waiting_threads =
GetThreadsWaitingOnAddress(address);
const std::size_t current_core = system.CurrentCoreIndex();
auto& monitor = system.Monitor();
s32 updated_value;
do {
updated_value = monitor.ExclusiveRead32(current_core, address);
if (updated_value != value) {
return ERR_INVALID_STATE;
}
// Determine the modified value depending on the waiting count.
if (num_to_wake <= 0) {
if (waiting_threads.empty()) {
updated_value = value + 1;
} else {
updated_value = value - 1;
}
} else {
if (waiting_threads.empty()) {
updated_value = value + 1;
} else if (waiting_threads.size() <= static_cast<u32>(num_to_wake)) {
updated_value = value - 1;
} else {
updated_value = value;
}
}
} while (!monitor.ExclusiveWrite32(current_core, address, updated_value));
WakeThreads(waiting_threads, num_to_wake);
return RESULT_SUCCESS;
}
ResultCode AddressArbiter::WaitForAddress(VAddr address, ArbitrationType type, s32 value,
s64 timeout_ns) {
switch (type) {
case ArbitrationType::WaitIfLessThan:
return WaitForAddressIfLessThan(address, value, timeout_ns, false);
case ArbitrationType::DecrementAndWaitIfLessThan:
return WaitForAddressIfLessThan(address, value, timeout_ns, true);
case ArbitrationType::WaitIfEqual:
return WaitForAddressIfEqual(address, value, timeout_ns);
default:
return ERR_INVALID_ENUM_VALUE;
}
}
ResultCode AddressArbiter::WaitForAddressIfLessThan(VAddr address, s32 value, s64 timeout,
bool should_decrement) {
auto& memory = system.Memory();
auto& kernel = system.Kernel();
Thread* current_thread = kernel.CurrentScheduler()->GetCurrentThread();
Handle event_handle = InvalidHandle;
{
KScopedSchedulerLockAndSleep lock(kernel, event_handle, current_thread, timeout);
if (current_thread->IsPendingTermination()) {
lock.CancelSleep();
return ERR_THREAD_TERMINATING;
}
// Ensure that we can read the address.
if (!memory.IsValidVirtualAddress(address)) {
lock.CancelSleep();
return ERR_INVALID_ADDRESS_STATE;
}
s32 current_value = static_cast<s32>(memory.Read32(address));
if (current_value >= value) {
lock.CancelSleep();
return ERR_INVALID_STATE;
}
current_thread->SetSynchronizationResults(nullptr, RESULT_TIMEOUT);
s32 decrement_value;
const std::size_t current_core = system.CurrentCoreIndex();
auto& monitor = system.Monitor();
do {
current_value = static_cast<s32>(monitor.ExclusiveRead32(current_core, address));
if (should_decrement) {
decrement_value = current_value - 1;
} else {
decrement_value = current_value;
}
} while (
!monitor.ExclusiveWrite32(current_core, address, static_cast<u32>(decrement_value)));
// Short-circuit without rescheduling, if timeout is zero.
if (timeout == 0) {
lock.CancelSleep();
return RESULT_TIMEOUT;
}
current_thread->SetArbiterWaitAddress(address);
InsertThread(SharedFrom(current_thread));
current_thread->SetStatus(ThreadStatus::WaitArb);
current_thread->WaitForArbitration(true);
}
if (event_handle != InvalidHandle) {
auto& time_manager = kernel.TimeManager();
time_manager.UnscheduleTimeEvent(event_handle);
}
{
KScopedSchedulerLock lock(kernel);
if (current_thread->IsWaitingForArbitration()) {
RemoveThread(SharedFrom(current_thread));
current_thread->WaitForArbitration(false);
}
}
return current_thread->GetSignalingResult();
}
ResultCode AddressArbiter::WaitForAddressIfEqual(VAddr address, s32 value, s64 timeout) {
auto& memory = system.Memory();
auto& kernel = system.Kernel();
Thread* current_thread = kernel.CurrentScheduler()->GetCurrentThread();
Handle event_handle = InvalidHandle;
{
KScopedSchedulerLockAndSleep lock(kernel, event_handle, current_thread, timeout);
if (current_thread->IsPendingTermination()) {
lock.CancelSleep();
return ERR_THREAD_TERMINATING;
}
// Ensure that we can read the address.
if (!memory.IsValidVirtualAddress(address)) {
lock.CancelSleep();
return ERR_INVALID_ADDRESS_STATE;
}
s32 current_value = static_cast<s32>(memory.Read32(address));
if (current_value != value) {
lock.CancelSleep();
return ERR_INVALID_STATE;
}
// Short-circuit without rescheduling, if timeout is zero.
if (timeout == 0) {
lock.CancelSleep();
return RESULT_TIMEOUT;
}
current_thread->SetSynchronizationResults(nullptr, RESULT_TIMEOUT);
current_thread->SetArbiterWaitAddress(address);
InsertThread(SharedFrom(current_thread));
current_thread->SetStatus(ThreadStatus::WaitArb);
current_thread->WaitForArbitration(true);
}
if (event_handle != InvalidHandle) {
auto& time_manager = kernel.TimeManager();
time_manager.UnscheduleTimeEvent(event_handle);
}
{
KScopedSchedulerLock lock(kernel);
if (current_thread->IsWaitingForArbitration()) {
RemoveThread(SharedFrom(current_thread));
current_thread->WaitForArbitration(false);
}
}
return current_thread->GetSignalingResult();
}
void AddressArbiter::InsertThread(std::shared_ptr<Thread> thread) {
const VAddr arb_addr = thread->GetArbiterWaitAddress();
std::list<std::shared_ptr<Thread>>& thread_list = arb_threads[arb_addr];
const auto iter =
std::find_if(thread_list.cbegin(), thread_list.cend(), [&thread](const auto& entry) {
return entry->GetPriority() >= thread->GetPriority();
});
if (iter == thread_list.cend()) {
thread_list.push_back(std::move(thread));
} else {
thread_list.insert(iter, std::move(thread));
}
}
void AddressArbiter::RemoveThread(std::shared_ptr<Thread> thread) {
const VAddr arb_addr = thread->GetArbiterWaitAddress();
std::list<std::shared_ptr<Thread>>& thread_list = arb_threads[arb_addr];
const auto iter = std::find_if(thread_list.cbegin(), thread_list.cend(),
[&thread](const auto& entry) { return thread == entry; });
if (iter != thread_list.cend()) {
thread_list.erase(iter);
}
}
std::vector<std::shared_ptr<Thread>> AddressArbiter::GetThreadsWaitingOnAddress(
VAddr address) const {
const auto iter = arb_threads.find(address);
if (iter == arb_threads.cend()) {
return {};
}
const std::list<std::shared_ptr<Thread>>& thread_list = iter->second;
return {thread_list.cbegin(), thread_list.cend()};
}
} // namespace Kernel

View File

@@ -1,91 +0,0 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <list>
#include <memory>
#include <unordered_map>
#include <vector>
#include "common/common_types.h"
union ResultCode;
namespace Core {
class System;
}
namespace Kernel {
class Thread;
class AddressArbiter {
public:
enum class ArbitrationType {
WaitIfLessThan = 0,
DecrementAndWaitIfLessThan = 1,
WaitIfEqual = 2,
};
enum class SignalType {
Signal = 0,
IncrementAndSignalIfEqual = 1,
ModifyByWaitingCountAndSignalIfEqual = 2,
};
explicit AddressArbiter(Core::System& system);
~AddressArbiter();
AddressArbiter(const AddressArbiter&) = delete;
AddressArbiter& operator=(const AddressArbiter&) = delete;
AddressArbiter(AddressArbiter&&) = default;
AddressArbiter& operator=(AddressArbiter&&) = delete;
/// Signals an address being waited on with a particular signaling type.
ResultCode SignalToAddress(VAddr address, SignalType type, s32 value, s32 num_to_wake);
/// Waits on an address with a particular arbitration type.
ResultCode WaitForAddress(VAddr address, ArbitrationType type, s32 value, s64 timeout_ns);
private:
/// Signals an address being waited on.
ResultCode SignalToAddressOnly(VAddr address, s32 num_to_wake);
/// Signals an address being waited on and increments its value if equal to the value argument.
ResultCode IncrementAndSignalToAddressIfEqual(VAddr address, s32 value, s32 num_to_wake);
/// Signals an address being waited on and modifies its value based on waiting thread count if
/// equal to the value argument.
ResultCode ModifyByWaitingCountAndSignalToAddressIfEqual(VAddr address, s32 value,
s32 num_to_wake);
/// Waits on an address if the value passed is less than the argument value,
/// optionally decrementing.
ResultCode WaitForAddressIfLessThan(VAddr address, s32 value, s64 timeout,
bool should_decrement);
/// Waits on an address if the value passed is equal to the argument value.
ResultCode WaitForAddressIfEqual(VAddr address, s32 value, s64 timeout);
/// Wake up num_to_wake (or all) threads in a vector.
void WakeThreads(const std::vector<std::shared_ptr<Thread>>& waiting_threads, s32 num_to_wake);
/// Inserts a thread into the address arbiter container
void InsertThread(std::shared_ptr<Thread> thread);
/// Removes a thread from the address arbiter container
void RemoveThread(std::shared_ptr<Thread> thread);
/// Gets the threads waiting on an address
std::vector<std::shared_ptr<Thread>> GetThreadsWaitingOnAddress(VAddr address) const;
/// List of threads waiting on an address arbiter
std::unordered_map<VAddr, std::list<std::shared_ptr<Thread>>> arb_threads;
Core::System& system;
};
} // namespace Kernel

View File

@@ -33,9 +33,6 @@ ResultVal<std::shared_ptr<ClientSession>> ClientPort::Connect() {
server_port->AppendPendingSession(std::move(server));
}
// Wake the threads waiting on the ServerPort
server_port->Signal();
return MakeResult(std::move(client));
}

View File

@@ -12,7 +12,7 @@
namespace Kernel {
ClientSession::ClientSession(KernelCore& kernel) : SynchronizationObject{kernel} {}
ClientSession::ClientSession(KernelCore& kernel) : KSynchronizationObject{kernel} {}
ClientSession::~ClientSession() {
// This destructor will be called automatically when the last ClientSession handle is closed by
@@ -22,15 +22,6 @@ ClientSession::~ClientSession() {
}
}
bool ClientSession::ShouldWait(const Thread* thread) const {
UNIMPLEMENTED();
return {};
}
void ClientSession::Acquire(Thread* thread) {
UNIMPLEMENTED();
}
bool ClientSession::IsSignaled() const {
UNIMPLEMENTED();
return true;

View File

@@ -7,7 +7,7 @@
#include <memory>
#include <string>
#include "core/hle/kernel/synchronization_object.h"
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/result.h"
union ResultCode;
@@ -26,7 +26,7 @@ class KernelCore;
class Session;
class Thread;
class ClientSession final : public SynchronizationObject {
class ClientSession final : public KSynchronizationObject {
public:
explicit ClientSession(KernelCore& kernel);
~ClientSession() override;
@@ -49,10 +49,6 @@ public:
ResultCode SendSyncRequest(std::shared_ptr<Thread> thread, Core::Memory::Memory& memory,
Core::Timing::CoreTiming& core_timing);
bool ShouldWait(const Thread* thread) const override;
void Acquire(Thread* thread) override;
bool IsSignaled() const override;
private:

View File

@@ -13,12 +13,14 @@ namespace Kernel {
constexpr ResultCode ERR_MAX_CONNECTIONS_REACHED{ErrorModule::Kernel, 7};
constexpr ResultCode ERR_INVALID_CAPABILITY_DESCRIPTOR{ErrorModule::Kernel, 14};
constexpr ResultCode ERR_THREAD_TERMINATING{ErrorModule::Kernel, 59};
constexpr ResultCode ERR_TERMINATION_REQUESTED{ErrorModule::Kernel, 59};
constexpr ResultCode ERR_INVALID_SIZE{ErrorModule::Kernel, 101};
constexpr ResultCode ERR_INVALID_ADDRESS{ErrorModule::Kernel, 102};
constexpr ResultCode ERR_OUT_OF_RESOURCES{ErrorModule::Kernel, 103};
constexpr ResultCode ERR_OUT_OF_MEMORY{ErrorModule::Kernel, 104};
constexpr ResultCode ERR_HANDLE_TABLE_FULL{ErrorModule::Kernel, 105};
constexpr ResultCode ERR_INVALID_ADDRESS_STATE{ErrorModule::Kernel, 106};
constexpr ResultCode ERR_INVALID_CURRENT_MEMORY{ErrorModule::Kernel, 106};
constexpr ResultCode ERR_INVALID_MEMORY_PERMISSIONS{ErrorModule::Kernel, 108};
constexpr ResultCode ERR_INVALID_MEMORY_RANGE{ErrorModule::Kernel, 110};
constexpr ResultCode ERR_INVALID_PROCESSOR_ID{ErrorModule::Kernel, 113};
@@ -28,6 +30,7 @@ constexpr ResultCode ERR_INVALID_POINTER{ErrorModule::Kernel, 115};
constexpr ResultCode ERR_INVALID_COMBINATION{ErrorModule::Kernel, 116};
constexpr ResultCode RESULT_TIMEOUT{ErrorModule::Kernel, 117};
constexpr ResultCode ERR_SYNCHRONIZATION_CANCELED{ErrorModule::Kernel, 118};
constexpr ResultCode ERR_CANCELLED{ErrorModule::Kernel, 118};
constexpr ResultCode ERR_OUT_OF_RANGE{ErrorModule::Kernel, 119};
constexpr ResultCode ERR_INVALID_ENUM_VALUE{ErrorModule::Kernel, 120};
constexpr ResultCode ERR_NOT_FOUND{ErrorModule::Kernel, 121};

View File

@@ -0,0 +1,367 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/hle/kernel/k_address_arbiter.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/k_scoped_scheduler_lock_and_sleep.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/svc_results.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/kernel/time_manager.h"
#include "core/memory.h"
namespace Kernel {
KAddressArbiter::KAddressArbiter(Core::System& system_)
: system{system_}, kernel{system.Kernel()} {}
KAddressArbiter::~KAddressArbiter() = default;
namespace {
bool ReadFromUser(Core::System& system, s32* out, VAddr address) {
*out = system.Memory().Read32(address);
return true;
}
bool DecrementIfLessThan(Core::System& system, s32* out, VAddr address, s32 value) {
auto& monitor = system.Monitor();
const auto current_core = system.CurrentCoreIndex();
// TODO(bunnei): We should disable interrupts here via KScopedInterruptDisable.
// TODO(bunnei): We should call CanAccessAtomic(..) here.
// Load the value from the address.
const s32 current_value = static_cast<s32>(monitor.ExclusiveRead32(current_core, address));
// Compare it to the desired one.
if (current_value < value) {
// If less than, we want to try to decrement.
const s32 decrement_value = current_value - 1;
// Decrement and try to store.
if (!monitor.ExclusiveWrite32(current_core, address, static_cast<u32>(decrement_value))) {
// If we failed to store, try again.
DecrementIfLessThan(system, out, address, value);
}
} else {
// Otherwise, clear our exclusive hold and finish.
monitor.ClearExclusive();
}
// We're done.
*out = current_value;
return true;
}
bool UpdateIfEqual(Core::System& system, s32* out, VAddr address, s32 value, s32 new_value) {
auto& monitor = system.Monitor();
const auto current_core = system.CurrentCoreIndex();
// TODO(bunnei): We should disable interrupts here via KScopedInterruptDisable.
// TODO(bunnei): We should call CanAccessAtomic(..) here.
// Load the value from the address.
const s32 current_value = static_cast<s32>(monitor.ExclusiveRead32(current_core, address));
// Compare it to the desired one.
if (current_value == value) {
// If equal, we want to try to write the new value.
// Try to store.
if (!monitor.ExclusiveWrite32(current_core, address, static_cast<u32>(new_value))) {
// If we failed to store, try again.
UpdateIfEqual(system, out, address, value, new_value);
}
} else {
// Otherwise, clear our exclusive hold and finish.
monitor.ClearExclusive();
}
// We're done.
*out = current_value;
return true;
}
} // namespace
ResultCode KAddressArbiter::Signal(VAddr addr, s32 count) {
// Perform signaling.
s32 num_waiters{};
{
KScopedSchedulerLock sl(kernel);
auto it = thread_tree.nfind_light({addr, -1});
while ((it != thread_tree.end()) && (count <= 0 || num_waiters < count) &&
(it->GetAddressArbiterKey() == addr)) {
Thread* target_thread = std::addressof(*it);
target_thread->SetSyncedObject(nullptr, RESULT_SUCCESS);
ASSERT(target_thread->IsWaitingForAddressArbiter());
target_thread->Wakeup();
it = thread_tree.erase(it);
target_thread->ClearAddressArbiter();
++num_waiters;
}
}
return RESULT_SUCCESS;
}
ResultCode KAddressArbiter::SignalAndIncrementIfEqual(VAddr addr, s32 value, s32 count) {
// Perform signaling.
s32 num_waiters{};
{
KScopedSchedulerLock sl(kernel);
// Check the userspace value.
s32 user_value{};
R_UNLESS(UpdateIfEqual(system, std::addressof(user_value), addr, value, value + 1),
Svc::ResultInvalidCurrentMemory);
R_UNLESS(user_value == value, Svc::ResultInvalidState);
auto it = thread_tree.nfind_light({addr, -1});
while ((it != thread_tree.end()) && (count <= 0 || num_waiters < count) &&
(it->GetAddressArbiterKey() == addr)) {
Thread* target_thread = std::addressof(*it);
target_thread->SetSyncedObject(nullptr, RESULT_SUCCESS);
ASSERT(target_thread->IsWaitingForAddressArbiter());
target_thread->Wakeup();
it = thread_tree.erase(it);
target_thread->ClearAddressArbiter();
++num_waiters;
}
}
return RESULT_SUCCESS;
}
ResultCode KAddressArbiter::SignalAndModifyByWaitingCountIfEqual(VAddr addr, s32 value, s32 count) {
// Perform signaling.
s32 num_waiters{};
{
KScopedSchedulerLock sl(kernel);
auto it = thread_tree.nfind_light({addr, -1});
// Determine the updated value.
s32 new_value{};
if (/*GetTargetFirmware() >= TargetFirmware_7_0_0*/ true) {
if (count <= 0) {
if ((it != thread_tree.end()) && (it->GetAddressArbiterKey() == addr)) {
new_value = value - 2;
} else {
new_value = value + 1;
}
} else {
if ((it != thread_tree.end()) && (it->GetAddressArbiterKey() == addr)) {
auto tmp_it = it;
s32 tmp_num_waiters{};
while ((++tmp_it != thread_tree.end()) &&
(tmp_it->GetAddressArbiterKey() == addr)) {
if ((tmp_num_waiters++) >= count) {
break;
}
}
if (tmp_num_waiters < count) {
new_value = value - 1;
} else {
new_value = value;
}
} else {
new_value = value + 1;
}
}
} else {
if (count <= 0) {
if ((it != thread_tree.end()) && (it->GetAddressArbiterKey() == addr)) {
new_value = value - 1;
} else {
new_value = value + 1;
}
} else {
auto tmp_it = it;
s32 tmp_num_waiters{};
while ((tmp_it != thread_tree.end()) && (tmp_it->GetAddressArbiterKey() == addr) &&
(tmp_num_waiters < count + 1)) {
++tmp_num_waiters;
++tmp_it;
}
if (tmp_num_waiters == 0) {
new_value = value + 1;
} else if (tmp_num_waiters <= count) {
new_value = value - 1;
} else {
new_value = value;
}
}
}
// Check the userspace value.
s32 user_value{};
bool succeeded{};
if (value != new_value) {
succeeded = UpdateIfEqual(system, std::addressof(user_value), addr, value, new_value);
} else {
succeeded = ReadFromUser(system, std::addressof(user_value), addr);
}
R_UNLESS(succeeded, Svc::ResultInvalidCurrentMemory);
R_UNLESS(user_value == value, Svc::ResultInvalidState);
while ((it != thread_tree.end()) && (count <= 0 || num_waiters < count) &&
(it->GetAddressArbiterKey() == addr)) {
Thread* target_thread = std::addressof(*it);
target_thread->SetSyncedObject(nullptr, RESULT_SUCCESS);
ASSERT(target_thread->IsWaitingForAddressArbiter());
target_thread->Wakeup();
it = thread_tree.erase(it);
target_thread->ClearAddressArbiter();
++num_waiters;
}
}
return RESULT_SUCCESS;
}
ResultCode KAddressArbiter::WaitIfLessThan(VAddr addr, s32 value, bool decrement, s64 timeout) {
// Prepare to wait.
Thread* cur_thread = kernel.CurrentScheduler()->GetCurrentThread();
Handle timer = InvalidHandle;
{
KScopedSchedulerLockAndSleep slp(kernel, timer, cur_thread, timeout);
// Check that the thread isn't terminating.
if (cur_thread->IsTerminationRequested()) {
slp.CancelSleep();
return Svc::ResultTerminationRequested;
}
// Set the synced object.
cur_thread->SetSyncedObject(nullptr, Svc::ResultTimedOut);
// Read the value from userspace.
s32 user_value{};
bool succeeded{};
if (decrement) {
succeeded = DecrementIfLessThan(system, std::addressof(user_value), addr, value);
} else {
succeeded = ReadFromUser(system, std::addressof(user_value), addr);
}
if (!succeeded) {
slp.CancelSleep();
return Svc::ResultInvalidCurrentMemory;
}
// Check that the value is less than the specified one.
if (user_value >= value) {
slp.CancelSleep();
return Svc::ResultInvalidState;
}
// Check that the timeout is non-zero.
if (timeout == 0) {
slp.CancelSleep();
return Svc::ResultTimedOut;
}
// Set the arbiter.
cur_thread->SetAddressArbiter(std::addressof(thread_tree), addr);
thread_tree.insert(*cur_thread);
cur_thread->SetState(ThreadState::Waiting);
cur_thread->SetWaitReasonForDebugging(ThreadWaitReasonForDebugging::Arbitration);
}
// Cancel the timer wait.
if (timer != InvalidHandle) {
auto& time_manager = kernel.TimeManager();
time_manager.UnscheduleTimeEvent(timer);
}
// Remove from the address arbiter.
{
KScopedSchedulerLock sl(kernel);
if (cur_thread->IsWaitingForAddressArbiter()) {
thread_tree.erase(thread_tree.iterator_to(*cur_thread));
cur_thread->ClearAddressArbiter();
}
}
// Get the result.
KSynchronizationObject* dummy{};
return cur_thread->GetWaitResult(std::addressof(dummy));
}
ResultCode KAddressArbiter::WaitIfEqual(VAddr addr, s32 value, s64 timeout) {
// Prepare to wait.
Thread* cur_thread = kernel.CurrentScheduler()->GetCurrentThread();
Handle timer = InvalidHandle;
{
KScopedSchedulerLockAndSleep slp(kernel, timer, cur_thread, timeout);
// Check that the thread isn't terminating.
if (cur_thread->IsTerminationRequested()) {
slp.CancelSleep();
return Svc::ResultTerminationRequested;
}
// Set the synced object.
cur_thread->SetSyncedObject(nullptr, Svc::ResultTimedOut);
// Read the value from userspace.
s32 user_value{};
if (!ReadFromUser(system, std::addressof(user_value), addr)) {
slp.CancelSleep();
return Svc::ResultInvalidCurrentMemory;
}
// Check that the value is equal.
if (value != user_value) {
slp.CancelSleep();
return Svc::ResultInvalidState;
}
// Check that the timeout is non-zero.
if (timeout == 0) {
slp.CancelSleep();
return Svc::ResultTimedOut;
}
// Set the arbiter.
cur_thread->SetAddressArbiter(std::addressof(thread_tree), addr);
thread_tree.insert(*cur_thread);
cur_thread->SetState(ThreadState::Waiting);
cur_thread->SetWaitReasonForDebugging(ThreadWaitReasonForDebugging::Arbitration);
}
// Cancel the timer wait.
if (timer != InvalidHandle) {
auto& time_manager = kernel.TimeManager();
time_manager.UnscheduleTimeEvent(timer);
}
// Remove from the address arbiter.
{
KScopedSchedulerLock sl(kernel);
if (cur_thread->IsWaitingForAddressArbiter()) {
thread_tree.erase(thread_tree.iterator_to(*cur_thread));
cur_thread->ClearAddressArbiter();
}
}
// Get the result.
KSynchronizationObject* dummy{};
return cur_thread->GetWaitResult(std::addressof(dummy));
}
} // namespace Kernel
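Worked illustration of the value update in SignalAndModifyByWaitingCountIfEqual on the >= 7.0.0 path (illustrative numbers; W is the number of threads queued on addr):
// value == 5, count <= 0 (wake all), W >= 1      -> new_value == 3 (value - 2)
// value == 5, count <= 0 (wake all), W == 0      -> new_value == 6 (value + 1)
// value == 5, count == 2,            W == 0      -> new_value == 6 (value + 1)
// value == 5, count == 2,            1 <= W <= 2 -> new_value == 4 (value - 1)
// value == 5, count == 2,            W >  2      -> new_value == 5 (unchanged)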

View File

@@ -0,0 +1,70 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "common/assert.h"
#include "common/common_types.h"
#include "core/hle/kernel/k_condition_variable.h"
#include "core/hle/kernel/svc_types.h"
union ResultCode;
namespace Core {
class System;
}
namespace Kernel {
class KernelCore;
class KAddressArbiter {
public:
using ThreadTree = KConditionVariable::ThreadTree;
explicit KAddressArbiter(Core::System& system_);
~KAddressArbiter();
[[nodiscard]] ResultCode SignalToAddress(VAddr addr, Svc::SignalType type, s32 value,
s32 count) {
switch (type) {
case Svc::SignalType::Signal:
return Signal(addr, count);
case Svc::SignalType::SignalAndIncrementIfEqual:
return SignalAndIncrementIfEqual(addr, value, count);
case Svc::SignalType::SignalAndModifyByWaitingCountIfEqual:
return SignalAndModifyByWaitingCountIfEqual(addr, value, count);
}
UNREACHABLE();
return RESULT_UNKNOWN;
}
[[nodiscard]] ResultCode WaitForAddress(VAddr addr, Svc::ArbitrationType type, s32 value,
s64 timeout) {
switch (type) {
case Svc::ArbitrationType::WaitIfLessThan:
return WaitIfLessThan(addr, value, false, timeout);
case Svc::ArbitrationType::DecrementAndWaitIfLessThan:
return WaitIfLessThan(addr, value, true, timeout);
case Svc::ArbitrationType::WaitIfEqual:
return WaitIfEqual(addr, value, timeout);
}
UNREACHABLE();
return RESULT_UNKNOWN;
}
private:
[[nodiscard]] ResultCode Signal(VAddr addr, s32 count);
[[nodiscard]] ResultCode SignalAndIncrementIfEqual(VAddr addr, s32 value, s32 count);
[[nodiscard]] ResultCode SignalAndModifyByWaitingCountIfEqual(VAddr addr, s32 value, s32 count);
[[nodiscard]] ResultCode WaitIfLessThan(VAddr addr, s32 value, bool decrement, s64 timeout);
[[nodiscard]] ResultCode WaitIfEqual(VAddr addr, s32 value, s64 timeout);
ThreadTree thread_tree;
Core::System& system;
KernelCore& kernel;
};
} // namespace Kernel
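A hypothetical call site for the dispatch above (the accessor name is an assumption; the svc layer is not part of this diff):
// KAddressArbiter& arbiter = kernel.AddressArbiter(); // assumed accessor
// const ResultCode rc =
//     arbiter.WaitForAddress(addr, Svc::ArbitrationType::WaitIfEqual, expected, timeout_ns);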

View File

@@ -0,0 +1,349 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <vector>
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/hle/kernel/k_condition_variable.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/k_scoped_scheduler_lock_and_sleep.h"
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/svc_common.h"
#include "core/hle/kernel/svc_results.h"
#include "core/hle/kernel/thread.h"
#include "core/memory.h"
namespace Kernel {
namespace {
bool ReadFromUser(Core::System& system, u32* out, VAddr address) {
*out = system.Memory().Read32(address);
return true;
}
bool WriteToUser(Core::System& system, VAddr address, const u32* p) {
system.Memory().Write32(address, *p);
return true;
}
bool UpdateLockAtomic(Core::System& system, u32* out, VAddr address, u32 if_zero,
u32 new_orr_mask) {
auto& monitor = system.Monitor();
const auto current_core = system.CurrentCoreIndex();
// Load the value from the address.
const auto expected = monitor.ExclusiveRead32(current_core, address);
// Orr in the new mask.
u32 value = expected | new_orr_mask;
// If the value is zero, use the if_zero value; otherwise, use the newly orr'd value.
if (!expected) {
value = if_zero;
}
// Try to store.
if (!monitor.ExclusiveWrite32(current_core, address, value)) {
// If we failed to store, try again.
return UpdateLockAtomic(system, out, address, if_zero, new_orr_mask);
}
// We're done.
*out = expected;
return true;
}
} // namespace
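// For illustration (not part of the diff), UpdateLockAtomic behaves as:
//   *address == 0         -> stores if_zero,                  *out == 0
//   *address == owner_tag -> stores owner_tag | new_orr_mask, *out == owner_tag
// A failed exclusive store simply retries the whole read-modify-write.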
KConditionVariable::KConditionVariable(Core::System& system_)
: system{system_}, kernel{system.Kernel()} {}
KConditionVariable::~KConditionVariable() = default;
ResultCode KConditionVariable::SignalToAddress(VAddr addr) {
Thread* owner_thread = kernel.CurrentScheduler()->GetCurrentThread();
// Signal the address.
{
KScopedSchedulerLock sl(kernel);
// Remove waiter thread.
s32 num_waiters{};
Thread* next_owner_thread =
owner_thread->RemoveWaiterByKey(std::addressof(num_waiters), addr);
// Determine the next tag.
u32 next_value{};
if (next_owner_thread) {
next_value = next_owner_thread->GetAddressKeyValue();
if (num_waiters > 1) {
next_value |= Svc::HandleWaitMask;
}
next_owner_thread->SetSyncedObject(nullptr, RESULT_SUCCESS);
next_owner_thread->Wakeup();
}
// Write the value to userspace.
if (!WriteToUser(system, addr, std::addressof(next_value))) {
if (next_owner_thread) {
next_owner_thread->SetSyncedObject(nullptr, Svc::ResultInvalidCurrentMemory);
}
return Svc::ResultInvalidCurrentMemory;
}
}
return RESULT_SUCCESS;
}
ResultCode KConditionVariable::WaitForAddress(Handle handle, VAddr addr, u32 value) {
Thread* cur_thread = kernel.CurrentScheduler()->GetCurrentThread();
// Wait for the address.
{
std::shared_ptr<Thread> owner_thread;
ASSERT(!owner_thread);
{
KScopedSchedulerLock sl(kernel);
cur_thread->SetSyncedObject(nullptr, RESULT_SUCCESS);
// Check if the thread should terminate.
R_UNLESS(!cur_thread->IsTerminationRequested(), Svc::ResultTerminationRequested);
{
// Read the tag from userspace.
u32 test_tag{};
R_UNLESS(ReadFromUser(system, std::addressof(test_tag), addr),
Svc::ResultInvalidCurrentMemory);
// If the tag isn't the handle (with wait mask), we're done.
R_UNLESS(test_tag == (handle | Svc::HandleWaitMask), RESULT_SUCCESS);
// Get the lock owner thread.
owner_thread = kernel.CurrentProcess()->GetHandleTable().Get<Thread>(handle);
R_UNLESS(owner_thread, Svc::ResultInvalidHandle);
// Update the lock.
cur_thread->SetAddressKey(addr, value);
owner_thread->AddWaiter(cur_thread);
cur_thread->SetState(ThreadState::Waiting);
cur_thread->SetWaitReasonForDebugging(ThreadWaitReasonForDebugging::ConditionVar);
cur_thread->SetMutexWaitAddressForDebugging(addr);
}
}
ASSERT(owner_thread);
}
// Remove the thread as a waiter from the lock owner.
{
KScopedSchedulerLock sl(kernel);
Thread* owner_thread = cur_thread->GetLockOwner();
if (owner_thread != nullptr) {
owner_thread->RemoveWaiter(cur_thread);
}
}
// Get the wait result.
KSynchronizationObject* dummy{};
return cur_thread->GetWaitResult(std::addressof(dummy));
}
Thread* KConditionVariable::SignalImpl(Thread* thread) {
// Check pre-conditions.
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
// Update the tag.
VAddr address = thread->GetAddressKey();
u32 own_tag = thread->GetAddressKeyValue();
u32 prev_tag{};
bool can_access{};
{
// TODO(bunnei): We should disable interrupts here via KScopedInterruptDisable.
// TODO(bunnei): We should call CanAccessAtomic(..) here.
can_access = true;
if (can_access) {
UpdateLockAtomic(system, std::addressof(prev_tag), address, own_tag,
Svc::HandleWaitMask);
}
}
Thread* thread_to_close = nullptr;
if (can_access) {
if (prev_tag == InvalidHandle) {
// If nobody held the lock previously, we're all good.
thread->SetSyncedObject(nullptr, RESULT_SUCCESS);
thread->Wakeup();
} else {
// Get the previous owner.
auto owner_thread = kernel.CurrentProcess()->GetHandleTable().Get<Thread>(
prev_tag & ~Svc::HandleWaitMask);
if (owner_thread) {
// Add the thread as a waiter on the owner.
owner_thread->AddWaiter(thread);
thread_to_close = owner_thread.get();
} else {
// The lock was tagged with a thread that doesn't exist.
thread->SetSyncedObject(nullptr, Svc::ResultInvalidState);
thread->Wakeup();
}
}
} else {
// If the address wasn't accessible, note so.
thread->SetSyncedObject(nullptr, Svc::ResultInvalidCurrentMemory);
thread->Wakeup();
}
return thread_to_close;
}
void KConditionVariable::Signal(u64 cv_key, s32 count) {
// Prepare for signaling.
constexpr int MaxThreads = 16;
// TODO(bunnei): This should just be Thread once we implement KAutoObject instead of using
// std::shared_ptr.
std::vector<std::shared_ptr<Thread>> thread_list;
std::array<Thread*, MaxThreads> thread_array;
s32 num_to_close{};
// Perform signaling.
s32 num_waiters{};
{
KScopedSchedulerLock sl(kernel);
auto it = thread_tree.nfind_light({cv_key, -1});
while ((it != thread_tree.end()) && (count <= 0 || num_waiters < count) &&
(it->GetConditionVariableKey() == cv_key)) {
Thread* target_thread = std::addressof(*it);
if (Thread* thread = SignalImpl(target_thread); thread != nullptr) {
if (num_to_close < MaxThreads) {
thread_array[num_to_close++] = thread;
} else {
thread_list.push_back(SharedFrom(thread));
}
}
it = thread_tree.erase(it);
target_thread->ClearConditionVariable();
++num_waiters;
}
// If we have no waiters, clear the has waiter flag.
if (it == thread_tree.end() || it->GetConditionVariableKey() != cv_key) {
const u32 has_waiter_flag{};
WriteToUser(system, cv_key, std::addressof(has_waiter_flag));
}
}
// Close threads in the array.
for (auto i = 0; i < num_to_close; ++i) {
thread_array[i]->Close();
}
// Close threads in the list.
for (auto it = thread_list.begin(); it != thread_list.end(); it = thread_list.erase(it)) {
(*it)->Close();
}
}
ResultCode KConditionVariable::Wait(VAddr addr, u64 key, u32 value, s64 timeout) {
// Prepare to wait.
Thread* cur_thread = kernel.CurrentScheduler()->GetCurrentThread();
Handle timer = InvalidHandle;
{
KScopedSchedulerLockAndSleep slp(kernel, timer, cur_thread, timeout);
// Set the synced object.
cur_thread->SetSyncedObject(nullptr, Svc::ResultTimedOut);
// Check that the thread isn't terminating.
if (cur_thread->IsTerminationRequested()) {
slp.CancelSleep();
return Svc::ResultTerminationRequested;
}
// Update the value and process for the next owner.
{
// Remove waiter thread.
s32 num_waiters{};
Thread* next_owner_thread =
cur_thread->RemoveWaiterByKey(std::addressof(num_waiters), addr);
// Update for the next owner thread.
u32 next_value{};
if (next_owner_thread != nullptr) {
// Get the next tag value.
next_value = next_owner_thread->GetAddressKeyValue();
if (num_waiters > 1) {
next_value |= Svc::HandleWaitMask;
}
// Wake up the next owner.
next_owner_thread->SetSyncedObject(nullptr, RESULT_SUCCESS);
next_owner_thread->Wakeup();
}
// Write to the cv key.
{
const u32 has_waiter_flag = 1;
WriteToUser(system, key, std::addressof(has_waiter_flag));
// TODO(bunnei): We should call DataMemoryBarrier(..) here.
}
// Write the value to userspace.
if (!WriteToUser(system, addr, std::addressof(next_value))) {
slp.CancelSleep();
return Svc::ResultInvalidCurrentMemory;
}
}
// Update condition variable tracking.
{
cur_thread->SetConditionVariable(std::addressof(thread_tree), addr, key, value);
thread_tree.insert(*cur_thread);
}
// If the timeout is non-zero, set the thread as waiting.
if (timeout != 0) {
cur_thread->SetState(ThreadState::Waiting);
cur_thread->SetWaitReasonForDebugging(ThreadWaitReasonForDebugging::ConditionVar);
cur_thread->SetMutexWaitAddressForDebugging(addr);
}
}
// Cancel the timer wait.
if (timer != InvalidHandle) {
auto& time_manager = kernel.TimeManager();
time_manager.UnscheduleTimeEvent(timer);
}
// Remove from the condition variable.
{
KScopedSchedulerLock sl(kernel);
if (Thread* owner = cur_thread->GetLockOwner(); owner != nullptr) {
owner->RemoveWaiter(cur_thread);
}
if (cur_thread->IsWaitingForConditionVariable()) {
thread_tree.erase(thread_tree.iterator_to(*cur_thread));
cur_thread->ClearConditionVariable();
}
}
// Get the result.
KSynchronizationObject* dummy{};
return cur_thread->GetWaitResult(std::addressof(dummy));
}
} // namespace Kernel
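For orientation, the Wait/Signal pair above is the kernel-side half of the familiar condition-variable protocol: release the lock word, sleep on the key, reacquire on wake. A minimal standalone analogue in standard C++ (illustrative only, not yuzu code; std::mutex and std::condition_variable stand in for the guest lock word and cv key):

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool ready = false;

int main() {
    std::thread waiter([] {
        std::unique_lock<std::mutex> lk(m);
        // Atomically releases the lock, sleeps, and reacquires on wake,
        // mirroring what KConditionVariable::Wait does at the SVC level.
        cv.wait(lk, [] { return ready; });
        std::puts("woken with the lock held");
    });
    {
        std::lock_guard<std::mutex> lk(m);
        ready = true;
    }
    cv.notify_one(); // analogue of KConditionVariable::Signal
    waiter.join();
}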

View File

@@ -0,0 +1,59 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "common/assert.h"
#include "common/common_types.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/result.h"
namespace Core {
class System;
}
namespace Kernel {
class KConditionVariable {
public:
using ThreadTree = typename Thread::ConditionVariableThreadTreeType;
explicit KConditionVariable(Core::System& system_);
~KConditionVariable();
// Arbitration
[[nodiscard]] ResultCode SignalToAddress(VAddr addr);
[[nodiscard]] ResultCode WaitForAddress(Handle handle, VAddr addr, u32 value);
// Condition variable
void Signal(u64 cv_key, s32 count);
[[nodiscard]] ResultCode Wait(VAddr addr, u64 key, u32 value, s64 timeout);
private:
[[nodiscard]] Thread* SignalImpl(Thread* thread);
ThreadTree thread_tree;
Core::System& system;
KernelCore& kernel;
};
inline void BeforeUpdatePriority(const KernelCore& kernel, KConditionVariable::ThreadTree* tree,
Thread* thread) {
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
tree->erase(tree->iterator_to(*thread));
}
inline void AfterUpdatePriority(const KernelCore& kernel, KConditionVariable::ThreadTree* tree,
Thread* thread) {
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
tree->insert(*thread);
}
} // namespace Kernel
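BeforeUpdatePriority and AfterUpdatePriority exist because the thread tree is ordered by priority: a node must be erased before its sort key changes and reinserted afterwards, or the tree's ordering invariant breaks. A standalone sketch of the same rule, with std::multiset standing in for the intrusive tree (illustrative only, not yuzu code):

#include <cstdio>
#include <set>

struct Waiter {
    int priority;
    int id;
};
struct ByPriority {
    bool operator()(const Waiter& a, const Waiter& b) const {
        return a.priority < b.priority;
    }
};

int main() {
    std::multiset<Waiter, ByPriority> tree{{3, 0}, {1, 1}, {2, 2}};
    // "BeforeUpdatePriority": erase the node whose key is about to change.
    auto it = tree.begin();
    Waiter w = *it;
    tree.erase(it);
    // Mutate the key only while the node is outside the container.
    w.priority = 5;
    // "AfterUpdatePriority": reinsert under the new key.
    tree.insert(w);
    for (const auto& e : tree) {
        std::printf("priority=%d id=%d\n", e.priority, e.id);
    }
}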

View File

@@ -8,11 +8,11 @@
#pragma once
#include <array>
#include <bit>
#include <concepts>
#include "common/assert.h"
#include "common/bit_set.h"
#include "common/bit_util.h"
#include "common/common_types.h"
#include "common/concepts.h"
@@ -268,7 +268,7 @@ private:
}
constexpr s32 GetNextCore(u64& affinity) {
const s32 core = Common::CountTrailingZeroes64(affinity);
const s32 core = std::countr_zero(affinity);
ClearAffinityBit(affinity, core);
return core;
}
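The replacement of Common::CountTrailingZeroes64 with C++20's std::countr_zero throughout this PR is behavior-preserving. A self-contained sketch of the affinity-mask walk GetNextCore performs (illustrative only; ClearAffinityBit is approximated with a plain mask):

#include <bit>
#include <cstdint>
#include <cstdio>

int main() {
    std::uint64_t affinity = 0b1011; // cores 0, 1 and 3 allowed
    while (affinity != 0) {
        const int core = std::countr_zero(affinity);       // lowest set bit
        affinity &= ~(std::uint64_t{1} << core);            // clear it
        std::printf("schedule on core %d\n", core);
    }
}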

View File

@@ -5,6 +5,8 @@
// This file references various implementation details from Atmosphere, an open-source firmware for
// the Nintendo Switch. Copyright 2018-2020 Atmosphere-NX.
#include <bit>
#include "common/assert.h"
#include "common/bit_util.h"
#include "common/fiber.h"
@@ -31,12 +33,12 @@ static void IncrementScheduledCount(Kernel::Thread* thread) {
void KScheduler::RescheduleCores(KernelCore& kernel, u64 cores_pending_reschedule,
Core::EmuThreadHandle global_thread) {
u32 current_core = global_thread.host_handle;
const u32 current_core = global_thread.host_handle;
bool must_context_switch = global_thread.guest_handle != InvalidHandle &&
(current_core < Core::Hardware::NUM_CPU_CORES);
while (cores_pending_reschedule != 0) {
u32 core = Common::CountTrailingZeroes64(cores_pending_reschedule);
const auto core = static_cast<u32>(std::countr_zero(cores_pending_reschedule));
ASSERT(core < Core::Hardware::NUM_CPU_CORES);
if (!must_context_switch || core != current_core) {
auto& phys_core = kernel.PhysicalCore(core);
@@ -109,7 +111,7 @@ u64 KScheduler::UpdateHighestPriorityThreadsImpl(KernelCore& kernel) {
// Idle cores are bad. We're going to try to migrate threads to each idle core in turn.
while (idle_cores != 0) {
u32 core_id = Common::CountTrailingZeroes64(idle_cores);
const auto core_id = static_cast<u32>(std::countr_zero(idle_cores));
if (Thread* suggested = priority_queue.GetSuggestedFront(core_id); suggested != nullptr) {
s32 migration_candidates[Core::Hardware::NUM_CPU_CORES];
size_t num_candidates = 0;
@@ -180,22 +182,22 @@ u64 KScheduler::UpdateHighestPriorityThreadsImpl(KernelCore& kernel) {
return cores_needing_scheduling;
}
void KScheduler::OnThreadStateChanged(KernelCore& kernel, Thread* thread, u32 old_state) {
void KScheduler::OnThreadStateChanged(KernelCore& kernel, Thread* thread, ThreadState old_state) {
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
// Check if the state has changed, because if it hasn't there's nothing to do.
const auto cur_state = thread->scheduling_state;
const auto cur_state = thread->GetRawState();
if (cur_state == old_state) {
return;
}
// Update the priority queues.
if (old_state == static_cast<u32>(ThreadSchedStatus::Runnable)) {
if (old_state == ThreadState::Runnable) {
// If we were previously runnable, then we're not runnable now, and we should remove.
GetPriorityQueue(kernel).Remove(thread);
IncrementScheduledCount(thread);
SetSchedulerUpdateNeeded(kernel);
} else if (cur_state == static_cast<u32>(ThreadSchedStatus::Runnable)) {
} else if (cur_state == ThreadState::Runnable) {
// If we're now runnable, then we weren't previously, and we should add.
GetPriorityQueue(kernel).PushBack(thread);
IncrementScheduledCount(thread);
@@ -203,13 +205,11 @@ void KScheduler::OnThreadStateChanged(KernelCore& kernel, Thread* thread, u32 ol
}
}
void KScheduler::OnThreadPriorityChanged(KernelCore& kernel, Thread* thread, Thread* current_thread,
u32 old_priority) {
void KScheduler::OnThreadPriorityChanged(KernelCore& kernel, Thread* thread, s32 old_priority) {
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
// If the thread is runnable, we want to change its priority in the queue.
if (thread->scheduling_state == static_cast<u32>(ThreadSchedStatus::Runnable)) {
if (thread->GetRawState() == ThreadState::Runnable) {
GetPriorityQueue(kernel).ChangePriority(
old_priority, thread == kernel.CurrentScheduler()->GetCurrentThread(), thread);
IncrementScheduledCount(thread);
@@ -222,7 +222,7 @@ void KScheduler::OnThreadAffinityMaskChanged(KernelCore& kernel, Thread* thread,
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
// If the thread is runnable, we want to change its affinity in the queue.
if (thread->scheduling_state == static_cast<u32>(ThreadSchedStatus::Runnable)) {
if (thread->GetRawState() == ThreadState::Runnable) {
GetPriorityQueue(kernel).ChangeAffinityMask(old_core, old_affinity, thread);
IncrementScheduledCount(thread);
SetSchedulerUpdateNeeded(kernel);
@@ -292,7 +292,7 @@ void KScheduler::RotateScheduledQueue(s32 core_id, s32 priority) {
// If the best thread we can choose has a priority the same or worse than ours, try to
// migrate a higher priority thread.
if (best_thread != nullptr && best_thread->GetPriority() >= static_cast<u32>(priority)) {
if (best_thread != nullptr && best_thread->GetPriority() >= priority) {
Thread* suggested = priority_queue.GetSuggestedFront(core_id);
while (suggested != nullptr) {
// If the suggestion's priority is the same as ours, don't bother.
@@ -395,8 +395,8 @@ void KScheduler::YieldWithoutCoreMigration() {
{
KScopedSchedulerLock lock(kernel);
const auto cur_state = cur_thread.scheduling_state;
if (cur_state == static_cast<u32>(ThreadSchedStatus::Runnable)) {
const auto cur_state = cur_thread.GetRawState();
if (cur_state == ThreadState::Runnable) {
// Put the current thread at the back of the queue.
Thread* next_thread = priority_queue.MoveToScheduledBack(std::addressof(cur_thread));
IncrementScheduledCount(std::addressof(cur_thread));
@@ -436,8 +436,8 @@ void KScheduler::YieldWithCoreMigration() {
{
KScopedSchedulerLock lock(kernel);
const auto cur_state = cur_thread.scheduling_state;
if (cur_state == static_cast<u32>(ThreadSchedStatus::Runnable)) {
const auto cur_state = cur_thread.GetRawState();
if (cur_state == ThreadState::Runnable) {
// Get the current active core.
const s32 core_id = cur_thread.GetActiveCore();
@@ -526,8 +526,8 @@ void KScheduler::YieldToAnyThread() {
{
KScopedSchedulerLock lock(kernel);
const auto cur_state = cur_thread.scheduling_state;
if (cur_state == static_cast<u32>(ThreadSchedStatus::Runnable)) {
const auto cur_state = cur_thread.GetRawState();
if (cur_state == ThreadState::Runnable) {
// Get the current active core.
const s32 core_id = cur_thread.GetActiveCore();
@@ -645,8 +645,7 @@ void KScheduler::Unload(Thread* thread) {
void KScheduler::Reload(Thread* thread) {
if (thread) {
ASSERT_MSG(thread->GetSchedulingStatus() == ThreadSchedStatus::Runnable,
"Thread must be runnable.");
ASSERT_MSG(thread->GetState() == ThreadState::Runnable, "Thread must be runnable.");
// Cancel any outstanding wakeup events for this thread
thread->SetIsRunning(true);
@@ -725,7 +724,7 @@ void KScheduler::SwitchToCurrent() {
do {
if (current_thread != nullptr && !current_thread->IsHLEThread()) {
current_thread->context_guard.lock();
if (!current_thread->IsRunnable()) {
if (current_thread->GetRawState() != ThreadState::Runnable) {
current_thread->context_guard.unlock();
break;
}
@@ -772,7 +771,7 @@ void KScheduler::Initialize() {
{
KScopedSchedulerLock lock{system.Kernel()};
idle_thread->SetStatus(ThreadStatus::Ready);
idle_thread->SetState(ThreadState::Runnable);
}
}
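The recurring change in this file swaps raw u32 scheduling states, compared via static_cast, for the scoped ThreadState enum. A minimal sketch of why that helps, using a hypothetical three-state enum rather than yuzu's real one:

#include <cstdint>

enum class ThreadState : std::uint16_t { Initialized, Waiting, Runnable };

bool IsRunnable(ThreadState state) {
    // No static_cast needed, and passing an unrelated integer won't compile.
    return state == ThreadState::Runnable;
}

int main() {
    return IsRunnable(ThreadState::Runnable) ? 0 : 1;
}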

View File

@@ -100,11 +100,10 @@ public:
void YieldToAnyThread();
/// Notify the scheduler a thread's status has changed.
static void OnThreadStateChanged(KernelCore& kernel, Thread* thread, u32 old_state);
static void OnThreadStateChanged(KernelCore& kernel, Thread* thread, ThreadState old_state);
/// Notify the scheduler a thread's priority has changed.
static void OnThreadPriorityChanged(KernelCore& kernel, Thread* thread, Thread* current_thread,
u32 old_priority);
static void OnThreadPriorityChanged(KernelCore& kernel, Thread* thread, s32 old_priority);
/// Notify the scheduler a thread's core and/or affinity mask has changed.
static void OnThreadAffinityMaskChanged(KernelCore& kernel, Thread* thread,

View File

@@ -19,7 +19,7 @@ class KernelCore;
template <typename SchedulerType>
class KAbstractSchedulerLock {
public:
explicit KAbstractSchedulerLock(KernelCore& kernel) : kernel{kernel} {}
explicit KAbstractSchedulerLock(KernelCore& kernel_) : kernel{kernel_} {}
bool IsLockedByCurrentThread() const {
return this->owner_thread == kernel.GetCurrentEmuThreadID();

View File

@@ -0,0 +1,172 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/assert.h"
#include "common/common_types.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/k_scoped_scheduler_lock_and_sleep.h"
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/svc_results.h"
#include "core/hle/kernel/thread.h"
namespace Kernel {
ResultCode KSynchronizationObject::Wait(KernelCore& kernel, s32* out_index,
KSynchronizationObject** objects, const s32 num_objects,
s64 timeout) {
// Allocate space for the thread nodes (one per object being waited on).
std::vector<ThreadListNode> thread_nodes(num_objects);
// Prepare for wait.
Thread* thread = kernel.CurrentScheduler()->GetCurrentThread();
Handle timer = InvalidHandle;
{
// Setup the scheduling lock and sleep.
KScopedSchedulerLockAndSleep slp(kernel, timer, thread, timeout);
// Check if any of the objects are already signaled.
for (auto i = 0; i < num_objects; ++i) {
ASSERT(objects[i] != nullptr);
if (objects[i]->IsSignaled()) {
*out_index = i;
slp.CancelSleep();
return RESULT_SUCCESS;
}
}
// Check if the timeout is zero.
if (timeout == 0) {
slp.CancelSleep();
return Svc::ResultTimedOut;
}
// Check if the thread should terminate.
if (thread->IsTerminationRequested()) {
slp.CancelSleep();
return Svc::ResultTerminationRequested;
}
// Check if waiting was canceled.
if (thread->IsWaitCancelled()) {
slp.CancelSleep();
thread->ClearWaitCancelled();
return Svc::ResultCancelled;
}
// Add the waiters.
for (auto i = 0; i < num_objects; ++i) {
thread_nodes[i].thread = thread;
thread_nodes[i].next = nullptr;
if (objects[i]->thread_list_tail == nullptr) {
objects[i]->thread_list_head = std::addressof(thread_nodes[i]);
} else {
objects[i]->thread_list_tail->next = std::addressof(thread_nodes[i]);
}
objects[i]->thread_list_tail = std::addressof(thread_nodes[i]);
}
// For debugging only
thread->SetWaitObjectsForDebugging({objects, static_cast<std::size_t>(num_objects)});
// Mark the thread as waiting.
thread->SetCancellable();
thread->SetSyncedObject(nullptr, Svc::ResultTimedOut);
thread->SetState(ThreadState::Waiting);
thread->SetWaitReasonForDebugging(ThreadWaitReasonForDebugging::Synchronization);
}
// The lock/sleep is done, so we should be able to get our result.
// Thread is no longer cancellable.
thread->ClearCancellable();
// For debugging only
thread->SetWaitObjectsForDebugging({});
// Cancel the timer as needed.
if (timer != InvalidHandle) {
auto& time_manager = kernel.TimeManager();
time_manager.UnscheduleTimeEvent(timer);
}
// Get the wait result.
ResultCode wait_result{RESULT_SUCCESS};
s32 sync_index = -1;
{
KScopedSchedulerLock lock(kernel);
KSynchronizationObject* synced_obj;
wait_result = thread->GetWaitResult(std::addressof(synced_obj));
for (auto i = 0; i < num_objects; ++i) {
// Unlink the object from the list.
ThreadListNode* prev_ptr =
reinterpret_cast<ThreadListNode*>(std::addressof(objects[i]->thread_list_head));
ThreadListNode* prev_val = nullptr;
ThreadListNode *prev, *tail_prev;
do {
prev = prev_ptr;
prev_ptr = prev_ptr->next;
tail_prev = prev_val;
prev_val = prev_ptr;
} while (prev_ptr != std::addressof(thread_nodes[i]));
if (objects[i]->thread_list_tail == std::addressof(thread_nodes[i])) {
objects[i]->thread_list_tail = tail_prev;
}
prev->next = thread_nodes[i].next;
if (objects[i] == synced_obj) {
sync_index = i;
}
}
}
// Set output.
*out_index = sync_index;
return wait_result;
}
KSynchronizationObject::KSynchronizationObject(KernelCore& kernel) : Object{kernel} {}
KSynchronizationObject::~KSynchronizationObject() = default;
void KSynchronizationObject::NotifyAvailable(ResultCode result) {
KScopedSchedulerLock lock(kernel);
// If we're not signaled, we've nothing to notify.
if (!this->IsSignaled()) {
return;
}
// Iterate over each thread.
for (auto* cur_node = thread_list_head; cur_node != nullptr; cur_node = cur_node->next) {
Thread* thread = cur_node->thread;
if (thread->GetState() == ThreadState::Waiting) {
thread->SetSyncedObject(this, result);
thread->SetState(ThreadState::Runnable);
}
}
}
std::vector<Thread*> KSynchronizationObject::GetWaitingThreadsForDebugging() const {
std::vector<Thread*> threads;
// If debugging, dump the list of waiters.
{
KScopedSchedulerLock lock(kernel);
for (auto* cur_node = thread_list_head; cur_node != nullptr; cur_node = cur_node->next) {
threads.emplace_back(cur_node->thread);
}
}
return threads;
}
} // namespace Kernel
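The unlink loop inside Wait walks each object's singly linked waiter list with one node of lookbehind so it can splice the target node out and fix up the tail. A simplified standalone version of the same splice (illustrative only; a pointer-to-pointer walk replaces the reinterpret_cast trick used above):

#include <cstdio>

struct Node {
    Node* next{};
    int id{};
};

void Unlink(Node*& head, Node*& tail, Node* target) {
    Node** prev_next = &head;
    Node* prev = nullptr;
    while (*prev_next != target) {
        prev = *prev_next;
        prev_next = &prev->next;
    }
    if (tail == target) {
        tail = prev; // removing the last node moves the tail back
    }
    *prev_next = target->next; // splice the target out of the chain
}

int main() {
    Node a{nullptr, 0}, b{nullptr, 1}, c{nullptr, 2};
    a.next = &b;
    b.next = &c;
    Node* head = &a;
    Node* tail = &c;
    Unlink(head, tail, &b);
    for (Node* n = head; n != nullptr; n = n->next) {
        std::printf("%d\n", n->id); // prints 0 then 2
    }
}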

View File

@@ -0,0 +1,58 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <vector>
#include "core/hle/kernel/object.h"
#include "core/hle/result.h"
namespace Kernel {
class KernelCore;
class Synchronization;
class Thread;
/// Class that represents a Kernel object that a thread can be waiting on
class KSynchronizationObject : public Object {
public:
struct ThreadListNode {
ThreadListNode* next{};
Thread* thread{};
};
[[nodiscard]] static ResultCode Wait(KernelCore& kernel, s32* out_index,
KSynchronizationObject** objects, const s32 num_objects,
s64 timeout);
[[nodiscard]] virtual bool IsSignaled() const = 0;
[[nodiscard]] std::vector<Thread*> GetWaitingThreadsForDebugging() const;
protected:
explicit KSynchronizationObject(KernelCore& kernel);
virtual ~KSynchronizationObject();
void NotifyAvailable(ResultCode result);
void NotifyAvailable() {
return this->NotifyAvailable(RESULT_SUCCESS);
}
private:
ThreadListNode* thread_list_head{};
ThreadListNode* thread_list_tail{};
};
// Specialization of DynamicObjectCast for KSynchronizationObjects
template <>
inline std::shared_ptr<KSynchronizationObject> DynamicObjectCast<KSynchronizationObject>(
std::shared_ptr<Object> object) {
if (object != nullptr && object->IsWaitable()) {
return std::static_pointer_cast<KSynchronizationObject>(object);
}
return nullptr;
}
} // namespace Kernel
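The DynamicObjectCast specialization above avoids RTTI by asking the object whether it is waitable and then performing a cheap static cast. A standalone analogue (hypothetical Object/SyncObject types, not yuzu's):

#include <memory>

struct Object {
    virtual ~Object() = default;
    virtual bool IsWaitable() const { return false; }
};
struct SyncObject : Object {
    bool IsWaitable() const override { return true; }
};

std::shared_ptr<SyncObject> AsSync(std::shared_ptr<Object> o) {
    // Query the virtual flag instead of using dynamic_pointer_cast, then downcast.
    if (o != nullptr && o->IsWaitable()) {
        return std::static_pointer_cast<SyncObject>(o);
    }
    return nullptr;
}

int main() {
    std::shared_ptr<Object> a = std::make_shared<SyncObject>();
    std::shared_ptr<Object> b = std::make_shared<Object>();
    return (AsSync(a) != nullptr && AsSync(b) == nullptr) ? 0 : 1;
}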

View File

@@ -38,7 +38,6 @@
#include "core/hle/kernel/resource_limit.h"
#include "core/hle/kernel/service_thread.h"
#include "core/hle/kernel/shared_memory.h"
#include "core/hle/kernel/synchronization.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/kernel/time_manager.h"
#include "core/hle/lock.h"
@@ -51,8 +50,7 @@ namespace Kernel {
struct KernelCore::Impl {
explicit Impl(Core::System& system, KernelCore& kernel)
: synchronization{system}, time_manager{system}, global_handle_table{kernel}, system{
system} {}
: time_manager{system}, global_handle_table{kernel}, system{system} {}
void SetMulticore(bool is_multicore) {
this->is_multicore = is_multicore;
@@ -307,7 +305,6 @@ struct KernelCore::Impl {
std::vector<std::shared_ptr<Process>> process_list;
Process* current_process = nullptr;
std::unique_ptr<Kernel::GlobalSchedulerContext> global_scheduler_context;
Kernel::Synchronization synchronization;
Kernel::TimeManager time_manager;
std::shared_ptr<ResourceLimit> system_resource_limit;
@@ -461,14 +458,6 @@ const std::array<Core::CPUInterruptHandler, Core::Hardware::NUM_CPU_CORES>& Kern
return impl->interrupts;
}
Kernel::Synchronization& KernelCore::Synchronization() {
return impl->synchronization;
}
const Kernel::Synchronization& KernelCore::Synchronization() const {
return impl->synchronization;
}
Kernel::TimeManager& KernelCore::TimeManager() {
return impl->time_manager;
}
@@ -613,9 +602,11 @@ void KernelCore::Suspend(bool in_suspension) {
const bool should_suspend = exception_exited || in_suspension;
{
KScopedSchedulerLock lock(*this);
ThreadStatus status = should_suspend ? ThreadStatus::Ready : ThreadStatus::WaitSleep;
const auto state = should_suspend ? ThreadState::Runnable : ThreadState::Waiting;
for (std::size_t i = 0; i < Core::Hardware::NUM_CPU_CORES; i++) {
impl->suspend_threads[i]->SetStatus(status);
impl->suspend_threads[i]->SetState(state);
impl->suspend_threads[i]->SetWaitReasonForDebugging(
ThreadWaitReasonForDebugging::Suspended);
}
}
}

View File

@@ -33,7 +33,6 @@ template <typename T>
class SlabHeap;
} // namespace Memory
class AddressArbiter;
class ClientPort;
class GlobalSchedulerContext;
class HandleTable;
@@ -129,12 +128,6 @@ public:
/// Gets an instance of the current physical CPU core.
const Kernel::PhysicalCore& CurrentPhysicalCore() const;
/// Gets an instance of the Synchronization interface.
Kernel::Synchronization& Synchronization();
/// Gets an instance of the Synchronization interface.
const Kernel::Synchronization& Synchronization() const;
/// Gets an instance of the TimeManager interface.
Kernel::TimeManager& TimeManager();

View File

@@ -96,6 +96,7 @@ u64 AddressSpaceInfo::GetAddressSpaceStart(std::size_t width, Type type) {
return AddressSpaceInfos[AddressSpaceIndices39Bit[index]].address;
}
UNREACHABLE();
return 0;
}
std::size_t AddressSpaceInfo::GetAddressSpaceSize(std::size_t width, Type type) {
@@ -112,6 +113,7 @@ std::size_t AddressSpaceInfo::GetAddressSpaceSize(std::size_t width, Type type)
return AddressSpaceInfos[AddressSpaceIndices39Bit[index]].size;
}
UNREACHABLE();
return 0;
}
} // namespace Kernel::Memory

View File

@@ -73,12 +73,12 @@ enum class MemoryState : u32 {
ThreadLocal =
static_cast<u32>(Svc::MemoryState::ThreadLocal) | FlagMapped | FlagReferenceCounted,
Transfered = static_cast<u32>(Svc::MemoryState::Transfered) | FlagsMisc |
FlagCanAlignedDeviceMap | FlagCanChangeAttribute | FlagCanUseIpc |
FlagCanUseNonSecureIpc | FlagCanUseNonDeviceIpc,
Transferred = static_cast<u32>(Svc::MemoryState::Transferred) | FlagsMisc |
FlagCanAlignedDeviceMap | FlagCanChangeAttribute | FlagCanUseIpc |
FlagCanUseNonSecureIpc | FlagCanUseNonDeviceIpc,
SharedTransfered = static_cast<u32>(Svc::MemoryState::SharedTransfered) | FlagsMisc |
FlagCanAlignedDeviceMap | FlagCanUseNonSecureIpc | FlagCanUseNonDeviceIpc,
SharedTransferred = static_cast<u32>(Svc::MemoryState::SharedTransferred) | FlagsMisc |
FlagCanAlignedDeviceMap | FlagCanUseNonSecureIpc | FlagCanUseNonDeviceIpc,
SharedCode = static_cast<u32>(Svc::MemoryState::SharedCode) | FlagMapped |
FlagReferenceCounted | FlagCanUseNonSecureIpc | FlagCanUseNonDeviceIpc,
@@ -111,8 +111,8 @@ static_assert(static_cast<u32>(MemoryState::AliasCodeData) == 0x03FFBD09);
static_assert(static_cast<u32>(MemoryState::Ipc) == 0x005C3C0A);
static_assert(static_cast<u32>(MemoryState::Stack) == 0x005C3C0B);
static_assert(static_cast<u32>(MemoryState::ThreadLocal) == 0x0040200C);
static_assert(static_cast<u32>(MemoryState::Transfered) == 0x015C3C0D);
static_assert(static_cast<u32>(MemoryState::SharedTransfered) == 0x005C380E);
static_assert(static_cast<u32>(MemoryState::Transferred) == 0x015C3C0D);
static_assert(static_cast<u32>(MemoryState::SharedTransferred) == 0x005C380E);
static_assert(static_cast<u32>(MemoryState::SharedCode) == 0x0040380F);
static_assert(static_cast<u32>(MemoryState::Inaccessible) == 0x00000010);
static_assert(static_cast<u32>(MemoryState::NonSecureIpc) == 0x005C3811);

View File

@@ -5,9 +5,28 @@
#pragma once
#include "common/common_types.h"
#include "core/device_memory.h"
namespace Kernel::Memory {
constexpr std::size_t KernelAslrAlignment = 2 * 1024 * 1024;
constexpr std::size_t KernelVirtualAddressSpaceWidth = 1ULL << 39;
constexpr std::size_t KernelPhysicalAddressSpaceWidth = 1ULL << 48;
constexpr std::size_t KernelVirtualAddressSpaceBase = 0ULL - KernelVirtualAddressSpaceWidth;
constexpr std::size_t KernelVirtualAddressSpaceEnd =
KernelVirtualAddressSpaceBase + (KernelVirtualAddressSpaceWidth - KernelAslrAlignment);
constexpr std::size_t KernelVirtualAddressSpaceLast = KernelVirtualAddressSpaceEnd - 1;
constexpr std::size_t KernelVirtualAddressSpaceSize =
KernelVirtualAddressSpaceEnd - KernelVirtualAddressSpaceBase;
constexpr bool IsKernelAddressKey(VAddr key) {
return KernelVirtualAddressSpaceBase <= key && key <= KernelVirtualAddressSpaceLast;
}
constexpr bool IsKernelAddress(VAddr address) {
return KernelVirtualAddressSpaceBase <= address && address < KernelVirtualAddressSpaceEnd;
}
class MemoryRegion final {
friend class MemoryLayout;
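The new constants place the kernel's 39-bit virtual address space at the very top of the 64-bit range, which is why the base is computed by subtracting the width from zero and letting unsigned arithmetic wrap. A standalone check of the resulting values, derived from the definitions above (assumes a 64-bit std::size_t):

#include <cstddef>

constexpr std::size_t Width = 1ULL << 39;
constexpr std::size_t Base = 0ULL - Width; // wraps to the top of the address space
static_assert(Base == 0xFFFFFF8000000000ULL);
static_assert(Base + Width == 0); // the space ends exactly at the 64-bit wrap
constexpr std::size_t Aslr = 2 * 1024 * 1024;
constexpr std::size_t End = Base + (Width - Aslr);
static_assert(End == 0xFFFFFFFFFFE00000ULL);

int main() {}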

View File

@@ -8,11 +8,11 @@
#pragma once
#include <array>
#include <bit>
#include <vector>
#include "common/alignment.h"
#include "common/assert.h"
#include "common/bit_util.h"
#include "common/common_funcs.h"
#include "common/common_types.h"
#include "core/hle/kernel/memory/memory_types.h"
@@ -105,7 +105,7 @@ private:
ASSERT(depth == 0);
return -1;
}
offset = offset * 64 + Common::CountTrailingZeroes64(v);
offset = offset * 64 + static_cast<u32>(std::countr_zero(v));
++depth;
} while (depth < static_cast<s32>(used_depths));

View File

@@ -265,7 +265,7 @@ ResultCode PageTable::InitializeForProcess(FileSys::ProgramAddressSpaceType as_t
physical_memory_usage = 0;
memory_pool = pool;
page_table_impl.Resize(address_space_width, PageBits, true);
page_table_impl.Resize(address_space_width, PageBits);
return InitializeMemoryLayout(start, end);
}
@@ -1007,8 +1007,8 @@ constexpr VAddr PageTable::GetRegionAddress(MemoryState state) const {
case MemoryState::Shared:
case MemoryState::AliasCode:
case MemoryState::AliasCodeData:
case MemoryState::Transfered:
case MemoryState::SharedTransfered:
case MemoryState::Transferred:
case MemoryState::SharedTransferred:
case MemoryState::SharedCode:
case MemoryState::GeneratedCode:
case MemoryState::CodeOut:
@@ -1042,8 +1042,8 @@ constexpr std::size_t PageTable::GetRegionSize(MemoryState state) const {
case MemoryState::Shared:
case MemoryState::AliasCode:
case MemoryState::AliasCodeData:
case MemoryState::Transfered:
case MemoryState::SharedTransfered:
case MemoryState::Transferred:
case MemoryState::SharedTransferred:
case MemoryState::SharedCode:
case MemoryState::GeneratedCode:
case MemoryState::CodeOut:
@@ -1080,8 +1080,8 @@ constexpr bool PageTable::CanContain(VAddr addr, std::size_t size, MemoryState s
case MemoryState::AliasCodeData:
case MemoryState::Stack:
case MemoryState::ThreadLocal:
case MemoryState::Transfered:
case MemoryState::SharedTransfered:
case MemoryState::Transferred:
case MemoryState::SharedTransferred:
case MemoryState::SharedCode:
case MemoryState::GeneratedCode:
case MemoryState::CodeOut:

View File

@@ -1,170 +0,0 @@
// Copyright 2014 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <memory>
#include <utility>
#include <vector>
#include "common/assert.h"
#include "common/logging/log.h"
#include "core/core.h"
#include "core/hle/kernel/errors.h"
#include "core/hle/kernel/handle_table.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/mutex.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/result.h"
#include "core/memory.h"
namespace Kernel {
/// Returns the highest priority thread waiting for a mutex at the given address, along with the
/// number of threads waiting on that mutex.
static std::pair<std::shared_ptr<Thread>, u32> GetHighestPriorityMutexWaitingThread(
const std::shared_ptr<Thread>& current_thread, VAddr mutex_addr) {
std::shared_ptr<Thread> highest_priority_thread;
u32 num_waiters = 0;
for (const auto& thread : current_thread->GetMutexWaitingThreads()) {
if (thread->GetMutexWaitAddress() != mutex_addr)
continue;
++num_waiters;
if (highest_priority_thread == nullptr ||
thread->GetPriority() < highest_priority_thread->GetPriority()) {
highest_priority_thread = thread;
}
}
return {highest_priority_thread, num_waiters};
}
/// Update the mutex owner field of all threads waiting on the mutex to point to the new owner.
static void TransferMutexOwnership(VAddr mutex_addr, std::shared_ptr<Thread> current_thread,
std::shared_ptr<Thread> new_owner) {
current_thread->RemoveMutexWaiter(new_owner);
const auto threads = current_thread->GetMutexWaitingThreads();
for (const auto& thread : threads) {
if (thread->GetMutexWaitAddress() != mutex_addr)
continue;
ASSERT(thread->GetLockOwner() == current_thread.get());
current_thread->RemoveMutexWaiter(thread);
if (new_owner != thread)
new_owner->AddMutexWaiter(thread);
}
}
Mutex::Mutex(Core::System& system) : system{system} {}
Mutex::~Mutex() = default;
ResultCode Mutex::TryAcquire(VAddr address, Handle holding_thread_handle,
Handle requesting_thread_handle) {
// The mutex address must be 4-byte aligned
if ((address % sizeof(u32)) != 0) {
LOG_ERROR(Kernel, "Address is not 4-byte aligned! address={:016X}", address);
return ERR_INVALID_ADDRESS;
}
auto& kernel = system.Kernel();
std::shared_ptr<Thread> current_thread =
SharedFrom(kernel.CurrentScheduler()->GetCurrentThread());
{
KScopedSchedulerLock lock(kernel);
// The mutex address must be 4-byte aligned
if ((address % sizeof(u32)) != 0) {
return ERR_INVALID_ADDRESS;
}
const auto& handle_table = kernel.CurrentProcess()->GetHandleTable();
std::shared_ptr<Thread> holding_thread = handle_table.Get<Thread>(holding_thread_handle);
std::shared_ptr<Thread> requesting_thread =
handle_table.Get<Thread>(requesting_thread_handle);
// TODO(Subv): It is currently unknown if it is possible to lock a mutex on behalf of
// another thread.
ASSERT(requesting_thread == current_thread);
current_thread->SetSynchronizationResults(nullptr, RESULT_SUCCESS);
const u32 addr_value = system.Memory().Read32(address);
// If the mutex isn't being held, just return success.
if (addr_value != (holding_thread_handle | Mutex::MutexHasWaitersFlag)) {
return RESULT_SUCCESS;
}
if (holding_thread == nullptr) {
return ERR_INVALID_HANDLE;
}
// Wait until the mutex is released
current_thread->SetMutexWaitAddress(address);
current_thread->SetWaitHandle(requesting_thread_handle);
current_thread->SetStatus(ThreadStatus::WaitMutex);
// Update the lock holder thread's priority to prevent priority inversion.
holding_thread->AddMutexWaiter(current_thread);
}
{
KScopedSchedulerLock lock(kernel);
auto* owner = current_thread->GetLockOwner();
if (owner != nullptr) {
owner->RemoveMutexWaiter(current_thread);
}
}
return current_thread->GetSignalingResult();
}
std::pair<ResultCode, std::shared_ptr<Thread>> Mutex::Unlock(std::shared_ptr<Thread> owner,
VAddr address) {
// The mutex address must be 4-byte aligned
if ((address % sizeof(u32)) != 0) {
LOG_ERROR(Kernel, "Address is not 4-byte aligned! address={:016X}", address);
return {ERR_INVALID_ADDRESS, nullptr};
}
auto [new_owner, num_waiters] = GetHighestPriorityMutexWaitingThread(owner, address);
if (new_owner == nullptr) {
system.Memory().Write32(address, 0);
return {RESULT_SUCCESS, nullptr};
}
// Transfer the ownership of the mutex from the previous owner to the new one.
TransferMutexOwnership(address, owner, new_owner);
u32 mutex_value = new_owner->GetWaitHandle();
if (num_waiters >= 2) {
// Notify the guest that there are still some threads waiting for the mutex
mutex_value |= Mutex::MutexHasWaitersFlag;
}
new_owner->SetSynchronizationResults(nullptr, RESULT_SUCCESS);
new_owner->SetLockOwner(nullptr);
new_owner->ResumeFromWait();
system.Memory().Write32(address, mutex_value);
return {RESULT_SUCCESS, new_owner};
}
ResultCode Mutex::Release(VAddr address) {
auto& kernel = system.Kernel();
KScopedSchedulerLock lock(kernel);
std::shared_ptr<Thread> current_thread =
SharedFrom(kernel.CurrentScheduler()->GetCurrentThread());
auto [result, new_owner] = Unlock(current_thread, address);
if (result != RESULT_SUCCESS && new_owner != nullptr) {
new_owner->SetSynchronizationResults(nullptr, result);
}
return result;
}
} // namespace Kernel

View File

@@ -1,42 +0,0 @@
// Copyright 2014 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "common/common_types.h"
union ResultCode;
namespace Core {
class System;
}
namespace Kernel {
class Mutex final {
public:
explicit Mutex(Core::System& system);
~Mutex();
/// Flag that indicates that a mutex still has threads waiting for it.
static constexpr u32 MutexHasWaitersFlag = 0x40000000;
/// Mask of the bits in a mutex address value that contain the mutex owner.
static constexpr u32 MutexOwnerMask = 0xBFFFFFFF;
/// Attempts to acquire a mutex at the specified address.
ResultCode TryAcquire(VAddr address, Handle holding_thread_handle,
Handle requesting_thread_handle);
/// Unlocks a mutex for owner at address
std::pair<ResultCode, std::shared_ptr<Thread>> Unlock(std::shared_ptr<Thread> owner,
VAddr address);
/// Releases the mutex at the specified address.
ResultCode Release(VAddr address);
private:
Core::System& system;
};
} // namespace Kernel
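Although this file is deleted, its value encoding survives: a guest mutex word packs the owner's handle together with a waiters bit, which the replacement KConditionVariable code reads back via its handle/wait masks. A standalone sketch of that encoding (constants copied from the header above):

#include <cstdint>

constexpr std::uint32_t HasWaitersFlag = 0x40000000;
constexpr std::uint32_t OwnerMask = 0xBFFFFFFF;

constexpr std::uint32_t Pack(std::uint32_t owner_handle, bool waiters) {
    return owner_handle | (waiters ? HasWaitersFlag : 0);
}
static_assert((Pack(0x1234, true) & OwnerMask) == 0x1234);  // owner recoverable
static_assert((Pack(0x1234, true) & HasWaitersFlag) != 0);  // waiters bit set

int main() {}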

View File

@@ -50,6 +50,11 @@ public:
}
virtual HandleType GetHandleType() const = 0;
void Close() {
// TODO(bunnei): This is a placeholder to decrement the reference count, which we will use
// when we implement KAutoObject instead of using shared_ptr.
}
/**
* Check if a thread can wait on the object
* @return True if a thread can wait on the object, otherwise false

View File

@@ -55,7 +55,7 @@ void SetupMainThread(Core::System& system, Process& owner_process, u32 priority,
// Threads are dormant by default, so wake up the main thread so that it runs when the scheduler fires
{
KScopedSchedulerLock lock{kernel};
thread->SetStatus(ThreadStatus::Ready);
thread->SetState(ThreadState::Runnable);
}
}
} // Anonymous namespace
@@ -162,48 +162,6 @@ u64 Process::GetTotalPhysicalMemoryUsedWithoutSystemResource() const {
return GetTotalPhysicalMemoryUsed() - GetSystemResourceUsage();
}
void Process::InsertConditionVariableThread(std::shared_ptr<Thread> thread) {
VAddr cond_var_addr = thread->GetCondVarWaitAddress();
std::list<std::shared_ptr<Thread>>& thread_list = cond_var_threads[cond_var_addr];
auto it = thread_list.begin();
while (it != thread_list.end()) {
const std::shared_ptr<Thread> current_thread = *it;
if (current_thread->GetPriority() > thread->GetPriority()) {
thread_list.insert(it, thread);
return;
}
++it;
}
thread_list.push_back(thread);
}
void Process::RemoveConditionVariableThread(std::shared_ptr<Thread> thread) {
VAddr cond_var_addr = thread->GetCondVarWaitAddress();
std::list<std::shared_ptr<Thread>>& thread_list = cond_var_threads[cond_var_addr];
auto it = thread_list.begin();
while (it != thread_list.end()) {
const std::shared_ptr<Thread> current_thread = *it;
if (current_thread.get() == thread.get()) {
thread_list.erase(it);
return;
}
++it;
}
}
std::vector<std::shared_ptr<Thread>> Process::GetConditionVariableThreads(
const VAddr cond_var_addr) {
std::vector<std::shared_ptr<Thread>> result{};
std::list<std::shared_ptr<Thread>>& thread_list = cond_var_threads[cond_var_addr];
auto it = thread_list.begin();
while (it != thread_list.end()) {
std::shared_ptr<Thread> current_thread = *it;
result.push_back(current_thread);
++it;
}
return result;
}
void Process::RegisterThread(const Thread* thread) {
thread_list.push_back(thread);
}
@@ -318,7 +276,7 @@ void Process::PrepareForTermination() {
continue;
// TODO(Subv): When are the other running/ready threads terminated?
ASSERT_MSG(thread->GetStatus() == ThreadStatus::WaitSynch,
ASSERT_MSG(thread->GetState() == ThreadState::Waiting,
"Exiting processes with non-waiting threads is currently unimplemented");
thread->Stop();
@@ -406,21 +364,18 @@ void Process::LoadModule(CodeSet code_set, VAddr base_addr) {
ReprotectSegment(code_set.DataSegment(), Memory::MemoryPermission::ReadAndWrite);
}
bool Process::IsSignaled() const {
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
return is_signaled;
}
Process::Process(Core::System& system)
: SynchronizationObject{system.Kernel()}, page_table{std::make_unique<Memory::PageTable>(
system)},
handle_table{system.Kernel()}, address_arbiter{system}, mutex{system}, system{system} {}
: KSynchronizationObject{system.Kernel()},
page_table{std::make_unique<Memory::PageTable>(system)}, handle_table{system.Kernel()},
address_arbiter{system}, condition_var{system}, system{system} {}
Process::~Process() = default;
void Process::Acquire(Thread* thread) {
ASSERT_MSG(!ShouldWait(thread), "Object unavailable!");
}
bool Process::ShouldWait(const Thread* thread) const {
return !is_signaled;
}
void Process::ChangeStatus(ProcessStatus new_status) {
if (status == new_status) {
return;
@@ -428,7 +383,7 @@ void Process::ChangeStatus(ProcessStatus new_status) {
status = new_status;
is_signaled = true;
Signal();
NotifyAvailable();
}
ResultCode Process::AllocateMainThreadStack(std::size_t stack_size) {

View File

@@ -11,11 +11,11 @@
#include <unordered_map>
#include <vector>
#include "common/common_types.h"
#include "core/hle/kernel/address_arbiter.h"
#include "core/hle/kernel/handle_table.h"
#include "core/hle/kernel/mutex.h"
#include "core/hle/kernel/k_address_arbiter.h"
#include "core/hle/kernel/k_condition_variable.h"
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/kernel/process_capability.h"
#include "core/hle/kernel/synchronization_object.h"
#include "core/hle/result.h"
namespace Core {
@@ -63,7 +63,7 @@ enum class ProcessStatus {
DebugBreak,
};
class Process final : public SynchronizationObject {
class Process final : public KSynchronizationObject {
public:
explicit Process(Core::System& system);
~Process() override;
@@ -123,24 +123,30 @@ public:
return handle_table;
}
/// Gets a reference to the process' address arbiter.
AddressArbiter& GetAddressArbiter() {
return address_arbiter;
ResultCode SignalToAddress(VAddr address) {
return condition_var.SignalToAddress(address);
}
/// Gets a const reference to the process' address arbiter.
const AddressArbiter& GetAddressArbiter() const {
return address_arbiter;
ResultCode WaitForAddress(Handle handle, VAddr address, u32 tag) {
return condition_var.WaitForAddress(handle, address, tag);
}
/// Gets a reference to the process' mutex lock.
Mutex& GetMutex() {
return mutex;
void SignalConditionVariable(u64 cv_key, int32_t count) {
return condition_var.Signal(cv_key, count);
}
/// Gets a const reference to the process' mutex lock
const Mutex& GetMutex() const {
return mutex;
ResultCode WaitConditionVariable(VAddr address, u64 cv_key, u32 tag, s64 ns) {
return condition_var.Wait(address, cv_key, tag, ns);
}
ResultCode SignalAddressArbiter(VAddr address, Svc::SignalType signal_type, s32 value,
s32 count) {
return address_arbiter.SignalToAddress(address, signal_type, value, count);
}
ResultCode WaitAddressArbiter(VAddr address, Svc::ArbitrationType arb_type, s32 value,
s64 timeout) {
return address_arbiter.WaitForAddress(address, arb_type, value, timeout);
}
/// Gets the address to the process' dedicated TLS region.
@@ -250,15 +256,6 @@ public:
return thread_list;
}
/// Insert a thread into the condition variable wait container
void InsertConditionVariableThread(std::shared_ptr<Thread> thread);
/// Remove a thread from the condition variable wait container
void RemoveConditionVariableThread(std::shared_ptr<Thread> thread);
/// Obtain all condition variable threads waiting for some address
std::vector<std::shared_ptr<Thread>> GetConditionVariableThreads(VAddr cond_var_addr);
/// Registers a thread as being created under this process,
/// adding it to this process' thread list.
void RegisterThread(const Thread* thread);
@@ -304,6 +301,8 @@ public:
void LoadModule(CodeSet code_set, VAddr base_addr);
bool IsSignaled() const override;
///////////////////////////////////////////////////////////////////////////////////////////////
// Thread-local storage management
@@ -314,12 +313,6 @@ public:
void FreeTLSRegion(VAddr tls_address);
private:
/// Checks if the specified thread should wait until this process is available.
bool ShouldWait(const Thread* thread) const override;
/// Acquires/locks this process for the specified thread if it's available.
void Acquire(Thread* thread) override;
/// Changes the process status. If the status is different
/// from the current process status, then this will trigger
/// a process signal.
@@ -373,12 +366,12 @@ private:
HandleTable handle_table;
/// Per-process address arbiter.
AddressArbiter address_arbiter;
KAddressArbiter address_arbiter;
/// The per-process condition variable instance used for handling various
/// forms of services, such as lock arbitration and condition
/// variable related facilities.
Mutex mutex;
KConditionVariable condition_var;
/// Address indicating the location of the process' dedicated TLS region.
VAddr tls_region_address = 0;
@@ -389,9 +382,6 @@ private:
/// List of threads that are running with this process as their owner.
std::list<const Thread*> thread_list;
/// List of threads waiting for a condition variable
std::unordered_map<VAddr, std::list<std::shared_ptr<Thread>>> cond_var_threads;
/// Address of the top of the main thread's stack
VAddr main_thread_stack_top{};
@@ -410,6 +400,8 @@ private:
/// Schedule count of this process
s64 schedule_count{};
bool is_signaled{};
/// System context
Core::System& system;
};

View File

@@ -2,6 +2,8 @@
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <bit>
#include "common/bit_util.h"
#include "common/logging/log.h"
#include "core/hle/kernel/errors.h"
@@ -60,7 +62,7 @@ constexpr CapabilityType GetCapabilityType(u32 value) {
u32 GetFlagBitOffset(CapabilityType type) {
const auto value = static_cast<u32>(type);
return static_cast<u32>(Common::BitSize<u32>() - Common::CountLeadingZeroes32(value));
return static_cast<u32>(Common::BitSize<u32>() - static_cast<u32>(std::countl_zero(value)));
}
} // Anonymous namespace
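GetFlagBitOffset computes BitSize<u32>() - countl_zero(value), which for a nonzero value is exactly C++20's std::bit_width, the 1-based index of the most significant set bit. A standalone check (assumes a 32-bit u32):

#include <bit>
#include <cstdint>

static_assert(32 - std::countl_zero(std::uint32_t{1}) == 1);
static_assert(32 - std::countl_zero(std::uint32_t{0x80}) == 8);
static_assert(std::bit_width(std::uint32_t{0x80}) == 8); // equivalent formulation

int main() {}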

View File

@@ -14,24 +14,22 @@
namespace Kernel {
ReadableEvent::ReadableEvent(KernelCore& kernel) : SynchronizationObject{kernel} {}
ReadableEvent::ReadableEvent(KernelCore& kernel) : KSynchronizationObject{kernel} {}
ReadableEvent::~ReadableEvent() = default;
bool ReadableEvent::ShouldWait(const Thread* thread) const {
return !is_signaled;
}
void ReadableEvent::Acquire(Thread* thread) {
ASSERT_MSG(IsSignaled(), "object unavailable!");
}
void ReadableEvent::Signal() {
if (is_signaled) {
return;
}
is_signaled = true;
SynchronizationObject::Signal();
NotifyAvailable();
}
bool ReadableEvent::IsSignaled() const {
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
return is_signaled;
}
void ReadableEvent::Clear() {

View File

@@ -4,8 +4,8 @@
#pragma once
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/synchronization_object.h"
union ResultCode;
@@ -14,7 +14,7 @@ namespace Kernel {
class KernelCore;
class WritableEvent;
class ReadableEvent final : public SynchronizationObject {
class ReadableEvent final : public KSynchronizationObject {
friend class WritableEvent;
public:
@@ -32,9 +32,6 @@ public:
return HANDLE_TYPE;
}
bool ShouldWait(const Thread* thread) const override;
void Acquire(Thread* thread) override;
/// Unconditionally clears the readable event's state.
void Clear();
@@ -46,11 +43,14 @@ public:
/// then ERR_INVALID_STATE will be returned.
ResultCode Reset();
void Signal() override;
void Signal();
bool IsSignaled() const override;
private:
explicit ReadableEvent(KernelCore& kernel);
bool is_signaled{};
std::string name; ///< Name of event (optional)
};

View File

@@ -13,7 +13,7 @@
namespace Kernel {
ServerPort::ServerPort(KernelCore& kernel) : SynchronizationObject{kernel} {}
ServerPort::ServerPort(KernelCore& kernel) : KSynchronizationObject{kernel} {}
ServerPort::~ServerPort() = default;
ResultVal<std::shared_ptr<ServerSession>> ServerPort::Accept() {
@@ -28,15 +28,9 @@ ResultVal<std::shared_ptr<ServerSession>> ServerPort::Accept() {
void ServerPort::AppendPendingSession(std::shared_ptr<ServerSession> pending_session) {
pending_sessions.push_back(std::move(pending_session));
}
bool ServerPort::ShouldWait(const Thread* thread) const {
// If there are no pending sessions, we wait until a new one is added.
return pending_sessions.empty();
}
void ServerPort::Acquire(Thread* thread) {
ASSERT_MSG(!ShouldWait(thread), "object unavailable!");
if (pending_sessions.size() == 1) {
NotifyAvailable();
}
}
bool ServerPort::IsSignaled() const {

View File

@@ -9,8 +9,8 @@
#include <utility>
#include <vector>
#include "common/common_types.h"
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/synchronization_object.h"
#include "core/hle/result.h"
namespace Kernel {
@@ -20,7 +20,7 @@ class KernelCore;
class ServerSession;
class SessionRequestHandler;
class ServerPort final : public SynchronizationObject {
class ServerPort final : public KSynchronizationObject {
public:
explicit ServerPort(KernelCore& kernel);
~ServerPort() override;
@@ -79,9 +79,6 @@ public:
/// waiting to be accepted by this port.
void AppendPendingSession(std::shared_ptr<ServerSession> pending_session);
bool ShouldWait(const Thread* thread) const override;
void Acquire(Thread* thread) override;
bool IsSignaled() const override;
private:

View File

@@ -24,7 +24,7 @@
namespace Kernel {
ServerSession::ServerSession(KernelCore& kernel) : SynchronizationObject{kernel} {}
ServerSession::ServerSession(KernelCore& kernel) : KSynchronizationObject{kernel} {}
ServerSession::~ServerSession() {
kernel.ReleaseServiceThread(service_thread);
@@ -42,16 +42,6 @@ ResultVal<std::shared_ptr<ServerSession>> ServerSession::Create(KernelCore& kern
return MakeResult(std::move(session));
}
bool ServerSession::ShouldWait(const Thread* thread) const {
// Closed sessions should never wait; an error will be returned from svcReplyAndReceive.
if (!parent->Client()) {
return false;
}
// Wait if we have no pending requests, or if we're currently handling a request.
return pending_requesting_threads.empty() || currently_handling != nullptr;
}
bool ServerSession::IsSignaled() const {
// Closed sessions should never wait; an error will be returned from svcReplyAndReceive.
if (!parent->Client()) {
@@ -62,15 +52,6 @@ bool ServerSession::IsSignaled() const {
return !pending_requesting_threads.empty() && currently_handling == nullptr;
}
void ServerSession::Acquire(Thread* thread) {
ASSERT_MSG(!ShouldWait(thread), "object unavailable!");
// We are now handling a request, pop it from the stack.
// TODO(Subv): What happens if the client endpoint is closed before any requests are made?
ASSERT(!pending_requesting_threads.empty());
currently_handling = pending_requesting_threads.back();
pending_requesting_threads.pop_back();
}
void ServerSession::ClientDisconnected() {
// We keep a shared pointer to the hle handler to keep it alive throughout
// the call to ClientDisconnected, as ClientDisconnected invalidates the
@@ -172,7 +153,7 @@ ResultCode ServerSession::CompleteSyncRequest(HLERequestContext& context) {
{
KScopedSchedulerLock lock(kernel);
if (!context.IsThreadWaiting()) {
context.GetThread().ResumeFromWait();
context.GetThread().Wakeup();
context.GetThread().SetSynchronizationResults(nullptr, result);
}
}

View File

@@ -10,8 +10,8 @@
#include <vector>
#include "common/threadsafe_queue.h"
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/kernel/service_thread.h"
#include "core/hle/kernel/synchronization_object.h"
#include "core/hle/result.h"
namespace Core::Memory {
@@ -43,7 +43,7 @@ class Thread;
* After the server replies to the request, the response is marshalled back to the caller's
* TLS buffer and control is transferred back to it.
*/
class ServerSession final : public SynchronizationObject {
class ServerSession final : public KSynchronizationObject {
friend class ServiceThread;
public:
@@ -77,8 +77,6 @@ public:
return parent.get();
}
bool IsSignaled() const override;
/**
* Sets the HLE handler for the session. This handler will be called to service IPC requests
* instead of the regular IPC machinery. (The regular IPC machinery is currently not
@@ -100,10 +98,6 @@ public:
ResultCode HandleSyncRequest(std::shared_ptr<Thread> thread, Core::Memory::Memory& memory,
Core::Timing::CoreTiming& core_timing);
bool ShouldWait(const Thread* thread) const override;
void Acquire(Thread* thread) override;
/// Called when a client disconnection occurs.
void ClientDisconnected();
@@ -130,6 +124,8 @@ public:
convert_to_domain = true;
}
bool IsSignaled() const override;
private:
/// Queues a sync request from the emulated application.
ResultCode QueueSyncRequest(std::shared_ptr<Thread> thread, Core::Memory::Memory& memory);

View File

@@ -9,7 +9,7 @@
namespace Kernel {
Session::Session(KernelCore& kernel) : SynchronizationObject{kernel} {}
Session::Session(KernelCore& kernel) : KSynchronizationObject{kernel} {}
Session::~Session() = default;
Session::SessionPair Session::Create(KernelCore& kernel, std::string name) {
@@ -24,18 +24,9 @@ Session::SessionPair Session::Create(KernelCore& kernel, std::string name) {
return std::make_pair(std::move(client_session), std::move(server_session));
}
bool Session::ShouldWait(const Thread* thread) const {
UNIMPLEMENTED();
return {};
}
bool Session::IsSignaled() const {
UNIMPLEMENTED();
return true;
}
void Session::Acquire(Thread* thread) {
UNIMPLEMENTED();
}
} // namespace Kernel

View File

@@ -8,7 +8,7 @@
#include <string>
#include <utility>
#include "core/hle/kernel/synchronization_object.h"
#include "core/hle/kernel/k_synchronization_object.h"
namespace Kernel {
@@ -19,7 +19,7 @@ class ServerSession;
* Parent structure to link the client and server endpoints of a session with their associated
* client port.
*/
class Session final : public SynchronizationObject {
class Session final : public KSynchronizationObject {
public:
explicit Session(KernelCore& kernel);
~Session() override;
@@ -37,12 +37,8 @@ public:
return HANDLE_TYPE;
}
bool ShouldWait(const Thread* thread) const override;
bool IsSignaled() const override;
void Acquire(Thread* thread) override;
std::shared_ptr<ClientSession> Client() {
if (auto result{client.lock()}) {
return result;

View File

@@ -10,6 +10,7 @@
#include "common/alignment.h"
#include "common/assert.h"
#include "common/common_funcs.h"
#include "common/fiber.h"
#include "common/logging/log.h"
#include "common/microprofile.h"
@@ -19,26 +20,28 @@
#include "core/core_timing.h"
#include "core/core_timing_util.h"
#include "core/cpu_manager.h"
#include "core/hle/kernel/address_arbiter.h"
#include "core/hle/kernel/client_port.h"
#include "core/hle/kernel/client_session.h"
#include "core/hle/kernel/errors.h"
#include "core/hle/kernel/handle_table.h"
#include "core/hle/kernel/k_address_arbiter.h"
#include "core/hle/kernel/k_condition_variable.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/k_scoped_scheduler_lock_and_sleep.h"
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/memory/memory_block.h"
#include "core/hle/kernel/memory/memory_layout.h"
#include "core/hle/kernel/memory/page_table.h"
#include "core/hle/kernel/mutex.h"
#include "core/hle/kernel/physical_core.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/readable_event.h"
#include "core/hle/kernel/resource_limit.h"
#include "core/hle/kernel/shared_memory.h"
#include "core/hle/kernel/svc.h"
#include "core/hle/kernel/svc_results.h"
#include "core/hle/kernel/svc_types.h"
#include "core/hle/kernel/svc_wrap.h"
#include "core/hle/kernel/synchronization.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/kernel/time_manager.h"
#include "core/hle/kernel/transfer_memory.h"
@@ -343,27 +346,11 @@ static ResultCode SendSyncRequest(Core::System& system, Handle handle) {
auto thread = kernel.CurrentScheduler()->GetCurrentThread();
{
KScopedSchedulerLock lock(kernel);
thread->InvalidateHLECallback();
thread->SetStatus(ThreadStatus::WaitIPC);
thread->SetState(ThreadState::Waiting);
thread->SetWaitReasonForDebugging(ThreadWaitReasonForDebugging::IPC);
session->SendSyncRequest(SharedFrom(thread), system.Memory(), system.CoreTiming());
}
if (thread->HasHLECallback()) {
Handle event_handle = thread->GetHLETimeEvent();
if (event_handle != InvalidHandle) {
auto& time_manager = kernel.TimeManager();
time_manager.UnscheduleTimeEvent(event_handle);
}
{
KScopedSchedulerLock lock(kernel);
auto* sync_object = thread->GetHLESyncObject();
sync_object->RemoveWaitingThread(SharedFrom(thread));
}
thread->InvokeHLECallback(SharedFrom(thread));
}
return thread->GetSignalingResult();
}
@@ -436,7 +423,7 @@ static ResultCode GetProcessId32(Core::System& system, u32* process_id_low, u32*
}
/// Wait for the given handles to synchronize, timing out after the specified nanoseconds
static ResultCode WaitSynchronization(Core::System& system, Handle* index, VAddr handles_address,
static ResultCode WaitSynchronization(Core::System& system, s32* index, VAddr handles_address,
u64 handle_count, s64 nano_seconds) {
LOG_TRACE(Kernel_SVC, "called handles_address=0x{:X}, handle_count={}, nano_seconds={}",
handles_address, handle_count, nano_seconds);
@@ -458,28 +445,26 @@ static ResultCode WaitSynchronization(Core::System& system, Handle* index, VAddr
}
auto& kernel = system.Kernel();
Thread::ThreadSynchronizationObjects objects(handle_count);
std::vector<KSynchronizationObject*> objects(handle_count);
const auto& handle_table = kernel.CurrentProcess()->GetHandleTable();
for (u64 i = 0; i < handle_count; ++i) {
const Handle handle = memory.Read32(handles_address + i * sizeof(Handle));
const auto object = handle_table.Get<SynchronizationObject>(handle);
const auto object = handle_table.Get<KSynchronizationObject>(handle);
if (object == nullptr) {
LOG_ERROR(Kernel_SVC, "Object is a nullptr");
return ERR_INVALID_HANDLE;
}
objects[i] = object;
objects[i] = object.get();
}
auto& synchronization = kernel.Synchronization();
const auto [result, handle_result] = synchronization.WaitFor(objects, nano_seconds);
*index = handle_result;
return result;
return KSynchronizationObject::Wait(kernel, index, objects.data(),
static_cast<s32>(objects.size()), nano_seconds);
}
static ResultCode WaitSynchronization32(Core::System& system, u32 timeout_low, u32 handles_address,
s32 handle_count, u32 timeout_high, Handle* index) {
s32 handle_count, u32 timeout_high, s32* index) {
const s64 nano_seconds{(static_cast<s64>(timeout_high) << 32) | static_cast<s64>(timeout_low)};
return WaitSynchronization(system, index, handles_address, handle_count, nano_seconds);
}
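WaitSynchronization32 reassembles the 64-bit nanosecond timeout from two 32-bit halves, so a guest passing 0xFFFFFFFF in both words gets -1, the conventional wait-forever value. A standalone sketch of the packing (C++20, where the unsigned-to-signed conversion is well defined):

#include <cstdint>

constexpr std::int64_t Pack64(std::uint32_t low, std::uint32_t high) {
    return static_cast<std::int64_t>(low | (std::uint64_t{high} << 32));
}
static_assert(Pack64(0xFFFFFFFFu, 0xFFFFFFFFu) == -1); // "wait forever"
static_assert(Pack64(1000, 0) == 1000);

int main() {}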
@@ -504,56 +489,37 @@ static ResultCode CancelSynchronization32(Core::System& system, Handle thread_ha
return CancelSynchronization(system, thread_handle);
}
/// Attempts to lock a mutex, creating it if it does not already exist
static ResultCode ArbitrateLock(Core::System& system, Handle holding_thread_handle,
VAddr mutex_addr, Handle requesting_thread_handle) {
LOG_TRACE(Kernel_SVC,
"called holding_thread_handle=0x{:08X}, mutex_addr=0x{:X}, "
"requesting_current_thread_handle=0x{:08X}",
holding_thread_handle, mutex_addr, requesting_thread_handle);
/// Attempts to lock a mutex
static ResultCode ArbitrateLock(Core::System& system, Handle thread_handle, VAddr address,
u32 tag) {
LOG_TRACE(Kernel_SVC, "called thread_handle=0x{:08X}, address=0x{:X}, tag=0x{:08X}",
thread_handle, address, tag);
if (Core::Memory::IsKernelVirtualAddress(mutex_addr)) {
LOG_ERROR(Kernel_SVC, "Mutex Address is a kernel virtual address, mutex_addr={:016X}",
mutex_addr);
return ERR_INVALID_ADDRESS_STATE;
}
// Validate the input address.
R_UNLESS(!Memory::IsKernelAddress(address), Svc::ResultInvalidCurrentMemory);
R_UNLESS(Common::IsAligned(address, sizeof(u32)), Svc::ResultInvalidAddress);
if (!Common::IsWordAligned(mutex_addr)) {
LOG_ERROR(Kernel_SVC, "Mutex Address is not word aligned, mutex_addr={:016X}", mutex_addr);
return ERR_INVALID_ADDRESS;
}
auto* const current_process = system.Kernel().CurrentProcess();
return current_process->GetMutex().TryAcquire(mutex_addr, holding_thread_handle,
requesting_thread_handle);
return system.Kernel().CurrentProcess()->WaitForAddress(thread_handle, address, tag);
}
static ResultCode ArbitrateLock32(Core::System& system, Handle holding_thread_handle,
u32 mutex_addr, Handle requesting_thread_handle) {
return ArbitrateLock(system, holding_thread_handle, mutex_addr, requesting_thread_handle);
static ResultCode ArbitrateLock32(Core::System& system, Handle thread_handle, u32 address,
u32 tag) {
return ArbitrateLock(system, thread_handle, address, tag);
}
/// Unlock a mutex
static ResultCode ArbitrateUnlock(Core::System& system, VAddr mutex_addr) {
LOG_TRACE(Kernel_SVC, "called mutex_addr=0x{:X}", mutex_addr);
static ResultCode ArbitrateUnlock(Core::System& system, VAddr address) {
LOG_TRACE(Kernel_SVC, "called address=0x{:X}", address);
if (Core::Memory::IsKernelVirtualAddress(mutex_addr)) {
LOG_ERROR(Kernel_SVC, "Mutex Address is a kernel virtual address, mutex_addr={:016X}",
mutex_addr);
return ERR_INVALID_ADDRESS_STATE;
}
// Validate the input address.
R_UNLESS(!Memory::IsKernelAddress(address), Svc::ResultInvalidCurrentMemory);
R_UNLESS(Common::IsAligned(address, sizeof(u32)), Svc::ResultInvalidAddress);
if (!Common::IsWordAligned(mutex_addr)) {
LOG_ERROR(Kernel_SVC, "Mutex Address is not word aligned, mutex_addr={:016X}", mutex_addr);
return ERR_INVALID_ADDRESS;
}
auto* const current_process = system.Kernel().CurrentProcess();
return current_process->GetMutex().Release(mutex_addr);
return system.Kernel().CurrentProcess()->SignalToAddress(address);
}
static ResultCode ArbitrateUnlock32(Core::System& system, u32 mutex_addr) {
return ArbitrateUnlock(system, mutex_addr);
static ResultCode ArbitrateUnlock32(Core::System& system, u32 address) {
return ArbitrateUnlock(system, address);
}
enum class BreakType : u32 {
@@ -1180,7 +1146,7 @@ static ResultCode SetThreadPriority(Core::System& system, Handle handle, u32 pri
return ERR_INVALID_HANDLE;
}
thread->SetPriority(priority);
thread->SetBasePriority(priority);
return RESULT_SUCCESS;
}
@@ -1559,7 +1525,7 @@ static ResultCode StartThread(Core::System& system, Handle thread_handle) {
return ERR_INVALID_HANDLE;
}
ASSERT(thread->GetStatus() == ThreadStatus::Dormant);
ASSERT(thread->GetState() == ThreadState::Initialized);
return thread->Start();
}
@@ -1620,224 +1586,135 @@ static void SleepThread32(Core::System& system, u32 nanoseconds_low, u32 nanosec
}
/// Wait process wide key atomic
static ResultCode WaitProcessWideKeyAtomic(Core::System& system, VAddr mutex_addr,
VAddr condition_variable_addr, Handle thread_handle,
s64 nano_seconds) {
LOG_TRACE(
Kernel_SVC,
"called mutex_addr={:X}, condition_variable_addr={:X}, thread_handle=0x{:08X}, timeout={}",
mutex_addr, condition_variable_addr, thread_handle, nano_seconds);
static ResultCode WaitProcessWideKeyAtomic(Core::System& system, VAddr address, VAddr cv_key,
u32 tag, s64 timeout_ns) {
LOG_TRACE(Kernel_SVC, "called address={:X}, cv_key={:X}, tag=0x{:08X}, timeout_ns={}", address,
cv_key, tag, timeout_ns);
if (Core::Memory::IsKernelVirtualAddress(mutex_addr)) {
LOG_ERROR(
Kernel_SVC,
"Given mutex address must not be within the kernel address space. address=0x{:016X}",
mutex_addr);
return ERR_INVALID_ADDRESS_STATE;
}
// Validate input.
R_UNLESS(!Memory::IsKernelAddress(address), Svc::ResultInvalidCurrentMemory);
R_UNLESS(Common::IsAligned(address, sizeof(int32_t)), Svc::ResultInvalidAddress);
if (!Common::IsWordAligned(mutex_addr)) {
LOG_ERROR(Kernel_SVC, "Given mutex address must be word-aligned. address=0x{:016X}",
mutex_addr);
return ERR_INVALID_ADDRESS;
}
ASSERT(condition_variable_addr == Common::AlignDown(condition_variable_addr, 4));
auto& kernel = system.Kernel();
Handle event_handle;
Thread* current_thread = kernel.CurrentScheduler()->GetCurrentThread();
auto* const current_process = kernel.CurrentProcess();
{
KScopedSchedulerLockAndSleep lock(kernel, event_handle, current_thread, nano_seconds);
const auto& handle_table = current_process->GetHandleTable();
std::shared_ptr<Thread> thread = handle_table.Get<Thread>(thread_handle);
ASSERT(thread);
current_thread->SetSynchronizationResults(nullptr, RESULT_TIMEOUT);
if (thread->IsPendingTermination()) {
lock.CancelSleep();
return ERR_THREAD_TERMINATING;
// Convert timeout from nanoseconds to ticks.
s64 timeout{};
if (timeout_ns > 0) {
const s64 offset_tick(timeout_ns);
if (offset_tick > 0) {
timeout = offset_tick + 2;
if (timeout <= 0) {
timeout = std::numeric_limits<s64>::max();
}
} else {
timeout = std::numeric_limits<s64>::max();
}
const auto release_result = current_process->GetMutex().Release(mutex_addr);
if (release_result.IsError()) {
lock.CancelSleep();
return release_result;
}
if (nano_seconds == 0) {
lock.CancelSleep();
return RESULT_TIMEOUT;
}
current_thread->SetCondVarWaitAddress(condition_variable_addr);
current_thread->SetMutexWaitAddress(mutex_addr);
current_thread->SetWaitHandle(thread_handle);
current_thread->SetStatus(ThreadStatus::WaitCondVar);
current_process->InsertConditionVariableThread(SharedFrom(current_thread));
} else {
timeout = timeout_ns;
}
if (event_handle != InvalidHandle) {
auto& time_manager = kernel.TimeManager();
time_manager.UnscheduleTimeEvent(event_handle);
}
{
KScopedSchedulerLock lock(kernel);
auto* owner = current_thread->GetLockOwner();
if (owner != nullptr) {
owner->RemoveMutexWaiter(SharedFrom(current_thread));
}
current_process->RemoveConditionVariableThread(SharedFrom(current_thread));
}
// Note: Deliberately don't attempt to inherit the lock owner's priority.
return current_thread->GetSignalingResult();
// Wait on the condition variable.
return system.Kernel().CurrentProcess()->WaitConditionVariable(
address, Common::AlignDown(cv_key, sizeof(u32)), tag, timeout);
}
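WaitConditionVariable receives the condition-variable key aligned down to a u32 boundary. Assuming Common::AlignDown behaves like the usual alignment helper, a sketch of the semantics:
// Sketch of AlignDown semantics (assumed; the real helper lives in common/alignment.h):
template <typename T>
constexpr T AlignDown(T value, std::size_t size) {
    return static_cast<T>(value - value % size);
}
// AlignDown(VAddr{0x1003}, sizeof(u32)) == 0x1000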
static ResultCode WaitProcessWideKeyAtomic32(Core::System& system, u32 mutex_addr,
u32 condition_variable_addr, Handle thread_handle,
u32 nanoseconds_low, u32 nanoseconds_high) {
const auto nanoseconds = static_cast<s64>(nanoseconds_low | (u64{nanoseconds_high} << 32));
return WaitProcessWideKeyAtomic(system, mutex_addr, condition_variable_addr, thread_handle,
nanoseconds);
static ResultCode WaitProcessWideKeyAtomic32(Core::System& system, u32 address, u32 cv_key, u32 tag,
u32 timeout_ns_low, u32 timeout_ns_high) {
const auto timeout_ns = static_cast<s64>(timeout_ns_low | (u64{timeout_ns_high} << 32));
return WaitProcessWideKeyAtomic(system, address, cv_key, tag, timeout_ns);
}
/// Signal process wide key
static void SignalProcessWideKey(Core::System& system, VAddr condition_variable_addr, s32 target) {
LOG_TRACE(Kernel_SVC, "called, condition_variable_addr=0x{:X}, target=0x{:08X}",
condition_variable_addr, target);
static void SignalProcessWideKey(Core::System& system, VAddr cv_key, s32 count) {
LOG_TRACE(Kernel_SVC, "called, cv_key=0x{:X}, count=0x{:08X}", cv_key, count);
ASSERT(condition_variable_addr == Common::AlignDown(condition_variable_addr, 4));
// Signal the condition variable.
return system.Kernel().CurrentProcess()->SignalConditionVariable(
Common::AlignDown(cv_key, sizeof(u32)), count);
}
// Retrieve a list of all threads that are waiting for this condition variable.
auto& kernel = system.Kernel();
KScopedSchedulerLock lock(kernel);
auto* const current_process = kernel.CurrentProcess();
std::vector<std::shared_ptr<Thread>> waiting_threads =
current_process->GetConditionVariableThreads(condition_variable_addr);
static void SignalProcessWideKey32(Core::System& system, u32 cv_key, s32 count) {
SignalProcessWideKey(system, cv_key, count);
}
// Only process up to 'target' threads, unless 'target' is less than or equal to 0,
// in which case process them all.
std::size_t last = waiting_threads.size();
if (target > 0) {
last = std::min(waiting_threads.size(), static_cast<std::size_t>(target));
}
for (std::size_t index = 0; index < last; ++index) {
auto& thread = waiting_threads[index];
namespace {
ASSERT(thread->GetCondVarWaitAddress() == condition_variable_addr);
// Release the thread from the condition variable.
current_process->RemoveConditionVariableThread(thread);
const std::size_t current_core = system.CurrentCoreIndex();
auto& monitor = system.Monitor();
// Atomically read the value of the mutex.
u32 mutex_val = 0;
u32 update_val = 0;
const VAddr mutex_address = thread->GetMutexWaitAddress();
do {
// If the mutex is not yet acquired, acquire it.
mutex_val = monitor.ExclusiveRead32(current_core, mutex_address);
if (mutex_val != 0) {
update_val = mutex_val | Mutex::MutexHasWaitersFlag;
} else {
update_val = thread->GetWaitHandle();
}
} while (!monitor.ExclusiveWrite32(current_core, mutex_address, update_val));
monitor.ClearExclusive();
if (mutex_val == 0) {
// We were able to acquire the mutex, resume this thread.
auto* const lock_owner = thread->GetLockOwner();
if (lock_owner != nullptr) {
lock_owner->RemoveMutexWaiter(thread);
}
thread->SetLockOwner(nullptr);
thread->SetSynchronizationResults(nullptr, RESULT_SUCCESS);
thread->ResumeFromWait();
} else {
// The mutex is already owned by some other thread, make this thread wait on it.
const Handle owner_handle = static_cast<Handle>(mutex_val & Mutex::MutexOwnerMask);
const auto& handle_table = system.Kernel().CurrentProcess()->GetHandleTable();
auto owner = handle_table.Get<Thread>(owner_handle);
ASSERT(owner);
if (thread->GetStatus() == ThreadStatus::WaitCondVar) {
thread->SetStatus(ThreadStatus::WaitMutex);
}
owner->AddMutexWaiter(thread);
}
constexpr bool IsValidSignalType(Svc::SignalType type) {
switch (type) {
case Svc::SignalType::Signal:
case Svc::SignalType::SignalAndIncrementIfEqual:
case Svc::SignalType::SignalAndModifyByWaitingCountIfEqual:
return true;
default:
return false;
}
}
static void SignalProcessWideKey32(Core::System& system, u32 condition_variable_addr, s32 target) {
SignalProcessWideKey(system, condition_variable_addr, target);
constexpr bool IsValidArbitrationType(Svc::ArbitrationType type) {
switch (type) {
case Svc::ArbitrationType::WaitIfLessThan:
case Svc::ArbitrationType::DecrementAndWaitIfLessThan:
case Svc::ArbitrationType::WaitIfEqual:
return true;
default:
return false;
}
}
} // namespace
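These validators exist because the enum values originate as raw integers in guest registers; the static_cast alone cannot reject out-of-range input. For example:
// Hypothetical out-of-range guest input: 7 is not a defined enumerator, so the
// validator returns false and the SVC fails with Svc::ResultInvalidEnumValue.
static_assert(!IsValidArbitrationType(static_cast<Svc::ArbitrationType>(7)));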
// Wait for an address (via Address Arbiter)
static ResultCode WaitForAddress(Core::System& system, VAddr address, u32 type, s32 value,
s64 timeout) {
LOG_TRACE(Kernel_SVC, "called, address=0x{:X}, type=0x{:X}, value=0x{:X}, timeout={}", address,
type, value, timeout);
static ResultCode WaitForAddress(Core::System& system, VAddr address, Svc::ArbitrationType arb_type,
s32 value, s64 timeout_ns) {
LOG_TRACE(Kernel_SVC, "called, address=0x{:X}, arb_type=0x{:X}, value=0x{:X}, timeout_ns={}",
address, arb_type, value, timeout_ns);
// If the passed address is a kernel virtual address, return invalid memory state.
if (Core::Memory::IsKernelVirtualAddress(address)) {
LOG_ERROR(Kernel_SVC, "Address is a kernel virtual address, address={:016X}", address);
return ERR_INVALID_ADDRESS_STATE;
// Validate input.
R_UNLESS(!Memory::IsKernelAddress(address), Svc::ResultInvalidCurrentMemory);
R_UNLESS(Common::IsAligned(address, sizeof(int32_t)), Svc::ResultInvalidAddress);
R_UNLESS(IsValidArbitrationType(arb_type), Svc::ResultInvalidEnumValue);
// Convert timeout from nanoseconds to ticks.
s64 timeout{};
if (timeout_ns > 0) {
const s64 offset_tick(timeout_ns);
if (offset_tick > 0) {
timeout = offset_tick + 2;
if (timeout <= 0) {
timeout = std::numeric_limits<s64>::max();
}
} else {
timeout = std::numeric_limits<s64>::max();
}
} else {
timeout = timeout_ns;
}
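The same nanoseconds-to-timeout conversion appears in WaitProcessWideKeyAtomic above. As a standalone sketch (hypothetical helper name), the rule is: positive timeouts get a small +2 adjustment and saturate to the maximum s64 on overflow, while zero and negative values (poll and wait-forever, respectively) pass through unchanged:
// Hypothetical helper equivalent to the inline conversion above.
static s64 ConvertTimeoutNs(s64 timeout_ns) {
    if (timeout_ns <= 0) {
        return timeout_ns; // 0 = poll, negative = wait forever
    }
    const s64 timeout = timeout_ns + 2;
    return timeout > 0 ? timeout : std::numeric_limits<s64>::max();
}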
// If the address is not properly aligned to 4 bytes, return invalid address.
if (!Common::IsWordAligned(address)) {
LOG_ERROR(Kernel_SVC, "Address is not word aligned, address={:016X}", address);
return ERR_INVALID_ADDRESS;
}
const auto arbitration_type = static_cast<AddressArbiter::ArbitrationType>(type);
auto& address_arbiter = system.Kernel().CurrentProcess()->GetAddressArbiter();
const ResultCode result =
address_arbiter.WaitForAddress(address, arbitration_type, value, timeout);
return result;
return system.Kernel().CurrentProcess()->WaitAddressArbiter(address, arb_type, value, timeout);
}
static ResultCode WaitForAddress32(Core::System& system, u32 address, u32 type, s32 value,
u32 timeout_low, u32 timeout_high) {
const auto timeout = static_cast<s64>(timeout_low | (u64{timeout_high} << 32));
return WaitForAddress(system, address, type, value, timeout);
static ResultCode WaitForAddress32(Core::System& system, u32 address, Svc::ArbitrationType arb_type,
s32 value, u32 timeout_ns_low, u32 timeout_ns_high) {
const auto timeout = static_cast<s64>(timeout_ns_low | (u64{timeout_ns_high} << 32));
return WaitForAddress(system, address, arb_type, value, timeout);
}
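A note on the register reassembly above: the explicit u64{hi} promotion is load-bearing, because shifting a 32-bit value by 32 bits is undefined behaviour in C++:
const u32 lo = 0x89ABCDEF, hi = 0x01234567;
// const u64 bad = lo | (hi << 32);                        // UB: shift count >= width of hi
const auto good = static_cast<s64>(lo | (u64{hi} << 32));  // 0x0123456789ABCDEF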
// Signals to an address (via Address Arbiter)
static ResultCode SignalToAddress(Core::System& system, VAddr address, u32 type, s32 value,
s32 num_to_wake) {
LOG_TRACE(Kernel_SVC, "called, address=0x{:X}, type=0x{:X}, value=0x{:X}, num_to_wake=0x{:X}",
address, type, value, num_to_wake);
static ResultCode SignalToAddress(Core::System& system, VAddr address, Svc::SignalType signal_type,
s32 value, s32 count) {
LOG_TRACE(Kernel_SVC, "called, address=0x{:X}, signal_type=0x{:X}, value=0x{:X}, count=0x{:X}",
address, signal_type, value, count);
// If the passed address is a kernel virtual address, return invalid memory state.
if (Core::Memory::IsKernelVirtualAddress(address)) {
LOG_ERROR(Kernel_SVC, "Address is a kernel virtual address, address={:016X}", address);
return ERR_INVALID_ADDRESS_STATE;
}
// Validate input.
R_UNLESS(!Memory::IsKernelAddress(address), Svc::ResultInvalidCurrentMemory);
R_UNLESS(Common::IsAligned(address, sizeof(s32)), Svc::ResultInvalidAddress);
R_UNLESS(IsValidSignalType(signal_type), Svc::ResultInvalidEnumValue);
// If the address is not properly aligned to 4 bytes, return invalid address.
if (!Common::IsWordAligned(address)) {
LOG_ERROR(Kernel_SVC, "Address is not word aligned, address={:016X}", address);
return ERR_INVALID_ADDRESS;
}
const auto signal_type = static_cast<AddressArbiter::SignalType>(type);
auto& address_arbiter = system.Kernel().CurrentProcess()->GetAddressArbiter();
return address_arbiter.SignalToAddress(address, signal_type, value, num_to_wake);
return system.Kernel().CurrentProcess()->SignalAddressArbiter(address, signal_type, value,
count);
}
static ResultCode SignalToAddress32(Core::System& system, u32 address, u32 type, s32 value,
s32 num_to_wake) {
return SignalToAddress(system, address, type, value, num_to_wake);
static ResultCode SignalToAddress32(Core::System& system, u32 address, Svc::SignalType signal_type,
s32 value, s32 count) {
return SignalToAddress(system, address, signal_type, value, count);
}
static void KernelDebug([[maybe_unused]] Core::System& system,

View File

@@ -0,0 +1,14 @@
// Copyright 2020 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "common/common_types.h"
namespace Kernel::Svc {
constexpr s32 ArgumentHandleCountMax = 0x40;
constexpr u32 HandleWaitMask{1u << 30};
} // namespace Kernel::Svc
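HandleWaitMask mirrors Horizon's mutex ABI: when a mutex word holds an owner handle, bit 30 is set whenever other threads are queued on it, and the remaining bits still identify the owner. A sketch of the assumed encoding:
const Handle owner_handle{0x8001};                          // hypothetical thread handle
const u32 mutex_value = owner_handle | Svc::HandleWaitMask; // owner has waiters
const Handle owner = static_cast<Handle>(mutex_value & ~Svc::HandleWaitMask);
const bool has_waiters = (mutex_value & Svc::HandleWaitMask) != 0;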

View File

@@ -0,0 +1,20 @@
// Copyright 2020 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "core/hle/result.h"
namespace Kernel::Svc {
constexpr ResultCode ResultTerminationRequested{ErrorModule::Kernel, 59};
constexpr ResultCode ResultInvalidAddress{ErrorModule::Kernel, 102};
constexpr ResultCode ResultInvalidCurrentMemory{ErrorModule::Kernel, 106};
constexpr ResultCode ResultInvalidHandle{ErrorModule::Kernel, 114};
constexpr ResultCode ResultTimedOut{ErrorModule::Kernel, 117};
constexpr ResultCode ResultCancelled{ErrorModule::Kernel, 118};
constexpr ResultCode ResultInvalidEnumValue{ErrorModule::Kernel, 120};
constexpr ResultCode ResultInvalidState{ErrorModule::Kernel, 125};
} // namespace Kernel::Svc

View File

@@ -23,8 +23,8 @@ enum class MemoryState : u32 {
Ipc = 0x0A,
Stack = 0x0B,
ThreadLocal = 0x0C,
Transfered = 0x0D,
SharedTransfered = 0x0E,
Transferred = 0x0D,
SharedTransferred = 0x0E,
SharedCode = 0x0F,
Inaccessible = 0x10,
NonSecureIpc = 0x11,
@@ -65,4 +65,16 @@ struct MemoryInfo {
u32 padding{};
};
enum class SignalType : u32 {
Signal = 0,
SignalAndIncrementIfEqual = 1,
SignalAndModifyByWaitingCountIfEqual = 2,
};
enum class ArbitrationType : u32 {
WaitIfLessThan = 0,
DecrementAndWaitIfLessThan = 1,
WaitIfEqual = 2,
};
} // namespace Kernel::Svc

View File

@@ -7,6 +7,7 @@
#include "common/common_types.h"
#include "core/arm/arm_interface.h"
#include "core/core.h"
#include "core/hle/kernel/svc_types.h"
#include "core/hle/result.h"
namespace Kernel {
@@ -215,9 +216,10 @@ void SvcWrap64(Core::System& system) {
func(system, static_cast<u32>(Param(system, 0)), Param(system, 1), Param(system, 2)).raw);
}
template <ResultCode func(Core::System&, u32*, u64, u64, s64)>
// Used by WaitSynchronization
template <ResultCode func(Core::System&, s32*, u64, u64, s64)>
void SvcWrap64(Core::System& system) {
u32 param_1 = 0;
s32 param_1 = 0;
const u32 retval = func(system, &param_1, Param(system, 1), static_cast<u32>(Param(system, 2)),
static_cast<s64>(Param(system, 3)))
.raw;
@@ -276,18 +278,22 @@ void SvcWrap64(Core::System& system) {
FuncReturn(system, retval);
}
template <ResultCode func(Core::System&, u64, u32, s32, s64)>
// Used by WaitForAddress
template <ResultCode func(Core::System&, u64, Svc::ArbitrationType, s32, s64)>
void SvcWrap64(Core::System& system) {
FuncReturn(system, func(system, Param(system, 0), static_cast<u32>(Param(system, 1)),
static_cast<s32>(Param(system, 2)), static_cast<s64>(Param(system, 3)))
.raw);
FuncReturn(system,
func(system, Param(system, 0), static_cast<Svc::ArbitrationType>(Param(system, 1)),
static_cast<s32>(Param(system, 2)), static_cast<s64>(Param(system, 3)))
.raw);
}
template <ResultCode func(Core::System&, u64, u32, s32, s32)>
// Used by SignalToAddress
template <ResultCode func(Core::System&, u64, Svc::SignalType, s32, s32)>
void SvcWrap64(Core::System& system) {
FuncReturn(system, func(system, Param(system, 0), static_cast<u32>(Param(system, 1)),
static_cast<s32>(Param(system, 2)), static_cast<s32>(Param(system, 3)))
.raw);
FuncReturn(system,
func(system, Param(system, 0), static_cast<Svc::SignalType>(Param(system, 1)),
static_cast<s32>(Param(system, 2)), static_cast<s32>(Param(system, 3)))
.raw);
}
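Each of these templates is instantiated once per SVC signature and referenced from the kernel's dispatch table, which maps SVC numbers to the instantiated wrappers. A hypothetical entry (the real table lives in svc.cpp and its exact shape may differ):
// {0x35, SvcWrap64<SignalToAddress>, "SignalToAddress"},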
////////////////////////////////////////////////////////////////////////////////////////////////////
@@ -503,22 +509,23 @@ void SvcWrap32(Core::System& system) {
}
// Used by WaitForAddress32
template <ResultCode func(Core::System&, u32, u32, s32, u32, u32)>
template <ResultCode func(Core::System&, u32, Svc::ArbitrationType, s32, u32, u32)>
void SvcWrap32(Core::System& system) {
const u32 retval = func(system, static_cast<u32>(Param(system, 0)),
static_cast<u32>(Param(system, 1)), static_cast<s32>(Param(system, 2)),
static_cast<u32>(Param(system, 3)), static_cast<u32>(Param(system, 4)))
static_cast<Svc::ArbitrationType>(Param(system, 1)),
static_cast<s32>(Param(system, 2)), static_cast<u32>(Param(system, 3)),
static_cast<u32>(Param(system, 4)))
.raw;
FuncReturn(system, retval);
}
// Used by SignalToAddress32
template <ResultCode func(Core::System&, u32, u32, s32, s32)>
template <ResultCode func(Core::System&, u32, Svc::SignalType, s32, s32)>
void SvcWrap32(Core::System& system) {
const u32 retval =
func(system, static_cast<u32>(Param(system, 0)), static_cast<u32>(Param(system, 1)),
static_cast<s32>(Param(system, 2)), static_cast<s32>(Param(system, 3)))
.raw;
const u32 retval = func(system, static_cast<u32>(Param(system, 0)),
static_cast<Svc::SignalType>(Param(system, 1)),
static_cast<s32>(Param(system, 2)), static_cast<s32>(Param(system, 3)))
.raw;
FuncReturn(system, retval);
}
@@ -539,9 +546,9 @@ void SvcWrap32(Core::System& system) {
}
// Used by WaitSynchronization32
template <ResultCode func(Core::System&, u32, u32, s32, u32, Handle*)>
template <ResultCode func(Core::System&, u32, u32, s32, u32, s32*)>
void SvcWrap32(Core::System& system) {
u32 param_1 = 0;
s32 param_1 = 0;
const u32 retval = func(system, Param32(system, 0), Param32(system, 1), Param32(system, 2),
Param32(system, 3), &param_1)
.raw;

View File

@@ -1,116 +0,0 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/core.h"
#include "core/hle/kernel/errors.h"
#include "core/hle/kernel/handle_table.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/k_scoped_scheduler_lock_and_sleep.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/synchronization.h"
#include "core/hle/kernel/synchronization_object.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/kernel/time_manager.h"
namespace Kernel {
Synchronization::Synchronization(Core::System& system) : system{system} {}
void Synchronization::SignalObject(SynchronizationObject& obj) const {
auto& kernel = system.Kernel();
KScopedSchedulerLock lock(kernel);
if (obj.IsSignaled()) {
for (auto thread : obj.GetWaitingThreads()) {
if (thread->GetSchedulingStatus() == ThreadSchedStatus::Paused) {
if (thread->GetStatus() != ThreadStatus::WaitHLEEvent) {
ASSERT(thread->GetStatus() == ThreadStatus::WaitSynch);
ASSERT(thread->IsWaitingSync());
}
thread->SetSynchronizationResults(&obj, RESULT_SUCCESS);
thread->ResumeFromWait();
}
}
obj.ClearWaitingThreads();
}
}
std::pair<ResultCode, Handle> Synchronization::WaitFor(
std::vector<std::shared_ptr<SynchronizationObject>>& sync_objects, s64 nano_seconds) {
auto& kernel = system.Kernel();
auto* const thread = kernel.CurrentScheduler()->GetCurrentThread();
Handle event_handle = InvalidHandle;
{
KScopedSchedulerLockAndSleep lock(kernel, event_handle, thread, nano_seconds);
const auto itr =
std::find_if(sync_objects.begin(), sync_objects.end(),
[thread](const std::shared_ptr<SynchronizationObject>& object) {
return object->IsSignaled();
});
if (itr != sync_objects.end()) {
// We found a ready object, acquire it and set the result value
SynchronizationObject* object = itr->get();
object->Acquire(thread);
const u32 index = static_cast<s32>(std::distance(sync_objects.begin(), itr));
lock.CancelSleep();
return {RESULT_SUCCESS, index};
}
if (nano_seconds == 0) {
lock.CancelSleep();
return {RESULT_TIMEOUT, InvalidHandle};
}
if (thread->IsPendingTermination()) {
lock.CancelSleep();
return {ERR_THREAD_TERMINATING, InvalidHandle};
}
if (thread->IsSyncCancelled()) {
thread->SetSyncCancelled(false);
lock.CancelSleep();
return {ERR_SYNCHRONIZATION_CANCELED, InvalidHandle};
}
for (auto& object : sync_objects) {
object->AddWaitingThread(SharedFrom(thread));
}
thread->SetSynchronizationObjects(&sync_objects);
thread->SetSynchronizationResults(nullptr, RESULT_TIMEOUT);
thread->SetStatus(ThreadStatus::WaitSynch);
thread->SetWaitingSync(true);
}
thread->SetWaitingSync(false);
if (event_handle != InvalidHandle) {
auto& time_manager = kernel.TimeManager();
time_manager.UnscheduleTimeEvent(event_handle);
}
{
KScopedSchedulerLock lock(kernel);
ResultCode signaling_result = thread->GetSignalingResult();
SynchronizationObject* signaling_object = thread->GetSignalingObject();
thread->SetSynchronizationObjects(nullptr);
auto shared_thread = SharedFrom(thread);
for (auto& obj : sync_objects) {
obj->RemoveWaitingThread(shared_thread);
}
if (signaling_object != nullptr) {
const auto itr = std::find_if(
sync_objects.begin(), sync_objects.end(),
[signaling_object](const std::shared_ptr<SynchronizationObject>& object) {
return object.get() == signaling_object;
});
ASSERT(itr != sync_objects.end());
signaling_object->Acquire(thread);
const u32 index = static_cast<s32>(std::distance(sync_objects.begin(), itr));
return {signaling_result, index};
}
return {signaling_result, -1};
}
}
} // namespace Kernel

View File

@@ -1,44 +0,0 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <memory>
#include <utility>
#include <vector>
#include "core/hle/kernel/object.h"
#include "core/hle/result.h"
namespace Core {
class System;
} // namespace Core
namespace Kernel {
class SynchronizationObject;
/**
* The 'Synchronization' class is an interface for handling synchronization methods
* used by Synchronization objects and synchronization SVCs. This centralizes the
* processing of such operations.
*/
class Synchronization {
public:
explicit Synchronization(Core::System& system);
/// Signals a synchronization object, waking up all its waiting threads
void SignalObject(SynchronizationObject& obj) const;
/// Checks whether any of the given sync_objects is already signaled; if so,
/// returns Success and the index of the signaled object. Otherwise, the current
/// thread is put to sleep until either nano_seconds elapse or one of the
/// synchronization objects is signaled.
std::pair<ResultCode, Handle> WaitFor(
std::vector<std::shared_ptr<SynchronizationObject>>& sync_objects, s64 nano_seconds);
private:
Core::System& system;
};
} // namespace Kernel

View File

@@ -1,49 +0,0 @@
// Copyright 2014 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <algorithm>
#include "common/assert.h"
#include "common/common_types.h"
#include "common/logging/log.h"
#include "core/core.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/synchronization.h"
#include "core/hle/kernel/synchronization_object.h"
#include "core/hle/kernel/thread.h"
namespace Kernel {
SynchronizationObject::SynchronizationObject(KernelCore& kernel) : Object{kernel} {}
SynchronizationObject::~SynchronizationObject() = default;
void SynchronizationObject::Signal() {
kernel.Synchronization().SignalObject(*this);
}
void SynchronizationObject::AddWaitingThread(std::shared_ptr<Thread> thread) {
auto itr = std::find(waiting_threads.begin(), waiting_threads.end(), thread);
if (itr == waiting_threads.end())
waiting_threads.push_back(std::move(thread));
}
void SynchronizationObject::RemoveWaitingThread(std::shared_ptr<Thread> thread) {
auto itr = std::find(waiting_threads.begin(), waiting_threads.end(), thread);
// If a thread passed multiple handles to the same object,
// the kernel might attempt to remove the thread from the object's
// waiting threads list multiple times.
if (itr != waiting_threads.end())
waiting_threads.erase(itr);
}
void SynchronizationObject::ClearWaitingThreads() {
waiting_threads.clear();
}
const std::vector<std::shared_ptr<Thread>>& SynchronizationObject::GetWaitingThreads() const {
return waiting_threads;
}
} // namespace Kernel

View File

@@ -1,77 +0,0 @@
// Copyright 2014 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <atomic>
#include <memory>
#include <vector>
#include "core/hle/kernel/object.h"
namespace Kernel {
class KernelCore;
class Synchronization;
class Thread;
/// Class that represents a Kernel object that a thread can be waiting on
class SynchronizationObject : public Object {
public:
explicit SynchronizationObject(KernelCore& kernel);
~SynchronizationObject() override;
/**
* Check if the specified thread should wait until the object is available
* @param thread The thread about which we're deciding.
* @return True if the current thread should wait due to this object being unavailable
*/
virtual bool ShouldWait(const Thread* thread) const = 0;
/// Acquire/lock the object for the specified thread if it is available
virtual void Acquire(Thread* thread) = 0;
/// Signal this object
virtual void Signal();
virtual bool IsSignaled() const {
return is_signaled;
}
/**
* Add a thread to wait on this object
* @param thread Pointer to thread to add
*/
void AddWaitingThread(std::shared_ptr<Thread> thread);
/**
* Removes a thread from waiting on this object (e.g. if it was resumed already)
* @param thread Pointer to thread to remove
*/
void RemoveWaitingThread(std::shared_ptr<Thread> thread);
/// Get a const reference to the waiting threads list for debug use
const std::vector<std::shared_ptr<Thread>>& GetWaitingThreads() const;
void ClearWaitingThreads();
protected:
std::atomic_bool is_signaled{}; // Whether this sync object is currently signaled
private:
/// Threads waiting for this object to become available
std::vector<std::shared_ptr<Thread>> waiting_threads;
};
// Specialization of DynamicObjectCast for SynchronizationObjects
template <>
inline std::shared_ptr<SynchronizationObject> DynamicObjectCast<SynchronizationObject>(
std::shared_ptr<Object> object) {
if (object != nullptr && object->IsWaitable()) {
return std::static_pointer_cast<SynchronizationObject>(object);
}
return nullptr;
}
} // namespace Kernel
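The specialization above lets handle-table lookups down-cast a generic Object to a SynchronizationObject only when the object is actually waitable; otherwise the cast yields null. A sketch of the assumed usage:
// Assumed usage; GetGeneric stands in for a generic handle-table lookup.
std::shared_ptr<Object> object = handle_table.GetGeneric(handle);
if (auto sync = DynamicObjectCast<SynchronizationObject>(object)) {
    sync->Signal(); // safe: the object reported IsWaitable()
}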

View File

@@ -17,9 +17,11 @@
#include "core/hardware_properties.h"
#include "core/hle/kernel/errors.h"
#include "core/hle/kernel/handle_table.h"
#include "core/hle/kernel/k_condition_variable.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/k_scoped_scheduler_lock_and_sleep.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/memory/memory_layout.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/thread.h"
@@ -34,26 +36,19 @@
namespace Kernel {
bool Thread::ShouldWait(const Thread* thread) const {
return status != ThreadStatus::Dead;
}
bool Thread::IsSignaled() const {
return status == ThreadStatus::Dead;
return signaled;
}
void Thread::Acquire(Thread* thread) {
ASSERT_MSG(!ShouldWait(thread), "object unavailable!");
}
Thread::Thread(KernelCore& kernel) : SynchronizationObject{kernel} {}
Thread::Thread(KernelCore& kernel) : KSynchronizationObject{kernel} {}
Thread::~Thread() = default;
void Thread::Stop() {
{
KScopedSchedulerLock lock(kernel);
SetStatus(ThreadStatus::Dead);
Signal();
SetState(ThreadState::Terminated);
signaled = true;
NotifyAvailable();
kernel.GlobalHandleTable().Close(global_handle);
if (owner_process) {
@@ -67,59 +62,27 @@ void Thread::Stop() {
global_handle = 0;
}
void Thread::ResumeFromWait() {
void Thread::Wakeup() {
KScopedSchedulerLock lock(kernel);
switch (status) {
case ThreadStatus::Paused:
case ThreadStatus::WaitSynch:
case ThreadStatus::WaitHLEEvent:
case ThreadStatus::WaitSleep:
case ThreadStatus::WaitIPC:
case ThreadStatus::WaitMutex:
case ThreadStatus::WaitCondVar:
case ThreadStatus::WaitArb:
case ThreadStatus::Dormant:
break;
case ThreadStatus::Ready:
// The thread's wakeup callback must have already been cleared when the thread was first
// awoken.
ASSERT(hle_callback == nullptr);
// If the thread is waiting on multiple wait objects, it might be awoken more than once
// before actually resuming. We can ignore subsequent wakeups if the thread status has
// already been set to ThreadStatus::Ready.
return;
case ThreadStatus::Dead:
// This should never happen, as threads must complete before being stopped.
DEBUG_ASSERT_MSG(false, "Thread with object id {} cannot be resumed because it's DEAD.",
GetObjectId());
return;
}
SetStatus(ThreadStatus::Ready);
}
void Thread::OnWakeUp() {
KScopedSchedulerLock lock(kernel);
SetStatus(ThreadStatus::Ready);
SetState(ThreadState::Runnable);
}
ResultCode Thread::Start() {
KScopedSchedulerLock lock(kernel);
SetStatus(ThreadStatus::Ready);
SetState(ThreadState::Runnable);
return RESULT_SUCCESS;
}
void Thread::CancelWait() {
KScopedSchedulerLock lock(kernel);
if (GetSchedulingStatus() != ThreadSchedStatus::Paused || !is_waiting_on_sync) {
if (GetState() != ThreadState::Waiting || !is_cancellable) {
is_sync_cancelled = true;
return;
}
// TODO(Blinkhawk): Implement cancel of server session
is_sync_cancelled = false;
SetSynchronizationResults(nullptr, ERR_SYNCHRONIZATION_CANCELED);
SetStatus(ThreadStatus::Ready);
SetState(ThreadState::Runnable);
}
static void ResetThreadContext32(Core::ARM_Interface::ThreadContext32& context, u32 stack_top,
@@ -183,25 +146,24 @@ ResultVal<std::shared_ptr<Thread>> Thread::Create(Core::System& system, ThreadTy
std::shared_ptr<Thread> thread = std::make_shared<Thread>(kernel);
thread->thread_id = kernel.CreateNewThreadID();
thread->status = ThreadStatus::Dormant;
thread->thread_state = ThreadState::Initialized;
thread->entry_point = entry_point;
thread->stack_top = stack_top;
thread->disable_count = 1;
thread->tpidr_el0 = 0;
thread->nominal_priority = thread->current_priority = priority;
thread->current_priority = priority;
thread->base_priority = priority;
thread->lock_owner = nullptr;
thread->schedule_count = -1;
thread->last_scheduled_tick = 0;
thread->processor_id = processor_id;
thread->ideal_core = processor_id;
thread->affinity_mask.SetAffinity(processor_id, true);
thread->wait_objects = nullptr;
thread->mutex_wait_address = 0;
thread->condvar_wait_address = 0;
thread->wait_handle = 0;
thread->name = std::move(name);
thread->global_handle = kernel.GlobalHandleTable().Create(thread).Unwrap();
thread->owner_process = owner_process;
thread->type = type_flags;
thread->signaled = false;
if ((type_flags & THREADTYPE_IDLE) == 0) {
auto& scheduler = kernel.GlobalSchedulerContext();
scheduler.AddThread(thread);
@@ -226,153 +188,185 @@ ResultVal<std::shared_ptr<Thread>> Thread::Create(Core::System& system, ThreadTy
return MakeResult<std::shared_ptr<Thread>>(std::move(thread));
}
void Thread::SetPriority(u32 priority) {
KScopedSchedulerLock lock(kernel);
void Thread::SetBasePriority(u32 priority) {
ASSERT_MSG(priority <= THREADPRIO_LOWEST && priority >= THREADPRIO_HIGHEST,
"Invalid priority value.");
nominal_priority = priority;
UpdatePriority();
KScopedSchedulerLock lock(kernel);
// Change our base priority.
base_priority = priority;
// Perform a priority restoration.
RestorePriority(kernel, this);
}
void Thread::SetSynchronizationResults(SynchronizationObject* object, ResultCode result) {
void Thread::SetSynchronizationResults(KSynchronizationObject* object, ResultCode result) {
signaling_object = object;
signaling_result = result;
}
s32 Thread::GetSynchronizationObjectIndex(std::shared_ptr<SynchronizationObject> object) const {
ASSERT_MSG(!wait_objects->empty(), "Thread is not waiting for anything");
const auto match = std::find(wait_objects->rbegin(), wait_objects->rend(), object);
return static_cast<s32>(std::distance(match, wait_objects->rend()) - 1);
}
VAddr Thread::GetCommandBufferAddress() const {
// Offset from the start of TLS at which the IPC command buffer begins.
constexpr u64 command_header_offset = 0x80;
return GetTLSAddress() + command_header_offset;
}
void Thread::SetStatus(ThreadStatus new_status) {
if (new_status == status) {
return;
}
void Thread::SetState(ThreadState state) {
KScopedSchedulerLock sl(kernel);
switch (new_status) {
case ThreadStatus::Ready:
SetSchedulingStatus(ThreadSchedStatus::Runnable);
break;
case ThreadStatus::Dormant:
SetSchedulingStatus(ThreadSchedStatus::None);
break;
case ThreadStatus::Dead:
SetSchedulingStatus(ThreadSchedStatus::Exited);
break;
default:
SetSchedulingStatus(ThreadSchedStatus::Paused);
break;
}
// Clear debugging state
SetMutexWaitAddressForDebugging({});
SetWaitReasonForDebugging({});
status = new_status;
const ThreadState old_state = thread_state;
thread_state =
static_cast<ThreadState>((old_state & ~ThreadState::Mask) | (state & ThreadState::Mask));
if (thread_state != old_state) {
KScheduler::OnThreadStateChanged(kernel, this, old_state);
}
}
void Thread::AddMutexWaiter(std::shared_ptr<Thread> thread) {
if (thread->lock_owner.get() == this) {
// If the thread is already waiting for this thread to release the mutex, ensure that the
// waiters list is consistent and return without doing anything.
const auto iter = std::find(wait_mutex_threads.begin(), wait_mutex_threads.end(), thread);
ASSERT(iter != wait_mutex_threads.end());
return;
void Thread::AddWaiterImpl(Thread* thread) {
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
// Find the right spot to insert the waiter.
auto it = waiter_list.begin();
while (it != waiter_list.end()) {
if (it->GetPriority() > thread->GetPriority()) {
break;
}
it++;
}
// A thread can't wait on two different mutexes at the same time.
ASSERT(thread->lock_owner == nullptr);
// Keep track of how many kernel waiters we have.
if (Memory::IsKernelAddressKey(thread->GetAddressKey())) {
ASSERT((num_kernel_waiters++) >= 0);
}
// Ensure that the thread is not already in the list of mutex waiters
const auto iter = std::find(wait_mutex_threads.begin(), wait_mutex_threads.end(), thread);
ASSERT(iter == wait_mutex_threads.end());
// Keep the list in an ordered fashion
const auto insertion_point = std::find_if(
wait_mutex_threads.begin(), wait_mutex_threads.end(),
[&thread](const auto& entry) { return entry->GetPriority() > thread->GetPriority(); });
wait_mutex_threads.insert(insertion_point, thread);
thread->lock_owner = SharedFrom(this);
UpdatePriority();
// Insert the waiter.
waiter_list.insert(it, *thread);
thread->SetLockOwner(this);
}
void Thread::RemoveMutexWaiter(std::shared_ptr<Thread> thread) {
ASSERT(thread->lock_owner.get() == this);
void Thread::RemoveWaiterImpl(Thread* thread) {
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
// Ensure that the thread is in the list of mutex waiters
const auto iter = std::find(wait_mutex_threads.begin(), wait_mutex_threads.end(), thread);
ASSERT(iter != wait_mutex_threads.end());
// Keep track of how many kernel waiters we have.
if (Memory::IsKernelAddressKey(thread->GetAddressKey())) {
ASSERT((num_kernel_waiters--) > 0);
}
wait_mutex_threads.erase(iter);
thread->lock_owner = nullptr;
UpdatePriority();
// Remove the waiter.
waiter_list.erase(waiter_list.iterator_to(*thread));
thread->SetLockOwner(nullptr);
}
void Thread::UpdatePriority() {
// If any of the threads waiting on the mutex have a higher priority
// (taking into account priority inheritance), then this thread inherits
// that thread's priority.
u32 new_priority = nominal_priority;
if (!wait_mutex_threads.empty()) {
if (wait_mutex_threads.front()->current_priority < new_priority) {
new_priority = wait_mutex_threads.front()->current_priority;
void Thread::RestorePriority(KernelCore& kernel, Thread* thread) {
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
while (true) {
// We want to inherit priority where possible.
s32 new_priority = thread->GetBasePriority();
if (thread->HasWaiters()) {
new_priority = std::min(new_priority, thread->waiter_list.front().GetPriority());
}
// If the priority we would inherit is not different from ours, don't do anything.
if (new_priority == thread->GetPriority()) {
return;
}
// Ensure we don't violate condition variable red black tree invariants.
if (auto* cv_tree = thread->GetConditionVariableTree(); cv_tree != nullptr) {
BeforeUpdatePriority(kernel, cv_tree, thread);
}
// Change the priority.
const s32 old_priority = thread->GetPriority();
thread->SetPriority(new_priority);
// Restore the condition variable, if relevant.
if (auto* cv_tree = thread->GetConditionVariableTree(); cv_tree != nullptr) {
AfterUpdatePriority(kernel, cv_tree, thread);
}
// Update the scheduler.
KScheduler::OnThreadPriorityChanged(kernel, thread, old_priority);
// Keep the lock owner up to date.
Thread* lock_owner = thread->GetLockOwner();
if (lock_owner == nullptr) {
return;
}
// Update the thread in the lock owner's sorted list, and continue inheriting.
lock_owner->RemoveWaiterImpl(thread);
lock_owner->AddWaiterImpl(thread);
thread = lock_owner;
}
}
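RestorePriority walks the lock-owner chain iteratively: each pass recomputes one thread's effective priority and, if it changed, re-sorts that thread in its owner's waiter list before moving up the chain. The rule it enforces, as a toy standalone sketch (lower numeric value means higher priority on Horizon):
// Toy sketch of the inheritance rule, independent of the kernel types:
// effective priority = min(base priority, priority of the best waiter).
s32 EffectivePriority(s32 base_priority, const std::vector<s32>& waiter_priorities) {
    s32 result = base_priority;
    for (const s32 waiter : waiter_priorities) {
        result = std::min(result, waiter); // inherit any numerically lower priority
    }
    return result;
}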
void Thread::AddWaiter(Thread* thread) {
AddWaiterImpl(thread);
RestorePriority(kernel, this);
}
void Thread::RemoveWaiter(Thread* thread) {
RemoveWaiterImpl(thread);
RestorePriority(kernel, this);
}
Thread* Thread::RemoveWaiterByKey(s32* out_num_waiters, VAddr key) {
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
s32 num_waiters{};
Thread* next_lock_owner{};
auto it = waiter_list.begin();
while (it != waiter_list.end()) {
if (it->GetAddressKey() == key) {
Thread* thread = std::addressof(*it);
// Keep track of how many kernel waiters we have.
if (Memory::IsKernelAddressKey(thread->GetAddressKey())) {
ASSERT((num_kernel_waiters--) > 0);
}
it = waiter_list.erase(it);
// Update the next lock owner.
if (next_lock_owner == nullptr) {
next_lock_owner = thread;
next_lock_owner->SetLockOwner(nullptr);
} else {
next_lock_owner->AddWaiterImpl(thread);
}
num_waiters++;
} else {
it++;
}
}
if (new_priority == current_priority) {
return;
// Do priority updates, if we have a next owner.
if (next_lock_owner) {
RestorePriority(kernel, this);
RestorePriority(kernel, next_lock_owner);
}
if (GetStatus() == ThreadStatus::WaitCondVar) {
owner_process->RemoveConditionVariableThread(SharedFrom(this));
}
SetCurrentPriority(new_priority);
if (GetStatus() == ThreadStatus::WaitCondVar) {
owner_process->InsertConditionVariableThread(SharedFrom(this));
}
if (!lock_owner) {
return;
}
// Ensure that the thread is within the correct location in the waiting list.
auto old_owner = lock_owner;
lock_owner->RemoveMutexWaiter(SharedFrom(this));
old_owner->AddMutexWaiter(SharedFrom(this));
// Recursively update the priority of the thread that depends on the priority of this one.
lock_owner->UpdatePriority();
}
bool Thread::AllSynchronizationObjectsReady() const {
return std::none_of(wait_objects->begin(), wait_objects->end(),
[this](const std::shared_ptr<SynchronizationObject>& object) {
return object->ShouldWait(this);
});
}
bool Thread::InvokeHLECallback(std::shared_ptr<Thread> thread) {
ASSERT(hle_callback);
return hle_callback(std::move(thread));
// Return output.
*out_num_waiters = num_waiters;
return next_lock_owner;
}
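RemoveWaiterByKey detaches every waiter blocked on the given address key, elects the first one removed as the next lock owner, and re-parents the remaining waiters under it. A hypothetical call site in a mutex-unlock path (the real one belongs to KConditionVariable):
s32 num_waiters{};
Thread* next_owner = current_thread->RemoveWaiterByKey(&num_waiters, mutex_address);
u32 next_value{};
if (next_owner != nullptr) {
    next_value = next_owner->GetAddressKeyValue(); // tag stored when the waiter blocked
    if (num_waiters > 1) {
        next_value |= Svc::HandleWaitMask; // others are still queued on this key
    }
}
// The new value is then written back to the guest mutex word at mutex_address.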
ResultCode Thread::SetActivity(ThreadActivity value) {
KScopedSchedulerLock lock(kernel);
auto sched_status = GetSchedulingStatus();
auto sched_status = GetState();
if (sched_status != ThreadSchedStatus::Runnable && sched_status != ThreadSchedStatus::Paused) {
if (sched_status != ThreadState::Runnable && sched_status != ThreadState::Waiting) {
return ERR_INVALID_STATE;
}
if (IsPendingTermination()) {
if (IsTerminationRequested()) {
return RESULT_SUCCESS;
}
@@ -394,7 +388,8 @@ ResultCode Thread::Sleep(s64 nanoseconds) {
Handle event_handle{};
{
KScopedSchedulerLockAndSleep lock(kernel, event_handle, this, nanoseconds);
SetStatus(ThreadStatus::WaitSleep);
SetState(ThreadState::Waiting);
SetWaitReasonForDebugging(ThreadWaitReasonForDebugging::Sleep);
}
if (event_handle != InvalidHandle) {
@@ -405,34 +400,21 @@ ResultCode Thread::Sleep(s64 nanoseconds) {
}
void Thread::AddSchedulingFlag(ThreadSchedFlags flag) {
const u32 old_state = scheduling_state;
const auto old_state = GetRawState();
pausing_state |= static_cast<u32>(flag);
const u32 base_scheduling = static_cast<u32>(GetSchedulingStatus());
scheduling_state = base_scheduling | pausing_state;
const auto base_scheduling = GetState();
thread_state = base_scheduling | static_cast<ThreadState>(pausing_state);
KScheduler::OnThreadStateChanged(kernel, this, old_state);
}
void Thread::RemoveSchedulingFlag(ThreadSchedFlags flag) {
const u32 old_state = scheduling_state;
const auto old_state = GetRawState();
pausing_state &= ~static_cast<u32>(flag);
const u32 base_scheduling = static_cast<u32>(GetSchedulingStatus());
scheduling_state = base_scheduling | pausing_state;
const auto base_scheduling = GetState();
thread_state = base_scheduling | static_cast<ThreadState>(pausing_state);
KScheduler::OnThreadStateChanged(kernel, this, old_state);
}
void Thread::SetSchedulingStatus(ThreadSchedStatus new_status) {
const u32 old_state = scheduling_state;
scheduling_state = (scheduling_state & static_cast<u32>(ThreadSchedMasks::HighMask)) |
static_cast<u32>(new_status);
KScheduler::OnThreadStateChanged(kernel, this, old_state);
}
void Thread::SetCurrentPriority(u32 new_priority) {
const u32 old_priority = std::exchange(current_priority, new_priority);
KScheduler::OnThreadPriorityChanged(kernel, this, kernel.CurrentScheduler()->GetCurrentThread(),
old_priority);
}
ResultCode Thread::SetCoreAndAffinityMask(s32 new_core, u64 new_affinity_mask) {
KScopedSchedulerLock lock(kernel);
const auto HighestSetCore = [](u64 mask, u32 max_cores) {

View File

@@ -6,16 +6,21 @@
#include <array>
#include <functional>
#include <span>
#include <string>
#include <utility>
#include <vector>
#include <boost/intrusive/list.hpp>
#include "common/common_types.h"
#include "common/intrusive_red_black_tree.h"
#include "common/spin_lock.h"
#include "core/arm/arm_interface.h"
#include "core/hle/kernel/k_affinity_mask.h"
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/synchronization_object.h"
#include "core/hle/kernel/svc_common.h"
#include "core/hle/result.h"
namespace Common {
@@ -73,19 +78,24 @@ enum ThreadProcessorId : s32 {
(1 << THREADPROCESSORID_2) | (1 << THREADPROCESSORID_3)
};
enum class ThreadStatus {
Ready, ///< Ready to run
Paused, ///< Paused by SetThreadActivity or debug
WaitHLEEvent, ///< Waiting for hle event to finish
WaitSleep, ///< Waiting due to a SleepThread SVC
WaitIPC, ///< Waiting for the reply from an IPC request
WaitSynch, ///< Waiting due to WaitSynchronization
WaitMutex, ///< Waiting due to an ArbitrateLock svc
WaitCondVar, ///< Waiting due to an WaitProcessWideKey svc
WaitArb, ///< Waiting due to a SignalToAddress/WaitForAddress svc
Dormant, ///< Created but not yet made ready
Dead ///< Run to completion, or forcefully terminated
enum class ThreadState : u16 {
Initialized = 0,
Waiting = 1,
Runnable = 2,
Terminated = 3,
SuspendShift = 4,
Mask = (1 << SuspendShift) - 1,
ProcessSuspended = (1 << (0 + SuspendShift)),
ThreadSuspended = (1 << (1 + SuspendShift)),
DebugSuspended = (1 << (2 + SuspendShift)),
BacktraceSuspended = (1 << (3 + SuspendShift)),
InitSuspended = (1 << (4 + SuspendShift)),
SuspendFlagMask = ((1 << 5) - 1) << SuspendShift,
};
DECLARE_ENUM_FLAG_OPERATORS(ThreadState);
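ThreadState packs the lifecycle state into the low bits and the suspend flags above SuspendShift, so one field carries both and each half is recovered with a mask:
// Sketch: separating the packed halves (operators come from DECLARE_ENUM_FLAG_OPERATORS).
const auto raw = ThreadState::Waiting | ThreadState::ThreadSuspended;
const auto state = raw & ThreadState::Mask;              // ThreadState::Waiting
const auto suspend = raw & ThreadState::SuspendFlagMask; // ThreadState::ThreadSuspended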
enum class ThreadWakeupReason {
Signal, // The thread was woken up by WakeupAllWaitingThreads due to an object signal.
@@ -97,13 +107,6 @@ enum class ThreadActivity : u32 {
Paused = 1,
};
enum class ThreadSchedStatus : u32 {
None = 0,
Paused = 1,
Runnable = 2,
Exited = 3,
};
enum class ThreadSchedFlags : u32 {
ProcessPauseFlag = 1 << 4,
ThreadPauseFlag = 1 << 5,
@@ -111,13 +114,20 @@ enum class ThreadSchedFlags : u32 {
KernelInitPauseFlag = 1 << 8,
};
enum class ThreadSchedMasks : u32 {
LowMask = 0x000f,
HighMask = 0xfff0,
ForcePauseMask = 0x0070,
enum class ThreadWaitReasonForDebugging : u32 {
None, ///< Thread is not waiting
Sleep, ///< Thread is waiting due to a SleepThread SVC
IPC, ///< Thread is waiting for the reply from an IPC request
Synchronization, ///< Thread is waiting due to a WaitSynchronization SVC
ConditionVar, ///< Thread is waiting due to a WaitProcessWideKey SVC
Arbitration, ///< Thread is waiting due to a SignalToAddress/WaitForAddress SVC
Suspended, ///< Thread is waiting due to process suspension
};
class Thread final : public SynchronizationObject {
class Thread final : public KSynchronizationObject, public boost::intrusive::list_base_hook<> {
friend class KScheduler;
friend class Process;
public:
explicit Thread(KernelCore& kernel);
~Thread() override;
@@ -127,10 +137,6 @@ public:
using ThreadContext32 = Core::ARM_Interface::ThreadContext32;
using ThreadContext64 = Core::ARM_Interface::ThreadContext64;
using ThreadSynchronizationObjects = std::vector<std::shared_ptr<SynchronizationObject>>;
using HLECallback = std::function<bool(std::shared_ptr<Thread> thread)>;
/**
* Creates and returns a new thread. The new thread is immediately scheduled
* @param system The instance of the whole system
@@ -186,59 +192,54 @@ public:
return HANDLE_TYPE;
}
bool ShouldWait(const Thread* thread) const override;
void Acquire(Thread* thread) override;
bool IsSignaled() const override;
/**
* Gets the thread's current priority
* @return The current thread's priority
*/
u32 GetPriority() const {
[[nodiscard]] s32 GetPriority() const {
return current_priority;
}
/**
* Sets the thread's current priority.
* @param priority The new priority.
*/
void SetPriority(s32 priority) {
current_priority = priority;
}
/**
* Gets the thread's nominal priority.
* @return The current thread's nominal priority.
*/
u32 GetNominalPriority() const {
return nominal_priority;
[[nodiscard]] s32 GetBasePriority() const {
return base_priority;
}
/**
* Sets the thread's current priority
* @param priority The new priority
* Sets the thread's nominal priority.
* @param priority The new priority.
*/
void SetPriority(u32 priority);
/// Adds a thread to the list of threads that are waiting for a lock held by this thread.
void AddMutexWaiter(std::shared_ptr<Thread> thread);
/// Removes a thread from the list of threads that are waiting for a lock held by this thread.
void RemoveMutexWaiter(std::shared_ptr<Thread> thread);
/// Recalculates the current priority taking into account priority inheritance.
void UpdatePriority();
void SetBasePriority(u32 priority);
/// Changes the core that the thread is running or scheduled to run on.
ResultCode SetCoreAndAffinityMask(s32 new_core, u64 new_affinity_mask);
[[nodiscard]] ResultCode SetCoreAndAffinityMask(s32 new_core, u64 new_affinity_mask);
/**
* Gets the thread's thread ID
* @return The thread's ID
*/
u64 GetThreadID() const {
[[nodiscard]] u64 GetThreadID() const {
return thread_id;
}
/// Resumes a thread from waiting
void ResumeFromWait();
void OnWakeUp();
void Wakeup();
ResultCode Start();
virtual bool IsSignaled() const override;
/// Cancels a waiting operation that this thread may or may not be within.
///
/// When the thread is within a waiting state, this will set the thread's
@@ -247,30 +248,21 @@ public:
///
void CancelWait();
void SetSynchronizationResults(SynchronizationObject* object, ResultCode result);
void SetSynchronizationResults(KSynchronizationObject* object, ResultCode result);
SynchronizationObject* GetSignalingObject() const {
return signaling_object;
void SetSyncedObject(KSynchronizationObject* object, ResultCode result) {
SetSynchronizationResults(object, result);
}
ResultCode GetWaitResult(KSynchronizationObject** out) const {
*out = signaling_object;
return signaling_result;
}
ResultCode GetSignalingResult() const {
return signaling_result;
}
/**
* Retrieves the index that this particular object occupies in the list of objects
* that the thread passed to WaitSynchronization, starting the search from the last element.
*
* It is used to set the output index of WaitSynchronization when the thread is awakened.
*
* When a thread wakes up due to an object signal, the kernel will use the index of the last
* matching object in the wait objects list in case of having multiple instances of the same
* object in the list.
*
* @param object Object to query the index of.
*/
s32 GetSynchronizationObjectIndex(std::shared_ptr<SynchronizationObject> object) const;
/**
* Stops a thread, invalidating it from further use
*/
@@ -341,18 +333,22 @@ public:
std::shared_ptr<Common::Fiber>& GetHostContext();
ThreadStatus GetStatus() const {
return status;
ThreadState GetState() const {
return thread_state & ThreadState::Mask;
}
void SetStatus(ThreadStatus new_status);
ThreadState GetRawState() const {
return thread_state;
}
void SetState(ThreadState state);
s64 GetLastScheduledTick() const {
return this->last_scheduled_tick;
return last_scheduled_tick;
}
void SetLastScheduledTick(s64 tick) {
this->last_scheduled_tick = tick;
last_scheduled_tick = tick;
}
u64 GetTotalCPUTimeTicks() const {
@@ -387,98 +383,18 @@ public:
return owner_process;
}
const ThreadSynchronizationObjects& GetSynchronizationObjects() const {
return *wait_objects;
}
void SetSynchronizationObjects(ThreadSynchronizationObjects* objects) {
wait_objects = objects;
}
void ClearSynchronizationObjects() {
for (const auto& waiting_object : *wait_objects) {
waiting_object->RemoveWaitingThread(SharedFrom(this));
}
wait_objects->clear();
}
/// Determines whether all the objects this thread is waiting on are ready.
bool AllSynchronizationObjectsReady() const;
const MutexWaitingThreads& GetMutexWaitingThreads() const {
return wait_mutex_threads;
}
Thread* GetLockOwner() const {
return lock_owner.get();
return lock_owner;
}
void SetLockOwner(std::shared_ptr<Thread> owner) {
lock_owner = std::move(owner);
void SetLockOwner(Thread* owner) {
lock_owner = owner;
}
VAddr GetCondVarWaitAddress() const {
return condvar_wait_address;
}
void SetCondVarWaitAddress(VAddr address) {
condvar_wait_address = address;
}
VAddr GetMutexWaitAddress() const {
return mutex_wait_address;
}
void SetMutexWaitAddress(VAddr address) {
mutex_wait_address = address;
}
Handle GetWaitHandle() const {
return wait_handle;
}
void SetWaitHandle(Handle handle) {
wait_handle = handle;
}
VAddr GetArbiterWaitAddress() const {
return arb_wait_address;
}
void SetArbiterWaitAddress(VAddr address) {
arb_wait_address = address;
}
bool HasHLECallback() const {
return hle_callback != nullptr;
}
void SetHLECallback(HLECallback callback) {
hle_callback = std::move(callback);
}
void SetHLETimeEvent(Handle time_event) {
hle_time_event = time_event;
}
void SetHLESyncObject(SynchronizationObject* object) {
hle_object = object;
}
Handle GetHLETimeEvent() const {
return hle_time_event;
}
SynchronizationObject* GetHLESyncObject() const {
return hle_object;
}
void InvalidateHLECallback() {
SetHLECallback(nullptr);
}
bool InvokeHLECallback(std::shared_ptr<Thread> thread);
u32 GetIdealCore() const {
return ideal_core;
}
@@ -493,20 +409,11 @@ public:
ResultCode Sleep(s64 nanoseconds);
s64 GetYieldScheduleCount() const {
return this->schedule_count;
return schedule_count;
}
void SetYieldScheduleCount(s64 count) {
this->schedule_count = count;
}
ThreadSchedStatus GetSchedulingStatus() const {
return static_cast<ThreadSchedStatus>(scheduling_state &
static_cast<u32>(ThreadSchedMasks::LowMask));
}
bool IsRunnable() const {
return scheduling_state == static_cast<u32>(ThreadSchedStatus::Runnable);
schedule_count = count;
}
bool IsRunning() const {
@@ -517,36 +424,32 @@ public:
is_running = value;
}
bool IsSyncCancelled() const {
bool IsWaitCancelled() const {
return is_sync_cancelled;
}
void SetSyncCancelled(bool value) {
is_sync_cancelled = value;
void ClearWaitCancelled() {
is_sync_cancelled = false;
}
Handle GetGlobalHandle() const {
return global_handle;
}
bool IsWaitingForArbitration() const {
return waiting_for_arbitration;
bool IsCancellable() const {
return is_cancellable;
}
void WaitForArbitration(bool set) {
waiting_for_arbitration = set;
void SetCancellable() {
is_cancellable = true;
}
bool IsWaitingSync() const {
return is_waiting_on_sync;
void ClearCancellable() {
is_cancellable = false;
}
void SetWaitingSync(bool is_waiting) {
is_waiting_on_sync = is_waiting;
}
bool IsPendingTermination() const {
return will_be_terminated || GetSchedulingStatus() == ThreadSchedStatus::Exited;
bool IsTerminationRequested() const {
return will_be_terminated || GetRawState() == ThreadState::Terminated;
}
bool IsPaused() const {
@@ -578,21 +481,21 @@ public:
constexpr QueueEntry() = default;
constexpr void Initialize() {
this->prev = nullptr;
this->next = nullptr;
prev = nullptr;
next = nullptr;
}
constexpr Thread* GetPrev() const {
return this->prev;
return prev;
}
constexpr Thread* GetNext() const {
return this->next;
return next;
}
constexpr void SetPrev(Thread* thread) {
this->prev = thread;
prev = thread;
}
constexpr void SetNext(Thread* thread) {
this->next = thread;
next = thread;
}
private:
@@ -601,11 +504,11 @@ public:
};
QueueEntry& GetPriorityQueueEntry(s32 core) {
return this->per_core_priority_queue_entry[core];
return per_core_priority_queue_entry[core];
}
const QueueEntry& GetPriorityQueueEntry(s32 core) const {
return this->per_core_priority_queue_entry[core];
return per_core_priority_queue_entry[core];
}
s32 GetDisableDispatchCount() const {
@@ -622,24 +525,170 @@ public:
disable_count--;
}
private:
friend class GlobalSchedulerContext;
friend class KScheduler;
friend class Process;
void SetWaitReasonForDebugging(ThreadWaitReasonForDebugging reason) {
wait_reason_for_debugging = reason;
}
void SetSchedulingStatus(ThreadSchedStatus new_status);
[[nodiscard]] ThreadWaitReasonForDebugging GetWaitReasonForDebugging() const {
return wait_reason_for_debugging;
}
void SetWaitObjectsForDebugging(const std::span<KSynchronizationObject*>& objects) {
wait_objects_for_debugging.clear();
wait_objects_for_debugging.reserve(objects.size());
for (const auto& object : objects) {
wait_objects_for_debugging.emplace_back(object);
}
}
[[nodiscard]] const std::vector<KSynchronizationObject*>& GetWaitObjectsForDebugging() const {
return wait_objects_for_debugging;
}
void SetMutexWaitAddressForDebugging(VAddr address) {
mutex_wait_address_for_debugging = address;
}
[[nodiscard]] VAddr GetMutexWaitAddressForDebugging() const {
return mutex_wait_address_for_debugging;
}
void AddWaiter(Thread* thread);
void RemoveWaiter(Thread* thread);
[[nodiscard]] Thread* RemoveWaiterByKey(s32* out_num_waiters, VAddr key);
[[nodiscard]] VAddr GetAddressKey() const {
return address_key;
}
[[nodiscard]] u32 GetAddressKeyValue() const {
return address_key_value;
}
void SetAddressKey(VAddr key) {
address_key = key;
}
void SetAddressKey(VAddr key, u32 val) {
address_key = key;
address_key_value = val;
}
private:
static constexpr size_t PriorityInheritanceCountMax = 10;
union SyncObjectBuffer {
std::array<KSynchronizationObject*, Svc::ArgumentHandleCountMax> sync_objects{};
std::array<Handle,
Svc::ArgumentHandleCountMax*(sizeof(KSynchronizationObject*) / sizeof(Handle))>
handles;
constexpr SyncObjectBuffer() {}
};
static_assert(sizeof(SyncObjectBuffer::sync_objects) == sizeof(SyncObjectBuffer::handles));
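SyncObjectBuffer deliberately aliases an array of object pointers with an array of handles of the same total size, so one stack buffer can first receive the guest's raw handles and then be overwritten in place with the resolved pointers (assumed usage; the consumer is WaitSynchronization). Iterating in reverse keeps each unread handle ahead of the larger pointer being written:
// Assumed usage pattern, resolving in reverse so no unread handle is clobbered:
// memory.ReadBlock(user_handles, buffer.handles.data(), num_handles * sizeof(Handle));
// for (s32 i = num_handles - 1; i >= 0; --i) {
//     buffer.sync_objects[i] = handle_table.Get<KSynchronizationObject>(buffer.handles[i]).get();
// }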
struct ConditionVariableComparator {
struct LightCompareType {
u64 cv_key{};
s32 priority{};
[[nodiscard]] constexpr u64 GetConditionVariableKey() const {
return cv_key;
}
[[nodiscard]] constexpr s32 GetPriority() const {
return priority;
}
};
template <typename T>
requires(
std::same_as<T, Thread> ||
std::same_as<T, LightCompareType>) static constexpr int Compare(const T& lhs,
const Thread& rhs) {
const uintptr_t l_key = lhs.GetConditionVariableKey();
const uintptr_t r_key = rhs.GetConditionVariableKey();
if (l_key < r_key) {
// Sort first by key
return -1;
} else if (l_key == r_key && lhs.GetPriority() < rhs.GetPriority()) {
// And then by priority.
return -1;
} else {
return 1;
}
}
};
Common::IntrusiveRedBlackTreeNode condvar_arbiter_tree_node{};
using ConditionVariableThreadTreeTraits =
Common::IntrusiveRedBlackTreeMemberTraitsDeferredAssert<&Thread::condvar_arbiter_tree_node>;
using ConditionVariableThreadTree =
ConditionVariableThreadTreeTraits::TreeType<ConditionVariableComparator>;
public:
using ConditionVariableThreadTreeType = ConditionVariableThreadTree;
[[nodiscard]] uintptr_t GetConditionVariableKey() const {
return condvar_key;
}
[[nodiscard]] uintptr_t GetAddressArbiterKey() const {
return condvar_key;
}
void SetConditionVariable(ConditionVariableThreadTree* tree, VAddr address, uintptr_t cv_key,
u32 value) {
condvar_tree = tree;
condvar_key = cv_key;
address_key = address;
address_key_value = value;
}
void ClearConditionVariable() {
condvar_tree = nullptr;
}
[[nodiscard]] bool IsWaitingForConditionVariable() const {
return condvar_tree != nullptr;
}
void SetAddressArbiter(ConditionVariableThreadTree* tree, uintptr_t address) {
condvar_tree = tree;
condvar_key = address;
}
void ClearAddressArbiter() {
condvar_tree = nullptr;
}
[[nodiscard]] bool IsWaitingForAddressArbiter() const {
return condvar_tree != nullptr;
}
[[nodiscard]] ConditionVariableThreadTree* GetConditionVariableTree() const {
return condvar_tree;
}
[[nodiscard]] bool HasWaiters() const {
return !waiter_list.empty();
}
private:
void AddSchedulingFlag(ThreadSchedFlags flag);
void RemoveSchedulingFlag(ThreadSchedFlags flag);
void SetCurrentPriority(u32 new_priority);
void AddWaiterImpl(Thread* thread);
void RemoveWaiterImpl(Thread* thread);
static void RestorePriority(KernelCore& kernel, Thread* thread);
Common::SpinLock context_guard{};
ThreadContext32 context_32{};
ThreadContext64 context_64{};
std::shared_ptr<Common::Fiber> host_context{};
ThreadStatus status = ThreadStatus::Dormant;
u32 scheduling_state = 0;
ThreadState thread_state = ThreadState::Initialized;
u64 thread_id = 0;
@@ -652,11 +701,11 @@ private:
/// Nominal thread priority, as set by the emulated application.
/// The nominal priority is the thread priority without priority
/// inheritance taken into account.
- u32 nominal_priority = 0;
+ s32 base_priority{};
/// Current thread priority. This may change over the course of the
/// thread's lifetime in order to facilitate priority inheritance.
- u32 current_priority = 0;
+ s32 current_priority{};
u64 total_cpu_time_ticks = 0; ///< Total CPU running ticks.
s64 schedule_count{};
@@ -671,37 +720,27 @@ private:
Process* owner_process;
/// Objects that the thread is waiting on, in the same order as they were
- /// passed to WaitSynchronization.
- ThreadSynchronizationObjects* wait_objects;
+ /// passed to WaitSynchronization. This is used for debugging only.
+ std::vector<KSynchronizationObject*> wait_objects_for_debugging;
- SynchronizationObject* signaling_object;
+ /// The current mutex wait address. This is used for debugging only.
+ VAddr mutex_wait_address_for_debugging{};
+ /// The reason the thread is waiting. This is used for debugging only.
+ ThreadWaitReasonForDebugging wait_reason_for_debugging{};
+ KSynchronizationObject* signaling_object;
ResultCode signaling_result{RESULT_SUCCESS};
/// List of threads that are waiting for a mutex that is held by this thread.
MutexWaitingThreads wait_mutex_threads;
/// Thread that owns the lock that this thread is waiting for.
- std::shared_ptr<Thread> lock_owner;
- /// If waiting on a ConditionVariable, this is the ConditionVariable address
- VAddr condvar_wait_address = 0;
- /// If waiting on a Mutex, this is the mutex address
- VAddr mutex_wait_address = 0;
- /// The handle used to wait for the mutex.
- Handle wait_handle = 0;
- /// If waiting for an AddressArbiter, this is the address being waited on.
- VAddr arb_wait_address{0};
- bool waiting_for_arbitration{};
+ Thread* lock_owner{};
/// Handle used as userdata to reference this object when inserting into the CoreTiming queue.
Handle global_handle = 0;
/// Callback for HLE Events
HLECallback hle_callback;
Handle hle_time_event;
SynchronizationObject* hle_object;
KScheduler* scheduler = nullptr;
std::array<QueueEntry, Core::Hardware::NUM_CPU_CORES> per_core_priority_queue_entry{};
@@ -714,7 +753,7 @@ private:
u32 pausing_state = 0;
bool is_running = false;
bool is_waiting_on_sync = false;
bool is_cancellable = false;
bool is_sync_cancelled = false;
bool is_continuous_on_svc = false;
@@ -725,6 +764,18 @@ private:
bool was_running = false;
+ bool signaled{};
+ ConditionVariableThreadTree* condvar_tree{};
+ uintptr_t condvar_key{};
+ VAddr address_key{};
+ u32 address_key_value{};
+ s32 num_kernel_waiters{};
+ using WaiterList = boost::intrusive::list<Thread>;
+ WaiterList waiter_list{};
+ WaiterList pinned_waiter_list{};
std::string name;
};
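A note on the base_priority/current_priority pair among the members above: base_priority is what the application requested, while current_priority is the effective value after boosting by waiters, and RestorePriority() recomputes it. That function's body is not part of this diff; the following is only a rough sketch of the classic priority-inheritance computation such a function performs, under the Horizon convention that a numerically lower value means higher priority (SimpleThread and RecomputePriority are illustrative names, not yuzu's):

#include <algorithm>
#include <vector>

struct SimpleThread {
    int base_priority{};                // As set by the application.
    int current_priority{};             // Effective priority after inheritance.
    std::vector<SimpleThread*> waiters; // Threads blocked on a lock this thread holds.
};

void RecomputePriority(SimpleThread& t) {
    int best = t.base_priority;
    for (const SimpleThread* w : t.waiters) {
        // Inherit the strongest waiter priority (lower value = higher priority).
        best = std::min(best, w->current_priority);
    }
    t.current_priority = best;
    // A full implementation propagates the change up the chain of lock owners,
    // which is presumably why the class caps work with PriorityInheritanceCountMax.
}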


@@ -18,12 +18,10 @@ TimeManager::TimeManager(Core::System& system_) : system{system_} {
time_manager_event_type = Core::Timing::CreateEvent(
"Kernel::TimeManagerCallback",
[this](std::uintptr_t thread_handle, std::chrono::nanoseconds) {
+ const KScopedSchedulerLock lock(system.Kernel());
+ const auto proper_handle = static_cast<Handle>(thread_handle);
std::shared_ptr<Thread> thread;
{
std::lock_guard lock{mutex};
- const auto proper_handle = static_cast<Handle>(thread_handle);
if (cancelled_events[proper_handle]) {
return;
}
@@ -32,7 +30,7 @@ TimeManager::TimeManager(Core::System& system_) : system{system_} {
if (thread) {
// Thread can be null if process has exited
- thread->OnWakeUp();
+ thread->Wakeup();
}
});
}
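The rewritten callback above takes the scheduler lock before resolving the handle, then consults cancelled_events under the mutex so a timeout that was unscheduled while in flight is ignored. A generic sketch of that cancellable-timeout pattern, where TimeoutRegistry and its members are illustrative rather than yuzu's API:

#include <cstdint>
#include <mutex>
#include <unordered_map>

using Handle = std::uint32_t;

class TimeoutRegistry {
public:
    // Runs when a scheduled timeout fires.
    void OnTimerFired(Handle handle) {
        {
            std::lock_guard lock{mutex};
            if (cancelled_events[handle]) {
                return; // Wait already completed; drop the stale timeout.
            }
        }
        // ... resolve the thread from the handle and wake it here ...
    }

    // Runs when a wait completes early and its timeout must not fire.
    void Cancel(Handle handle) {
        std::lock_guard lock{mutex};
        cancelled_events[handle] = true;
    }

private:
    std::mutex mutex;
    std::unordered_map<Handle, bool> cancelled_events;
};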
@@ -42,8 +40,7 @@ void TimeManager::ScheduleTimeEvent(Handle& event_handle, Thread* timetask, s64
event_handle = timetask->GetGlobalHandle();
if (nanoseconds > 0) {
ASSERT(timetask);
- ASSERT(timetask->GetStatus() != ThreadStatus::Ready);
- ASSERT(timetask->GetStatus() != ThreadStatus::WaitMutex);
+ ASSERT(timetask->GetState() != ThreadState::Runnable);
system.CoreTiming().ScheduleEvent(std::chrono::nanoseconds{nanoseconds},
time_manager_event_type, event_handle);
} else {


@@ -534,7 +534,7 @@ private:
rb.Push(RESULT_SUCCESS);
}
- Common::UUID user_id;
+ Common::UUID user_id{Common::INVALID_UUID};
};
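The change above swaps a default-initialized member for a value-initialized one: without the brace initializer, user_id holds indeterminate bytes until a caller assigns it, so an early read returns garbage. A tiny sketch of the difference, with UUID128 and INVALID_UUID as stand-ins for Common::UUID and Common::INVALID_UUID:

#include <array>
#include <cstdint>

using UUID128 = std::array<std::uint64_t, 2>;
constexpr UUID128 INVALID_UUID{{0, 0}};

struct Session {
    UUID128 user_id{INVALID_UUID}; // Always starts as the well-defined invalid UUID.
    UUID128 raw_id;                // Indeterminate until explicitly assigned.
};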
// 6.0.0+


@@ -227,17 +227,17 @@ void ProfileManager::CloseUser(UUID uuid) {
/// Gets all valid user ids on the system
UserIDArray ProfileManager::GetAllUsers() const {
- UserIDArray output;
- std::transform(profiles.begin(), profiles.end(), output.begin(),
-                [](const ProfileInfo& p) { return p.user_uuid; });
+ UserIDArray output{};
+ std::ranges::transform(profiles, output.begin(),
+                        [](const ProfileInfo& p) { return p.user_uuid; });
return output;
}
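The change above replaces the iterator-pair std::transform with the C++20 range overload and value-initializes the output array, so any slots the transform does not write stay zeroed. A small sketch of the equivalence, with int elements standing in for ProfileInfo and UUID:

#include <algorithm>
#include <array>
#include <vector>

int main() {
    const std::vector<int> profiles{1, 2, 3};
    std::array<int, 8> output{}; // Value-initialized: unwritten slots stay 0.

    // Iterator-pair form (before the change):
    std::transform(profiles.begin(), profiles.end(), output.begin(),
                   [](int p) { return p * 10; });

    // Range form (after the change): same result, less noise.
    std::ranges::transform(profiles, output.begin(), [](int p) { return p * 10; });
}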
/// Gets all open users on the system and zeroes out the rest of the data. This is
/// specifically needed for GetOpenUsers, whose callers expect the unused portion of the
/// output buffer to be zeroed.
UserIDArray ProfileManager::GetOpenUsers() const {
- UserIDArray output;
- std::transform(profiles.begin(), profiles.end(), output.begin(), [](const ProfileInfo& p) {
+ UserIDArray output{};
+ std::ranges::transform(profiles, output.begin(), [](const ProfileInfo& p) {
if (p.is_open)
return p.user_uuid;
return UUID{Common::INVALID_UUID};

Some files were not shown because too many files have changed in this diff.