Compare commits

...

383 Commits

Author SHA1 Message Date
yuzubot
846462e41e "Merge Tagged PR 1340" 2020-07-01 12:01:16 +00:00
yuzubot
c4f9c072cb "Merge Tagged PR 1703" 2020-07-01 12:01:15 +00:00
David
15a04fb704 Merge pull request #4217 from lioncash/prototype
key_manager: Make use of canonical deleted operator=
2020-07-01 16:13:14 +10:00
LC
0b954a3305 Merge pull request #4208 from jbeich/freebsd
common: unbreak build on BSDs
2020-07-01 00:34:20 -04:00
Lioncash
fb13f053bb key_manager: Correct casing of instance()
Our codebase uppercases member function names.
2020-07-01 00:28:50 -04:00
David
beb172e9fc Merge pull request #4209 from jbeich/webengine
cmake: unbreak YUZU_USE_QT_WEB_ENGINE without YUZU_USE_BUNDLED_QT
2020-07-01 14:25:47 +10:00
Lioncash
c91710a82f key_manager: Delete move operations
Prevents the singleton from being moved from.
2020-07-01 00:24:38 -04:00
Lioncash
00a1d106bd key_manager: Make use of canonical deleted operator=
operator= typically returns a reference, it's not void.

While we're at it, we can correct the parameter formatting to adhere to the
codebase.
2020-07-01 00:21:31 -04:00
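For illustration of the two key_manager commits above, a minimal sketch of the convention being adopted (not the actual yuzu code; names are illustrative): a deleted copy assignment is still declared with the canonical reference return type, and deleting the move operations keeps the singleton from being moved from.

    // Hypothetical sketch of the pattern described above.
    class KeyManager {
    public:
        KeyManager(const KeyManager&) = delete;
        // Canonical form: even a deleted assignment returns a reference, not void.
        KeyManager& operator=(const KeyManager&) = delete;

        // Deleting the move operations prevents the singleton from being moved from.
        KeyManager(KeyManager&&) = delete;
        KeyManager& operator=(KeyManager&&) = delete;
    };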
David
3bb63bc0b3 Merge pull request #3967 from FearlessTobi/keys-singleton
crypto: Make KeyManager a singleton class
2020-07-01 14:16:26 +10:00
bunnei
c6b0353c4d Merge pull request #4153 from ogniK5377/prepo-multibuf
prepo: Don't read extra buffer from report unless passed
2020-06-30 22:37:13 -04:00
bunnei
424540d9e8 Merge pull request #4063 from FreddyFunk/game-version-in-title
Add game version to window title
2020-06-30 21:42:33 -04:00
bunnei
f1b1238e2d Merge pull request #4166 from VolcaEM/quickstart-faq
Add "Open Quickstart Guide" and "FAQ" buttons to the Help menu
2020-06-30 19:03:47 -04:00
Jan Beich
3b1683a152 common: switch to nullptr for sysctl's empty new value 2020-06-30 23:00:18 +00:00
Fernando Sahmkow
a4f48efea4 Merge pull request #4176 from ReinUsesLisp/compatible-formats
texture_cache: Check format compatibility before copying
2020-06-30 15:36:13 -04:00
Fernando Sahmkow
977a3ab352 Merge pull request #4157 from ReinUsesLisp/unified-turing
gl_device: Enable NV_vertex_buffer_unified_memory on Turing devices
2020-06-30 14:36:51 -04:00
Rodrigo Locatti
d217017c9e Merge pull request #4191 from Morph1984/vertex-formats
maxwell_to_gl/vk: Reorder vertex formats
2020-06-30 03:30:00 -03:00
David
7c970132b5 macro: Add support for "middle methods" on the code cache (#4112)
Macro code is just uploaded sequentially from a starting address, however that does not mean the entry point for the macro is at that address. This PR adds preliminary support for executing macros in the middle of our cached code.
2020-06-30 02:32:24 -03:00
bunnei
fa8e35c49f Merge pull request #4182 from Kewlan/fullscreen-hotkey-fix
hotkeys: Fix issues caused when changing the fullscreen hotkey
2020-06-29 23:11:57 -04:00
Jan Beich
dda90ce1c2 cmake: depend on WebEngine with system Qt
CMake Error at src/yuzu/CMakeLists.txt:7 (add_executable):
  Target "yuzu" links to target "Qt5::WebEngineCore" but the target was not
  found.  Perhaps a find_package() call is missing for an IMPORTED target, or
  an ALIAS target is missing?
2020-06-29 23:52:45 +00:00
Jan Beich
e6085ea35f common: add sysconf() fallback
src/common/memory_detect.cpp:15:10: fatal error: 'sys/sysinfo.h' file not found
 #include <sys/sysinfo.h>
          ^~~~~~~~~~~~~~~
2020-06-29 22:41:22 +00:00
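A minimal sketch of that kind of fallback, assuming only sysconf() is available (this is not the exact yuzu code; the helper name is illustrative):

    #include <unistd.h>
    #include <cstdint>

    // Derive total physical memory without <sys/sysinfo.h>.
    std::uint64_t GetTotalPhysicalMemory() {
        const long pages = sysconf(_SC_PHYS_PAGES);
        const long page_size = sysconf(_SC_PAGE_SIZE);
        if (pages < 0 || page_size < 0) {
            return 0;  // sysconf does not support these names on this platform
        }
        return static_cast<std::uint64_t>(pages) * static_cast<std::uint64_t>(page_size);
    }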
Morph
10eca7f651 maxwell_to_gl: Rename VertexType() to VertexFormat() 2020-06-29 11:48:38 -04:00
Rodrigo Locatti
f84cbf6429 Merge pull request #4140 from ReinUsesLisp/validation-layers
renderer_vulkan: Update validation layer name and test before enabling
2020-06-29 02:12:38 -03:00
bunnei
b05795d704 Merge pull request #3955 from FernandoS27/prometheus-2b
Remake Kernel Scheduling, CPU Management & Boot Management (Prometheus)
2020-06-28 12:37:50 -04:00
bunnei
8596a12772 Merge pull request #4196 from ogniK5377/nrr-nro-fixes
ldr: Cleanup NRO & NRR structs
2020-06-28 11:55:55 -04:00
David Marcec
db824b59c8 ldr: Cleanup NRO & NRR structs 2020-06-28 20:54:37 +10:00
David
d67c7d9a82 Merge pull request #4026 from VolcaEM/ldr
ldr: Update NRR/NRO structs
2020-06-28 20:46:42 +10:00
David
4a5f6c03b0 Merge pull request #4184 from VolcaEM/patch-9
grc: Update function table
2020-06-28 18:49:15 +10:00
David
d3a306b7a6 Merge pull request #4185 from VolcaEM/patch-10
lbl: Update function table
2020-06-28 18:48:54 +10:00
David
477979dd43 Merge pull request #4186 from VolcaEM/patch-11
ldn: Update function table
2020-06-28 18:48:28 +10:00
David
b478b61dcf Merge pull request #4187 from VolcaEM/patch-12
mig: Update function table
2020-06-28 18:48:15 +10:00
David
00aa9f6a53 Merge pull request #4188 from VolcaEM/patch-13
mm: Update function table
2020-06-28 18:47:55 +10:00
David
26e243d2d7 Merge pull request #4189 from VolcaEM/patch-14
ncm: Update function table
2020-06-28 18:47:27 +10:00
David
bd590895cf Merge pull request #4190 from VolcaEM/patch-15
nfc: Update function table
2020-06-28 18:47:07 +10:00
David
e978f05ed1 Merge pull request #4183 from VolcaEM/patch-8
friend: Update function table
2020-06-28 18:46:40 +10:00
Morph
4a35df337b maxwell_to_vk: Reorder vertex formats and add A2B10G10R10 for all types except float 2020-06-28 02:57:10 -04:00
Morph
78d80d99a0 maxwell_to_gl: Add 32 bit component sizes to (un)signed scaled formats
Add 32 bit component sizes to (un)signed scaled formats and group (un)signed normalized, scaled, and integer formats together.
2020-06-28 02:51:13 -04:00
Fernando Sahmkow
2f8947583f Core/Common: Address Feedback. 2020-06-27 18:20:06 -04:00
Fernando Sahmkow
e486c66850 NvFlinger: Clang Format. 2020-06-27 11:36:30 -04:00
Fernando Sahmkow
626cc44d7a Build System: Fix GCC & MINGW Build. 2020-06-27 11:36:28 -04:00
Fernando Sahmkow
4105f38022 SVC: Implement 32-bits wrappers and update Dynarmic. 2020-06-27 11:36:27 -04:00
Fernando Sahmkow
ce350e7ce0 SVC: Add GetCurrentProcessorNumber32, CreateTransferMemory32, SetMemoryAttribute32 2020-06-27 11:36:27 -04:00
Fernando Sahmkow
b8df61c642 ARM: Update Dynarmic and Setup A32 according to latest interface. 2020-06-27 11:36:26 -04:00
Fernando Sahmkow
22ceaca2f4 SVC: Add GetThreadPriority32 & SetThreadPriority32 2020-06-27 11:36:25 -04:00
Fernando Sahmkow
ec11918323 ArmDynarmic32: Setup CNTPCT correctly 2020-06-27 11:36:24 -04:00
Fernando Sahmkow
e3d561fb84 Audio: Correct buffer release for host timing. 2020-06-27 11:36:23 -04:00
Fernando Sahmkow
7fd7d05838 Common/Kernel: Corrections and small bug fixing. 2020-06-27 11:36:21 -04:00
Fernando Sahmkow
272a87127a Services/NvFlinger: Do vSync in a separate thread on Multicore. 2020-06-27 11:36:20 -04:00
Fernando Sahmkow
39ddce1ab5 Externals: Update Dynarmic. 2020-06-27 11:36:19 -04:00
Fernando Sahmkow
3165152396 Common/NativeClockx86: Reduce native clock accuracy further. 2020-06-27 11:36:18 -04:00
Fernando Sahmkow
71c4779211 Tests/CoreTiming: Correct host timing tests. 2020-06-27 11:36:17 -04:00
Fernando Sahmkow
0a8013d71e ARMDynarmicInterface: Correct GCC Build Errors. 2020-06-27 11:36:17 -04:00
Fernando Sahmkow
7b1804dab4 Common/AtomicOps: Correct GCC Intrinsic argument ordering. 2020-06-27 11:36:16 -04:00
Fernando Sahmkow
d240143588 Kernel: Correct Host Context on Threads and Scheduler. 2020-06-27 11:36:15 -04:00
Fernando Sahmkow
0e4c35c591 YuzuQT: Hide Speed UI on Multicore. 2020-06-27 11:36:14 -04:00
Fernando Sahmkow
467d43570e Clang Format. 2020-06-27 11:36:14 -04:00
Fernando Sahmkow
3714f2e471 ARMInterface/Externals: Update dynarmic and fit to latest version. 2020-06-27 11:36:13 -04:00
Fernando Sahmkow
dda6147b0d ARMInterface: Correct rebase errors. 2020-06-27 11:36:12 -04:00
Fernando Sahmkow
71f1c0f9f9 CoreTiming: Correct rebase bugs and other miscellaneous things. 2020-06-27 11:36:11 -04:00
Fernando Sahmkow
cdf900f1e3 Core: Split Microprofile Dynarmic timing per Core 2020-06-27 11:36:10 -04:00
Fernando Sahmkow
528b19a842 General: Tune the priority of main emulation threads so they have higher priority than less important helper threads. 2020-06-27 11:36:09 -04:00
Fernando Sahmkow
7b44187fd2 Dynarmic Interface: don't clear cache if JIT has not been created. 2020-06-27 11:36:08 -04:00
Fernando Sahmkow
ad92865497 General: Correct rebase, sync gpu and context management. 2020-06-27 11:36:08 -04:00
Fernando Sahmkow
bfb5244cf8 CoreTiming/CycleTimer: Correct Idling. 2020-06-27 11:36:07 -04:00
Fernando Sahmkow
bece52cd81 SingleCore: Correct ticks reset to be on preemption. 2020-06-27 11:36:06 -04:00
Fernando Sahmkow
48fa3b7a0f General: Cleanup legacy code. 2020-06-27 11:36:05 -04:00
Fernando Sahmkow
c8bf47dcfb Kernel/svcBreak: Implement CacheInvalidation for Singlecore and correct svcBreak. 2020-06-27 11:36:04 -04:00
Fernando Sahmkow
54e304fe2a Bootmanager/CPU_Manager: Correct shader caches and sync GPU on OpenGL. 2020-06-27 11:36:03 -04:00
Fernando Sahmkow
19165cd859 HLE_IPC: Correct HLE Event behavior on timeout. 2020-06-27 11:36:03 -04:00
Fernando Sahmkow
7e2ce2f7f4 SingleCore: Improve Cycle timing Behavior and replace the mutex in the global scheduler with a spinlock. 2020-06-27 11:36:02 -04:00
Fernando Sahmkow
a7ecd9e19c FrameLimiting: Enable frame limiting for single core. 2020-06-27 11:36:01 -04:00
Fernando Sahmkow
f5e32935ca SingleCore: Use Cycle Timing instead of Host Timing. 2020-06-27 11:36:01 -04:00
Fernando Sahmkow
9bde28d7b1 Scheduler: Correct Reload/Unload 2020-06-27 11:35:59 -04:00
Fernando Sahmkow
5974e3ea33 Thread: Release the ARM Interface on exiting. 2020-06-27 11:35:58 -04:00
Fernando Sahmkow
1567824d2d General: Move ARM_Interface into Threads. 2020-06-27 11:35:58 -04:00
Fernando Sahmkow
1b82ccec22 Core: Refactor ARM Interface. 2020-06-27 11:35:56 -04:00
Fernando Sahmkow
534466754f X64 Clock: Reduce accuracy to be less or equal to guest accuracy. 2020-06-27 11:35:55 -04:00
Fernando Sahmkow
7b18174eef ARM/WaitTree: Better track the CallStack for each thread. 2020-06-27 11:35:54 -04:00
Fernando Sahmkow
87c49aa7be SVC/ARM: Correct svcSendSyncRequest and cache ticks on arm interface. 2020-06-27 11:35:53 -04:00
Fernando Sahmkow
f2ade343e2 SingleCore: Move Host Timing from a separate thread to main cpu thread. 2020-06-27 11:35:52 -04:00
Fernando Sahmkow
5d3a2be04f GUI: Make multicore only work with Async and add GUI for multicore. 2020-06-27 11:35:52 -04:00
Fernando Sahmkow
25565dffd5 ARM: Adapt to new Exclusive Monitor Interface. 2020-06-27 11:35:50 -04:00
Fernando Sahmkow
1a5f2e290b CPU_Manager: Correct stopping on SingleCore. 2020-06-27 11:35:49 -04:00
Fernando Sahmkow
db68fba4a6 Scheduler: Correct yielding interaction with SetThreadActivity. 2020-06-27 11:35:49 -04:00
Fernando Sahmkow
7020d498c5 General: Fix microprofile on dynarmic/svc, fix wait tree showing which threads were running. 2020-06-27 11:35:48 -04:00
Fernando Sahmkow
e6f8bde74b General: Fix Stop function 2020-06-27 11:35:47 -04:00
Fernando Sahmkow
f370de84b1 Kernel: Rewind on SVC change. 2020-06-27 11:35:46 -04:00
Fernando Sahmkow
d494b074e8 Kernel: Preempt Single core on redundant yields. 2020-06-27 11:35:45 -04:00
Fernando Sahmkow
a439cdf22e CPU_Manager: Unload/Reload threads on preemption on SingleCore 2020-06-27 11:35:43 -04:00
Fernando Sahmkow
8a78fc2580 Synchronization: Correct wide Assertion. 2020-06-27 11:35:43 -04:00
Fernando Sahmkow
ab9aae28bf General: Initial Setup for Single Core. 2020-06-27 11:35:42 -04:00
Fernando Sahmkow
391f5f360d Scheduler: Set last running time on thread. 2020-06-27 11:35:41 -04:00
Fernando Sahmkow
9e9c287f8b Kernel: Corrections to TimeManager, Scheduler and Mutex. 2020-06-27 11:35:40 -04:00
Fernando Sahmkow
6515c6e8c6 Kernel: Fixes, corrections and asserts to scheduler and different svcs. 2020-06-27 11:35:40 -04:00
Fernando Sahmkow
4217e58a10 Scheduler: Correct yields. 2020-06-27 11:35:39 -04:00
Fernando Sahmkow
445b4342b3 Mutex: Revert workaround due to poor exclusive memory. 2020-06-27 11:35:38 -04:00
Fernando Sahmkow
cd1c38be8d ARM/Memory: Correct Exclusive Monitor and Implement Exclusive Memory Writes. 2020-06-27 11:35:37 -04:00
Fernando Sahmkow
535c542d84 SVC: WaitSynchronization add Termination Pending Result. 2020-06-27 11:35:36 -04:00
Fernando Sahmkow
725bac1404 Scheduler: Remove arm_interface lock and a few corrections. 2020-06-27 11:35:35 -04:00
Fernando Sahmkow
38c6c497f6 Yuzu/Debuggers: Correct Wait Tree for Paused threads. 2020-06-27 11:35:34 -04:00
Fernando Sahmkow
83c7ba1ef7 SVC: Correct SetThreadActivity. 2020-06-27 11:35:33 -04:00
Fernando Sahmkow
a66c61ca2d SCC: Small corrections to CancelSynchronization 2020-06-27 11:35:33 -04:00
Fernando Sahmkow
44cb9997b3 Scheduler: Correct locking for hle threads. 2020-06-27 11:35:32 -04:00
Fernando Sahmkow
6ed28e15fa Scheduler: Fix HLE Threads on guard 2020-06-27 11:35:31 -04:00
Fernando Sahmkow
3de33348e4 Scheduler: Protect on closed threads. 2020-06-27 11:35:31 -04:00
Fernando Sahmkow
19847d4d42 Scheduler: Correct assert. 2020-06-27 11:35:30 -04:00
Fernando Sahmkow
a33fbaddec Core: Correct rebase. 2020-06-27 11:35:29 -04:00
Fernando Sahmkow
1c672128c4 Scheduler: Release old thread fiber before trying to switch to the next thread fiber. 2020-06-27 11:35:28 -04:00
Fernando Sahmkow
c43e559734 NVDRV: Remove frame limiting as Host Timing already takes care. 2020-06-27 11:35:28 -04:00
Fernando Sahmkow
a6bce296ad Mutex: Correct Result writing to clear exclusivity. 2020-06-27 11:35:26 -04:00
Fernando Sahmkow
e4b175ade2 SVC: Correct svcWaitForAddress and svcSignalToAddress. 2020-06-27 11:35:25 -04:00
Fernando Sahmkow
1e987dbe8d Scheduler: Correct Select Threads Step 2. 2020-06-27 11:35:24 -04:00
Fernando Sahmkow
07993ac8c8 Kernel: Corrections to Scheduling. 2020-06-27 11:35:23 -04:00
Fernando Sahmkow
b4dc01f16a Kernel: Correct Signal on Thread Death and Setup Sync Objects on Thread for Debugging 2020-06-27 11:35:23 -04:00
Fernando Sahmkow
75e10578f1 Core: Correct HLE Event Callbacks and other issues. 2020-06-27 11:35:22 -04:00
Fernando Sahmkow
de5b521c09 Process: Protect TLS region and Modules. 2020-06-27 11:35:21 -04:00
Fernando Sahmkow
2a8837ff51 General: Add Asserts 2020-06-27 11:35:21 -04:00
Fernando Sahmkow
04e0f8776c General: Add better safety for JIT use. 2020-06-27 11:35:20 -04:00
Fernando Sahmkow
bd36eaf15d SVC: Correct races on physical core switching. 2020-06-27 11:35:19 -04:00
Fernando Sahmkow
cc3aa95926 NVFlinger: Lock race condition between CPU, Host Timing, VSync. 2020-06-27 11:35:18 -04:00
Fernando Sahmkow
3902067008 SVC: Add locks to the memory management. 2020-06-27 11:35:18 -04:00
Fernando Sahmkow
d4ebb510a0 SVC: Correct WaitSynchronization, WaitProcessWideKey, SignalProcessWideKey. 2020-06-27 11:35:17 -04:00
Fernando Sahmkow
5b6a67f849 SVC: Cleanup old methods. 2020-06-27 11:35:16 -04:00
Fernando Sahmkow
3d9fbb8226 CPU_Manager: Reconfigure guest threads for dynarmic downsides 2020-06-27 11:35:15 -04:00
Fernando Sahmkow
15a79eb0d7 SVC: Correct SendSyncRequest. 2020-06-27 11:35:14 -04:00
Fernando Sahmkow
203e706302 SVC: Correct ArbitrateUnlock 2020-06-27 11:35:14 -04:00
Fernando Sahmkow
3b5b950c89 SVC: Correct SignalEvent, ClearEvent, ResetSignal, WaitSynchronization, CancelSynchronization, ArbitrateLock 2020-06-27 11:35:13 -04:00
Fernando Sahmkow
ef4afa9760 SVC: Remove global HLE Lock. 2020-06-27 11:35:13 -04:00
Fernando Sahmkow
589f9cf108 SVC: Correct GetThreadPriority, SetThreadPriority, GetThreadCoreMask, SetThreadCoreMask, GetCurrentProcessorNumber 2020-06-27 11:35:12 -04:00
Fernando Sahmkow
49ba563995 SVC: Correct CreateThread, StartThread, ExitThread, SleepThread. 2020-06-27 11:35:11 -04:00
Fernando Sahmkow
18dcb09342 HostTiming: Pause the hardware clock on pause. 2020-06-27 11:35:10 -04:00
Fernando Sahmkow
6bf137a0e8 AudioCore: Use nanoseconds instead of cycles for buffer time. 2020-06-27 11:35:10 -04:00
Fernando Sahmkow
dc58058203 General: Setup yuzu threads' microprofile, naming and registry. 2020-06-27 11:35:09 -04:00
Fernando Sahmkow
a5c58a25ef CPU_Manager: remove debugging code. 2020-06-27 11:35:08 -04:00
Fernando Sahmkow
9e4b9f1afd YuzuCMD/Tester: Correct execution 2020-06-27 11:35:07 -04:00
Fernando Sahmkow
e31425df38 General: Recover Prometheus project from harddrive failure
This commit: Implements CPU Interrupts, Replaces Cycle Timing with Host
Timing, Reworks the Kernel's Scheduler, Introduces an Idle State and a
Suspended State, Recreates the bootmanager, and Initializes the Multicore
system.
2020-06-27 11:35:06 -04:00
David
0ea4a8bcc4 Merge pull request #3396 from FernandoS27/prometheus-1
Implement SpinLocks, Fibers and a Host Timer
2020-06-28 01:34:07 +10:00
VolcaEM
23515e0ccc nfc: Update function table
This was based on Switchbrew page: https://switchbrew.org/wiki/NFC_services
2020-06-27 13:09:36 +02:00
VolcaEM
c56414b80d ncm: Update function table
This was based on Switchbrew page: https://switchbrew.org/wiki/NCM_services

ILocationResolver's 16, 17, 18 and 19 have unofficial names
2020-06-27 13:05:22 +02:00
VolcaEM
b829643946 mm: Update function table
This was based on Switchbrew page: https://switchbrew.org/wiki/Display_services
2020-06-27 12:59:01 +02:00
VolcaEM
5219424226 mig: Update function table
This was based on Switchbrew page: https://switchbrew.org/wiki/Migration_services
2020-06-27 12:53:59 +02:00
VolcaEM
b9be484a51 ldn: Update function table
This was based on Switchbrew page: https://switchbrew.org/wiki/LDN_services
2020-06-27 12:50:56 +02:00
VolcaEM
a8d17adb7c Oops (fix typo) 2020-06-27 12:45:42 +02:00
VolcaEM
73b035d2e2 lbl: Update function table
This was based on Switchbrew page: https://switchbrew.org/wiki/Backlight_services
2020-06-27 12:43:33 +02:00
VolcaEM
64fa9b9f57 grc: Update function table
This was based on Switchbrew page: https://switchbrew.org/wiki/GRC_services
2020-06-27 12:41:21 +02:00
VolcaEM
af88767508 friend: Update function table 2020-06-27 12:39:10 +02:00
Kewlan
323eb86c9f Fix issues caused when changing the fullscreen hotkey 2020-06-27 11:30:32 +02:00
bunnei
6205965df9 Merge pull request #4097 from kevinxucs/kevinxucs/device-pixel-scaling-float
Fix framebuffer size on fractional scaling display
2020-06-27 02:49:07 -04:00
bunnei
9eaccac674 Merge pull request #4164 from Kewlan/mute-audio-hotkey
hotkeys: Add a "Mute Audio" hotkey
2020-06-27 02:47:13 -04:00
bunnei
6f16f54f10 Merge pull request #4158 from Morph1984/caps
caps: Use enum classes and check struct sizes on compile time
2020-06-27 00:09:32 -04:00
bunnei
a91f92a89d Merge pull request #4152 from ogniK5377/ipc-err
Mark invalid IPC buffers as ASSERT_OR_EXECUTE_MSG
2020-06-26 23:37:19 -04:00
bunnei
705cccb1e4 Merge pull request #4154 from ogniK5377/swkbd-nullptr
Prevent nullptr dereference on swkbd error case
2020-06-26 23:25:04 -04:00
bunnei
efef7b1517 Merge pull request #4147 from ReinUsesLisp/hset2-imm
shader/half_set: Implement HSET2_IMM
2020-06-26 23:14:56 -04:00
David
b32b7c6e74 Merge pull request #4178 from VolcaEM/patch-6
es: Update function table
2020-06-27 13:05:12 +10:00
VolcaEM
2d82b7f1a1 Use better names for "Unknown"s 2020-06-27 02:48:29 +02:00
LC
7c07941882 Merge pull request #4180 from ogniK5377/fix-btm-names
btm: Give better names for unknown functions
2020-06-26 20:44:00 -04:00
VolcaEM
bc51a9365b Update function names 2020-06-27 02:43:22 +02:00
David Marcec
0b23ce6ef2 btm: Give better names for unknown functions 2020-06-27 10:42:46 +10:00
VolcaEM
032b7d490d btdrv: Update function table (#4174)
* btdrv: Update function table
2020-06-26 20:34:29 -04:00
VolcaEM
6e14edbcc2 bpc: Update function tables (#4173)
* bpc: Update function tables

This was based on Switchbrew page: https://switchbrew.org/wiki/PCV_services
2020-06-26 20:33:55 -04:00
VolcaEM
e6fee39ae7 bcat: Update function tables and add missing classes (#4172)
* bcat: Update function tables and add missing classes
2020-06-26 20:33:25 -04:00
VolcaEM
ca25a3845e am: Update function tables and add missing classes (#4169)
* am: Update function tables and add missing classes

* Remove comments (1/5)

* Remove comments (2/5)

* Remove comments (3/5)

* Remove comments (4/5)

* Remove comments (5/5)

* Remove unused classes (1/2)

* Remove unused classes (2/2)
2020-06-26 20:32:26 -04:00
VolcaEM
b5d54619cc aoc: Update function table (#4170)
* aoc: Update function table

* Remove comments
2020-06-26 20:31:44 -04:00
LC
98bbab8030 Merge pull request #4177 from VolcaEM/patch-5
btm: Update function tables
2020-06-26 20:30:59 -04:00
LC
a6b5528e9c Merge pull request #4179 from VolcaEM/patch-7
eupld: Update function table
2020-06-26 20:29:40 -04:00
VolcaEM
0f4a611129 eupld: Update function table
This was based on Switchbrew page: https://switchbrew.org/wiki/Error_Upload_services
2020-06-27 02:25:04 +02:00
VolcaEM
3828aa4927 es: Update function table
This was based on Switchbrew page: https://switchbrew.org/wiki/ETicket_services
2020-06-27 02:17:51 +02:00
VolcaEM
b1f4de7874 Update FAQ function name (2/2) 2020-06-27 02:14:29 +02:00
VolcaEM
db96b5ee3b Update FAQ function name (1/2) 2020-06-27 02:13:34 +02:00
VolcaEM
d3e9b45ce0 btm: Update function tables
This was based on Switchbrew page: https://switchbrew.org/wiki/BTM_services

"No comment" edition
2020-06-27 01:57:48 +02:00
ReinUsesLisp
bb2cbdf704 texture_cache: Test format compatibility before copying
Avoid illegal copies. This intercepts the last step of a copy to avoid
generating validation errors or corrupting the driver in some instances.

We can create views and emit copies accordingly in future commits and
remove this last-step validation.
2020-06-26 20:52:22 -03:00
bunnei
3579db425e Merge pull request #4144 from FernandoS27/tt-fix
TextureCache: Fix case where layer goes off bound.
2020-06-26 19:02:39 -04:00
bunnei
78d3b54ea7 Merge pull request #4111 from ReinUsesLisp/preserve-contents-vk
vk_rasterizer: Don't preserve contents on full screen clears
2020-06-26 18:48:12 -04:00
ReinUsesLisp
1d6be9febf video_core/compatible_formats: Table to test if two formats are legal to view or copy
Add a flat table to test if it's legal to create a texture view between
two formats or copy between them.

This table is based on ARB_copy_image and ARB_texture_view. Copies are
more permissive than views.
2020-06-26 19:28:11 -03:00
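As a rough illustration of such a flat table (enum values and names below are hypothetical, not yuzu's actual ones), each format maps to a compatibility class and a copy is legal when both classes match:

    #include <array>
    #include <cstddef>

    enum class CopyClass { Invalid, Size32, Size64, Size128 };
    enum class PixelFormat { A8B8G8R8, R32G32, BC1, Count };

    // Flat table indexed by PixelFormat, mirroring the grouping in ARB_copy_image
    // (e.g. a 64-bit compressed block is copy-compatible with 64-bit texels).
    constexpr std::array<CopyClass, static_cast<std::size_t>(PixelFormat::Count)> CLASS_TABLE{
        CopyClass::Size32,  // A8B8G8R8
        CopyClass::Size64,  // R32G32
        CopyClass::Size64,  // BC1
    };

    constexpr bool IsCopyCompatible(PixelFormat a, PixelFormat b) {
        const CopyClass class_a = CLASS_TABLE[static_cast<std::size_t>(a)];
        const CopyClass class_b = CLASS_TABLE[static_cast<std::size_t>(b)];
        return class_a != CopyClass::Invalid && class_a == class_b;
    }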
VolcaEM
9e1975a166 Update function name again 2020-06-26 18:51:12 +02:00
VolcaEM
0b86c7eb6a Update function name (2/2) 2020-06-26 18:50:28 +02:00
VolcaEM
f8247826fa Update function name (1/2) 2020-06-26 18:49:57 +02:00
Fernando Sahmkow
7b893c7963 Common: Fix non-conan build 2020-06-26 12:25:19 -04:00
Morph
72f14ae21f caps_u: Fix GetAlbumContentsFileListForApplication stub 2020-06-26 08:35:21 -04:00
Morph
3017be7855 caps: Use enum classes and check struct sizes on compile time 2020-06-26 08:35:21 -04:00
Morph
02a33feef4 caps: Update copyright headers
Updated to "yuzu Emulator Project"
2020-06-26 08:35:21 -04:00
Kewlan
3eb8efc095 Add a "Mute Audio" hotkey 2020-06-26 06:03:29 +02:00
bunnei
c4fe83a7bc Merge pull request #4159 from ogniK5377/mem-manager-dumb-assert
memory_manager: Remove useless assertion
2020-06-25 22:53:13 -04:00
Rodrigo Locatti
5872fc21fe Merge pull request #4151 from ReinUsesLisp/gl-invalidations
gl_shader_cache: Avoid use after move for program size
2020-06-25 21:05:27 -03:00
VolcaEM
7d08d548a9 Clang-format again 2020-06-25 23:44:41 +02:00
VolcaEM
b9f0b9dd06 Clang-format 2020-06-25 23:40:53 +02:00
VolcaEM
6582857356 Remove unnecessary newline 2020-06-25 23:38:38 +02:00
VolcaEM
0f4512291a Merge branch 'master' into quickstart-faq 2020-06-25 23:34:37 +02:00
VolcaEM
a46df40939 Fix typo 2: electric boogaloo 2020-06-25 23:32:43 +02:00
VolcaEM
9e7ac6a009 Use QUrl (2/2) 2020-06-25 23:31:01 +02:00
VolcaEM
5c6adea222 Use QUrl (1/2) 2020-06-25 23:28:38 +02:00
VolcaEM
04497d9e4a Fix formatting 2020-06-25 23:18:54 +02:00
VolcaEM
5f6e44552a Fix typo 2020-06-25 23:07:58 +02:00
VolcaEM
57b93395a8 Add "Open Quickstart Guide" and "FAQ" buttons to the Help menu
While we're at it, also refactor the function used by OnOpenModsPage to be compatible with other URLs
2020-06-25 23:02:33 +02:00
bunnei
a980b4cbc1 Merge pull request #4136 from VolcaEM/mods
Add a "Open Mods Page" button to the GUI
2020-06-25 15:10:18 -04:00
Rodrigo Locatti
ae1f709658 Merge pull request #4160 from ogniK5377/IsASTCSupported-fix
gl_device: Fix IsASTCSupported to scan all targets instead of just GL_TEXTURE_2D
2020-06-25 15:58:09 -03:00
David
d11baf8bf8 Merge pull request #4141 from Morph1984/SevenSixAxisSensor
hid: Stub a series of "SevenSixAxisSensor" service commands
2020-06-25 19:37:39 +10:00
David Marcec
a927d8be52 gl_device: Fix IsASTCSupported
Other targets were never actually checked
2020-06-25 19:12:56 +10:00
David Marcec
38868e5750 memory_manager: Remove useless assertion
num_pages is an std::size_t. It will always be >= 0
2020-06-25 16:35:58 +10:00
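The removed check was a tautology on an unsigned type; a small illustration (not the original code):

    #include <cassert>
    #include <cstddef>

    void MapPages(std::size_t num_pages) {
        // std::size_t is unsigned, so this can never fail; compilers typically
        // warn that the comparison is always true.
        assert(num_pages >= 0);
    }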
ReinUsesLisp
bc8d3b8f82 gl_device: Enable NV_vertex_buffer_unified_memory on Turing devices
Once we make sure not to corrupt Nvidia's driver, we can safely use
resident buffers on Turing devices.

See GitHub pull request #4156
2020-06-25 01:28:47 -03:00
Morph
2c9308954c hid: Stub a series of "SevenSixAxisSensor" service commands
- Used by Captain Toad: Treasure Tracker Update 1.3.0

While we're at it, fix the input parameters for SetIsPalmaAllConnectable and SetPalmaBoostMode
2020-06-24 11:57:39 -04:00
bunnei
0e1268e507 Merge pull request #4105 from ReinUsesLisp/resident-buffers
gl_rasterizer: Use NV_vertex_buffer_unified_memory for vertex buffer robustness
2020-06-24 11:40:30 -04:00
bunnei
2f2df9a4a7 Merge pull request #4083 from Morph1984/B10G11R11F
decode/image: Implement B10G11R11F
2020-06-24 11:02:38 -04:00
David Marcec
510838759f Prevent nullptr dereference on swkbd error case 2020-06-25 00:25:15 +10:00
David Marcec
2f0b322e72 prepo: Don't read extra buffer from report unless passed
Prepo doesn't always pass a secondary buffer; we assumed it always does, which leads to a bad read.
2020-06-24 23:01:00 +10:00
Fernando Sahmkow
32343d820d Merge pull request #4046 from ogniK5377/macro-hle-prod
Add support for HLEing Macros
2020-06-24 09:01:00 -04:00
David Marcec
82ecdd0104 Mark invalid IPC buffers as ASSERT_OR_EXECUTE_MSG
Previously, if applications sent faulty buffers (for example, homebrew), we would return uninitialized data. Switching from ASSERT_MSG to ASSERT_OR_EXECUTE_MSG gives us a fail-safe that prevents crashes while continuing execution without introducing undefined behavior.
2020-06-24 22:50:27 +10:00
ReinUsesLisp
32a2dcd415 buffer_cache: Use buffer methods instead of cache virtual methods 2020-06-24 02:36:14 -03:00
ReinUsesLisp
39c97f1b65 gl_stream_buffer: Use InvalidateBufferData instead unmap and map
Making the stream buffer resident increases GPU usage significantly on
some games. This seems to be addressed by invalidating the stream buffer
with InvalidateBufferData instead of using an Unmap + Map (with
invalidation flags).
2020-06-24 02:36:14 -03:00
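The replacement boils down to a single GL 4.3 call; a sketch assuming a loaded OpenGL context (the handle name is illustrative):

    // Orphan the stream buffer in place instead of glUnmapBuffer +
    // glMapBufferRange(..., GL_MAP_INVALIDATE_BUFFER_BIT).
    void InvalidateStreamBuffer(GLuint stream_buffer_handle) {
        glInvalidateBufferData(stream_buffer_handle);
    }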
ReinUsesLisp
41a4090320 gl_rasterizer: Use NV_vertex_buffer_unified_memory for vertex buffer robustness
Switch games are allowed to bind less data than what they use in a
vertex buffer, the expected behavior here is that these values are read
as zero. At the moment of writing, only D3D12, NVN, and OpenGL through
NV_vertex_buffer_unified_memory support vertex buffers with a size limit.

In theory this could be emulated on Vulkan creating a new VkBuffer for
each (handle, offset, length) tuple and binding the expected data to it.
This is likely going to be slow and memory expensive when used on the
vertex buffer and we have to do it on all draws because we can't know
without analyzing indices when a game is going to read vertex data out
of bounds.

This is not a problem on OpenGL's BufferAddressRangeNV because it takes
a length parameter, unlike Vulkan's CmdBindVertexBuffers that only takes
buffers and offsets (the length is implicit in VkBuffer). It isn't a
problem on D3D12 either, because D3D12_VERTEX_BUFFER_VIEW on
IASetVertexBuffers takes SizeInBytes as a parameter (although I am not
familiar with robustness on D3D12).

Currently this only implements buffer ranges for vertex buffers,
although indices can also be affected. A KHR_robustness profile is not
created, but Nvidia's driver reads out-of-bounds vertex data as zero
anyway; this might have to be changed in the future.

- Fixes SMO random triangles when capturing an enemy, getting hit, or
looking at the environment on certain maps.
2020-06-24 02:36:14 -03:00
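For context, a hedged sketch of how a vertex buffer range with an explicit length can be bound through NV_vertex_buffer_unified_memory together with NV_shader_buffer_load (extension entry points only; the surrounding yuzu abstractions are omitted and the function name is illustrative):

    // Assumes a GL context with both extensions available and loaded.
    void BindVertexRange(GLuint buffer, GLuint index, GLintptr offset, GLsizeiptr length) {
        GLuint64EXT gpu_address = 0;
        glMakeNamedBufferResidentNV(buffer, GL_READ_ONLY);
        glGetNamedBufferParameterui64vNV(buffer, GL_BUFFER_GPU_ADDRESS_NV, &gpu_address);

        glEnableClientState(GL_VERTEX_ATTRIB_ARRAY_UNIFIED_NV);
        // Unlike a plain vertex buffer binding, this call carries a length, which is
        // what allows the driver-side clamping behavior described above.
        glBufferAddressRangeNV(GL_VERTEX_ATTRIB_ARRAY_ADDRESS_NV, index,
                               gpu_address + offset, length);
    }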
ReinUsesLisp
32485917ba gl_buffer_cache: Mark buffers as resident
Make stream buffer and cached buffers as resident and query their
address. This allows us to use GPU addresses for several proprietary
Nvidia extensions.
2020-06-24 02:36:14 -03:00
ReinUsesLisp
73fb3a304b gl_device: Expose NV_vertex_buffer_unified_memory except on Turing
Expose NV_vertex_buffer_unified_memory when the driver supports it.

This commit adds a function to determine if a GL_RENDERER is a Turing
GPU. This is required because on Turing GPUs Nvidia's driver crashes
when the buffer is marked as resident or on DeleteBuffers. Without a
synchronous debug output (single threaded driver), it's likely that
the driver will crash in the first blocking call.
2020-06-24 02:36:14 -03:00
ReinUsesLisp
00c66a7289 gl_stream_buffer: Always use a non-coherent buffer 2020-06-24 02:35:33 -03:00
ReinUsesLisp
da79ec9565 gl_stream_buffer: Always use persistent memory maps
yuzu no longer supports platforms without persistent maps.
2020-06-24 02:35:33 -03:00
Rodrigo Locatti
b66ccaa376 Merge pull request #4129 from Morph1984/texture-shadow-lod-workaround
gl_shader_decompiler: Workaround textureLod when GL_EXT_texture_shadow_lod is not available
2020-06-24 01:51:15 -03:00
David Marcec
f5e2aec422 addressed issues 2020-06-24 12:18:33 +10:00
David Marcec
52340e94ac clear mme draw mode
We already draw, so we can clear it
2020-06-24 12:09:04 +10:00
David Marcec
fabdf5d385 Addressed issues 2020-06-24 12:09:03 +10:00
David Marcec
74b4334d51 Fix constbuffer for 0217920100488FF7 2020-06-24 12:09:02 +10:00
David Marcec
6ce5f3120b Macro HLE support 2020-06-24 12:09:01 +10:00
bunnei
3bab5a5e4a Merge pull request #4138 from Morph1984/GyroscopeZeroDriftMode
hid: Implement Get/ResetGyroscopeZeroDriftMode
2020-06-23 21:56:16 -04:00
ReinUsesLisp
9f54cd4dad gl_shader_cache: Avoid use after move for program size
All programs had a size of zero due to this bug, skipping invalidations.

While we are at it, remove some unused forward declarations.
2020-06-23 22:54:42 -03:00
bunnei
1d1489da80 Merge pull request #4128 from lioncash/move2
software_keyboard: Eliminate trivial redundant copies
2020-06-23 18:24:15 -04:00
bunnei
bfe2e40882 Merge pull request #4135 from FearlessTobi/port-5324
Port citra-emu/citra#5324: "Update manifest file to include new elements that are introduced with Windows 10 later versions"
2020-06-23 16:03:35 -04:00
bunnei
15aeae3dd3 Merge pull request #4127 from lioncash/dst-typo
texture_cache: Fix incorrect address used in a DeduceSurface() call
2020-06-23 15:59:37 -04:00
bunnei
60da57b518 Merge pull request #3948 from Morph1984/log-cpu-instructions
main/common: Log/append AVX/FMA to the Host CPU string if available and add AVX512 detection
2020-06-23 15:19:47 -04:00
Rodrigo Locatti
2ce3aedda8 Merge pull request #4148 from Morph1984/silence-warnings
Silence miscellaneous warnings
2020-06-23 00:39:04 -03:00
Morph
b8798a995b yuzu_tester: Silence type conversion warning 2020-06-22 22:56:15 -04:00
Morph
45dac6bc5c lm: Silence no return value warning 2020-06-22 22:55:32 -04:00
ReinUsesLisp
39ab33ee1c shader/half_set: Implement HSET2_IMM
Add HSET2_IMM. Due to the complexity of the encoding avoid using
BitField unions and read the relevant bits from the code itself.
This is less error prone.
2020-06-22 20:51:18 -03:00
VolcaEM
e193aa3f53 account: Update function tables and add missing classes (#4145)
* account: Update function tables and add missing classes

* clang-format

* Add missing "public"

* Add missing public again

* Add missing final
2020-06-22 16:03:26 -04:00
LC
25174afa79 Merge pull request #4142 from Morph1984/core-arm-logging
arm_dynarmic: Minor logging changes
2020-06-22 14:21:53 -04:00
Fernando Sahmkow
544b15e8e4 TextureCache: Fix case where layer goes off bound.
The returned layer is expected to be between 0 and the depth of the
surface; anything larger is out of bounds.
2020-06-22 11:37:40 -04:00
unknown
8cf6efe677 Reorder variables to comply with the Azure build pipeline 2020-06-22 15:56:41 +02:00
Morph
f2df941e8d arm_dynarmic_64: Log the instruction when an exception is raised 2020-06-22 07:00:24 -04:00
Morph
e0af4cdf98 arm_dynarmic_32: Log under Core_ARM instead of HW_GPU 2020-06-22 06:59:41 -04:00
Rodrigo Locatti
406d298457 Merge pull request #4110 from ReinUsesLisp/direct-upload-sets
vk_update_descriptor: Upload descriptor sets data directly
2020-06-22 05:02:13 -03:00
ReinUsesLisp
2f09c7ddd3 renderer_vulkan: Update validation layer name and test before enabling
Update validation layer string to VK_LAYER_KHRONOS_validation.

While we are at it, properly check for available validation layers
before enabling them.
2020-06-22 04:10:45 -03:00
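A minimal sketch of that availability check using plain Vulkan calls (not yuzu's wrapper types):

    #include <vulkan/vulkan.h>
    #include <cstring>
    #include <vector>

    // Returns true if the Khronos validation layer is installed.
    bool HasKhronosValidationLayer() {
        uint32_t count = 0;
        vkEnumerateInstanceLayerProperties(&count, nullptr);
        std::vector<VkLayerProperties> layers(count);
        vkEnumerateInstanceLayerProperties(&count, layers.data());
        for (const VkLayerProperties& layer : layers) {
            if (std::strcmp(layer.layerName, "VK_LAYER_KHRONOS_validation") == 0) {
                return true;
            }
        }
        return false;
    }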
bunnei
14a1181a97 Merge pull request #4122 from lioncash/hide
video_core: Eliminate some variable shadowing
2020-06-21 22:38:04 -04:00
bunnei
c27c76ed43 Merge pull request #4126 from lioncash/noexcept
vulkan/wrapper: Remove noexcept from GetSurfaceCapabilitiesKHR()
2020-06-21 22:36:14 -04:00
bunnei
e8855ed0fc Merge pull request #4134 from FearlessTobi/port-5322
Port citra-emu/citra#5322: "Fix: fatal error CVT1100 when compiling manifest file"
2020-06-21 22:35:17 -04:00
Morph
0235915baa hid: Implement Get/ResetGyroscopeZeroDriftMode
- Used by Captain Toad Treasure Tracker
2020-06-21 16:25:41 -04:00
VolcaEM
409fedaf97 Correct function name (2/2) 2020-06-21 18:10:23 +02:00
VolcaEM
182ac8a504 Correct function name (1/2) 2020-06-21 18:09:14 +02:00
VolcaEM
23d57ed4f7 Clang-format 2020-06-21 06:17:46 +02:00
VolcaEM
d11b04ed46 Remove unnecessary conversion 2020-06-21 06:16:03 +02:00
VolcaEM
606e833d26 Address review comment by Lioncash
Co-authored-by: LC <mathew1800@gmail.com>
2020-06-21 06:12:23 +02:00
VolcaEM
b81af6ae9b Add a "Open Mods Page" button to the GUI 2020-06-21 06:09:28 +02:00
Morph
f77c897b8d gl_shader_decompiler: Enable GL_EXT_texture_shadow_lod if available
Enable GL_EXT_texture_shadow_lod if available. If this extension is not available, such as on Intel/AMD proprietary drivers, use textureGrad as a workaround.
2020-06-20 23:02:29 -04:00
Morph
1e65da971b gl_device: Check for GL_EXT_texture_shadow_lod 2020-06-20 22:14:32 -04:00
bunnei
f98bf1025f Merge pull request #4120 from lioncash/arb
gl_arb_decompiler: Avoid several string copies
2020-06-20 22:11:49 -04:00
FearlessTobi
20ed33b53b Update manifest file to include new elements that are introduced with Windows 10 later versions
Co-Authored-By: dragios <dragios@users.noreply.github.com>
2020-06-21 03:17:55 +02:00
FearlessTobi
a8674a7b86 Fix: fatal error CVT1100 when compiling manifest file
Occurs when doing a local compile in an MSVC build. The compiler I'm using is as below:
Microsoft Visual Studio Community 2019 Preview
Version 16.6.0 Preview 5.0

Fixes this error:
CVTRES : fatal error CVT1100: duplicate resource. type:MANIFEST, name:1, language:0x0409
LINK : fatal error LNK1123: failure during conversion to COFF: file invalid or corrupt

I have put 0 since the previous name was 1. If you have other names in mind, please let me know.

Co-Authored-By: dragios <dragios@users.noreply.github.com>
2020-06-21 03:11:23 +02:00
LC
c6ba7a228d Merge pull request #4133 from MerryMage/macrojit-shifts
macro_jit_x64: Use ecx for shift register
2020-06-20 19:58:51 -04:00
MerryMage
c12eb814b4 macro_jit_x64: Use ecx for shift register
shl/shr only accept cl as their second argument
2020-06-20 22:24:05 +01:00
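On x86, a variable shift count must be staged in cl; with Xbyak that looks roughly like the following (a sketch, not the actual macro JIT code; register choices are illustrative):

    #include <xbyak/xbyak.h>

    struct ShiftExample : Xbyak::CodeGenerator {
        ShiftExample() {
            using namespace Xbyak::util;
            mov(ecx, edx);  // stage the shift count in ecx/cl
            shl(eax, cl);   // shl/shr only accept cl as a register count
            ret();
        }
    };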
Lioncash
ef53b2fd08 texture_cache: Fix incorrect address used in a DeduceSurface() call
Previously the source was being deduced twice in a row.
2020-06-20 14:11:28 -04:00
merry
928e9c09aa Merge pull request #4125 from lioncash/macro-shift
macro_jit_x64: Amend readability of Compile_ExtractShiftLeftRegister()
2020-06-20 16:08:23 +01:00
merry
2bd903e021 Merge pull request #4123 from lioncash/unused-var
macro_jit_x64: Remove unused variable
2020-06-20 16:07:58 +01:00
Lioncash
a5ed0c3df7 software_keyboard: Eliminate trivial redundant copies
We can just make use of moves here to get rid of two redundant copies
2020-06-20 01:06:10 -04:00
Morph
9bb5bf0b2b main: Append AVX and FMA instructions to cpu string
Append AVX and FMA instructions to cpu string if the host cpu supports them
2020-06-20 00:31:37 -04:00
Morph
97ba520434 common/telemetry: Add AVX512 to telemetry 2020-06-20 00:31:37 -04:00
Morph
d6474b4aca common/cpu_detect: Add AVX512 detection 2020-06-20 00:31:37 -04:00
Morph
480e1fa987 decode/image: Implement B10G11R11F
- Used by Kirby Star Allies
2020-06-20 00:28:30 -04:00
bunnei
7d1dca4c98 Merge pull request #4099 from MerryMage/macOS-build
Fix compilation on macOS
2020-06-19 23:31:04 -04:00
Lioncash
5865a10885 gl_arb_decompiler: Avoid several string copies
Variables that are marked as const cannot have the move constructor
invoked when returning from a function (the move constructor requires a
non-const variable so it can "steal" the resources from it).
2020-06-19 23:09:16 -04:00
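A small example of the difference (illustrative):

    #include <string>

    std::string BuildConst() {
        const std::string text = "...";
        return text;  // const: the move constructor cannot be used, so this may copy
    }

    std::string BuildMovable() {
        std::string text = "...";
        return text;  // non-const: eligible for NRVO, or an implicit move at worst
    }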
Lioncash
a6e5b84d1f vulkan/wrapper: Remove noexcept from GetSurfaceCapabilitiesKHR()
Check() can throw an exception if the Vulkan result isn't successful.

We remove the check so that std::terminate isn't outright called and
allows for better debugging (should it ever actually fail).
2020-06-19 23:01:59 -04:00
Lioncash
5a4e89b901 macro_jit_x64: Correct readability of Compile_ExtractShiftLeftImmediate()
Previously dst wasn't being used.
2020-06-19 22:57:23 -04:00
Lioncash
140f953b6a macro_jit_x64: Correct readability of Compile_ExtractShiftLeftRegister()
Previously dst wasn't being used.
2020-06-19 22:56:55 -04:00
Lioncash
8ea749c1ca macro_jit_x64: Remove unused variable
Removes a completely unused label and marks another variable as unused,
given it seems like it has potential uses in the future.
2020-06-19 22:10:45 -04:00
Lioncash
479605b3e5 memory_manager: Eliminate variable shadowing
Renames some variables to prevent ones in inner scopes from shadowing
outer-scoped variables.

The Copy* functions have no shadowing, but we rename them anyway to
remain consistent with the other functions.
2020-06-19 22:02:58 -04:00
bunnei
9c5ed4408d Merge pull request #4113 from ogniK5377/boxcat-disable
Fix compilation when not building with boxcat
2020-06-19 21:59:59 -04:00
David Marcec
a7fe6dc232 Add translation of "Current Boxcat Events" 2020-06-20 11:57:51 +10:00
Lioncash
811bff009e macro_jit_x64: Eliminate variable shadowing in Compile_ProcessResult()
We can reduce the capture scope so that it's not possible for both "reg"
variables to clash with one another.

While we're at it, we can also prevent unnecessary copies.
2020-06-19 21:57:44 -04:00
Lioncash
4514b80b3e buffer_cache: Eliminate local variable shadowing
We can just make use of the instance in the scope above this one.
2020-06-19 21:55:02 -04:00
bunnei
7daea551c0 Merge pull request #4087 from MerryMage/macrojit-inline-Read
macro_jit_x64: Inline Engines::Maxwell3D::GetRegisterValue
2020-06-19 21:32:07 -04:00
LC
8434630dcc Merge pull request #4114 from MerryMage/nrvo
Remove redundant moves
2020-06-19 15:17:04 -04:00
MerryMage
c6a963c48e input_common/motion_emu: Remove redundant move
Named return value optimization automatically applies here.
2020-06-19 14:29:59 +01:00
MerryMage
8272f53cf9 input_common/keyboard: Remove redundant move
Named return value optimization automatically applies here.
2020-06-19 14:29:36 +01:00
MerryMage
7236393114 mii_model: Remove redundant std::move
Named return value optimization automatically applies here.
2020-06-19 14:29:09 +01:00
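For reference, the shape of these changes (illustrative names):

    #include <vector>

    std::vector<int> MakeValues() {
        std::vector<int> values{1, 2, 3};
        // return std::move(values);  // redundant: disables NRVO and forces a move
        return values;                // NRVO applies; no copy and no move needed
    }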
David Marcec
c7ed7d9427 Fix compilation when not building with boxcat
Fixes compilation when trying to build without boxcat enabled
2020-06-19 22:17:56 +10:00
MerryMage
977ceb4056 macro_jit_x64: Remove unused function Read 2020-06-19 11:39:41 +01:00
bunnei
0f7822acb1 Merge pull request #4080 from ogniK5377/audren-RendererInfo
audren: Implement RendererInfo
2020-06-19 01:02:30 -04:00
bunnei
5a092fb61e Merge pull request #4090 from MerryMage/macrojit-bugs
macro_jit_x64: Optimization correctness
2020-06-18 22:28:17 -04:00
ReinUsesLisp
cf137ea40b vk_rasterizer: Don't preserve contents on full screen clears
There's no need to load contents from the CPU when a clear resets all
the contents of the underlying memory. This is already implemented on
OpenGL and the texture cache.
2020-06-18 18:18:33 -03:00
Rodrigo Locatti
de644d506f Merge pull request #4081 from Morph1984/maxwell-to-gl-vk
maxwell_to_gl/vk: Miscellaneous changes
2020-06-18 17:51:41 -03:00
ReinUsesLisp
7d763f060e vk_update_descriptor: Upload descriptor sets data directly
Instead of copying to a temporary payload before sending the update task
to the worker thread, insert elements to the payload directly.
2020-06-18 17:47:19 -03:00
Fernando Sahmkow
45d29436b3 Tests/HostTiming: Correct GCC Compile error. 2020-06-18 16:29:28 -04:00
Fernando Sahmkow
e77ee67bfa Common/Fiber: Address Feedback and Correct Memory leaks. 2020-06-18 16:29:27 -04:00
Fernando Sahmkow
b6655aa2e4 Common/Fiber: Implement Rewind on Boost Context. 2020-06-18 16:29:27 -04:00
Fernando Sahmkow
59ce6e6d06 Common/uint128: Correct MSVC Compilation in old versions. 2020-06-18 16:29:26 -04:00
Fernando Sahmkow
18f54f7486 Common/Fiber: Document fiber interexchange. 2020-06-18 16:29:26 -04:00
Fernando Sahmkow
137d862d9b Common/Fiber: Implement Rewinding. 2020-06-18 16:29:25 -04:00
Fernando Sahmkow
41013381d6 Common/Fiber: Additional corrections to f_context. 2020-06-18 16:29:25 -04:00
Fernando Sahmkow
7d2b1a6ec4 Common/Fiber: Correct f_context based Fibers. 2020-06-18 16:29:24 -04:00
Fernando Sahmkow
8f6ffcd5c4 Host Timing: Correct clang format. 2020-06-18 16:29:23 -04:00
Fernando Sahmkow
96b2d8419c HostTiming: Correct rebase and implement AddTicks. 2020-06-18 16:29:22 -04:00
Fernando Sahmkow
49a7e0984a Core/HostTiming: Allow events to be advanced manually. 2020-06-18 16:29:22 -04:00
Fernando Sahmkow
1f7dd36499 Common/Tests: Address Feedback 2020-06-18 16:29:21 -04:00
Fernando Sahmkow
3398f701ee Common: Make MinGW build use Windows Fibers instead of fcontext_t 2020-06-18 16:29:20 -04:00
Fernando Sahmkow
1bd706344e Common/Tests: Clang Format. 2020-06-18 16:29:19 -04:00
Fernando Sahmkow
03e4f5dac4 Common: Correct fcontext fibers. 2020-06-18 16:29:19 -04:00
Fernando Sahmkow
e3524d1142 Common: Refactor & Document Wall clock. 2020-06-18 16:29:18 -04:00
Fernando Sahmkow
234b5ff6a9 Common: Implement WallClock Interface and implement a native clock for x64 2020-06-18 16:29:17 -04:00
Fernando Sahmkow
0f8e5a1465 Tests: Add base tests to host timing 2020-06-18 16:29:17 -04:00
Fernando Sahmkow
62e35ffc0e Core: Implement a Host Timer. 2020-06-18 16:29:16 -04:00
Fernando Sahmkow
be320a9e10 Common: Polish Fiber class, add comments, asserts and more tests. 2020-06-18 16:29:15 -04:00
Fernando Sahmkow
8d0e3c5422 Tests: Add tests for fibers and refactor/fix Fiber class 2020-06-18 16:29:15 -04:00
Fernando Sahmkow
bc266a9d98 Common: Implement a basic Fiber class. 2020-06-18 16:29:14 -04:00
Fernando Sahmkow
13ed9438fb Common: Implement a basic SpinLock class 2020-06-18 16:29:13 -04:00
bunnei
bfa6193eb9 Merge pull request #4108 from ReinUsesLisp/a32-implicit-cast
arm_dynarmic_32: Fix implicit conversion error in SetTPIDR_EL0
2020-06-18 16:07:15 -04:00
ReinUsesLisp
778043a44c arm_dynarmic_32: Fix implicit conversion error in SetTPIDR_EL0
On MSVC builds we treat conversion warnings as errors.
2020-06-18 16:52:15 -03:00
MerryMage
778f86989a bootmanager: Remove references to OpenGL for macOS
macOS OpenGL header definitions clash heavily with each other
2020-06-18 15:47:44 +01:00
MerryMage
b19fe55f84 memory_manager: Explicitly specify std::min<size_t> 2020-06-18 15:47:44 +01:00
MerryMage
4f09f0aea4 shared_font: Service::NS::EncryptSharedFont takes a size_t& 2020-06-18 15:47:44 +01:00
MerryMage
69f38355ed vk_rasterizer: BindTransformFeedbackBuffersEXT accepts a size of type VkDeviceSize 2020-06-18 15:47:44 +01:00
MerryMage
b1eada6079 renderer_vulkan: Fix macOS GetBundleDirectory reference 2020-06-18 15:47:44 +01:00
MerryMage
442e48ef4c memory_util: boost hashes are size_t
* boost::hash_value returns a size_t
* boost::hash_combine takes a size_t& argument
2020-06-18 15:47:43 +01:00
MerryMage
8ae7154541 Rename PAGE_SHIFT to PAGE_BITS
macOS header files #define PAGE_SHIFT
2020-06-18 15:47:43 +01:00
VolcaEM
684dfbf209 Move SHA256Hash to its original position
It's not needed to have it in its previous position anymore
2020-06-18 15:45:47 +02:00
Morph
2f420618ea vk_sampler_cache: Emulate GL_LINEAR/NEAREST minification filters
Emulate GL_LINEAR/NEAREST minification filters using minLod = 0 and maxLod = 0.25 during sampler creation
2020-06-18 04:56:31 -04:00
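The sampler trick mentioned above looks roughly like this (fields unrelated to the workaround are omitted; the helper name is illustrative):

    #include <vulkan/vulkan.h>

    VkSamplerCreateInfo MakeNoMipSamplerInfo() {
        VkSamplerCreateInfo ci{};
        ci.sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
        ci.magFilter = VK_FILTER_LINEAR;
        ci.minFilter = VK_FILTER_LINEAR;
        ci.mipmapMode = VK_SAMPLER_MIPMAP_MODE_NEAREST;
        // Clamping the LOD range to [0, 0.25] keeps sampling on the base level,
        // emulating GL_LINEAR/GL_NEAREST minification without mipmaps.
        ci.minLod = 0.0f;
        ci.maxLod = 0.25f;
        return ci;
    }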
Morph
be660e7749 maxwell_to_vk: Reorder filter cases and correct mipmap_filter=None
maxwell_to_vk: Reorder filtering modes to start with None, then Nearest, then Linear.
maxwell_to_vk: Log filter modes under UNREACHABLE_MSG instead of UNIMPLEMENTED_MSG, since any unknown filter modes are invalid, not unimplemented.
maxwell_to_vk: Return VK_SAMPLER_MIPMAP_MODE_NEAREST instead of VK_SAMPLER_MIPMAP_MODE_LINEAR when mipmap_filter is None, in accordance with the description in the VkSamplerCreateInfo(3) man page.
2020-06-18 04:56:31 -04:00
Morph
8868fb745f maxwell_to_gl: Miscellaneous changes
maxwell_to_gl: Log unimplemented features under UNIMPLEMENTED_MSG instead of LOG_ERROR to bring into parity with maxwell_to_vk
maxwell_to_gl: Deduplicate logging in VertexType(), merging them into one.

maxwell_to_gl: Return GL_NEAREST instead of GL_LINEAR if an unknown texture filter mode is encountered.
maxwell_to_gl: Log the mipmap filter mode if an unknown value is passed in.
maxwell_to_gl: Reorder filtering modes to start with None, then Nearest, then Linear.
2020-06-18 04:56:31 -04:00
Rodrigo Locatti
edb2114bac Merge pull request #4092 from Morph1984/image-bindings
gl_device: Reserve 4 image bindings for fragment stage
2020-06-18 04:59:48 -03:00
Fernando Sahmkow
1394a581f2 Merge pull request #4100 from MerryMage/no-a32-interp
arm_dynarmic: CP15 changes
2020-06-17 22:44:52 -04:00
MerryMage
44f10d9b9f macro_jit_x64: Inline Engines::Maxwell3D::GetRegisterValue 2020-06-17 17:17:08 +01:00
MerryMage
52bcfac116 arm_dynarmic_cp15: Implement CNTPCT 2020-06-17 17:10:24 +01:00
MerryMage
109df7705f arm_dynarmic_cp15: Update CP15 2020-06-17 17:10:24 +01:00
MerryMage
32a127faaa arm_dynarmic_32: InterpreterFallback should never happen 2020-06-17 17:10:24 +01:00
bunnei
a8ac99b619 Merge pull request #4086 from MerryMage/abi
xbyak_abi: Cleanup
2020-06-17 11:20:52 -04:00
MerryMage
c409722435 macro_jit_x64: Optimization implicitly assumes same destination 2020-06-17 10:36:36 +01:00
MerryMage
a6ddd7c382 macro_jit_x64: Should not skip zero registers for certain ALU ops
The code generated for these ALU ops assumes src_a and src_b are always valid.
2020-06-17 10:36:34 +01:00
Kaiwen Xu
7a59eeb5be Fix framebuffer size on fractional scaling display. 2020-06-16 20:45:20 -07:00
bunnei
b660ef6c8a Merge pull request #4089 from MerryMage/macrojit-cleanup-1
macro_jit_x64: Cleanup
2020-06-16 23:44:48 -04:00
bunnei
0f57bbfa3f Merge pull request #3976 from Neodyblue/qdarkstyle_fix_prop
qt_themes: remove unknown qss property from dark theme
2020-06-16 22:27:27 -04:00
bunnei
2a3d4cad63 externals: Revert to libressl, as build is broken with find_package(OpenSSL). (#4093)
* externals: Revert to libressl, as build is broken with find_package(OpenSLL).

* fixup! externals: Revert to libressl, as build is broken with find_package(OpenSLL).

* fixup! externals: Revert to libressl, as build is broken with find_package(OpenSLL).
2020-06-16 21:46:19 -04:00
bunnei
798ec003ce Merge pull request #4041 from ReinUsesLisp/arb-decomp
gl_arb_decompiler: Implement an assembly shader decompiler
2020-06-16 14:56:23 -04:00
VolcaEM
bd9495c9ab Remove unnecessary pragmas 2020-06-16 20:28:44 +02:00
VolcaEM
c0d6162050 Revert IsValidNRO refactor but make it more readable 2020-06-16 20:24:58 +02:00
bunnei
f22d02083c Merge pull request #3966 from Morph1984/hide-internal-resolution-ui
yuzu/frontend: Remove internal resolution option
2020-06-16 14:12:17 -04:00
VolcaEM
4b71bf654d Update assert string 2020-06-16 15:57:02 +02:00
Morph
e2f5d16540 gl_device: Reserve at least 4 image bindings for fragment stage
Due to the limitation of GL_MAX_IMAGE_UNITS being low (8) on Intel's and Nvidia's proprietary drivers, we have to reserve an appropriate number of image bindings for each of the stages. So far, games have been observed to use 4 image bindings on the fragment stage (Kirby Star Allies) and 1 on the vertex stage (TWD series).
In my limited testing, no game so far has used more than 4 images concurrently across all currently active programs.
This fixes shader compilation errors on Kirby Star Allies on OpenGL (GLSL/GLASM)
2020-06-16 03:03:07 -04:00
bunnei
ed2cd9d8f3 Merge pull request #4091 from MerryMage/cmakelists-xbyak-order
CMakeLists: xbyak comes before dynarmic
2020-06-15 22:39:48 -04:00
Rodrigo Locatti
0bd9bc7201 Merge pull request #4066 from ReinUsesLisp/shared-ptr-buf
buffer_cache: Avoid passing references of shared pointers and misc style changes
2020-06-15 22:29:32 -03:00
MerryMage
256cb2979b CMakeLists: xbyak comes before dynarmic 2020-06-15 22:37:27 +01:00
MerryMage
cf0aad7d6a macro_jit_x64: Remove NEXT_PARAMETER
Not required, as PARAMETERS can just be incremented directly.
2020-06-15 21:19:38 +01:00
MerryMage
1799f4e774 macro_jit_x64: Remove unused function Compile_WriteCarry 2020-06-15 21:19:38 +01:00
MerryMage
c09a9e5cc7 macro_jit_x64: Select better registers
All registers are now callee-save registers.

RBX and RBP are selected for STATE and RESULT because these are the most commonly accessed; this avoids the REX prefix.
RBP is not used for STATE because there are some SIB restrictions; RBX emits smaller code.
2020-06-15 21:19:38 +01:00
MerryMage
79aa7b3ace macro_jit_x64: Remove REGISTERS
Unnecessary since this is just an offset from STATE.
2020-06-15 21:00:59 +01:00
MerryMage
35db6e1c68 macro_jit_x64: Remove JITState::parameters
This can be passed in as an argument instead.
2020-06-15 20:55:02 +01:00
MerryMage
389549b80d macro_jit_x64: Remove METHOD_ADDRESS_64
Unnecessary variable.
2020-06-15 20:51:33 +01:00
MerryMage
a6a43a5ae0 macro_jit_x64: Remove RESULT_64
This Reg64 codepath has the exact same behaviour as the Reg32 one.
2020-06-15 20:35:08 +01:00
MerryMage
7c6203dc5e xbyak_abi: Prefer returning a struct to using out parameters in ABI_CalculateFrameSize 2020-06-15 19:07:11 +01:00
MerryMage
36362e9695 xbyak_abi: Register indexes should be unsigned 2020-06-15 19:07:11 +01:00
MerryMage
d563017dfe xbyak_abi: Remove *GPS variants of stack manipulation functions 2020-06-15 18:59:54 +01:00
MerryMage
4417770ba9 xbyak_abi: Fix ABI_PushRegistersAndAdjustStack
Pushing GPRs twice.
2020-06-15 18:59:01 +01:00
David
5c9dee2c94 Merge pull request #4085 from ReinUsesLisp/gcc-times
video_core/macro_jit_x64: Remove initializer in member variable
2020-06-15 23:05:21 +10:00
ReinUsesLisp
6e5d8aac4d video_core/macro_jit_x64: Remove initializer in member variable
Fix build time issues on gcc. Confirmed through asan that avoiding this
initialization is safe.
2020-06-15 05:17:55 -03:00
bunnei
55ebf68636 Merge pull request #4070 from ogniK5377/GetTPCMasks-fix
nvdrv: Fix GetTPCMasks for ioctl3
2020-06-14 20:12:45 -04:00
VolcaEM
39213b1c59 Clang-format again 2020-06-14 19:41:28 +02:00
VolcaEM
198b0fa790 Use consistent variable names 2020-06-14 19:37:44 +02:00
VolcaEM
1520d7865d Clang-format 2020-06-14 19:34:58 +02:00
VolcaEM
761d206049 Make assert strings consistent 2020-06-14 19:30:08 +02:00
VolcaEM
151a3fe7b3 Attempt to fix crashes in SSBU and refactor IsValidNRO 2020-06-14 19:28:39 +02:00
bunnei
89d11f2268 Merge pull request #4069 from ogniK5377/total-phys-mem
kernel: Account for system resource size for memory usage
2020-06-14 00:44:34 -04:00
bunnei
92021a344c Merge pull request #4064 from ReinUsesLisp/invalidate-buffers
gl_rasterizer: Mark vertex buffers as dirty after buffer cache invalidation
2020-06-14 00:29:16 -04:00
bunnei
c2ea1e1bcb Merge pull request #4049 from ReinUsesLisp/separate-samplers
shader/texture: Join separate image and sampler pairs offline
2020-06-13 13:48:27 -04:00
David Marcec
42250427c5 audren: Implement RendererInfo
Fixes ZLA softlock
2020-06-13 14:04:28 +10:00
ReinUsesLisp
87011a97f9 gl_arb_decompiler: Implement FSwizzleAdd 2020-06-11 22:12:07 -03:00
ReinUsesLisp
a63a0daa5e gl_arb_decompiler: Implement an assembly shader decompiler
Emit code compatible with NV_gpu_program5.
This should emit code compatible with Fermi, but it wasn't tested on
that architecture. Pascal has some issues not present on Turing GPUs.
2020-06-11 22:12:07 -03:00
ReinUsesLisp
d89888389d yuzu/configuration: Show assembly shaders check box 2020-06-10 19:04:53 -03:00
David Marcec
b15cbf9bcf nvdrv: Fix GetTPCMasks for ioctl3
Fixes animal crossing svcBreak on launch
2020-06-10 18:36:42 +10:00
David Marcec
74ff1db758 kernel: Account for system resource size for memory usage
GetTotalPhysicalMemoryAvailableWithoutSystemResource & GetTotalPhysicalMemoryUsedWithoutSystemResource seem to subtract the resource size from the usage.
2020-06-10 14:49:00 +10:00
ReinUsesLisp
6508cdd003 buffer_cache: Avoid passing references of shared pointers and misc style changes
Instead of using a shared pointer as the template argument, use the
underlying type and manage shared pointers explicitly. This can make
removing shared pointers from the cache easier.

While we are at it, make some misc style changes and general
improvements (like insert_or_assign instead of operator[] + operator=).
2020-06-09 18:30:49 -03:00
ReinUsesLisp
7646f2c21d gl_rasterizer: Mark vertex buffers as dirty after buffer cache invalidation
Vertex buffers bindings become invalid after the stream buffer is
invalidated. We were originally doing this, but it got lost at some
point.

- Fixes Animal Crossing: New Horizons, but it affects everything.
2020-06-08 20:24:16 -03:00
ReinUsesLisp
6e122f0b2c buffer_cache: Return stream buffer invalidation in Map instead of Unmap
We have to invalidate whatever cache is being used before uploading the
data, hence it makes more sense to return this on Map instead of Unmap.
2020-06-08 20:22:31 -03:00
unknown
20a779299a Add game version to title bar 2020-06-08 23:58:04 +02:00
Morph
03fad5ebe8 yuzu/frontend: Remove internal resolution option 2020-06-06 15:56:14 -04:00
ReinUsesLisp
5b2b6d594c shader/texture: Join separate image and sampler pairs offline
Games using D3D idioms can join images and samplers when a shader
executes, instead of baking them into a combined sampler image. This is
also possible on Vulkan.

One approach to this solution would be to use separate samplers on
Vulkan and leave this unimplemented on OpenGL, but we can't do this
because there's no consistent way of determining which constant buffer
holds a sampler and which one an image. We could in theory find the
first bit and if it's in the TIC area, it's an image; but this falls
apart when an image or sampler handle uses an index of zero.

The approach used is to look for a LOP.OR operation (this is done at the
IR level, not at the ISA level), track the constant buffers used as
sources, and store this pair. Then, outside of shader execution, join
the sampler and image pair with a bitwise OR operation.

This approach won't work on games that truly use separate samplers in a
meaningful way. For example, pooling textures in a 2D array and
determining at runtime what sampler to use.

This invalidates OpenGL's disk shader cache :)

- Used mostly by D3D ports to Switch
2020-06-05 00:24:51 -03:00
ReinUsesLisp
e1438f8e91 shader/track: Move bindless tracking to a separate function 2020-06-04 23:02:55 -03:00
VolcaEM
dfd1badc12 Address review comments 2020-06-02 17:54:10 +02:00
VolcaEM
a087b3365a Add comment to nrr_kind
According to Atmosphère (c7026b9094/libraries/libstratosphere/include/stratosphere/ro/ro_types.hpp), nrr_kind (Atmosphère calls it "type") is 7.0.0+
2020-05-31 19:12:09 +02:00
VolcaEM
2b1cc232bc ldr: Update NRR/NRO structs
This was based on Switchbrew pages:

https://switchbrew.org/wiki/NRR

https://switchbrew.org/wiki/NRO
2020-05-31 18:49:51 +02:00
Neodyblue
ea14af2164 qt_themes: remove unknown qss property from dark theme
Qdarkstyle's qss file uses an overflow property.
According to `https://doc.qt.io/qt-5/stylesheet-reference.html`,
the property `overflow` doesn't exist, which leads to a warning message
in the console.
2020-05-21 17:51:53 -07:00
FearlessTobi
9f82a9a244 crypto: Make KeyManager a singleton class
Previously, we were reading the keys every time a KeyManager object was created, causing yuzu to re-read the keys file several hundred times when loading the game list.
With this change, it is only loaded once.
On my system, this decreased game list loading times by a factor of 20.
2020-05-20 21:28:16 +02:00
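A minimal sketch of the singleton shape adopted here (simplified; the real KeyManager also exposes key lookups and derivation): the keys are loaded the first time the instance is requested and never again.

// Illustrative Meyers singleton; not the actual yuzu class.
class KeyManagerSketch {
public:
    static KeyManagerSketch& Instance() {
        static KeyManagerSketch instance; // constructed, and keys loaded, exactly once
        return instance;
    }

    KeyManagerSketch(const KeyManagerSketch&) = delete;
    KeyManagerSketch& operator=(const KeyManagerSketch&) = delete;

private:
    KeyManagerSketch() {
        // Expensive key-file parsing would happen here, only on first use.
    }
};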
278 changed files with 10623 additions and 3609 deletions

3
.gitmodules vendored
View File

@@ -13,6 +13,9 @@
[submodule "soundtouch"]
path = externals/soundtouch
url = https://github.com/citra-emu/ext-soundtouch.git
[submodule "libressl"]
path = externals/libressl
url = https://github.com/citra-emu/ext-libressl-portable.git
[submodule "discord-rpc"]
path = externals/discord-rpc
url = https://github.com/discordapp/discord-rpc.git

View File

@@ -152,7 +152,6 @@ macro(yuzu_find_packages)
"Boost 1.71 boost/1.72.0"
"Catch2 2.11 catch2/2.11.0"
"fmt 6.2 fmt/6.2.0"
"OpenSSL 1.1 openssl/1.1.1f"
# can't use until https://github.com/bincrafters/community/issues/1173
#"libzip 1.5 libzip/1.5.2@bincrafters/stable"
"lz4 1.8 lz4/1.9.2"
@@ -215,6 +214,9 @@ if(ENABLE_QT)
set(QT_PREFIX_HINT HINTS "${QT_PREFIX}")
endif()
find_package(Qt5 5.9 COMPONENTS Widgets OpenGL ${QT_PREFIX_HINT})
if (YUZU_USE_QT_WEB_ENGINE)
find_package(Qt5 COMPONENTS WebEngineCore WebEngineWidgets)
endif()
if (NOT Qt5_FOUND)
list(APPEND CONAN_REQUIRED_LIBS "qt/5.14.1@bincrafters/stable")
endif()
@@ -312,15 +314,6 @@ elseif (TARGET Boost::boost)
add_library(boost ALIAS Boost::boost)
endif()
if (NOT TARGET OpenSSL::SSL)
set_target_properties(OpenSSL::OpenSSL PROPERTIES IMPORTED_GLOBAL TRUE)
add_library(OpenSSL::SSL ALIAS OpenSSL::OpenSSL)
endif()
if (NOT TARGET OpenSSL::Crypto)
set_target_properties(OpenSSL::OpenSSL PROPERTIES IMPORTED_GLOBAL TRUE)
add_library(OpenSSL::Crypto ALIAS OpenSSL::OpenSSL)
endif()
if (TARGET sdl2::sdl2)
# imported from the conan generated sdl2Config.cmake
set_target_properties(sdl2::sdl2 PROPERTIES IMPORTED_GLOBAL TRUE)

View File

@@ -51,6 +51,8 @@ endif()
# The variable SRC_DIR must be passed into the script (since it uses the current build directory for all values of CMAKE_*_DIR)
set(VIDEO_CORE "${SRC_DIR}/src/video_core")
set(HASH_FILES
"${VIDEO_CORE}/renderer_opengl/gl_arb_decompiler.cpp"
"${VIDEO_CORE}/renderer_opengl/gl_arb_decompiler.h"
"${VIDEO_CORE}/renderer_opengl/gl_shader_cache.cpp"
"${VIDEO_CORE}/renderer_opengl/gl_shader_cache.h"
"${VIDEO_CORE}/renderer_opengl/gl_shader_decompiler.cpp"

View File

@@ -673,10 +673,6 @@ QTabWidget::pane {
border-bottom-left-radius: 2px;
}
QTabWidget::tab-bar {
overflow: visible;
}
QTabBar {
qproperty-drawBase: 0;
border-radius: 3px;

80
dist/yuzu.manifest vendored
View File

@@ -1,24 +1,58 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
<trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
<security>
<requestedPrivileges>
<requestedExecutionLevel level="asInvoker" uiAccess="false"/>
</requestedPrivileges>
</security>
</trustInfo>
<application xmlns="urn:schemas-microsoft-com:asm.v3">
<windowsSettings>
<dpiAware xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">True/PM</dpiAware>
<longPathAware xmlns="http://schemas.microsoft.com/SMI/2016/WindowsSettings">true</longPathAware>
</windowsSettings>
</application>
<compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
<application>
<supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/>
<supportedOS Id="{4a2f28e3-53b9-4441-ba9c-d69d4a4a6e38}"/>
<supportedOS Id="{1f676c76-80e1-4239-95bb-83d0f6d0da78}"/>
<supportedOS Id="{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}"/>
</application>
</compatibility>
</assembly>
<assembly manifestVersion="1.0"
xmlns="urn:schemas-microsoft-com:asm.v1"
xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">
<asmv3:application>
<asmv3:windowsSettings>
<!-- Windows 7/8/8.1/10 -->
<dpiAware
xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">
true/pm
</dpiAware>
<!-- Windows 10, version 1607 or later -->
<dpiAwareness
xmlns="http://schemas.microsoft.com/SMI/2016/WindowsSettings">
PerMonitorV2
</dpiAwareness>
<!-- Windows 10, version 1703 or later -->
<gdiScaling
xmlns="http://schemas.microsoft.com/SMI/2017/WindowsSettings">
true
</gdiScaling>
<ws2:longPathAware
xmlns:ws3="http://schemas.microsoft.com/SMI/2016/WindowsSettings">
true
</ws2:longPathAware>
</asmv3:windowsSettings>
</asmv3:application>
<compatibility
xmlns="urn:schemas-microsoft-com:compatibility.v1">
<application>
<!-- Windows 10 -->
<supportedOS Id="{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}"/>
<!-- Windows 8.1 -->
<supportedOS Id="{1f676c76-80e1-4239-95bb-83d0f6d0da78}"/>
<!-- Windows 8 -->
<supportedOS Id="{4a2f28e3-53b9-4441-ba9c-d69d4a4a6e38}"/>
<!-- Windows 7 -->
<supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/>
</application>
</compatibility>
<trustInfo
xmlns="urn:schemas-microsoft-com:asm.v3">
<security>
<requestedPrivileges>
<!--
UAC settings:
- app should run at same integrity level as calling process
- app does not need to manipulate windows belonging to
higher-integrity-level processes
-->
<requestedExecutionLevel
level="asInvoker"
uiAccess="false"
/>
</requestedPrivileges>
</security>
</trustInfo>
</assembly>

View File

@@ -4,6 +4,13 @@ list(APPEND CMAKE_MODULE_PATH "${PROJECT_SOURCE_DIR}/CMakeModules")
list(APPEND CMAKE_MODULE_PATH "${PROJECT_SOURCE_DIR}/externals/find-modules")
include(DownloadExternals)
# xbyak
if (ARCHITECTURE_x86 OR ARCHITECTURE_x86_64)
add_library(xbyak INTERFACE)
target_include_directories(xbyak SYSTEM INTERFACE ./xbyak/xbyak)
target_compile_definitions(xbyak INTERFACE XBYAK_NO_OP_NAMES)
endif()
# Catch
add_library(catch-single-include INTERFACE)
target_include_directories(catch-single-include INTERFACE catch/single_include)
@@ -66,6 +73,15 @@ if (NOT LIBZIP_FOUND)
endif()
if (ENABLE_WEB_SERVICE)
# LibreSSL
set(LIBRESSL_SKIP_INSTALL ON CACHE BOOL "")
add_subdirectory(libressl EXCLUDE_FROM_ALL)
target_include_directories(ssl INTERFACE ./libressl/include)
target_compile_definitions(ssl PRIVATE -DHAVE_INET_NTOP)
get_directory_property(OPENSSL_LIBRARIES
DIRECTORY libressl
DEFINITION OPENSSL_LIBS)
# lurlparser
add_subdirectory(lurlparser EXCLUDE_FROM_ALL)
@@ -73,13 +89,5 @@ if (ENABLE_WEB_SERVICE)
add_library(httplib INTERFACE)
target_include_directories(httplib INTERFACE ./httplib)
target_compile_definitions(httplib INTERFACE -DCPPHTTPLIB_OPENSSL_SUPPORT)
target_link_libraries(httplib INTERFACE OpenSSL::SSL OpenSSL::Crypto)
endif()
if (NOT TARGET xbyak)
if (ARCHITECTURE_x86 OR ARCHITECTURE_x86_64)
add_library(xbyak INTERFACE)
target_include_directories(xbyak SYSTEM INTERFACE ./xbyak/xbyak)
target_compile_definitions(xbyak INTERFACE XBYAK_NO_OP_NAMES)
endif()
target_link_libraries(httplib INTERFACE ${OPENSSL_LIBRARIES})
endif()

1
externals/libressl vendored Submodule

Submodule externals/libressl added at 7d01cb01cb

View File

@@ -62,6 +62,10 @@ else()
-Wno-unused-parameter
)
if (ARCHITECTURE_x86_64)
add_compile_options("-mcx16")
endif()
if (APPLE AND CMAKE_CXX_COMPILER_ID STREQUAL Clang)
add_compile_options("-stdlib=libc++")
endif()

View File

@@ -180,11 +180,12 @@ ResultVal<std::vector<u8>> AudioRenderer::UpdateAudioRenderer(const std::vector<
// Copy output header
UpdateDataHeader response_data{worker_params};
std::vector<u8> output_params(response_data.total_size);
if (behavior_info.IsElapsedFrameCountSupported()) {
response_data.frame_count = 0x10;
response_data.total_size += 0x10;
response_data.render_info = sizeof(RendererInfo);
response_data.total_size += sizeof(RendererInfo);
}
std::vector<u8> output_params(response_data.total_size);
std::memcpy(output_params.data(), &response_data, sizeof(UpdateDataHeader));
// Copy output memory pool entries
@@ -219,6 +220,17 @@ ResultVal<std::vector<u8>> AudioRenderer::UpdateAudioRenderer(const std::vector<
return Audren::ERR_INVALID_PARAMETERS;
}
if (behavior_info.IsElapsedFrameCountSupported()) {
const std::size_t renderer_info_offset{
sizeof(UpdateDataHeader) + response_data.memory_pools_size + response_data.voices_size +
response_data.effects_size + response_data.sinks_size +
response_data.performance_manager_size + response_data.behavior_size};
RendererInfo renderer_info{};
renderer_info.elasped_frame_count = elapsed_frame_count;
std::memcpy(output_params.data() + renderer_info_offset, &renderer_info,
sizeof(RendererInfo));
}
return MakeResult(output_params);
}
@@ -447,6 +459,7 @@ void AudioRenderer::QueueMixedBuffer(Buffer::Tag tag) {
}
}
audio_out->QueueBuffer(stream, tag, std::move(buffer));
elapsed_frame_count++;
}
void AudioRenderer::ReleaseAndQueueBuffers() {

View File

@@ -196,6 +196,12 @@ struct EffectOutStatus {
};
static_assert(sizeof(EffectOutStatus) == 0x10, "EffectOutStatus is an invalid size");
struct RendererInfo {
u64_le elasped_frame_count{};
INSERT_PADDING_WORDS(2);
};
static_assert(sizeof(RendererInfo) == 0x10, "RendererInfo is an invalid size");
struct UpdateDataHeader {
UpdateDataHeader() {}
@@ -209,7 +215,7 @@ struct UpdateDataHeader {
mixes_size = 0x0;
sinks_size = config.sink_count * 0x20;
performance_manager_size = 0x10;
frame_count = 0;
render_info = 0;
total_size = sizeof(UpdateDataHeader) + behavior_size + memory_pools_size + voices_size +
effects_size + sinks_size + performance_manager_size;
}
@@ -223,8 +229,8 @@ struct UpdateDataHeader {
u32_le mixes_size{};
u32_le sinks_size{};
u32_le performance_manager_size{};
INSERT_PADDING_WORDS(1);
u32_le frame_count{};
u32_le splitter_size{};
u32_le render_info{};
INSERT_PADDING_WORDS(4);
u32_le total_size{};
};
@@ -258,6 +264,7 @@ private:
std::unique_ptr<AudioOut> audio_out;
StreamPtr stream;
Core::Memory::Memory& memory;
std::size_t elapsed_frame_count{};
};
} // namespace AudioCore

View File

@@ -59,15 +59,24 @@ Stream::State Stream::GetState() const {
return state;
}
s64 Stream::GetBufferReleaseCycles(const Buffer& buffer) const {
s64 Stream::GetBufferReleaseNS(const Buffer& buffer) const {
const std::size_t num_samples{buffer.GetSamples().size() / GetNumChannels()};
const auto us =
std::chrono::microseconds((static_cast<u64>(num_samples) * 1000000) / sample_rate);
return Core::Timing::usToCycles(us);
const auto ns =
std::chrono::nanoseconds((static_cast<u64>(num_samples) * 1000000000ULL) / sample_rate);
return ns.count();
}
s64 Stream::GetBufferReleaseNSHostTiming(const Buffer& buffer) const {
const std::size_t num_samples{buffer.GetSamples().size() / GetNumChannels()};
/// The DSP signals before playing the last sample; in HLE we emulate that behavior here
s64 base_samples = std::max<s64>(static_cast<s64>(num_samples) - 1, 0);
const auto ns =
std::chrono::nanoseconds((static_cast<u64>(base_samples) * 1000000000ULL) / sample_rate);
return ns.count();
}
static void VolumeAdjustSamples(std::vector<s16>& samples, float game_volume) {
const float volume{std::clamp(Settings::values.volume - (1.0f - game_volume), 0.0f, 1.0f)};
const float volume{std::clamp(Settings::Volume() - (1.0f - game_volume), 0.0f, 1.0f)};
if (volume == 1.0f) {
return;
@@ -105,7 +114,11 @@ void Stream::PlayNextBuffer() {
sink_stream.EnqueueSamples(GetNumChannels(), active_buffer->GetSamples());
core_timing.ScheduleEvent(GetBufferReleaseCycles(*active_buffer), release_event, {});
if (core_timing.IsHostTiming()) {
core_timing.ScheduleEvent(GetBufferReleaseNSHostTiming(*active_buffer), release_event, {});
} else {
core_timing.ScheduleEvent(GetBufferReleaseNS(*active_buffer), release_event, {});
}
}
void Stream::ReleaseActiveBuffer() {

View File

@@ -96,7 +96,10 @@ private:
void ReleaseActiveBuffer();
/// Gets the number of core cycles when the specified buffer will be released
s64 GetBufferReleaseCycles(const Buffer& buffer) const;
s64 GetBufferReleaseNS(const Buffer& buffer) const;
/// Gets the number of core cycles when the specified buffer will be released
s64 GetBufferReleaseNSHostTiming(const Buffer& buffer) const;
u32 sample_rate; ///< Sample rate of the stream
Format format; ///< Format of the stream

View File

@@ -32,6 +32,8 @@ add_custom_command(OUTPUT scm_rev.cpp
DEPENDS
# WARNING! It was too much work to try and make a common location for this list,
# so if you need to change it, please update CMakeModules/GenerateSCMRev.cmake as well
"${VIDEO_CORE}/renderer_opengl/gl_arb_decompiler.cpp"
"${VIDEO_CORE}/renderer_opengl/gl_arb_decompiler.h"
"${VIDEO_CORE}/renderer_opengl/gl_shader_cache.cpp"
"${VIDEO_CORE}/renderer_opengl/gl_shader_cache.h"
"${VIDEO_CORE}/renderer_opengl/gl_shader_decompiler.cpp"
@@ -96,6 +98,8 @@ add_library(common STATIC
algorithm.h
alignment.h
assert.h
atomic_ops.cpp
atomic_ops.h
detached_tasks.cpp
detached_tasks.h
bit_field.h
@@ -108,6 +112,8 @@ add_library(common STATIC
common_types.h
dynamic_library.cpp
dynamic_library.h
fiber.cpp
fiber.h
file_util.cpp
file_util.h
hash.h
@@ -141,6 +147,8 @@ add_library(common STATIC
scm_rev.cpp
scm_rev.h
scope_exit.h
spin_lock.cpp
spin_lock.h
string_util.cpp
string_util.h
swap.h
@@ -161,6 +169,8 @@ add_library(common STATIC
vector_math.h
virtual_buffer.cpp
virtual_buffer.h
wall_clock.cpp
wall_clock.h
web_result.h
zstd_compression.cpp
zstd_compression.h
@@ -171,12 +181,15 @@ if(ARCHITECTURE_x86_64)
PRIVATE
x64/cpu_detect.cpp
x64/cpu_detect.h
x64/native_clock.cpp
x64/native_clock.h
x64/xbyak_abi.h
x64/xbyak_util.h
)
endif()
create_target_directory_groups(common)
find_package(Boost 1.71 COMPONENTS context headers REQUIRED)
target_link_libraries(common PUBLIC Boost::boost fmt::fmt microprofile)
target_link_libraries(common PUBLIC ${Boost_LIBRARIES} fmt::fmt microprofile)
target_link_libraries(common PRIVATE lz4::lz4 zstd::zstd xbyak)

View File

@@ -28,22 +28,19 @@ __declspec(noinline, noreturn)
}
#define ASSERT(_a_) \
do \
if (!(_a_)) { \
assert_noinline_call([] { LOG_CRITICAL(Debug, "Assertion Failed!"); }); \
} \
while (0)
if (!(_a_)) { \
LOG_CRITICAL(Debug, "Assertion Failed!"); \
}
#define ASSERT_MSG(_a_, ...) \
do \
if (!(_a_)) { \
assert_noinline_call([&] { LOG_CRITICAL(Debug, "Assertion Failed!\n" __VA_ARGS__); }); \
} \
while (0)
if (!(_a_)) { \
LOG_CRITICAL(Debug, "Assertion Failed! " __VA_ARGS__); \
}
#define UNREACHABLE() assert_noinline_call([] { LOG_CRITICAL(Debug, "Unreachable code!"); })
#define UNREACHABLE() \
{ LOG_CRITICAL(Debug, "Unreachable code!"); }
#define UNREACHABLE_MSG(...) \
assert_noinline_call([&] { LOG_CRITICAL(Debug, "Unreachable code!\n" __VA_ARGS__); })
{ LOG_CRITICAL(Debug, "Unreachable code!\n" __VA_ARGS__); }
#ifdef _DEBUG
#define DEBUG_ASSERT(_a_) ASSERT(_a_)

70
src/common/atomic_ops.cpp Normal file
View File

@@ -0,0 +1,70 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <cstring>
#include "common/atomic_ops.h"
#if _MSC_VER
#include <intrin.h>
#endif
namespace Common {
#if _MSC_VER
bool AtomicCompareAndSwap(u8 volatile* pointer, u8 value, u8 expected) {
u8 result = _InterlockedCompareExchange8((char*)pointer, value, expected);
return result == expected;
}
bool AtomicCompareAndSwap(u16 volatile* pointer, u16 value, u16 expected) {
u16 result = _InterlockedCompareExchange16((short*)pointer, value, expected);
return result == expected;
}
bool AtomicCompareAndSwap(u32 volatile* pointer, u32 value, u32 expected) {
u32 result = _InterlockedCompareExchange((long*)pointer, value, expected);
return result == expected;
}
bool AtomicCompareAndSwap(u64 volatile* pointer, u64 value, u64 expected) {
u64 result = _InterlockedCompareExchange64((__int64*)pointer, value, expected);
return result == expected;
}
bool AtomicCompareAndSwap(u64 volatile* pointer, u128 value, u128 expected) {
return _InterlockedCompareExchange128((__int64*)pointer, value[1], value[0],
(__int64*)expected.data()) != 0;
}
#else
bool AtomicCompareAndSwap(u8 volatile* pointer, u8 value, u8 expected) {
return __sync_bool_compare_and_swap(pointer, expected, value);
}
bool AtomicCompareAndSwap(u16 volatile* pointer, u16 value, u16 expected) {
return __sync_bool_compare_and_swap(pointer, expected, value);
}
bool AtomicCompareAndSwap(u32 volatile* pointer, u32 value, u32 expected) {
return __sync_bool_compare_and_swap(pointer, expected, value);
}
bool AtomicCompareAndSwap(u64 volatile* pointer, u64 value, u64 expected) {
return __sync_bool_compare_and_swap(pointer, expected, value);
}
bool AtomicCompareAndSwap(u64 volatile* pointer, u128 value, u128 expected) {
unsigned __int128 value_a;
unsigned __int128 expected_a;
std::memcpy(&value_a, value.data(), sizeof(u128));
std::memcpy(&expected_a, expected.data(), sizeof(u128));
return __sync_bool_compare_and_swap((unsigned __int128*)pointer, expected_a, value_a);
}
#endif
} // namespace Common

17
src/common/atomic_ops.h Normal file
View File

@@ -0,0 +1,17 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "common/common_types.h"
namespace Common {
bool AtomicCompareAndSwap(u8 volatile* pointer, u8 value, u8 expected);
bool AtomicCompareAndSwap(u16 volatile* pointer, u16 value, u16 expected);
bool AtomicCompareAndSwap(u32 volatile* pointer, u32 value, u32 expected);
bool AtomicCompareAndSwap(u64 volatile* pointer, u64 value, u64 expected);
bool AtomicCompareAndSwap(u64 volatile* pointer, u128 value, u128 expected);
} // namespace Common
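As a usage illustration (not part of this diff), AtomicCompareAndSwap returns whether the swap took place, so a typical caller retries until it wins the race:

#include "common/atomic_ops.h"
#include "common/common_types.h"

// Sketch of a lock-free increment built on the helper declared above.
void AtomicIncrement(volatile u32* counter) {
    u32 expected;
    do {
        expected = *counter;
    } while (!Common::AtomicCompareAndSwap(counter, expected + 1, expected));
}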

222
src/common/fiber.cpp Normal file
View File

@@ -0,0 +1,222 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/assert.h"
#include "common/fiber.h"
#if defined(_WIN32) || defined(WIN32)
#include <windows.h>
#else
#include <boost/context/detail/fcontext.hpp>
#endif
namespace Common {
constexpr std::size_t default_stack_size = 256 * 1024; // 256kb
#if defined(_WIN32) || defined(WIN32)
struct Fiber::FiberImpl {
LPVOID handle = nullptr;
LPVOID rewind_handle = nullptr;
};
void Fiber::Start() {
ASSERT(previous_fiber != nullptr);
previous_fiber->guard.unlock();
previous_fiber.reset();
entry_point(start_parameter);
UNREACHABLE();
}
void Fiber::OnRewind() {
ASSERT(impl->handle != nullptr);
DeleteFiber(impl->handle);
impl->handle = impl->rewind_handle;
impl->rewind_handle = nullptr;
rewind_point(rewind_parameter);
UNREACHABLE();
}
void Fiber::FiberStartFunc(void* fiber_parameter) {
auto fiber = static_cast<Fiber*>(fiber_parameter);
fiber->Start();
}
void Fiber::RewindStartFunc(void* fiber_parameter) {
auto fiber = static_cast<Fiber*>(fiber_parameter);
fiber->OnRewind();
}
Fiber::Fiber(std::function<void(void*)>&& entry_point_func, void* start_parameter)
: entry_point{std::move(entry_point_func)}, start_parameter{start_parameter} {
impl = std::make_unique<FiberImpl>();
impl->handle = CreateFiber(default_stack_size, &FiberStartFunc, this);
}
Fiber::Fiber() : impl{std::make_unique<FiberImpl>()} {}
Fiber::~Fiber() {
if (released) {
return;
}
// Make sure the Fiber is not being used
const bool locked = guard.try_lock();
ASSERT_MSG(locked, "Destroying a fiber that's still running");
if (locked) {
guard.unlock();
}
DeleteFiber(impl->handle);
}
void Fiber::Exit() {
ASSERT_MSG(is_thread_fiber, "Exitting non main thread fiber");
if (!is_thread_fiber) {
return;
}
ConvertFiberToThread();
guard.unlock();
released = true;
}
void Fiber::SetRewindPoint(std::function<void(void*)>&& rewind_func, void* start_parameter) {
rewind_point = std::move(rewind_func);
rewind_parameter = start_parameter;
}
void Fiber::Rewind() {
ASSERT(rewind_point);
ASSERT(impl->rewind_handle == nullptr);
impl->rewind_handle = CreateFiber(default_stack_size, &RewindStartFunc, this);
SwitchToFiber(impl->rewind_handle);
}
void Fiber::YieldTo(std::shared_ptr<Fiber>& from, std::shared_ptr<Fiber>& to) {
ASSERT_MSG(from != nullptr, "Yielding fiber is null!");
ASSERT_MSG(to != nullptr, "Next fiber is null!");
to->guard.lock();
to->previous_fiber = from;
SwitchToFiber(to->impl->handle);
ASSERT(from->previous_fiber != nullptr);
from->previous_fiber->guard.unlock();
from->previous_fiber.reset();
}
std::shared_ptr<Fiber> Fiber::ThreadToFiber() {
std::shared_ptr<Fiber> fiber = std::shared_ptr<Fiber>{new Fiber()};
fiber->guard.lock();
fiber->impl->handle = ConvertThreadToFiber(nullptr);
fiber->is_thread_fiber = true;
return fiber;
}
#else
struct Fiber::FiberImpl {
alignas(64) std::array<u8, default_stack_size> stack;
alignas(64) std::array<u8, default_stack_size> rewind_stack;
u8* stack_limit;
u8* rewind_stack_limit;
boost::context::detail::fcontext_t context;
boost::context::detail::fcontext_t rewind_context;
};
void Fiber::Start(boost::context::detail::transfer_t& transfer) {
ASSERT(previous_fiber != nullptr);
previous_fiber->impl->context = transfer.fctx;
previous_fiber->guard.unlock();
previous_fiber.reset();
entry_point(start_parameter);
UNREACHABLE();
}
void Fiber::OnRewind([[maybe_unused]] boost::context::detail::transfer_t& transfer) {
ASSERT(impl->context != nullptr);
impl->context = impl->rewind_context;
impl->rewind_context = nullptr;
u8* tmp = impl->stack_limit;
impl->stack_limit = impl->rewind_stack_limit;
impl->rewind_stack_limit = tmp;
rewind_point(rewind_parameter);
UNREACHABLE();
}
void Fiber::FiberStartFunc(boost::context::detail::transfer_t transfer) {
auto fiber = static_cast<Fiber*>(transfer.data);
fiber->Start(transfer);
}
void Fiber::RewindStartFunc(boost::context::detail::transfer_t transfer) {
auto fiber = static_cast<Fiber*>(transfer.data);
fiber->OnRewind(transfer);
}
Fiber::Fiber(std::function<void(void*)>&& entry_point_func, void* start_parameter)
: entry_point{std::move(entry_point_func)}, start_parameter{start_parameter} {
impl = std::make_unique<FiberImpl>();
impl->stack_limit = impl->stack.data();
impl->rewind_stack_limit = impl->rewind_stack.data();
u8* stack_base = impl->stack_limit + default_stack_size;
impl->context =
boost::context::detail::make_fcontext(stack_base, impl->stack.size(), FiberStartFunc);
}
void Fiber::SetRewindPoint(std::function<void(void*)>&& rewind_func, void* start_parameter) {
rewind_point = std::move(rewind_func);
rewind_parameter = start_parameter;
}
Fiber::Fiber() : impl{std::make_unique<FiberImpl>()} {}
Fiber::~Fiber() {
if (released) {
return;
}
// Make sure the Fiber is not being used
const bool locked = guard.try_lock();
ASSERT_MSG(locked, "Destroying a fiber that's still running");
if (locked) {
guard.unlock();
}
}
void Fiber::Exit() {
ASSERT_MSG(is_thread_fiber, "Exitting non main thread fiber");
if (!is_thread_fiber) {
return;
}
guard.unlock();
released = true;
}
void Fiber::Rewind() {
ASSERT(rewind_point);
ASSERT(impl->rewind_context == nullptr);
u8* stack_base = impl->rewind_stack_limit + default_stack_size;
impl->rewind_context =
boost::context::detail::make_fcontext(stack_base, impl->stack.size(), RewindStartFunc);
boost::context::detail::jump_fcontext(impl->rewind_context, this);
}
void Fiber::YieldTo(std::shared_ptr<Fiber>& from, std::shared_ptr<Fiber>& to) {
ASSERT_MSG(from != nullptr, "Yielding fiber is null!");
ASSERT_MSG(to != nullptr, "Next fiber is null!");
to->guard.lock();
to->previous_fiber = from;
auto transfer = boost::context::detail::jump_fcontext(to->impl->context, to.get());
ASSERT(from->previous_fiber != nullptr);
from->previous_fiber->impl->context = transfer.fctx;
from->previous_fiber->guard.unlock();
from->previous_fiber.reset();
}
std::shared_ptr<Fiber> Fiber::ThreadToFiber() {
std::shared_ptr<Fiber> fiber = std::shared_ptr<Fiber>{new Fiber()};
fiber->guard.lock();
fiber->is_thread_fiber = true;
return fiber;
}
#endif
} // namespace Common

92
src/common/fiber.h Normal file
View File

@@ -0,0 +1,92 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <functional>
#include <memory>
#include "common/common_types.h"
#include "common/spin_lock.h"
#if !defined(_WIN32) && !defined(WIN32)
namespace boost::context::detail {
struct transfer_t;
}
#endif
namespace Common {
/**
* Fiber class
* A fiber is a userspace thread with its own context. Fibers can be used to
* implement coroutines, emulated threading systems and certain asynchronous
* patterns.
*
* This class implements fibers at a low level, thus allowing greater freedom
* to implement such patterns. This fiber class is 'threadsafe' in the sense that
* only one fiber can be running at a time; threads trying to yield to a running
* fiber will be locked until it yields. WARNING: exchanging two running fibers
* between threads will cause a deadlock. In order to prevent a deadlock, each
* thread should have an intermediary fiber: you switch to the intermediary fiber
* of the current thread and then from it switch to the expected fiber. This way
* you can exchange 2 fibers within 2 different threads.
*/
class Fiber {
public:
Fiber(std::function<void(void*)>&& entry_point_func, void* start_parameter);
~Fiber();
Fiber(const Fiber&) = delete;
Fiber& operator=(const Fiber&) = delete;
Fiber(Fiber&&) = default;
Fiber& operator=(Fiber&&) = default;
/// Yields control from Fiber 'from' to Fiber 'to'
/// Fiber 'from' must be the currently running fiber.
static void YieldTo(std::shared_ptr<Fiber>& from, std::shared_ptr<Fiber>& to);
static std::shared_ptr<Fiber> ThreadToFiber();
void SetRewindPoint(std::function<void(void*)>&& rewind_func, void* start_parameter);
void Rewind();
/// Only call from main thread's fiber
void Exit();
/// Changes the start parameter of the fiber. Has no effect if the fiber already started
void SetStartParameter(void* new_parameter) {
start_parameter = new_parameter;
}
private:
Fiber();
#if defined(_WIN32) || defined(WIN32)
void OnRewind();
void Start();
static void FiberStartFunc(void* fiber_parameter);
static void RewindStartFunc(void* fiber_parameter);
#else
void OnRewind(boost::context::detail::transfer_t& transfer);
void Start(boost::context::detail::transfer_t& transfer);
static void FiberStartFunc(boost::context::detail::transfer_t transfer);
static void RewindStartFunc(boost::context::detail::transfer_t transfer);
#endif
struct FiberImpl;
SpinLock guard{};
std::function<void(void*)> entry_point;
std::function<void(void*)> rewind_point;
void* rewind_parameter{};
void* start_parameter{};
std::shared_ptr<Fiber> previous_fiber;
std::unique_ptr<FiberImpl> impl;
bool is_thread_fiber{};
bool released{};
};
} // namespace Common
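A usage sketch (not part of this diff) of the intended pattern: the host thread converts itself into a fiber, yields to a work fiber, and the work fiber yields back instead of returning from its entry point.

#include <cstdio>
#include <memory>
#include "common/fiber.h"

// Sketch only; yuzu's CPU manager drives its fibers in a similar but more involved way.
static std::shared_ptr<Common::Fiber> thread_fiber;
static std::shared_ptr<Common::Fiber> work_fiber;

static void WorkEntry(void*) {
    std::puts("running inside the work fiber");
    // A fiber must yield away; returning from the entry point is not allowed.
    Common::Fiber::YieldTo(work_fiber, thread_fiber);
}

void RunOnce() {
    thread_fiber = Common::Fiber::ThreadToFiber();
    work_fiber = std::make_shared<Common::Fiber>(WorkEntry, nullptr);
    Common::Fiber::YieldTo(thread_fiber, work_fiber); // runs WorkEntry until it yields back
    thread_fiber->Exit();
}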

View File

@@ -9,10 +9,12 @@
// clang-format on
#else
#include <sys/types.h>
#ifdef __APPLE__
#if defined(__APPLE__) || defined(__FreeBSD__)
#include <sys/sysctl.h>
#else
#elif defined(__linux__)
#include <sys/sysinfo.h>
#else
#include <unistd.h>
#endif
#endif
@@ -38,15 +40,26 @@ static MemoryInfo Detect() {
// hw and vm are defined in sysctl.h
// https://github.com/apple/darwin-xnu/blob/master/bsd/sys/sysctl.h#L471
// sysctlbyname(const char *, void *, size_t *, void *, size_t);
sysctlbyname("hw.memsize", &ramsize, &sizeof_ramsize, NULL, 0);
sysctlbyname("vm.swapusage", &vmusage, &sizeof_vmusage, NULL, 0);
sysctlbyname("hw.memsize", &ramsize, &sizeof_ramsize, nullptr, 0);
sysctlbyname("vm.swapusage", &vmusage, &sizeof_vmusage, nullptr, 0);
mem_info.TotalPhysicalMemory = ramsize;
mem_info.TotalSwapMemory = vmusage.xsu_total;
#else
#elif defined(__FreeBSD__)
u_long physmem, swap_total;
std::size_t sizeof_u_long = sizeof(u_long);
// sysctlbyname(const char *, void *, size_t *, const void *, size_t);
sysctlbyname("hw.physmem", &physmem, &sizeof_u_long, nullptr, 0);
sysctlbyname("vm.swap_total", &swap_total, &sizeof_u_long, nullptr, 0);
mem_info.TotalPhysicalMemory = physmem;
mem_info.TotalSwapMemory = swap_total;
#elif defined(__linux__)
struct sysinfo meminfo;
sysinfo(&meminfo);
mem_info.TotalPhysicalMemory = meminfo.totalram;
mem_info.TotalSwapMemory = meminfo.totalswap;
#else
mem_info.TotalPhysicalMemory = sysconf(_SC_PHYS_PAGES) * sysconf(_SC_PAGE_SIZE);
mem_info.TotalSwapMemory = 0;
#endif
return mem_info;

54
src/common/spin_lock.cpp Normal file
View File

@@ -0,0 +1,54 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/spin_lock.h"
#if _MSC_VER
#include <intrin.h>
#if _M_AMD64
#define __x86_64__ 1
#endif
#if _M_ARM64
#define __aarch64__ 1
#endif
#else
#if __x86_64__
#include <xmmintrin.h>
#endif
#endif
namespace {
void ThreadPause() {
#if __x86_64__
_mm_pause();
#elif __aarch64__ && _MSC_VER
__yield();
#elif __aarch64__
asm("yield");
#endif
}
} // Anonymous namespace
namespace Common {
void SpinLock::lock() {
while (lck.test_and_set(std::memory_order_acquire)) {
ThreadPause();
}
}
void SpinLock::unlock() {
lck.clear(std::memory_order_release);
}
bool SpinLock::try_lock() {
if (lck.test_and_set(std::memory_order_acquire)) {
return false;
}
return true;
}
} // namespace Common

26
src/common/spin_lock.h Normal file
View File

@@ -0,0 +1,26 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <atomic>
namespace Common {
/**
* SpinLock class
* a lock, similar to a mutex, that forces a thread to spin-wait instead of calling the
* supervisor. Should only be used for short sequences of code.
*/
class SpinLock {
public:
void lock();
void unlock();
bool try_lock();
private:
std::atomic_flag lck = ATOMIC_FLAG_INIT;
};
} // namespace Common
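As a usage note (not part of this diff), SpinLock satisfies the standard Lockable requirements, so it can be combined with the usual RAII guards; it is only appropriate for very short critical sections:

#include <mutex>
#include "common/spin_lock.h"

// Sketch only: guard a tiny critical section with the new SpinLock.
Common::SpinLock counter_lock;
int counter = 0;

void Increment() {
    std::scoped_lock lock{counter_lock}; // calls lock()/unlock() on the SpinLock
    ++counter;
}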

View File

@@ -60,6 +60,7 @@ void AppendCPUInfo(FieldCollection& fc) {
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_AES", Common::GetCPUCaps().aes);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_AVX", Common::GetCPUCaps().avx);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_AVX2", Common::GetCPUCaps().avx2);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_AVX512", Common::GetCPUCaps().avx512);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_BMI1", Common::GetCPUCaps().bmi1);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_BMI2", Common::GetCPUCaps().bmi2);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_FMA", Common::GetCPUCaps().fma);

View File

@@ -25,6 +25,52 @@
namespace Common {
#ifdef _WIN32
void SetCurrentThreadPriority(ThreadPriority new_priority) {
auto handle = GetCurrentThread();
int windows_priority = 0;
switch (new_priority) {
case ThreadPriority::Low:
windows_priority = THREAD_PRIORITY_BELOW_NORMAL;
break;
case ThreadPriority::Normal:
windows_priority = THREAD_PRIORITY_NORMAL;
break;
case ThreadPriority::High:
windows_priority = THREAD_PRIORITY_ABOVE_NORMAL;
break;
case ThreadPriority::VeryHigh:
windows_priority = THREAD_PRIORITY_HIGHEST;
break;
default:
windows_priority = THREAD_PRIORITY_NORMAL;
break;
}
SetThreadPriority(handle, windows_priority);
}
#else
void SetCurrentThreadPriority(ThreadPriority new_priority) {
pthread_t this_thread = pthread_self();
s32 max_prio = sched_get_priority_max(SCHED_OTHER);
s32 min_prio = sched_get_priority_min(SCHED_OTHER);
u32 level = static_cast<u32>(new_priority) + 1;
struct sched_param params;
if (max_prio > min_prio) {
params.sched_priority = min_prio + ((max_prio - min_prio) * level) / 4;
} else {
params.sched_priority = min_prio - ((min_prio - max_prio) * level) / 4;
}
pthread_setschedparam(this_thread, SCHED_OTHER, &params);
}
#endif
#ifdef _MSC_VER
// Sets the debugger-visible name of the current thread.
@@ -70,6 +116,12 @@ void SetCurrentThreadName(const char* name) {
}
#endif
#if defined(_WIN32)
void SetCurrentThreadName(const char* name) {
// Do Nothing on MingW
}
#endif
#endif
} // namespace Common

View File

@@ -9,6 +9,7 @@
#include <cstddef>
#include <mutex>
#include <thread>
#include "common/common_types.h"
namespace Common {
@@ -28,8 +29,7 @@ public:
is_set = false;
}
template <class Duration>
bool WaitFor(const std::chrono::duration<Duration>& time) {
bool WaitFor(const std::chrono::nanoseconds& time) {
std::unique_lock lk{mutex};
if (!condvar.wait_for(lk, time, [this] { return is_set; }))
return false;
@@ -86,6 +86,15 @@ private:
std::size_t generation = 0; // Incremented once each time the barrier is used
};
enum class ThreadPriority : u32 {
Low = 0,
Normal = 1,
High = 2,
VeryHigh = 3,
};
void SetCurrentThreadPriority(ThreadPriority new_priority);
void SetCurrentThreadName(const char* name);
} // namespace Common

View File

@@ -6,12 +6,38 @@
#include <intrin.h>
#pragma intrinsic(_umul128)
#pragma intrinsic(_udiv128)
#endif
#include <cstring>
#include "common/uint128.h"
namespace Common {
#ifdef _MSC_VER
u64 MultiplyAndDivide64(u64 a, u64 b, u64 d) {
u128 r{};
r[0] = _umul128(a, b, &r[1]);
u64 remainder;
#if _MSC_VER < 1923
return udiv128(r[1], r[0], d, &remainder);
#else
return _udiv128(r[1], r[0], d, &remainder);
#endif
}
#else
u64 MultiplyAndDivide64(u64 a, u64 b, u64 d) {
const u64 diva = a / d;
const u64 moda = a % d;
const u64 divb = b / d;
const u64 modb = b % d;
return diva * b + moda * divb + moda * modb / d;
}
#endif
u128 Multiply64Into128(u64 a, u64 b) {
u128 result;
#ifdef _MSC_VER

View File

@@ -9,6 +9,9 @@
namespace Common {
// This function multiplies 2 u64 values and divides it by a u64 value.
u64 MultiplyAndDivide64(u64 a, u64 b, u64 d);
// This function multiplies 2 u64 values and produces a u128 value;
u128 Multiply64Into128(u64 a, u64 b);
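As a side note (not part of either hunk), the portable fallback for MultiplyAndDivide64 in the previous file follows from writing a = qa*d + ra and b = qb*d + rb (with qa = a / d, ra = a % d, and likewise for b): floor(a*b / d) = qa*b + ra*qb + floor(ra*rb / d), which is exactly the diva * b + moda * divb + moda * modb / d expression used there; it is exact as long as the intermediate products fit in 64 bits.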

91
src/common/wall_clock.cpp Normal file
View File

@@ -0,0 +1,91 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/uint128.h"
#include "common/wall_clock.h"
#ifdef ARCHITECTURE_x86_64
#include "common/x64/cpu_detect.h"
#include "common/x64/native_clock.h"
#endif
namespace Common {
using base_timer = std::chrono::steady_clock;
using base_time_point = std::chrono::time_point<base_timer>;
class StandardWallClock : public WallClock {
public:
StandardWallClock(u64 emulated_cpu_frequency, u64 emulated_clock_frequency)
: WallClock(emulated_cpu_frequency, emulated_clock_frequency, false) {
start_time = base_timer::now();
}
std::chrono::nanoseconds GetTimeNS() override {
base_time_point current = base_timer::now();
auto elapsed = current - start_time;
return std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed);
}
std::chrono::microseconds GetTimeUS() override {
base_time_point current = base_timer::now();
auto elapsed = current - start_time;
return std::chrono::duration_cast<std::chrono::microseconds>(elapsed);
}
std::chrono::milliseconds GetTimeMS() override {
base_time_point current = base_timer::now();
auto elapsed = current - start_time;
return std::chrono::duration_cast<std::chrono::milliseconds>(elapsed);
}
u64 GetClockCycles() override {
std::chrono::nanoseconds time_now = GetTimeNS();
const u128 temporary =
Common::Multiply64Into128(time_now.count(), emulated_clock_frequency);
return Common::Divide128On32(temporary, 1000000000).first;
}
u64 GetCPUCycles() override {
std::chrono::nanoseconds time_now = GetTimeNS();
const u128 temporary = Common::Multiply64Into128(time_now.count(), emulated_cpu_frequency);
return Common::Divide128On32(temporary, 1000000000).first;
}
void Pause(bool is_paused) override {
// Do nothing in this clock type.
}
private:
base_time_point start_time;
};
#ifdef ARCHITECTURE_x86_64
std::unique_ptr<WallClock> CreateBestMatchingClock(u32 emulated_cpu_frequency,
u32 emulated_clock_frequency) {
const auto& caps = GetCPUCaps();
u64 rtsc_frequency = 0;
if (caps.invariant_tsc) {
rtsc_frequency = EstimateRDTSCFrequency();
}
if (rtsc_frequency == 0) {
return std::make_unique<StandardWallClock>(emulated_cpu_frequency,
emulated_clock_frequency);
} else {
return std::make_unique<X64::NativeClock>(emulated_cpu_frequency, emulated_clock_frequency,
rtsc_frequency);
}
}
#else
std::unique_ptr<WallClock> CreateBestMatchingClock(u32 emulated_cpu_frequency,
u32 emulated_clock_frequency) {
return std::make_unique<StandardWallClock>(emulated_cpu_frequency, emulated_clock_frequency);
}
#endif
} // namespace Common

53
src/common/wall_clock.h Normal file
View File

@@ -0,0 +1,53 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <chrono>
#include <memory>
#include "common/common_types.h"
namespace Common {
class WallClock {
public:
/// Returns current wall time in nanoseconds
virtual std::chrono::nanoseconds GetTimeNS() = 0;
/// Returns current wall time in microseconds
virtual std::chrono::microseconds GetTimeUS() = 0;
/// Returns current wall time in milliseconds
virtual std::chrono::milliseconds GetTimeMS() = 0;
/// Returns current wall time in emulated clock cycles
virtual u64 GetClockCycles() = 0;
/// Returns current wall time in emulated cpu cycles
virtual u64 GetCPUCycles() = 0;
virtual void Pause(bool is_paused) = 0;
/// Tells whether the wall clock uses the host CPU's hardware clock
bool IsNative() const {
return is_native;
}
protected:
WallClock(u64 emulated_cpu_frequency, u64 emulated_clock_frequency, bool is_native)
: emulated_cpu_frequency{emulated_cpu_frequency},
emulated_clock_frequency{emulated_clock_frequency}, is_native{is_native} {}
u64 emulated_cpu_frequency;
u64 emulated_clock_frequency;
private:
bool is_native;
};
std::unique_ptr<WallClock> CreateBestMatchingClock(u32 emulated_cpu_frequency,
u32 emulated_clock_frequency);
} // namespace Common
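A brief usage sketch (not from this diff): callers ask the factory for the best clock for the emulated frequencies, and it silently falls back to the chrono-based StandardWallClock when no invariant TSC is available. The frequency values below are assumptions for illustration only.

#include <memory>
#include "common/wall_clock.h"

void ClockExample() {
    constexpr u32 cpu_frequency = 1'020'000'000;  // emulated CPU ticks per second (assumed)
    constexpr u32 counter_frequency = 19'200'000; // emulated counter frequency (assumed)
    const auto clock = Common::CreateBestMatchingClock(cpu_frequency, counter_frequency);
    const auto elapsed_ns = clock->GetTimeNS(); // host wall time since the clock was created
    const u64 guest_ticks = clock->GetClockCycles();
    (void)elapsed_ns;
    (void)guest_ticks;
}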

View File

@@ -62,6 +62,17 @@ static CPUCaps Detect() {
std::memcpy(&caps.brand_string[0], &cpu_id[1], sizeof(int));
std::memcpy(&caps.brand_string[4], &cpu_id[3], sizeof(int));
std::memcpy(&caps.brand_string[8], &cpu_id[2], sizeof(int));
if (cpu_id[1] == 0x756e6547 && cpu_id[2] == 0x6c65746e && cpu_id[3] == 0x49656e69)
caps.manufacturer = Manufacturer::Intel;
else if (cpu_id[1] == 0x68747541 && cpu_id[2] == 0x444d4163 && cpu_id[3] == 0x69746e65)
caps.manufacturer = Manufacturer::AMD;
else if (cpu_id[1] == 0x6f677948 && cpu_id[2] == 0x656e6975 && cpu_id[3] == 0x6e65476e)
caps.manufacturer = Manufacturer::Hygon;
else
caps.manufacturer = Manufacturer::Unknown;
u32 family = {};
u32 model = {};
__cpuid(cpu_id, 0x80000000);
@@ -73,6 +84,14 @@ static CPUCaps Detect() {
// Detect family and other miscellaneous features
if (max_std_fn >= 1) {
__cpuid(cpu_id, 0x00000001);
family = (cpu_id[0] >> 8) & 0xf;
model = (cpu_id[0] >> 4) & 0xf;
if (family == 0xf) {
family += (cpu_id[0] >> 20) & 0xff;
}
if (family >= 6) {
model += ((cpu_id[0] >> 16) & 0xf) << 4;
}
if ((cpu_id[3] >> 25) & 1)
caps.sse = true;
@@ -110,6 +129,11 @@ static CPUCaps Detect() {
caps.bmi1 = true;
if ((cpu_id[1] >> 8) & 1)
caps.bmi2 = true;
// Checks for AVX512F, AVX512CD, AVX512VL, AVX512DQ, AVX512BW (Intel Skylake-X/SP)
if ((cpu_id[1] >> 16) & 1 && (cpu_id[1] >> 28) & 1 && (cpu_id[1] >> 31) & 1 &&
(cpu_id[1] >> 17) & 1 && (cpu_id[1] >> 30) & 1) {
caps.avx512 = caps.avx2;
}
}
}
@@ -130,6 +154,20 @@ static CPUCaps Detect() {
caps.fma4 = true;
}
if (max_ex_fn >= 0x80000007) {
__cpuid(cpu_id, 0x80000007);
if (cpu_id[3] & (1 << 8)) {
caps.invariant_tsc = true;
}
}
if (max_std_fn >= 0x16) {
__cpuid(cpu_id, 0x16);
caps.base_frequency = cpu_id[0];
caps.max_frequency = cpu_id[1];
caps.bus_frequency = cpu_id[2];
}
return caps;
}

View File

@@ -6,8 +6,16 @@
namespace Common {
enum class Manufacturer : u32 {
Intel = 0,
AMD = 1,
Hygon = 2,
Unknown = 3,
};
/// x86/x64 CPU capabilities that may be detected by this module
struct CPUCaps {
Manufacturer manufacturer;
char cpu_string[0x21];
char brand_string[0x41];
bool sse;
@@ -19,11 +27,16 @@ struct CPUCaps {
bool lzcnt;
bool avx;
bool avx2;
bool avx512;
bool bmi1;
bool bmi2;
bool fma;
bool fma4;
bool aes;
bool invariant_tsc;
u32 base_frequency;
u32 max_frequency;
u32 bus_frequency;
};
/**

View File

@@ -0,0 +1,103 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <chrono>
#include <mutex>
#include <thread>
#ifdef _MSC_VER
#include <intrin.h>
#else
#include <x86intrin.h>
#endif
#include "common/uint128.h"
#include "common/x64/native_clock.h"
namespace Common {
u64 EstimateRDTSCFrequency() {
const auto milli_10 = std::chrono::milliseconds{10};
// get current time
_mm_mfence();
const u64 tscStart = __rdtsc();
const auto startTime = std::chrono::high_resolution_clock::now();
// wait roughly 3 seconds
while (true) {
auto milli = std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::high_resolution_clock::now() - startTime);
if (milli.count() >= 3000)
break;
std::this_thread::sleep_for(milli_10);
}
const auto endTime = std::chrono::high_resolution_clock::now();
_mm_mfence();
const u64 tscEnd = __rdtsc();
// calculate difference
const u64 timer_diff =
std::chrono::duration_cast<std::chrono::nanoseconds>(endTime - startTime).count();
const u64 tsc_diff = tscEnd - tscStart;
const u64 tsc_freq = MultiplyAndDivide64(tsc_diff, 1000000000ULL, timer_diff);
return tsc_freq;
}
namespace X64 {
NativeClock::NativeClock(u64 emulated_cpu_frequency, u64 emulated_clock_frequency,
u64 rtsc_frequency)
: WallClock(emulated_cpu_frequency, emulated_clock_frequency, true), rtsc_frequency{
rtsc_frequency} {
_mm_mfence();
last_measure = __rdtsc();
accumulated_ticks = 0U;
}
u64 NativeClock::GetRTSC() {
std::scoped_lock scope{rtsc_serialize};
_mm_mfence();
const u64 current_measure = __rdtsc();
u64 diff = current_measure - last_measure;
diff = diff & ~static_cast<u64>(static_cast<s64>(diff) >> 63); // max(diff, 0)
if (current_measure > last_measure) {
last_measure = current_measure;
}
accumulated_ticks += diff;
/// The clock cannot be more precise than the guest timer, remove the lower bits
return accumulated_ticks & inaccuracy_mask;
}
void NativeClock::Pause(bool is_paused) {
if (!is_paused) {
_mm_mfence();
last_measure = __rdtsc();
}
}
std::chrono::nanoseconds NativeClock::GetTimeNS() {
const u64 rtsc_value = GetRTSC();
return std::chrono::nanoseconds{MultiplyAndDivide64(rtsc_value, 1000000000, rtsc_frequency)};
}
std::chrono::microseconds NativeClock::GetTimeUS() {
const u64 rtsc_value = GetRTSC();
return std::chrono::microseconds{MultiplyAndDivide64(rtsc_value, 1000000, rtsc_frequency)};
}
std::chrono::milliseconds NativeClock::GetTimeMS() {
const u64 rtsc_value = GetRTSC();
return std::chrono::milliseconds{MultiplyAndDivide64(rtsc_value, 1000, rtsc_frequency)};
}
u64 NativeClock::GetClockCycles() {
const u64 rtsc_value = GetRTSC();
return MultiplyAndDivide64(rtsc_value, emulated_clock_frequency, rtsc_frequency);
}
u64 NativeClock::GetCPUCycles() {
const u64 rtsc_value = GetRTSC();
return MultiplyAndDivide64(rtsc_value, emulated_cpu_frequency, rtsc_frequency);
}
} // namespace X64
} // namespace Common

View File

@@ -0,0 +1,48 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <optional>
#include "common/spin_lock.h"
#include "common/wall_clock.h"
namespace Common {
namespace X64 {
class NativeClock : public WallClock {
public:
NativeClock(u64 emulated_cpu_frequency, u64 emulated_clock_frequency, u64 rtsc_frequency);
std::chrono::nanoseconds GetTimeNS() override;
std::chrono::microseconds GetTimeUS() override;
std::chrono::milliseconds GetTimeMS() override;
u64 GetClockCycles() override;
u64 GetCPUCycles() override;
void Pause(bool is_paused) override;
private:
u64 GetRTSC();
/// Value used to reduce the native clock's accuracy, as some apps rely on
/// undefined behavior where the clock should not be more accurate than
/// this granularity.
static constexpr u64 inaccuracy_mask = ~(0x400 - 1);
SpinLock rtsc_serialize{};
u64 last_measure{};
u64 accumulated_ticks{};
u64 rtsc_frequency;
};
} // namespace X64
u64 EstimateRDTSCFrequency();
} // namespace Common

View File

@@ -11,7 +11,7 @@
namespace Common::X64 {
inline int RegToIndex(const Xbyak::Reg& reg) {
inline std::size_t RegToIndex(const Xbyak::Reg& reg) {
using Kind = Xbyak::Reg::Kind;
ASSERT_MSG((reg.getKind() & (Kind::REG | Kind::XMM)) != 0,
"RegSet only support GPRs and XMM registers.");
@@ -19,17 +19,17 @@ inline int RegToIndex(const Xbyak::Reg& reg) {
return reg.getIdx() + (reg.getKind() == Kind::REG ? 0 : 16);
}
inline Xbyak::Reg64 IndexToReg64(int reg_index) {
inline Xbyak::Reg64 IndexToReg64(std::size_t reg_index) {
ASSERT(reg_index < 16);
return Xbyak::Reg64(reg_index);
return Xbyak::Reg64(static_cast<int>(reg_index));
}
inline Xbyak::Xmm IndexToXmm(int reg_index) {
inline Xbyak::Xmm IndexToXmm(std::size_t reg_index) {
ASSERT(reg_index >= 16 && reg_index < 32);
return Xbyak::Xmm(reg_index - 16);
return Xbyak::Xmm(static_cast<int>(reg_index - 16));
}
inline Xbyak::Reg IndexToReg(int reg_index) {
inline Xbyak::Reg IndexToReg(std::size_t reg_index) {
if (reg_index < 16) {
return IndexToReg64(reg_index);
} else {
@@ -151,9 +151,13 @@ constexpr size_t ABI_SHADOW_SPACE = 0;
#endif
inline void ABI_CalculateFrameSize(std::bitset<32> regs, size_t rsp_alignment,
size_t needed_frame_size, s32* out_subtraction,
s32* out_xmm_offset) {
struct ABIFrameInfo {
s32 subtraction;
s32 xmm_offset;
};
inline ABIFrameInfo ABI_CalculateFrameSize(std::bitset<32> regs, size_t rsp_alignment,
size_t needed_frame_size) {
const auto count = (regs & ABI_ALL_GPRS).count();
rsp_alignment -= count * 8;
size_t subtraction = 0;
@@ -170,33 +174,28 @@ inline void ABI_CalculateFrameSize(std::bitset<32> regs, size_t rsp_alignment,
rsp_alignment -= subtraction;
subtraction += rsp_alignment & 0xF;
*out_subtraction = (s32)subtraction;
*out_xmm_offset = (s32)(subtraction - xmm_base_subtraction);
return ABIFrameInfo{static_cast<s32>(subtraction),
static_cast<s32>(subtraction - xmm_base_subtraction)};
}
inline size_t ABI_PushRegistersAndAdjustStack(Xbyak::CodeGenerator& code, std::bitset<32> regs,
size_t rsp_alignment, size_t needed_frame_size = 0) {
s32 subtraction, xmm_offset;
ABI_CalculateFrameSize(regs, rsp_alignment, needed_frame_size, &subtraction, &xmm_offset);
auto frame_info = ABI_CalculateFrameSize(regs, rsp_alignment, needed_frame_size);
for (std::size_t i = 0; i < regs.size(); ++i) {
if (regs[i] && ABI_ALL_GPRS[i]) {
code.push(IndexToReg64(static_cast<int>(i)));
}
}
if (subtraction != 0) {
code.sub(code.rsp, subtraction);
}
for (int i = 0; i < regs.count(); i++) {
if (regs.test(i) & ABI_ALL_GPRS.test(i)) {
code.push(IndexToReg64(i));
}
}
if (frame_info.subtraction != 0) {
code.sub(code.rsp, frame_info.subtraction);
}
for (std::size_t i = 0; i < regs.size(); ++i) {
if (regs[i] && ABI_ALL_XMMS[i]) {
code.movaps(code.xword[code.rsp + xmm_offset], IndexToXmm(static_cast<int>(i)));
xmm_offset += 0x10;
code.movaps(code.xword[code.rsp + frame_info.xmm_offset], IndexToXmm(i));
frame_info.xmm_offset += 0x10;
}
}
@@ -205,59 +204,23 @@ inline size_t ABI_PushRegistersAndAdjustStack(Xbyak::CodeGenerator& code, std::b
inline void ABI_PopRegistersAndAdjustStack(Xbyak::CodeGenerator& code, std::bitset<32> regs,
size_t rsp_alignment, size_t needed_frame_size = 0) {
s32 subtraction, xmm_offset;
ABI_CalculateFrameSize(regs, rsp_alignment, needed_frame_size, &subtraction, &xmm_offset);
auto frame_info = ABI_CalculateFrameSize(regs, rsp_alignment, needed_frame_size);
for (std::size_t i = 0; i < regs.size(); ++i) {
if (regs[i] && ABI_ALL_XMMS[i]) {
code.movaps(IndexToXmm(static_cast<int>(i)), code.xword[code.rsp + xmm_offset]);
xmm_offset += 0x10;
code.movaps(IndexToXmm(i), code.xword[code.rsp + frame_info.xmm_offset]);
frame_info.xmm_offset += 0x10;
}
}
if (subtraction != 0) {
code.add(code.rsp, subtraction);
if (frame_info.subtraction != 0) {
code.add(code.rsp, frame_info.subtraction);
}
// GPRs need to be popped in reverse order
for (int i = 15; i >= 0; i--) {
if (regs[i]) {
code.pop(IndexToReg64(i));
}
}
}
inline size_t ABI_PushRegistersAndAdjustStackGPS(Xbyak::CodeGenerator& code, std::bitset<32> regs,
size_t rsp_alignment,
size_t needed_frame_size = 0) {
s32 subtraction, xmm_offset;
ABI_CalculateFrameSize(regs, rsp_alignment, needed_frame_size, &subtraction, &xmm_offset);
for (std::size_t i = 0; i < regs.size(); ++i) {
for (std::size_t j = 0; j < regs.size(); ++j) {
const std::size_t i = regs.size() - j - 1;
if (regs[i] && ABI_ALL_GPRS[i]) {
code.push(IndexToReg64(static_cast<int>(i)));
}
}
if (subtraction != 0) {
code.sub(code.rsp, subtraction);
}
return ABI_SHADOW_SPACE;
}
inline void ABI_PopRegistersAndAdjustStackGPS(Xbyak::CodeGenerator& code, std::bitset<32> regs,
size_t rsp_alignment, size_t needed_frame_size = 0) {
s32 subtraction, xmm_offset;
ABI_CalculateFrameSize(regs, rsp_alignment, needed_frame_size, &subtraction, &xmm_offset);
if (subtraction != 0) {
code.add(code.rsp, subtraction);
}
// GPRs need to be popped in reverse order
for (int i = 15; i >= 0; i--) {
if (regs[i]) {
code.pop(IndexToReg64(i));
}
}

View File

@@ -7,6 +7,16 @@ endif()
add_library(core STATIC
arm/arm_interface.h
arm/arm_interface.cpp
arm/cpu_interrupt_handler.cpp
arm/cpu_interrupt_handler.h
arm/dynarmic/arm_dynarmic_32.cpp
arm/dynarmic/arm_dynarmic_32.h
arm/dynarmic/arm_dynarmic_64.cpp
arm/dynarmic/arm_dynarmic_64.h
arm/dynarmic/arm_dynarmic_cp15.cpp
arm/dynarmic/arm_dynarmic_cp15.h
arm/dynarmic/arm_exclusive_monitor.cpp
arm/dynarmic/arm_exclusive_monitor.h
arm/exclusive_monitor.cpp
arm/exclusive_monitor.h
arm/unicorn/arm_unicorn.cpp
@@ -15,8 +25,6 @@ add_library(core STATIC
constants.h
core.cpp
core.h
core_manager.cpp
core_manager.h
core_timing.cpp
core_timing.h
core_timing_util.cpp
@@ -606,11 +614,11 @@ endif()
create_target_directory_groups(core)
target_link_libraries(core PUBLIC common PRIVATE audio_core video_core)
target_link_libraries(core PUBLIC Boost::boost PRIVATE fmt::fmt nlohmann_json::nlohmann_json mbedtls Opus::Opus unicorn)
target_link_libraries(core PUBLIC Boost::boost PRIVATE fmt::fmt nlohmann_json::nlohmann_json mbedtls Opus::Opus unicorn zip)
if (YUZU_ENABLE_BOXCAT)
target_compile_definitions(core PRIVATE -DYUZU_ENABLE_BOXCAT)
target_link_libraries(core PRIVATE httplib nlohmann_json::nlohmann_json zip)
target_link_libraries(core PRIVATE httplib nlohmann_json::nlohmann_json)
endif()
if (ENABLE_WEB_SERVICE)

View File

@@ -139,6 +139,63 @@ std::optional<std::string> GetSymbolName(const Symbols& symbols, VAddr func_addr
constexpr u64 SEGMENT_BASE = 0x7100000000ull;
std::vector<ARM_Interface::BacktraceEntry> ARM_Interface::GetBacktraceFromContext(
System& system, const ThreadContext64& ctx) {
std::vector<BacktraceEntry> out;
auto& memory = system.Memory();
auto fp = ctx.cpu_registers[29];
auto lr = ctx.cpu_registers[30];
while (true) {
out.push_back({"", 0, lr, 0});
if (!fp) {
break;
}
lr = memory.Read64(fp + 8) - 4;
fp = memory.Read64(fp);
}
std::map<VAddr, std::string> modules;
auto& loader{system.GetAppLoader()};
if (loader.ReadNSOModules(modules) != Loader::ResultStatus::Success) {
return {};
}
std::map<std::string, Symbols> symbols;
for (const auto& module : modules) {
symbols.insert_or_assign(module.second, GetSymbols(module.first, memory));
}
for (auto& entry : out) {
VAddr base = 0;
for (auto iter = modules.rbegin(); iter != modules.rend(); ++iter) {
const auto& module{*iter};
if (entry.original_address >= module.first) {
entry.module = module.second;
base = module.first;
break;
}
}
entry.offset = entry.original_address - base;
entry.address = SEGMENT_BASE + entry.offset;
if (entry.module.empty())
entry.module = "unknown";
const auto symbol_set = symbols.find(entry.module);
if (symbol_set != symbols.end()) {
const auto symbol = GetSymbolName(symbol_set->second, entry.offset);
if (symbol.has_value()) {
// TODO(DarkLordZach): Add demangling of symbol names.
entry.name = *symbol;
}
}
}
return out;
}
std::vector<ARM_Interface::BacktraceEntry> ARM_Interface::GetBacktrace() const {
std::vector<BacktraceEntry> out;
auto& memory = system.Memory();

View File

@@ -7,6 +7,7 @@
#include <array>
#include <vector>
#include "common/common_types.h"
#include "core/hardware_properties.h"
namespace Common {
struct PageTable;
@@ -18,25 +19,29 @@ enum class VMAPermission : u8;
namespace Core {
class System;
class CPUInterruptHandler;
using CPUInterrupts = std::array<CPUInterruptHandler, Core::Hardware::NUM_CPU_CORES>;
/// Generic ARMv8 CPU interface
class ARM_Interface : NonCopyable {
public:
explicit ARM_Interface(System& system_) : system{system_} {}
explicit ARM_Interface(System& system_, CPUInterrupts& interrupt_handlers, bool uses_wall_clock)
: system{system_}, interrupt_handlers{interrupt_handlers}, uses_wall_clock{
uses_wall_clock} {}
virtual ~ARM_Interface() = default;
struct ThreadContext32 {
std::array<u32, 16> cpu_registers{};
std::array<u32, 64> extension_registers{};
u32 cpsr{};
std::array<u8, 4> padding{};
std::array<u64, 32> fprs{};
u32 fpscr{};
u32 fpexc{};
u32 tpidr{};
};
// Internally within the kernel, it expects the AArch32 version of the
// thread context to be 344 bytes in size.
static_assert(sizeof(ThreadContext32) == 0x158);
static_assert(sizeof(ThreadContext32) == 0x150);
struct ThreadContext64 {
std::array<u64, 31> cpu_registers{};
@@ -143,6 +148,8 @@ public:
*/
virtual void SetTPIDR_EL0(u64 value) = 0;
virtual void ChangeProcessorID(std::size_t new_core_id) = 0;
virtual void SaveContext(ThreadContext32& ctx) = 0;
virtual void SaveContext(ThreadContext64& ctx) = 0;
virtual void LoadContext(const ThreadContext32& ctx) = 0;
@@ -162,6 +169,9 @@ public:
std::string name;
};
static std::vector<BacktraceEntry> GetBacktraceFromContext(System& system,
const ThreadContext64& ctx);
std::vector<BacktraceEntry> GetBacktrace() const;
/// fp (= r29) points to the last frame record.
@@ -175,6 +185,8 @@ public:
protected:
/// System context that this ARM interface is running under.
System& system;
CPUInterrupts& interrupt_handlers;
bool uses_wall_clock;
};
} // namespace Core

View File

@@ -0,0 +1,29 @@
// Copyright 2020 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "common/thread.h"
#include "core/arm/cpu_interrupt_handler.h"
namespace Core {
CPUInterruptHandler::CPUInterruptHandler() : is_interrupted{} {
interrupt_event = std::make_unique<Common::Event>();
}
CPUInterruptHandler::~CPUInterruptHandler() = default;
void CPUInterruptHandler::SetInterrupt(bool is_interrupted_) {
if (is_interrupted_) {
interrupt_event->Set();
}
this->is_interrupted = is_interrupted_;
}
void CPUInterruptHandler::AwaitInterrupt() {
interrupt_event->Wait();
}
} // namespace Core

View File

@@ -0,0 +1,39 @@
// Copyright 2020 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <memory>
namespace Common {
class Event;
}
namespace Core {
class CPUInterruptHandler {
public:
CPUInterruptHandler();
~CPUInterruptHandler();
CPUInterruptHandler(const CPUInterruptHandler&) = delete;
CPUInterruptHandler& operator=(const CPUInterruptHandler&) = delete;
CPUInterruptHandler(CPUInterruptHandler&&) = default;
CPUInterruptHandler& operator=(CPUInterruptHandler&&) = default;
bool IsInterrupted() const {
return is_interrupted;
}
void SetInterrupt(bool is_interrupted);
void AwaitInterrupt();
private:
bool is_interrupted{};
std::unique_ptr<Common::Event> interrupt_event;
};
} // namespace Core

View File

@@ -7,15 +7,17 @@
#include <dynarmic/A32/a32.h>
#include <dynarmic/A32/config.h>
#include <dynarmic/A32/context.h>
#include "common/microprofile.h"
#include "common/logging/log.h"
#include "common/page_table.h"
#include "core/arm/cpu_interrupt_handler.h"
#include "core/arm/dynarmic/arm_dynarmic_32.h"
#include "core/arm/dynarmic/arm_dynarmic_64.h"
#include "core/arm/dynarmic/arm_dynarmic_cp15.h"
#include "core/arm/dynarmic/arm_exclusive_monitor.h"
#include "core/core.h"
#include "core/core_manager.h"
#include "core/core_timing.h"
#include "core/hle/kernel/svc.h"
#include "core/memory.h"
#include "core/settings.h"
namespace Core {
@@ -49,8 +51,22 @@ public:
parent.system.Memory().Write64(vaddr, value);
}
bool MemoryWriteExclusive8(u32 vaddr, u8 value, u8 expected) override {
return parent.system.Memory().WriteExclusive8(vaddr, value, expected);
}
bool MemoryWriteExclusive16(u32 vaddr, u16 value, u16 expected) override {
return parent.system.Memory().WriteExclusive16(vaddr, value, expected);
}
bool MemoryWriteExclusive32(u32 vaddr, u32 value, u32 expected) override {
return parent.system.Memory().WriteExclusive32(vaddr, value, expected);
}
bool MemoryWriteExclusive64(u32 vaddr, u64 value, u64 expected) override {
return parent.system.Memory().WriteExclusive64(vaddr, value, expected);
}
void InterpreterFallback(u32 pc, std::size_t num_instructions) override {
UNIMPLEMENTED();
UNIMPLEMENTED_MSG("This should never happen, pc = {:08X}, code = {:08X}", pc,
MemoryReadCode(pc));
}
void ExceptionRaised(u32 pc, Dynarmic::A32::Exception exception) override {
@@ -61,7 +77,7 @@ public:
case Dynarmic::A32::Exception::Breakpoint:
break;
}
LOG_CRITICAL(HW_GPU, "ExceptionRaised(exception = {}, pc = {:08X}, code = {:08X})",
LOG_CRITICAL(Core_ARM, "ExceptionRaised(exception = {}, pc = {:08X}, code = {:08X})",
static_cast<std::size_t>(exception), pc, MemoryReadCode(pc));
UNIMPLEMENTED();
}
@@ -71,26 +87,36 @@ public:
}
void AddTicks(u64 ticks) override {
if (parent.uses_wall_clock) {
return;
}
// Divide the number of ticks by the number of CPU cores. TODO(Subv): This yields only a
// rough approximation of the number of executed ticks in the system; it may be thrown off
// if not all cores are doing a similar amount of work. Instead of doing this, we should
// devise a way so that timing is consistent across all cores without increasing the ticks 4
// times.
u64 amortized_ticks = (ticks - num_interpreted_instructions) / Core::NUM_CPU_CORES;
u64 amortized_ticks =
(ticks - num_interpreted_instructions) / Core::Hardware::NUM_CPU_CORES;
// Always execute at least one tick.
amortized_ticks = std::max<u64>(amortized_ticks, 1);
parent.system.CoreTiming().AddTicks(amortized_ticks);
num_interpreted_instructions = 0;
}
u64 GetTicksRemaining() override {
return std::max(parent.system.CoreTiming().GetDowncount(), {});
if (parent.uses_wall_clock) {
if (!parent.interrupt_handlers[parent.core_index].IsInterrupted()) {
return minimum_run_cycles;
}
return 0U;
}
return std::max<s64>(parent.system.CoreTiming().GetDowncount(), 0);
}
ARM_Dynarmic_32& parent;
std::size_t num_interpreted_instructions{};
u64 tpidrro_el0{};
u64 tpidr_el0{};
static constexpr u64 minimum_run_cycles = 1000U;
};
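
For reference, the AddTicks()/GetTicksRemaining() pair above amortizes guest time across cores: a block that retired N ticks only advances CoreTiming by (N - interpreted) / Core::Hardware::NUM_CPU_CORES, clamped to at least 1, while wall-clock mode instead hands the JIT a fixed budget of minimum_run_cycles until the core is flagged as interrupted. A self-contained sketch of that arithmetic (hypothetical values, assuming 4 cores and interpreted <= ticks as in the callback above):

#include <algorithm>
#include <cstdint>

constexpr std::uint64_t kNumCpuCores = 4;  // mirrors Core::Hardware::NUM_CPU_CORES

std::uint64_t AmortizedTicks(std::uint64_t ticks, std::uint64_t interpreted) {
    const std::uint64_t amortized = (ticks - interpreted) / kNumCpuCores;
    return std::max<std::uint64_t>(amortized, 1);  // always execute at least one tick
}

// AmortizedTicks(4000, 0) == 1000
// AmortizedTicks(3, 0)    == 1   (clamped to the minimum)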
std::shared_ptr<Dynarmic::A32::Jit> ARM_Dynarmic_32::MakeJit(Common::PageTable& page_table,
@@ -99,26 +125,46 @@ std::shared_ptr<Dynarmic::A32::Jit> ARM_Dynarmic_32::MakeJit(Common::PageTable&
config.callbacks = cb.get();
// TODO(bunnei): Implement page table for 32-bit
// config.page_table = &page_table.pointers;
config.coprocessors[15] = std::make_shared<DynarmicCP15>((u32*)&CP15_regs[0]);
config.coprocessors[15] = cp15;
config.define_unpredictable_behaviour = true;
static constexpr std::size_t PAGE_BITS = 12;
static constexpr std::size_t NUM_PAGE_TABLE_ENTRIES = 1 << (32 - PAGE_BITS);
config.page_table = reinterpret_cast<std::array<std::uint8_t*, NUM_PAGE_TABLE_ENTRIES>*>(
page_table.pointers.data());
config.absolute_offset_page_table = true;
config.detect_misaligned_access_via_page_table = 16 | 32 | 64 | 128;
config.only_detect_misalignment_via_page_table_on_page_boundary = true;
// Multi-process state
config.processor_id = core_index;
config.global_monitor = &exclusive_monitor.monitor;
// Timing
config.wall_clock_cntpct = uses_wall_clock;
// Optimizations
if (Settings::values.disable_cpu_opt) {
config.enable_optimizations = false;
config.enable_fast_dispatch = false;
}
return std::make_unique<Dynarmic::A32::Jit>(config);
}
MICROPROFILE_DEFINE(ARM_Jit_Dynarmic_32, "ARM JIT", "Dynarmic", MP_RGB(255, 64, 64));
void ARM_Dynarmic_32::Run() {
MICROPROFILE_SCOPE(ARM_Jit_Dynarmic_32);
jit->Run();
}
void ARM_Dynarmic_32::Step() {
cb->InterpreterFallback(jit->Regs()[15], 1);
jit->Step();
}
ARM_Dynarmic_32::ARM_Dynarmic_32(System& system, ExclusiveMonitor& exclusive_monitor,
ARM_Dynarmic_32::ARM_Dynarmic_32(System& system, CPUInterrupts& interrupt_handlers,
bool uses_wall_clock, ExclusiveMonitor& exclusive_monitor,
std::size_t core_index)
: ARM_Interface{system},
cb(std::make_unique<DynarmicCallbacks32>(*this)), core_index{core_index},
: ARM_Interface{system, interrupt_handlers, uses_wall_clock},
cb(std::make_unique<DynarmicCallbacks32>(*this)),
cp15(std::make_shared<DynarmicCP15>(*this)), core_index{core_index},
exclusive_monitor{dynamic_cast<DynarmicExclusiveMonitor&>(exclusive_monitor)} {}
ARM_Dynarmic_32::~ARM_Dynarmic_32() = default;
@@ -154,32 +200,40 @@ void ARM_Dynarmic_32::SetPSTATE(u32 cpsr) {
}
u64 ARM_Dynarmic_32::GetTlsAddress() const {
return CP15_regs[static_cast<std::size_t>(CP15Register::CP15_THREAD_URO)];
return cp15->uro;
}
void ARM_Dynarmic_32::SetTlsAddress(VAddr address) {
CP15_regs[static_cast<std::size_t>(CP15Register::CP15_THREAD_URO)] = static_cast<u32>(address);
cp15->uro = static_cast<u32>(address);
}
u64 ARM_Dynarmic_32::GetTPIDR_EL0() const {
return cb->tpidr_el0;
return cp15->uprw;
}
void ARM_Dynarmic_32::SetTPIDR_EL0(u64 value) {
cb->tpidr_el0 = value;
cp15->uprw = static_cast<u32>(value);
}
void ARM_Dynarmic_32::ChangeProcessorID(std::size_t new_core_id) {
jit->ChangeProcessorID(new_core_id);
}
void ARM_Dynarmic_32::SaveContext(ThreadContext32& ctx) {
Dynarmic::A32::Context context;
jit->SaveContext(context);
ctx.cpu_registers = context.Regs();
ctx.extension_registers = context.ExtRegs();
ctx.cpsr = context.Cpsr();
ctx.fpscr = context.Fpscr();
}
void ARM_Dynarmic_32::LoadContext(const ThreadContext32& ctx) {
Dynarmic::A32::Context context;
context.Regs() = ctx.cpu_registers;
context.ExtRegs() = ctx.extension_registers;
context.SetCpsr(ctx.cpsr);
context.SetFpscr(ctx.fpscr);
jit->LoadContext(context);
}
@@ -188,10 +242,15 @@ void ARM_Dynarmic_32::PrepareReschedule() {
}
void ARM_Dynarmic_32::ClearInstructionCache() {
if (!jit) {
return;
}
jit->ClearCache();
}
void ARM_Dynarmic_32::ClearExclusiveState() {}
void ARM_Dynarmic_32::ClearExclusiveState() {
jit->ClearExclusiveState();
}
void ARM_Dynarmic_32::PageTableChanged(Common::PageTable& page_table,
std::size_t new_address_space_size_in_bits) {

View File

@@ -9,7 +9,7 @@
#include <dynarmic/A32/a32.h>
#include <dynarmic/A64/a64.h>
#include <dynarmic/A64/exclusive_monitor.h>
#include <dynarmic/exclusive_monitor.h>
#include "common/common_types.h"
#include "common/hash.h"
#include "core/arm/arm_interface.h"
@@ -21,13 +21,16 @@ class Memory;
namespace Core {
class CPUInterruptHandler;
class DynarmicCallbacks32;
class DynarmicCP15;
class DynarmicExclusiveMonitor;
class System;
class ARM_Dynarmic_32 final : public ARM_Interface {
public:
ARM_Dynarmic_32(System& system, ExclusiveMonitor& exclusive_monitor, std::size_t core_index);
ARM_Dynarmic_32(System& system, CPUInterrupts& interrupt_handlers, bool uses_wall_clock,
ExclusiveMonitor& exclusive_monitor, std::size_t core_index);
~ARM_Dynarmic_32() override;
void SetPC(u64 pc) override;
@@ -44,6 +47,7 @@ public:
void SetTlsAddress(VAddr address) override;
void SetTPIDR_EL0(u64 value) override;
u64 GetTPIDR_EL0() const override;
void ChangeProcessorID(std::size_t new_core_id) override;
void SaveContext(ThreadContext32& ctx) override;
void SaveContext(ThreadContext64& ctx) override {}
@@ -66,12 +70,14 @@ private:
std::unordered_map<JitCacheKey, std::shared_ptr<Dynarmic::A32::Jit>, Common::PairHash>;
friend class DynarmicCallbacks32;
friend class DynarmicCP15;
std::unique_ptr<DynarmicCallbacks32> cb;
JitCacheType jit_cache;
std::shared_ptr<Dynarmic::A32::Jit> jit;
std::shared_ptr<DynarmicCP15> cp15;
std::size_t core_index;
DynarmicExclusiveMonitor& exclusive_monitor;
std::array<u32, 84> CP15_regs{};
};
} // namespace Core

View File

@@ -7,11 +7,11 @@
#include <dynarmic/A64/a64.h>
#include <dynarmic/A64/config.h>
#include "common/logging/log.h"
#include "common/microprofile.h"
#include "common/page_table.h"
#include "core/arm/cpu_interrupt_handler.h"
#include "core/arm/dynarmic/arm_dynarmic_64.h"
#include "core/arm/dynarmic/arm_exclusive_monitor.h"
#include "core/core.h"
#include "core/core_manager.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
#include "core/gdbstub/gdbstub.h"
@@ -65,6 +65,22 @@ public:
memory.Write64(vaddr + 8, value[1]);
}
bool MemoryWriteExclusive8(u64 vaddr, std::uint8_t value, std::uint8_t expected) override {
return parent.system.Memory().WriteExclusive8(vaddr, value, expected);
}
bool MemoryWriteExclusive16(u64 vaddr, std::uint16_t value, std::uint16_t expected) override {
return parent.system.Memory().WriteExclusive16(vaddr, value, expected);
}
bool MemoryWriteExclusive32(u64 vaddr, std::uint32_t value, std::uint32_t expected) override {
return parent.system.Memory().WriteExclusive32(vaddr, value, expected);
}
bool MemoryWriteExclusive64(u64 vaddr, std::uint64_t value, std::uint64_t expected) override {
return parent.system.Memory().WriteExclusive64(vaddr, value, expected);
}
bool MemoryWriteExclusive128(u64 vaddr, Vector value, Vector expected) override {
return parent.system.Memory().WriteExclusive128(vaddr, value, expected);
}
void InterpreterFallback(u64 pc, std::size_t num_instructions) override {
LOG_INFO(Core_ARM, "Unicorn fallback @ 0x{:X} for {} instructions (instr = {:08X})", pc,
num_instructions, MemoryReadCode(pc));
@@ -98,8 +114,8 @@ public:
}
[[fallthrough]];
default:
ASSERT_MSG(false, "ExceptionRaised(exception = {}, pc = {:X})",
static_cast<std::size_t>(exception), pc);
ASSERT_MSG(false, "ExceptionRaised(exception = {}, pc = {:08X}, code = {:08X})",
static_cast<std::size_t>(exception), pc, MemoryReadCode(pc));
}
}
@@ -108,29 +124,42 @@ public:
}
void AddTicks(u64 ticks) override {
if (parent.uses_wall_clock) {
return;
}
// Divide the number of ticks by the number of CPU cores. TODO(Subv): This yields only a
// rough approximation of the number of executed ticks in the system; it may be thrown off
// if not all cores are doing a similar amount of work. Instead of doing this, we should
// devise a way so that timing is consistent across all cores without increasing the ticks 4
// times.
u64 amortized_ticks = (ticks - num_interpreted_instructions) / Core::NUM_CPU_CORES;
u64 amortized_ticks =
(ticks - num_interpreted_instructions) / Core::Hardware::NUM_CPU_CORES;
// Always execute at least one tick.
amortized_ticks = std::max<u64>(amortized_ticks, 1);
parent.system.CoreTiming().AddTicks(amortized_ticks);
num_interpreted_instructions = 0;
}
u64 GetTicksRemaining() override {
return std::max(parent.system.CoreTiming().GetDowncount(), s64{0});
if (parent.uses_wall_clock) {
if (!parent.interrupt_handlers[parent.core_index].IsInterrupted()) {
return minimum_run_cycles;
}
return 0U;
}
return std::max<s64>(parent.system.CoreTiming().GetDowncount(), 0);
}
u64 GetCNTPCT() override {
return Timing::CpuCyclesToClockCycles(parent.system.CoreTiming().GetTicks());
return parent.system.CoreTiming().GetClockTicks();
}
ARM_Dynarmic_64& parent;
std::size_t num_interpreted_instructions = 0;
u64 tpidrro_el0 = 0;
u64 tpidr_el0 = 0;
static constexpr u64 minimum_run_cycles = 1000U;
};
std::shared_ptr<Dynarmic::A64::Jit> ARM_Dynarmic_64::MakeJit(Common::PageTable& page_table,
@@ -168,14 +197,13 @@ std::shared_ptr<Dynarmic::A64::Jit> ARM_Dynarmic_64::MakeJit(Common::PageTable&
config.enable_fast_dispatch = false;
}
// Timing
config.wall_clock_cntpct = uses_wall_clock;
return std::make_shared<Dynarmic::A64::Jit>(config);
}
MICROPROFILE_DEFINE(ARM_Jit_Dynarmic_64, "ARM JIT", "Dynarmic", MP_RGB(255, 64, 64));
void ARM_Dynarmic_64::Run() {
MICROPROFILE_SCOPE(ARM_Jit_Dynarmic_64);
jit->Run();
}
@@ -183,11 +211,16 @@ void ARM_Dynarmic_64::Step() {
cb->InterpreterFallback(jit->GetPC(), 1);
}
ARM_Dynarmic_64::ARM_Dynarmic_64(System& system, ExclusiveMonitor& exclusive_monitor,
ARM_Dynarmic_64::ARM_Dynarmic_64(System& system, CPUInterrupts& interrupt_handlers,
bool uses_wall_clock, ExclusiveMonitor& exclusive_monitor,
std::size_t core_index)
: ARM_Interface{system}, cb(std::make_unique<DynarmicCallbacks64>(*this)),
inner_unicorn{system, ARM_Unicorn::Arch::AArch64}, core_index{core_index},
exclusive_monitor{dynamic_cast<DynarmicExclusiveMonitor&>(exclusive_monitor)} {}
: ARM_Interface{system, interrupt_handlers, uses_wall_clock},
cb(std::make_unique<DynarmicCallbacks64>(*this)), inner_unicorn{system, interrupt_handlers,
uses_wall_clock,
ARM_Unicorn::Arch::AArch64,
core_index},
core_index{core_index}, exclusive_monitor{
dynamic_cast<DynarmicExclusiveMonitor&>(exclusive_monitor)} {}
ARM_Dynarmic_64::~ARM_Dynarmic_64() = default;
@@ -239,6 +272,10 @@ void ARM_Dynarmic_64::SetTPIDR_EL0(u64 value) {
cb->tpidr_el0 = value;
}
void ARM_Dynarmic_64::ChangeProcessorID(std::size_t new_core_id) {
jit->ChangeProcessorID(new_core_id);
}
void ARM_Dynarmic_64::SaveContext(ThreadContext64& ctx) {
ctx.cpu_registers = jit->GetRegisters();
ctx.sp = jit->GetSP();
@@ -266,6 +303,9 @@ void ARM_Dynarmic_64::PrepareReschedule() {
}
void ARM_Dynarmic_64::ClearInstructionCache() {
if (!jit) {
return;
}
jit->ClearCache();
}
@@ -285,44 +325,4 @@ void ARM_Dynarmic_64::PageTableChanged(Common::PageTable& page_table,
jit_cache.emplace(key, jit);
}
DynarmicExclusiveMonitor::DynarmicExclusiveMonitor(Memory::Memory& memory, std::size_t core_count)
: monitor(core_count), memory{memory} {}
DynarmicExclusiveMonitor::~DynarmicExclusiveMonitor() = default;
void DynarmicExclusiveMonitor::SetExclusive(std::size_t core_index, VAddr addr) {
// Size doesn't actually matter.
monitor.Mark(core_index, addr, 16);
}
void DynarmicExclusiveMonitor::ClearExclusive() {
monitor.Clear();
}
bool DynarmicExclusiveMonitor::ExclusiveWrite8(std::size_t core_index, VAddr vaddr, u8 value) {
return monitor.DoExclusiveOperation(core_index, vaddr, 1, [&] { memory.Write8(vaddr, value); });
}
bool DynarmicExclusiveMonitor::ExclusiveWrite16(std::size_t core_index, VAddr vaddr, u16 value) {
return monitor.DoExclusiveOperation(core_index, vaddr, 2,
[&] { memory.Write16(vaddr, value); });
}
bool DynarmicExclusiveMonitor::ExclusiveWrite32(std::size_t core_index, VAddr vaddr, u32 value) {
return monitor.DoExclusiveOperation(core_index, vaddr, 4,
[&] { memory.Write32(vaddr, value); });
}
bool DynarmicExclusiveMonitor::ExclusiveWrite64(std::size_t core_index, VAddr vaddr, u64 value) {
return monitor.DoExclusiveOperation(core_index, vaddr, 8,
[&] { memory.Write64(vaddr, value); });
}
bool DynarmicExclusiveMonitor::ExclusiveWrite128(std::size_t core_index, VAddr vaddr, u128 value) {
return monitor.DoExclusiveOperation(core_index, vaddr, 16, [&] {
memory.Write64(vaddr + 0, value[0]);
memory.Write64(vaddr + 8, value[1]);
});
}
} // namespace Core

View File

@@ -8,7 +8,6 @@
#include <unordered_map>
#include <dynarmic/A64/a64.h>
#include <dynarmic/A64/exclusive_monitor.h>
#include "common/common_types.h"
#include "common/hash.h"
#include "core/arm/arm_interface.h"
@@ -22,12 +21,14 @@ class Memory;
namespace Core {
class DynarmicCallbacks64;
class CPUInterruptHandler;
class DynarmicExclusiveMonitor;
class System;
class ARM_Dynarmic_64 final : public ARM_Interface {
public:
ARM_Dynarmic_64(System& system, ExclusiveMonitor& exclusive_monitor, std::size_t core_index);
ARM_Dynarmic_64(System& system, CPUInterrupts& interrupt_handlers, bool uses_wall_clock,
ExclusiveMonitor& exclusive_monitor, std::size_t core_index);
~ARM_Dynarmic_64() override;
void SetPC(u64 pc) override;
@@ -44,6 +45,7 @@ public:
void SetTlsAddress(VAddr address) override;
void SetTPIDR_EL0(u64 value) override;
u64 GetTPIDR_EL0() const override;
void ChangeProcessorID(std::size_t new_core_id) override;
void SaveContext(ThreadContext32& ctx) override {}
void SaveContext(ThreadContext64& ctx) override;
@@ -75,24 +77,4 @@ private:
DynarmicExclusiveMonitor& exclusive_monitor;
};
class DynarmicExclusiveMonitor final : public ExclusiveMonitor {
public:
explicit DynarmicExclusiveMonitor(Memory::Memory& memory, std::size_t core_count);
~DynarmicExclusiveMonitor() override;
void SetExclusive(std::size_t core_index, VAddr addr) override;
void ClearExclusive() override;
bool ExclusiveWrite8(std::size_t core_index, VAddr vaddr, u8 value) override;
bool ExclusiveWrite16(std::size_t core_index, VAddr vaddr, u16 value) override;
bool ExclusiveWrite32(std::size_t core_index, VAddr vaddr, u32 value) override;
bool ExclusiveWrite64(std::size_t core_index, VAddr vaddr, u64 value) override;
bool ExclusiveWrite128(std::size_t core_index, VAddr vaddr, u128 value) override;
private:
friend class ARM_Dynarmic_64;
Dynarmic::A64::ExclusiveMonitor monitor;
Core::Memory::Memory& memory;
};
} // namespace Core

View File

@@ -2,79 +2,132 @@
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <fmt/format.h>
#include "common/logging/log.h"
#include "core/arm/dynarmic/arm_dynarmic_32.h"
#include "core/arm/dynarmic/arm_dynarmic_cp15.h"
#include "core/core.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
using Callback = Dynarmic::A32::Coprocessor::Callback;
using CallbackOrAccessOneWord = Dynarmic::A32::Coprocessor::CallbackOrAccessOneWord;
using CallbackOrAccessTwoWords = Dynarmic::A32::Coprocessor::CallbackOrAccessTwoWords;
template <>
struct fmt::formatter<Dynarmic::A32::CoprocReg> {
constexpr auto parse(format_parse_context& ctx) {
return ctx.begin();
}
template <typename FormatContext>
auto format(const Dynarmic::A32::CoprocReg& reg, FormatContext& ctx) {
return format_to(ctx.out(), "cp{}", static_cast<size_t>(reg));
}
};
namespace Core {
static u32 dummy_value;
std::optional<Callback> DynarmicCP15::CompileInternalOperation(bool two, unsigned opc1,
CoprocReg CRd, CoprocReg CRn,
CoprocReg CRm, unsigned opc2) {
LOG_CRITICAL(Core_ARM, "CP15: cdp{} p15, {}, {}, {}, {}, {}", two ? "2" : "", opc1, CRd, CRn,
CRm, opc2);
return {};
}
CallbackOrAccessOneWord DynarmicCP15::CompileSendOneWord(bool two, unsigned opc1, CoprocReg CRn,
CoprocReg CRm, unsigned opc2) {
// TODO(merry): Privileged CP15 registers
if (!two && CRn == CoprocReg::C7 && opc1 == 0 && CRm == CoprocReg::C5 && opc2 == 4) {
// CP15_FLUSH_PREFETCH_BUFFER
// This is a dummy write; we ignore the value written here.
return &CP15[static_cast<std::size_t>(CP15Register::CP15_FLUSH_PREFETCH_BUFFER)];
return &dummy_value;
}
if (!two && CRn == CoprocReg::C7 && opc1 == 0 && CRm == CoprocReg::C10) {
switch (opc2) {
case 4:
// CP15_DATA_SYNC_BARRIER
// This is a dummy write; we ignore the value written here.
return &CP15[static_cast<std::size_t>(CP15Register::CP15_DATA_SYNC_BARRIER)];
return &dummy_value;
case 5:
// CP15_DATA_MEMORY_BARRIER
// This is a dummy write; we ignore the value written here.
return &CP15[static_cast<std::size_t>(CP15Register::CP15_DATA_MEMORY_BARRIER)];
default:
return {};
return &dummy_value;
}
}
if (!two && CRn == CoprocReg::C13 && opc1 == 0 && CRm == CoprocReg::C0 && opc2 == 2) {
return &CP15[static_cast<std::size_t>(CP15Register::CP15_THREAD_UPRW)];
// CP15_THREAD_UPRW
return &uprw;
}
LOG_CRITICAL(Core_ARM, "CP15: mcr{} p15, {}, <Rt>, {}, {}, {}", two ? "2" : "", opc1, CRn, CRm,
opc2);
return {};
}
CallbackOrAccessTwoWords DynarmicCP15::CompileSendTwoWords(bool two, unsigned opc, CoprocReg CRm) {
LOG_CRITICAL(Core_ARM, "CP15: mcrr{} p15, {}, <Rt>, <Rt2>, {}", two ? "2" : "", opc, CRm);
return {};
}
CallbackOrAccessOneWord DynarmicCP15::CompileGetOneWord(bool two, unsigned opc1, CoprocReg CRn,
CoprocReg CRm, unsigned opc2) {
// TODO(merry): Privileged CP15 registers
if (!two && CRn == CoprocReg::C13 && opc1 == 0 && CRm == CoprocReg::C0) {
switch (opc2) {
case 2:
return &CP15[static_cast<std::size_t>(CP15Register::CP15_THREAD_UPRW)];
// CP15_THREAD_UPRW
return &uprw;
case 3:
return &CP15[static_cast<std::size_t>(CP15Register::CP15_THREAD_URO)];
default:
return {};
// CP15_THREAD_URO
return &uro;
}
}
LOG_CRITICAL(Core_ARM, "CP15: mrc{} p15, {}, <Rt>, {}, {}, {}", two ? "2" : "", opc1, CRn, CRm,
opc2);
return {};
}
CallbackOrAccessTwoWords DynarmicCP15::CompileGetTwoWords(bool two, unsigned opc, CoprocReg CRm) {
if (!two && opc == 0 && CRm == CoprocReg::C14) {
// CNTPCT
const auto callback = static_cast<u64 (*)(Dynarmic::A32::Jit*, void*, u32, u32)>(
[](Dynarmic::A32::Jit*, void* arg, u32, u32) -> u64 {
ARM_Dynarmic_32& parent = *(ARM_Dynarmic_32*)arg;
return parent.system.CoreTiming().GetClockTicks();
});
return Dynarmic::A32::Coprocessor::Callback{callback, (void*)&parent};
}
LOG_CRITICAL(Core_ARM, "CP15: mrrc{} p15, {}, <Rt>, <Rt2>, {}", two ? "2" : "", opc, CRm);
return {};
}
std::optional<Callback> DynarmicCP15::CompileLoadWords(bool two, bool long_transfer, CoprocReg CRd,
std::optional<u8> option) {
if (option) {
LOG_CRITICAL(Core_ARM, "CP15: mrrc{}{} p15, {}, [...], {}", two ? "2" : "",
long_transfer ? "l" : "", CRd, *option);
} else {
LOG_CRITICAL(Core_ARM, "CP15: mrrc{}{} p15, {}, [...]", two ? "2" : "",
long_transfer ? "l" : "", CRd);
}
return {};
}
std::optional<Callback> DynarmicCP15::CompileStoreWords(bool two, bool long_transfer, CoprocReg CRd,
std::optional<u8> option) {
if (option) {
LOG_CRITICAL(Core_ARM, "CP15: mrrc{}{} p15, {}, [...], {}", two ? "2" : "",
long_transfer ? "l" : "", CRd, *option);
} else {
LOG_CRITICAL(Core_ARM, "CP15: mrrc{}{} p15, {}, [...]", two ? "2" : "",
long_transfer ? "l" : "", CRd);
}
return {};
}
} // namespace Core

View File

@@ -10,128 +10,15 @@
#include <dynarmic/A32/coprocessor.h>
#include "common/common_types.h"
enum class CP15Register {
// c0 - Information registers
CP15_MAIN_ID,
CP15_CACHE_TYPE,
CP15_TCM_STATUS,
CP15_TLB_TYPE,
CP15_CPU_ID,
CP15_PROCESSOR_FEATURE_0,
CP15_PROCESSOR_FEATURE_1,
CP15_DEBUG_FEATURE_0,
CP15_AUXILIARY_FEATURE_0,
CP15_MEMORY_MODEL_FEATURE_0,
CP15_MEMORY_MODEL_FEATURE_1,
CP15_MEMORY_MODEL_FEATURE_2,
CP15_MEMORY_MODEL_FEATURE_3,
CP15_ISA_FEATURE_0,
CP15_ISA_FEATURE_1,
CP15_ISA_FEATURE_2,
CP15_ISA_FEATURE_3,
CP15_ISA_FEATURE_4,
namespace Core {
// c1 - Control registers
CP15_CONTROL,
CP15_AUXILIARY_CONTROL,
CP15_COPROCESSOR_ACCESS_CONTROL,
// c2 - Translation table registers
CP15_TRANSLATION_BASE_TABLE_0,
CP15_TRANSLATION_BASE_TABLE_1,
CP15_TRANSLATION_BASE_CONTROL,
CP15_DOMAIN_ACCESS_CONTROL,
CP15_RESERVED,
// c5 - Fault status registers
CP15_FAULT_STATUS,
CP15_INSTR_FAULT_STATUS,
CP15_COMBINED_DATA_FSR = CP15_FAULT_STATUS,
CP15_INST_FSR,
// c6 - Fault Address registers
CP15_FAULT_ADDRESS,
CP15_COMBINED_DATA_FAR = CP15_FAULT_ADDRESS,
CP15_WFAR,
CP15_IFAR,
// c7 - Cache operation registers
CP15_WAIT_FOR_INTERRUPT,
CP15_PHYS_ADDRESS,
CP15_INVALIDATE_INSTR_CACHE,
CP15_INVALIDATE_INSTR_CACHE_USING_MVA,
CP15_INVALIDATE_INSTR_CACHE_USING_INDEX,
CP15_FLUSH_PREFETCH_BUFFER,
CP15_FLUSH_BRANCH_TARGET_CACHE,
CP15_FLUSH_BRANCH_TARGET_CACHE_ENTRY,
CP15_INVALIDATE_DATA_CACHE,
CP15_INVALIDATE_DATA_CACHE_LINE_USING_MVA,
CP15_INVALIDATE_DATA_CACHE_LINE_USING_INDEX,
CP15_INVALIDATE_DATA_AND_INSTR_CACHE,
CP15_CLEAN_DATA_CACHE,
CP15_CLEAN_DATA_CACHE_LINE_USING_MVA,
CP15_CLEAN_DATA_CACHE_LINE_USING_INDEX,
CP15_DATA_SYNC_BARRIER,
CP15_DATA_MEMORY_BARRIER,
CP15_CLEAN_AND_INVALIDATE_DATA_CACHE,
CP15_CLEAN_AND_INVALIDATE_DATA_CACHE_LINE_USING_MVA,
CP15_CLEAN_AND_INVALIDATE_DATA_CACHE_LINE_USING_INDEX,
// c8 - TLB operations
CP15_INVALIDATE_ITLB,
CP15_INVALIDATE_ITLB_SINGLE_ENTRY,
CP15_INVALIDATE_ITLB_ENTRY_ON_ASID_MATCH,
CP15_INVALIDATE_ITLB_ENTRY_ON_MVA,
CP15_INVALIDATE_DTLB,
CP15_INVALIDATE_DTLB_SINGLE_ENTRY,
CP15_INVALIDATE_DTLB_ENTRY_ON_ASID_MATCH,
CP15_INVALIDATE_DTLB_ENTRY_ON_MVA,
CP15_INVALIDATE_UTLB,
CP15_INVALIDATE_UTLB_SINGLE_ENTRY,
CP15_INVALIDATE_UTLB_ENTRY_ON_ASID_MATCH,
CP15_INVALIDATE_UTLB_ENTRY_ON_MVA,
// c9 - Data cache lockdown register
CP15_DATA_CACHE_LOCKDOWN,
// c10 - TLB/Memory map registers
CP15_TLB_LOCKDOWN,
CP15_PRIMARY_REGION_REMAP,
CP15_NORMAL_REGION_REMAP,
// c13 - Thread related registers
CP15_PID,
CP15_CONTEXT_ID,
CP15_THREAD_UPRW, // Thread ID register - User/Privileged Read/Write
CP15_THREAD_URO, // Thread ID register - User Read Only (Privileged R/W)
CP15_THREAD_PRW, // Thread ID register - Privileged R/W only.
// c15 - Performance and TLB lockdown registers
CP15_PERFORMANCE_MONITOR_CONTROL,
CP15_CYCLE_COUNTER,
CP15_COUNT_0,
CP15_COUNT_1,
CP15_READ_MAIN_TLB_LOCKDOWN_ENTRY,
CP15_WRITE_MAIN_TLB_LOCKDOWN_ENTRY,
CP15_MAIN_TLB_LOCKDOWN_VIRT_ADDRESS,
CP15_MAIN_TLB_LOCKDOWN_PHYS_ADDRESS,
CP15_MAIN_TLB_LOCKDOWN_ATTRIBUTE,
CP15_TLB_DEBUG_CONTROL,
// Skyeye defined
CP15_TLB_FAULT_ADDR,
CP15_TLB_FAULT_STATUS,
// Not an actual register.
// All registers should be defined above this.
CP15_REGISTER_COUNT,
};
class ARM_Dynarmic_32;
class DynarmicCP15 final : public Dynarmic::A32::Coprocessor {
public:
using CoprocReg = Dynarmic::A32::CoprocReg;
explicit DynarmicCP15(u32* cp15) : CP15(cp15){};
explicit DynarmicCP15(ARM_Dynarmic_32& parent) : parent(parent) {}
std::optional<Callback> CompileInternalOperation(bool two, unsigned opc1, CoprocReg CRd,
CoprocReg CRn, CoprocReg CRm,
@@ -147,6 +34,9 @@ public:
std::optional<Callback> CompileStoreWords(bool two, bool long_transfer, CoprocReg CRd,
std::optional<u8> option) override;
private:
u32* CP15{};
ARM_Dynarmic_32& parent;
u32 uprw;
u32 uro;
};
} // namespace Core

View File

@@ -0,0 +1,76 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <cinttypes>
#include <memory>
#include "core/arm/dynarmic/arm_exclusive_monitor.h"
#include "core/memory.h"
namespace Core {
DynarmicExclusiveMonitor::DynarmicExclusiveMonitor(Memory::Memory& memory, std::size_t core_count)
: monitor(core_count), memory{memory} {}
DynarmicExclusiveMonitor::~DynarmicExclusiveMonitor() = default;
u8 DynarmicExclusiveMonitor::ExclusiveRead8(std::size_t core_index, VAddr addr) {
return monitor.ReadAndMark<u8>(core_index, addr, [&]() -> u8 { return memory.Read8(addr); });
}
u16 DynarmicExclusiveMonitor::ExclusiveRead16(std::size_t core_index, VAddr addr) {
return monitor.ReadAndMark<u16>(core_index, addr, [&]() -> u16 { return memory.Read16(addr); });
}
u32 DynarmicExclusiveMonitor::ExclusiveRead32(std::size_t core_index, VAddr addr) {
return monitor.ReadAndMark<u32>(core_index, addr, [&]() -> u32 { return memory.Read32(addr); });
}
u64 DynarmicExclusiveMonitor::ExclusiveRead64(std::size_t core_index, VAddr addr) {
return monitor.ReadAndMark<u64>(core_index, addr, [&]() -> u64 { return memory.Read64(addr); });
}
u128 DynarmicExclusiveMonitor::ExclusiveRead128(std::size_t core_index, VAddr addr) {
return monitor.ReadAndMark<u128>(core_index, addr, [&]() -> u128 {
u128 result;
result[0] = memory.Read64(addr);
result[1] = memory.Read64(addr + 8);
return result;
});
}
void DynarmicExclusiveMonitor::ClearExclusive() {
monitor.Clear();
}
bool DynarmicExclusiveMonitor::ExclusiveWrite8(std::size_t core_index, VAddr vaddr, u8 value) {
return monitor.DoExclusiveOperation<u8>(core_index, vaddr, [&](u8 expected) -> bool {
return memory.WriteExclusive8(vaddr, value, expected);
});
}
bool DynarmicExclusiveMonitor::ExclusiveWrite16(std::size_t core_index, VAddr vaddr, u16 value) {
return monitor.DoExclusiveOperation<u16>(core_index, vaddr, [&](u16 expected) -> bool {
return memory.WriteExclusive16(vaddr, value, expected);
});
}
bool DynarmicExclusiveMonitor::ExclusiveWrite32(std::size_t core_index, VAddr vaddr, u32 value) {
return monitor.DoExclusiveOperation<u32>(core_index, vaddr, [&](u32 expected) -> bool {
return memory.WriteExclusive32(vaddr, value, expected);
});
}
bool DynarmicExclusiveMonitor::ExclusiveWrite64(std::size_t core_index, VAddr vaddr, u64 value) {
return monitor.DoExclusiveOperation<u64>(core_index, vaddr, [&](u64 expected) -> bool {
return memory.WriteExclusive64(vaddr, value, expected);
});
}
bool DynarmicExclusiveMonitor::ExclusiveWrite128(std::size_t core_index, VAddr vaddr, u128 value) {
return monitor.DoExclusiveOperation<u128>(core_index, vaddr, [&](u128 expected) -> bool {
return memory.WriteExclusive128(vaddr, value, expected);
});
}
} // namespace Core
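
The split into ExclusiveReadN/ExclusiveWriteN above mirrors a load-linked/store-conditional pair: the read marks the address for the calling core and records the current value, and the matching write only succeeds if that recorded value is still what memory holds. A hypothetical caller (not part of this change) would look roughly like:

#include "core/arm/dynarmic/arm_exclusive_monitor.h"

// Hypothetical sketch of an atomic 32-bit increment built on the monitor API above.
bool TryAtomicIncrement32(Core::DynarmicExclusiveMonitor& monitor, std::size_t core_index,
                          VAddr addr) {
    const u32 observed = monitor.ExclusiveRead32(core_index, addr);  // mark + load
    // Returns false if the exclusivity was lost, e.g. another core wrote to addr in between.
    return monitor.ExclusiveWrite32(core_index, addr, observed + 1);
}

// A real caller would retry on failure, just as guest code loops on LDREX/STREX.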

View File

@@ -0,0 +1,48 @@
// Copyright 2020 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <memory>
#include <unordered_map>
#include <dynarmic/exclusive_monitor.h>
#include "common/common_types.h"
#include "core/arm/dynarmic/arm_dynarmic_32.h"
#include "core/arm/dynarmic/arm_dynarmic_64.h"
#include "core/arm/exclusive_monitor.h"
namespace Core::Memory {
class Memory;
}
namespace Core {
class DynarmicExclusiveMonitor final : public ExclusiveMonitor {
public:
explicit DynarmicExclusiveMonitor(Memory::Memory& memory, std::size_t core_count);
~DynarmicExclusiveMonitor() override;
u8 ExclusiveRead8(std::size_t core_index, VAddr addr) override;
u16 ExclusiveRead16(std::size_t core_index, VAddr addr) override;
u32 ExclusiveRead32(std::size_t core_index, VAddr addr) override;
u64 ExclusiveRead64(std::size_t core_index, VAddr addr) override;
u128 ExclusiveRead128(std::size_t core_index, VAddr addr) override;
void ClearExclusive() override;
bool ExclusiveWrite8(std::size_t core_index, VAddr vaddr, u8 value) override;
bool ExclusiveWrite16(std::size_t core_index, VAddr vaddr, u16 value) override;
bool ExclusiveWrite32(std::size_t core_index, VAddr vaddr, u32 value) override;
bool ExclusiveWrite64(std::size_t core_index, VAddr vaddr, u64 value) override;
bool ExclusiveWrite128(std::size_t core_index, VAddr vaddr, u128 value) override;
private:
friend class ARM_Dynarmic_32;
friend class ARM_Dynarmic_64;
Dynarmic::ExclusiveMonitor monitor;
Core::Memory::Memory& memory;
};
} // namespace Core

View File

@@ -3,7 +3,7 @@
// Refer to the license.txt file included.
#ifdef ARCHITECTURE_x86_64
#include "core/arm/dynarmic/arm_dynarmic_64.h"
#include "core/arm/dynarmic/arm_exclusive_monitor.h"
#endif
#include "core/arm/exclusive_monitor.h"
#include "core/memory.h"

View File

@@ -18,7 +18,11 @@ class ExclusiveMonitor {
public:
virtual ~ExclusiveMonitor();
virtual void SetExclusive(std::size_t core_index, VAddr addr) = 0;
virtual u8 ExclusiveRead8(std::size_t core_index, VAddr addr) = 0;
virtual u16 ExclusiveRead16(std::size_t core_index, VAddr addr) = 0;
virtual u32 ExclusiveRead32(std::size_t core_index, VAddr addr) = 0;
virtual u64 ExclusiveRead64(std::size_t core_index, VAddr addr) = 0;
virtual u128 ExclusiveRead128(std::size_t core_index, VAddr addr) = 0;
virtual void ClearExclusive() = 0;
virtual bool ExclusiveWrite8(std::size_t core_index, VAddr vaddr, u8 value) = 0;

View File

@@ -6,6 +6,7 @@
#include <unicorn/arm64.h>
#include "common/assert.h"
#include "common/microprofile.h"
#include "core/arm/cpu_interrupt_handler.h"
#include "core/arm/unicorn/arm_unicorn.h"
#include "core/core.h"
#include "core/core_timing.h"
@@ -62,7 +63,9 @@ static bool UnmappedMemoryHook(uc_engine* uc, uc_mem_type type, u64 addr, int si
return false;
}
ARM_Unicorn::ARM_Unicorn(System& system, Arch architecture) : ARM_Interface{system} {
ARM_Unicorn::ARM_Unicorn(System& system, CPUInterrupts& interrupt_handlers, bool uses_wall_clock,
Arch architecture, std::size_t core_index)
: ARM_Interface{system, interrupt_handlers, uses_wall_clock}, core_index{core_index} {
const auto arch = architecture == Arch::AArch32 ? UC_ARCH_ARM : UC_ARCH_ARM64;
CHECKED(uc_open(arch, UC_MODE_ARM, &uc));
@@ -156,12 +159,20 @@ void ARM_Unicorn::SetTPIDR_EL0(u64 value) {
CHECKED(uc_reg_write(uc, UC_ARM64_REG_TPIDR_EL0, &value));
}
void ARM_Unicorn::ChangeProcessorID(std::size_t new_core_id) {
core_index = new_core_id;
}
void ARM_Unicorn::Run() {
if (GDBStub::IsServerEnabled()) {
ExecuteInstructions(std::max(4000000U, 0U));
} else {
ExecuteInstructions(
std::max(std::size_t(system.CoreTiming().GetDowncount()), std::size_t{0}));
while (true) {
if (interrupt_handlers[core_index].IsInterrupted()) {
return;
}
ExecuteInstructions(10);
}
}
}
@@ -183,8 +194,6 @@ void ARM_Unicorn::ExecuteInstructions(std::size_t num_instructions) {
UC_PROT_READ | UC_PROT_WRITE | UC_PROT_EXEC, page_buffer.data()));
CHECKED(uc_emu_start(uc, GetPC(), 1ULL << 63, 0, num_instructions));
CHECKED(uc_mem_unmap(uc, map_addr, page_buffer.size()));
system.CoreTiming().AddTicks(num_instructions);
if (GDBStub::IsServerEnabled()) {
if (last_bkpt_hit && last_bkpt.type == GDBStub::BreakpointType::Execute) {
uc_reg_write(uc, UC_ARM64_REG_PC, &last_bkpt.address);

View File

@@ -20,7 +20,8 @@ public:
AArch64, // 64-bit ARM
};
explicit ARM_Unicorn(System& system, Arch architecture);
explicit ARM_Unicorn(System& system, CPUInterrupts& interrupt_handlers, bool uses_wall_clock,
Arch architecture, std::size_t core_index);
~ARM_Unicorn() override;
void SetPC(u64 pc) override;
@@ -35,6 +36,7 @@ public:
void SetTlsAddress(VAddr address) override;
void SetTPIDR_EL0(u64 value) override;
u64 GetTPIDR_EL0() const override;
void ChangeProcessorID(std::size_t new_core_id) override;
void PrepareReschedule() override;
void ClearExclusiveState() override;
void ExecuteInstructions(std::size_t num_instructions);
@@ -55,6 +57,7 @@ private:
uc_engine* uc{};
GDBStub::BreakpointAddress last_bkpt{};
bool last_bkpt_hit = false;
std::size_t core_index;
};
} // namespace Core

View File

@@ -8,10 +8,10 @@
#include "common/file_util.h"
#include "common/logging/log.h"
#include "common/microprofile.h"
#include "common/string_util.h"
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/core_manager.h"
#include "core/core_timing.h"
#include "core/cpu_manager.h"
#include "core/device_memory.h"
@@ -51,6 +51,11 @@
#include "video_core/renderer_base.h"
#include "video_core/video_core.h"
MICROPROFILE_DEFINE(ARM_Jit_Dynarmic_CPU0, "ARM JIT", "Dynarmic CPU 0", MP_RGB(255, 64, 64));
MICROPROFILE_DEFINE(ARM_Jit_Dynarmic_CPU1, "ARM JIT", "Dynarmic CPU 1", MP_RGB(255, 64, 64));
MICROPROFILE_DEFINE(ARM_Jit_Dynarmic_CPU2, "ARM JIT", "Dynarmic CPU 2", MP_RGB(255, 64, 64));
MICROPROFILE_DEFINE(ARM_Jit_Dynarmic_CPU3, "ARM JIT", "Dynarmic CPU 3", MP_RGB(255, 64, 64));
namespace Core {
namespace {
@@ -117,23 +122,22 @@ struct System::Impl {
: kernel{system}, fs_controller{system}, memory{system},
cpu_manager{system}, reporter{system}, applet_manager{system} {}
CoreManager& CurrentCoreManager() {
return cpu_manager.GetCurrentCoreManager();
}
Kernel::PhysicalCore& CurrentPhysicalCore() {
const auto index = cpu_manager.GetActiveCoreIndex();
return kernel.PhysicalCore(index);
}
Kernel::PhysicalCore& GetPhysicalCore(std::size_t index) {
return kernel.PhysicalCore(index);
}
ResultStatus RunLoop(bool tight_loop) {
ResultStatus Run() {
status = ResultStatus::Success;
cpu_manager.RunLoop(tight_loop);
kernel.Suspend(false);
core_timing.SyncPause(false);
cpu_manager.Pause(false);
return status;
}
ResultStatus Pause() {
status = ResultStatus::Success;
core_timing.SyncPause(true);
kernel.Suspend(true);
cpu_manager.Pause(true);
return status;
}
@@ -143,7 +147,15 @@ struct System::Impl {
device_memory = std::make_unique<Core::DeviceMemory>(system);
core_timing.Initialize();
is_multicore = Settings::values.use_multi_core;
is_async_gpu = is_multicore || Settings::values.use_asynchronous_gpu_emulation;
kernel.SetMulticore(is_multicore);
cpu_manager.SetMulticore(is_multicore);
cpu_manager.SetAsyncGpu(is_async_gpu);
core_timing.SetMulticore(is_multicore);
core_timing.Initialize([&system]() { system.RegisterHostThread(); });
kernel.Initialize();
cpu_manager.Initialize();
@@ -180,6 +192,11 @@ struct System::Impl {
is_powered_on = true;
exit_lock = false;
microprofile_dynarmic[0] = MICROPROFILE_TOKEN(ARM_Jit_Dynarmic_CPU0);
microprofile_dynarmic[1] = MICROPROFILE_TOKEN(ARM_Jit_Dynarmic_CPU1);
microprofile_dynarmic[2] = MICROPROFILE_TOKEN(ARM_Jit_Dynarmic_CPU2);
microprofile_dynarmic[3] = MICROPROFILE_TOKEN(ARM_Jit_Dynarmic_CPU3);
LOG_DEBUG(Core, "Initialized OK");
return ResultStatus::Success;
@@ -277,8 +294,6 @@ struct System::Impl {
service_manager.reset();
cheat_engine.reset();
telemetry_session.reset();
perf_stats.reset();
gpu_core.reset();
device_memory.reset();
// Close all CPU/threading state
@@ -290,6 +305,8 @@ struct System::Impl {
// Close app loader
app_loader.reset();
gpu_core.reset();
perf_stats.reset();
// Clear all applets
applet_manager.ClearAll();
@@ -382,25 +399,35 @@ struct System::Impl {
std::unique_ptr<Core::PerfStats> perf_stats;
Core::FrameLimiter frame_limiter;
bool is_multicore{};
bool is_async_gpu{};
std::array<u64, Core::Hardware::NUM_CPU_CORES> dynarmic_ticks{};
std::array<MicroProfileToken, Core::Hardware::NUM_CPU_CORES> microprofile_dynarmic{};
};
System::System() : impl{std::make_unique<Impl>(*this)} {}
System::~System() = default;
CoreManager& System::CurrentCoreManager() {
return impl->CurrentCoreManager();
CpuManager& System::GetCpuManager() {
return impl->cpu_manager;
}
const CoreManager& System::CurrentCoreManager() const {
return impl->CurrentCoreManager();
const CpuManager& System::GetCpuManager() const {
return impl->cpu_manager;
}
System::ResultStatus System::RunLoop(bool tight_loop) {
return impl->RunLoop(tight_loop);
System::ResultStatus System::Run() {
return impl->Run();
}
System::ResultStatus System::Pause() {
return impl->Pause();
}
System::ResultStatus System::SingleStep() {
return RunLoop(false);
return ResultStatus::Success;
}
void System::InvalidateCpuInstructionCaches() {
@@ -416,7 +443,7 @@ bool System::IsPoweredOn() const {
}
void System::PrepareReschedule() {
impl->CurrentPhysicalCore().Stop();
// Deprecated, does nothing, kept for backward compatibility.
}
void System::PrepareReschedule(const u32 core_index) {
@@ -436,31 +463,41 @@ const TelemetrySession& System::TelemetrySession() const {
}
ARM_Interface& System::CurrentArmInterface() {
return impl->CurrentPhysicalCore().ArmInterface();
return impl->kernel.CurrentScheduler().GetCurrentThread()->ArmInterface();
}
const ARM_Interface& System::CurrentArmInterface() const {
return impl->CurrentPhysicalCore().ArmInterface();
return impl->kernel.CurrentScheduler().GetCurrentThread()->ArmInterface();
}
std::size_t System::CurrentCoreIndex() const {
return impl->cpu_manager.GetActiveCoreIndex();
std::size_t core = impl->kernel.GetCurrentHostThreadID();
ASSERT(core < Core::Hardware::NUM_CPU_CORES);
return core;
}
Kernel::Scheduler& System::CurrentScheduler() {
return impl->CurrentPhysicalCore().Scheduler();
return impl->kernel.CurrentScheduler();
}
const Kernel::Scheduler& System::CurrentScheduler() const {
return impl->CurrentPhysicalCore().Scheduler();
return impl->kernel.CurrentScheduler();
}
Kernel::PhysicalCore& System::CurrentPhysicalCore() {
return impl->kernel.CurrentPhysicalCore();
}
const Kernel::PhysicalCore& System::CurrentPhysicalCore() const {
return impl->kernel.CurrentPhysicalCore();
}
Kernel::Scheduler& System::Scheduler(std::size_t core_index) {
return impl->GetPhysicalCore(core_index).Scheduler();
return impl->kernel.Scheduler(core_index);
}
const Kernel::Scheduler& System::Scheduler(std::size_t core_index) const {
return impl->GetPhysicalCore(core_index).Scheduler();
return impl->kernel.Scheduler(core_index);
}
/// Gets the global scheduler
@@ -490,20 +527,15 @@ const Kernel::Process* System::CurrentProcess() const {
}
ARM_Interface& System::ArmInterface(std::size_t core_index) {
return impl->GetPhysicalCore(core_index).ArmInterface();
auto* thread = impl->kernel.Scheduler(core_index).GetCurrentThread();
ASSERT(thread && !thread->IsHLEThread());
return thread->ArmInterface();
}
const ARM_Interface& System::ArmInterface(std::size_t core_index) const {
return impl->GetPhysicalCore(core_index).ArmInterface();
}
CoreManager& System::GetCoreManager(std::size_t core_index) {
return impl->cpu_manager.GetCoreManager(core_index);
}
const CoreManager& System::GetCoreManager(std::size_t core_index) const {
ASSERT(core_index < NUM_CPU_CORES);
return impl->cpu_manager.GetCoreManager(core_index);
auto* thread = impl->kernel.Scheduler(core_index).GetCurrentThread();
ASSERT(thread && !thread->IsHLEThread());
return thread->ArmInterface();
}
ExclusiveMonitor& System::Monitor() {
@@ -722,4 +754,18 @@ void System::RegisterHostThread() {
impl->kernel.RegisterHostThread();
}
void System::EnterDynarmicProfile() {
std::size_t core = impl->kernel.GetCurrentHostThreadID();
impl->dynarmic_ticks[core] = MicroProfileEnter(impl->microprofile_dynarmic[core]);
}
void System::ExitDynarmicProfile() {
std::size_t core = impl->kernel.GetCurrentHostThreadID();
MicroProfileLeave(impl->microprofile_dynarmic[core], impl->dynarmic_ticks[core]);
}
bool System::IsMulticore() const {
return impl->is_multicore;
}
} // namespace Core
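
With this change the frontend no longer drives a per-slice RunLoop(): Run() resumes the kernel, core timing, and the CPU threads together, and Pause() suspends them again synchronously. A hypothetical frontend call sequence (sketch only, using the System interface shown above):

#include "core/core.h"

void RunThenPause(Core::System& system) {
    if (system.Run() != Core::System::ResultStatus::Success) {
        return;
    }
    // Emulation now proceeds on the threads managed by CpuManager/CoreTiming;
    // the caller is free to do other work until it decides to pause.
    system.Pause();  // synchronously suspends timing, kernel, and CPU execution
}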

View File

@@ -27,6 +27,7 @@ class VfsFilesystem;
namespace Kernel {
class GlobalScheduler;
class KernelCore;
class PhysicalCore;
class Process;
class Scheduler;
} // namespace Kernel
@@ -90,7 +91,7 @@ class InterruptManager;
namespace Core {
class ARM_Interface;
class CoreManager;
class CpuManager;
class DeviceMemory;
class ExclusiveMonitor;
class FrameLimiter;
@@ -136,16 +137,16 @@ public:
};
/**
* Run the core CPU loop
* This function runs the core for the specified number of CPU instructions before trying to
* update hardware. This is much faster than SingleStep (and should be equivalent), as the CPU
* is not required to do a full dispatch with each instruction. NOTE: the number of instructions
* requested is not guaranteed to run, as this will be interrupted preemptively if a hardware
* update is requested (e.g. on a thread switch).
* @param tight_loop If false, the CPU single-steps.
* @return Result status, indicating whether or not the operation succeeded.
* Run the OS and Application
* This function will start emulation and run the relevant devices
*/
ResultStatus RunLoop(bool tight_loop = true);
ResultStatus Run();
/**
* Pause the OS and Application
* This function will pause emulation and stop the relevant devices
*/
ResultStatus Pause();
/**
* Step the CPU one instruction
@@ -209,17 +210,21 @@ public:
/// Gets the scheduler for the CPU core that is currently running
const Kernel::Scheduler& CurrentScheduler() const;
/// Gets the physical core for the CPU core that is currently running
Kernel::PhysicalCore& CurrentPhysicalCore();
/// Gets the physical core for the CPU core that is currently running
const Kernel::PhysicalCore& CurrentPhysicalCore() const;
/// Gets a reference to an ARM interface for the CPU core with the specified index
ARM_Interface& ArmInterface(std::size_t core_index);
/// Gets a const reference to an ARM interface from the CPU core with the specified index
const ARM_Interface& ArmInterface(std::size_t core_index) const;
/// Gets a CPU interface to the CPU core with the specified index
CoreManager& GetCoreManager(std::size_t core_index);
CpuManager& GetCpuManager();
/// Gets a CPU interface to the CPU core with the specified index
const CoreManager& GetCoreManager(std::size_t core_index) const;
const CpuManager& GetCpuManager() const;
/// Gets a reference to the exclusive monitor
ExclusiveMonitor& Monitor();
@@ -370,15 +375,18 @@ public:
/// Register a host thread as an auxiliary thread.
void RegisterHostThread();
/// Enter Dynarmic Microprofile
void EnterDynarmicProfile();
/// Exit Dynarmic Microprofile
void ExitDynarmicProfile();
/// Tells if system is running on multicore.
bool IsMulticore() const;
private:
System();
/// Returns the currently running CPU core
CoreManager& CurrentCoreManager();
/// Returns the currently running CPU core
const CoreManager& CurrentCoreManager() const;
/**
* Initialize the emulated system.
* @param emu_window Reference to the host-system window used for video output and keyboard

View File

@@ -1,67 +0,0 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <condition_variable>
#include <mutex>
#include "common/logging/log.h"
#include "core/arm/exclusive_monitor.h"
#include "core/arm/unicorn/arm_unicorn.h"
#include "core/core.h"
#include "core/core_manager.h"
#include "core/core_timing.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/physical_core.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/lock.h"
#include "core/settings.h"
namespace Core {
CoreManager::CoreManager(System& system, std::size_t core_index)
: global_scheduler{system.GlobalScheduler()}, physical_core{system.Kernel().PhysicalCore(
core_index)},
core_timing{system.CoreTiming()}, core_index{core_index} {}
CoreManager::~CoreManager() = default;
void CoreManager::RunLoop(bool tight_loop) {
Reschedule();
// If we don't have a currently active thread then don't execute instructions,
// instead advance to the next event and try to yield to the next thread
if (Kernel::GetCurrentThread() == nullptr) {
LOG_TRACE(Core, "Core-{} idling", core_index);
core_timing.Idle();
} else {
if (tight_loop) {
physical_core.Run();
} else {
physical_core.Step();
}
}
core_timing.Advance();
Reschedule();
}
void CoreManager::SingleStep() {
return RunLoop(false);
}
void CoreManager::PrepareReschedule() {
physical_core.Stop();
}
void CoreManager::Reschedule() {
// Lock the global kernel mutex when we manipulate the HLE state
std::lock_guard lock(HLE::g_hle_lock);
global_scheduler.SelectThread(core_index);
physical_core.Scheduler().TryDoContextSwitch();
}
} // namespace Core

View File

@@ -1,63 +0,0 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <atomic>
#include <cstddef>
#include <memory>
#include "common/common_types.h"
namespace Kernel {
class GlobalScheduler;
class PhysicalCore;
} // namespace Kernel
namespace Core {
class System;
}
namespace Core::Timing {
class CoreTiming;
}
namespace Core::Memory {
class Memory;
}
namespace Core {
constexpr unsigned NUM_CPU_CORES{4};
class CoreManager {
public:
CoreManager(System& system, std::size_t core_index);
~CoreManager();
void RunLoop(bool tight_loop = true);
void SingleStep();
void PrepareReschedule();
bool IsMainCore() const {
return core_index == 0;
}
std::size_t CoreIndex() const {
return core_index;
}
private:
void Reschedule();
Kernel::GlobalScheduler& global_scheduler;
Kernel::PhysicalCore& physical_core;
Timing::CoreTiming& core_timing;
std::atomic<bool> reschedule_pending = false;
std::size_t core_index;
};
} // namespace Core

View File

@@ -1,29 +1,27 @@
// Copyright 2008 Dolphin Emulator Project / 2017 Citra Emulator Project
// Licensed under GPLv2+
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/core_timing.h"
#include <algorithm>
#include <mutex>
#include <string>
#include <tuple>
#include "common/assert.h"
#include "common/thread.h"
#include "common/microprofile.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
#include "core/hardware_properties.h"
namespace Core::Timing {
constexpr int MAX_SLICE_LENGTH = 10000;
constexpr u64 MAX_SLICE_LENGTH = 4000;
std::shared_ptr<EventType> CreateEvent(std::string name, TimedCallback&& callback) {
return std::make_shared<EventType>(std::move(callback), std::move(name));
}
struct CoreTiming::Event {
s64 time;
u64 time;
u64 fifo_order;
u64 userdata;
std::weak_ptr<EventType> type;
@@ -39,51 +37,90 @@ struct CoreTiming::Event {
}
};
CoreTiming::CoreTiming() = default;
CoreTiming::CoreTiming() {
clock =
Common::CreateBestMatchingClock(Core::Hardware::BASE_CLOCK_RATE, Core::Hardware::CNTFREQ);
}
CoreTiming::~CoreTiming() = default;
void CoreTiming::Initialize() {
downcounts.fill(MAX_SLICE_LENGTH);
time_slice.fill(MAX_SLICE_LENGTH);
slice_length = MAX_SLICE_LENGTH;
global_timer = 0;
idled_cycles = 0;
current_context = 0;
// The time between CoreTiming being initialized and the first call to Advance() is considered
// the slice boundary between slice -1 and slice 0. Dispatcher loops must call Advance() before
// executing the first cycle of each slice to prepare the slice length and downcount for
// that slice.
is_global_timer_sane = true;
void CoreTiming::ThreadEntry(CoreTiming& instance) {
constexpr char name[] = "yuzu:HostTiming";
MicroProfileOnThreadCreate(name);
Common::SetCurrentThreadName(name);
Common::SetCurrentThreadPriority(Common::ThreadPriority::VeryHigh);
instance.on_thread_init();
instance.ThreadLoop();
}
void CoreTiming::Initialize(std::function<void(void)>&& on_thread_init_) {
on_thread_init = std::move(on_thread_init_);
event_fifo_id = 0;
shutting_down = false;
ticks = 0;
const auto empty_timed_callback = [](u64, s64) {};
ev_lost = CreateEvent("_lost_event", empty_timed_callback);
if (is_multicore) {
timer_thread = std::make_unique<std::thread>(ThreadEntry, std::ref(*this));
}
}
void CoreTiming::Shutdown() {
paused = true;
shutting_down = true;
pause_event.Set();
event.Set();
if (timer_thread) {
timer_thread->join();
}
ClearPendingEvents();
timer_thread.reset();
has_started = false;
}
void CoreTiming::ScheduleEvent(s64 cycles_into_future, const std::shared_ptr<EventType>& event_type,
u64 userdata) {
std::lock_guard guard{inner_mutex};
const s64 timeout = GetTicks() + cycles_into_future;
void CoreTiming::Pause(bool is_paused) {
paused = is_paused;
pause_event.Set();
}
// If this event needs to be scheduled before the next advance(), force one early
if (!is_global_timer_sane) {
ForceExceptionCheck(cycles_into_future);
void CoreTiming::SyncPause(bool is_paused) {
if (is_paused == paused && paused_set == paused) {
return;
}
Pause(is_paused);
if (timer_thread) {
if (!is_paused) {
pause_event.Set();
}
event.Set();
while (paused_set != is_paused)
;
}
}
event_queue.emplace_back(Event{timeout, event_fifo_id++, userdata, event_type});
bool CoreTiming::IsRunning() const {
return !paused_set;
}
std::push_heap(event_queue.begin(), event_queue.end(), std::greater<>());
bool CoreTiming::HasPendingEvents() const {
return !(wait_set && event_queue.empty());
}
void CoreTiming::ScheduleEvent(s64 ns_into_future, const std::shared_ptr<EventType>& event_type,
u64 userdata) {
{
std::scoped_lock scope{basic_lock};
const u64 timeout = static_cast<u64>(GetGlobalTimeNs().count() + ns_into_future);
event_queue.emplace_back(Event{timeout, event_fifo_id++, userdata, event_type});
std::push_heap(event_queue.begin(), event_queue.end(), std::greater<>());
}
event.Set();
}
void CoreTiming::UnscheduleEvent(const std::shared_ptr<EventType>& event_type, u64 userdata) {
std::lock_guard guard{inner_mutex};
std::scoped_lock scope{basic_lock};
const auto itr = std::remove_if(event_queue.begin(), event_queue.end(), [&](const Event& e) {
return e.type.lock().get() == event_type.get() && e.userdata == userdata;
});
@@ -95,21 +132,39 @@ void CoreTiming::UnscheduleEvent(const std::shared_ptr<EventType>& event_type, u
}
}
u64 CoreTiming::GetTicks() const {
u64 ticks = static_cast<u64>(global_timer);
if (!is_global_timer_sane) {
ticks += accumulated_ticks;
void CoreTiming::AddTicks(u64 ticks) {
this->ticks += ticks;
downcount -= ticks;
}
void CoreTiming::Idle() {
if (!event_queue.empty()) {
const u64 next_event_time = event_queue.front().time;
const u64 next_ticks = nsToCycles(std::chrono::nanoseconds(next_event_time)) + 10U;
if (next_ticks > ticks) {
ticks = next_ticks;
}
return;
}
ticks += 1000U;
}
void CoreTiming::ResetTicks() {
downcount = MAX_SLICE_LENGTH;
}
u64 CoreTiming::GetCPUTicks() const {
if (is_multicore) {
return clock->GetCPUCycles();
}
return ticks;
}
u64 CoreTiming::GetIdleTicks() const {
return static_cast<u64>(idled_cycles);
}
void CoreTiming::AddTicks(u64 ticks) {
accumulated_ticks += ticks;
downcounts[current_context] -= static_cast<s64>(ticks);
u64 CoreTiming::GetClockTicks() const {
if (is_multicore) {
return clock->GetClockCycles();
}
return CpuCyclesToClockCycles(ticks);
}
void CoreTiming::ClearPendingEvents() {
@@ -117,7 +172,7 @@ void CoreTiming::ClearPendingEvents() {
}
void CoreTiming::RemoveEvent(const std::shared_ptr<EventType>& event_type) {
std::lock_guard guard{inner_mutex};
basic_lock.lock();
const auto itr = std::remove_if(event_queue.begin(), event_queue.end(), [&](const Event& e) {
return e.type.lock().get() == event_type.get();
@@ -128,99 +183,72 @@ void CoreTiming::RemoveEvent(const std::shared_ptr<EventType>& event_type) {
event_queue.erase(itr, event_queue.end());
std::make_heap(event_queue.begin(), event_queue.end(), std::greater<>());
}
basic_lock.unlock();
}
void CoreTiming::ForceExceptionCheck(s64 cycles) {
cycles = std::max<s64>(0, cycles);
if (downcounts[current_context] <= cycles) {
return;
}
// downcount is always (much) smaller than MAX_INT so we can safely cast cycles to an int
// here. Account for cycles already executed by adjusting the g.slice_length
downcounts[current_context] = static_cast<int>(cycles);
}
std::optional<u64> CoreTiming::NextAvailableCore(const s64 needed_ticks) const {
const u64 original_context = current_context;
u64 next_context = (original_context + 1) % num_cpu_cores;
while (next_context != original_context) {
if (time_slice[next_context] >= needed_ticks) {
return {next_context};
} else if (time_slice[next_context] >= 0) {
return std::nullopt;
}
next_context = (next_context + 1) % num_cpu_cores;
}
return std::nullopt;
}
void CoreTiming::Advance() {
std::unique_lock<std::mutex> guard(inner_mutex);
const u64 cycles_executed = accumulated_ticks;
time_slice[current_context] = std::max<s64>(0, time_slice[current_context] - accumulated_ticks);
global_timer += cycles_executed;
is_global_timer_sane = true;
std::optional<s64> CoreTiming::Advance() {
std::scoped_lock advance_scope{advance_lock};
std::scoped_lock basic_scope{basic_lock};
global_timer = GetGlobalTimeNs().count();
while (!event_queue.empty() && event_queue.front().time <= global_timer) {
Event evt = std::move(event_queue.front());
std::pop_heap(event_queue.begin(), event_queue.end(), std::greater<>());
event_queue.pop_back();
inner_mutex.unlock();
basic_lock.unlock();
if (auto event_type{evt.type.lock()}) {
event_type->callback(evt.userdata, global_timer - evt.time);
}
inner_mutex.lock();
basic_lock.lock();
global_timer = GetGlobalTimeNs().count();
}
is_global_timer_sane = false;
// Still events left (scheduled in the future)
if (!event_queue.empty()) {
const s64 needed_ticks =
std::min<s64>(event_queue.front().time - global_timer, MAX_SLICE_LENGTH);
const auto next_core = NextAvailableCore(needed_ticks);
if (next_core) {
downcounts[*next_core] = needed_ticks;
const s64 next_time = event_queue.front().time - global_timer;
return next_time;
} else {
return std::nullopt;
}
}
void CoreTiming::ThreadLoop() {
has_started = true;
while (!shutting_down) {
while (!paused) {
paused_set = false;
const auto next_time = Advance();
if (next_time) {
if (*next_time > 0) {
std::chrono::nanoseconds next_time_ns = std::chrono::nanoseconds(*next_time);
event.WaitFor(next_time_ns);
}
} else {
wait_set = true;
event.Wait();
}
wait_set = false;
}
paused_set = true;
clock->Pause(true);
pause_event.Wait();
clock->Pause(false);
}
accumulated_ticks = 0;
downcounts[current_context] = time_slice[current_context];
}
void CoreTiming::ResetRun() {
downcounts.fill(MAX_SLICE_LENGTH);
time_slice.fill(MAX_SLICE_LENGTH);
current_context = 0;
// Still events left (scheduled in the future)
if (!event_queue.empty()) {
const s64 needed_ticks =
std::min<s64>(event_queue.front().time - global_timer, MAX_SLICE_LENGTH);
downcounts[current_context] = needed_ticks;
std::chrono::nanoseconds CoreTiming::GetGlobalTimeNs() const {
if (is_multicore) {
return clock->GetTimeNS();
}
is_global_timer_sane = false;
accumulated_ticks = 0;
}
void CoreTiming::Idle() {
accumulated_ticks += downcounts[current_context];
idled_cycles += downcounts[current_context];
downcounts[current_context] = 0;
return CyclesToNs(ticks);
}
std::chrono::microseconds CoreTiming::GetGlobalTimeUs() const {
return std::chrono::microseconds{GetTicks() * 1000000 / Hardware::BASE_CLOCK_RATE};
}
s64 CoreTiming::GetDowncount() const {
return downcounts[current_context];
if (is_multicore) {
return clock->GetTimeUS();
}
return CyclesToUs(ticks);
}
} // namespace Core::Timing
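
With this rewrite, events are keyed by host nanoseconds instead of slice-relative cycles: ScheduleEvent() pushes {now_ns + ns_into_future} onto the min-heap and wakes the timer thread, whose ThreadLoop() sleeps until the next deadline returned by Advance(). A hypothetical registration against the interface above (the callback body and names are placeholders):

#include "core/core_timing.h"

void ScheduleExample(Core::Timing::CoreTiming& core_timing) {
    auto example_event = Core::Timing::CreateEvent(
        "ExampleEvent", [](u64 /*userdata*/, s64 ns_late) {
            // Runs on the yuzu:HostTiming thread roughly 500 us after scheduling;
            // ns_late reports how far past the deadline the callback actually fired.
        });
    core_timing.ScheduleEvent(500'000 /* ns */, example_event);
}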

View File

@@ -1,19 +1,25 @@
// Copyright 2008 Dolphin Emulator Project / 2017 Citra Emulator Project
// Licensed under GPLv2+
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <atomic>
#include <chrono>
#include <functional>
#include <memory>
#include <mutex>
#include <optional>
#include <string>
#include <thread>
#include <vector>
#include "common/common_types.h"
#include "common/spin_lock.h"
#include "common/thread.h"
#include "common/threadsafe_queue.h"
#include "common/wall_clock.h"
#include "core/hardware_properties.h"
namespace Core::Timing {
@@ -56,16 +62,40 @@ public:
/// CoreTiming begins at the boundary of timing slice -1. An initial call to Advance() is
/// required to end slice -1 and start slice 0 before the first cycle of code is executed.
void Initialize();
void Initialize(std::function<void(void)>&& on_thread_init_);
/// Tears down all timing related functionality.
void Shutdown();
/// After the first Advance, the slice lengths and the downcount will be reduced whenever an
/// event is scheduled earlier than the current values.
///
/// Scheduling from a callback will not update the downcount until the Advance() completes.
void ScheduleEvent(s64 cycles_into_future, const std::shared_ptr<EventType>& event_type,
/// Sets whether emulation is multicore or single core; must be set before Initialize
void SetMulticore(bool is_multicore) {
this->is_multicore = is_multicore;
}
/// Check if it's using host timing.
bool IsHostTiming() const {
return is_multicore;
}
/// Pauses/Unpauses the execution of the timer thread.
void Pause(bool is_paused);
/// Pauses/Unpauses the execution of the timer thread and waits until paused.
void SyncPause(bool is_paused);
/// Checks if core timing is running.
bool IsRunning() const;
/// Checks if the timer thread has started.
bool HasStarted() const {
return has_started;
}
/// Checks if there are any pending time events.
bool HasPendingEvents() const;
/// Schedules an event in core timing
void ScheduleEvent(s64 ns_into_future, const std::shared_ptr<EventType>& event_type,
u64 userdata = 0);
void UnscheduleEvent(const std::shared_ptr<EventType>& event_type, u64 userdata);
@@ -73,41 +103,30 @@ public:
/// We only permit one event of each type in the queue at a time.
void RemoveEvent(const std::shared_ptr<EventType>& event_type);
void ForceExceptionCheck(s64 cycles);
/// This should only be called from the emu thread; if you are calling it from any other
/// thread, you are doing something evil
u64 GetTicks() const;
u64 GetIdleTicks() const;
void AddTicks(u64 ticks);
/// Advance must be called at the beginning of dispatcher loops, not the end. Advance() ends
/// the previous timing slice and begins the next one; you must Advance from the previous
/// slice to the current one before executing any cycles. CoreTiming starts in slice -1, so an
/// Advance() is required to initialize the slice length before the first cycle of emulated
/// instructions is executed.
void Advance();
void ResetTicks();
/// Pretend that the main CPU has executed enough cycles to reach the next event.
void Idle();
s64 GetDowncount() const {
return downcount;
}
/// Returns current time in emulated CPU cycles
u64 GetCPUTicks() const;
/// Returns current time in emulated clock cycles
u64 GetClockTicks() const;
/// Returns current time in microseconds.
std::chrono::microseconds GetGlobalTimeUs() const;
void ResetRun();
/// Returns current time in nanoseconds.
std::chrono::nanoseconds GetGlobalTimeNs() const;
s64 GetDowncount() const;
void SwitchContext(u64 new_context) {
current_context = new_context;
}
bool CanCurrentContextRun() const {
return time_slice[current_context] > 0;
}
std::optional<u64> NextAvailableCore(const s64 needed_ticks) const;
/// Checks for events manually and returns time in nanoseconds for next event, threadsafe.
std::optional<s64> Advance();
private:
struct Event;
@@ -115,21 +134,14 @@ private:
/// Clear all pending events. This should ONLY be done on exit.
void ClearPendingEvents();
static constexpr u64 num_cpu_cores = 4;
static void ThreadEntry(CoreTiming& instance);
void ThreadLoop();
s64 global_timer = 0;
s64 idled_cycles = 0;
s64 slice_length = 0;
u64 accumulated_ticks = 0;
std::array<s64, num_cpu_cores> downcounts{};
// Slice of time assigned to each core per run.
std::array<s64, num_cpu_cores> time_slice{};
u64 current_context = 0;
std::unique_ptr<Common::WallClock> clock;
// Are we in a function that has been called from Advance()?
// If events are scheduled from a function that gets called from Advance(),
// don't change slice_length and downcount.
bool is_global_timer_sane = false;
u64 global_timer = 0;
std::chrono::nanoseconds start_point;
// The queue is a min-heap using std::make_heap/push_heap/pop_heap.
// We don't use std::priority_queue because we need to be able to serialize, unserialize and
@@ -139,8 +151,23 @@ private:
u64 event_fifo_id = 0;
std::shared_ptr<EventType> ev_lost;
Common::Event event{};
Common::Event pause_event{};
Common::SpinLock basic_lock{};
Common::SpinLock advance_lock{};
std::unique_ptr<std::thread> timer_thread;
std::atomic<bool> paused{};
std::atomic<bool> paused_set{};
std::atomic<bool> wait_set{};
std::atomic<bool> shutting_down{};
std::atomic<bool> has_started{};
std::function<void(void)> on_thread_init{};
std::mutex inner_mutex;
bool is_multicore{};
/// Cycle timing
u64 ticks{};
s64 downcount{};
};
/// Creates a core timing event with the given name and callback.
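The header above keeps the event queue as a min-heap managed with std::make_heap/push_heap/pop_heap rather than std::priority_queue, so it can be serialized and edited in place. A reduced, self-contained sketch of that data structure, with simplified event contents that are not yuzu's actual types:

    #include <algorithm>
    #include <cstdint>
    #include <functional>
    #include <tuple>
    #include <vector>

    struct Event {
        std::int64_t time;        // absolute deadline
        std::uint64_t fifo_order; // tie-breaker so equal deadlines pop in FIFO order
        std::function<void()> callback;
    };

    // Comparator that turns std::*_heap (a max-heap by default) into a min-heap on
    // (time, fifo_order).
    const auto later = [](const Event& a, const Event& b) {
        return std::tie(a.time, a.fifo_order) > std::tie(b.time, b.fifo_order);
    };

    class EventQueue {
    public:
        void Schedule(std::int64_t time, std::function<void()> callback) {
            queue.push_back(Event{time, fifo_id++, std::move(callback)});
            std::push_heap(queue.begin(), queue.end(), later);
        }

        // Runs every event whose deadline is <= now; returns the next deadline, or -1.
        std::int64_t Advance(std::int64_t now) {
            while (!queue.empty() && queue.front().time <= now) {
                std::pop_heap(queue.begin(), queue.end(), later);
                Event ev = std::move(queue.back());
                queue.pop_back();
                ev.callback();
            }
            return queue.empty() ? -1 : queue.front().time;
        }

    private:
        std::vector<Event> queue; // kept heap-ordered; a plain vector stays easy to serialize
        std::uint64_t fifo_id = 0;
    };

Keeping the heap in a plain std::vector is what makes the serialize/unserialize and in-place removal cases mentioned in the comment practical.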

View File

@@ -38,15 +38,23 @@ s64 usToCycles(std::chrono::microseconds us) {
}
s64 nsToCycles(std::chrono::nanoseconds ns) {
if (static_cast<u64>(ns.count() / 1000000000) > MAX_VALUE_TO_MULTIPLY) {
LOG_ERROR(Core_Timing, "Integer overflow, use max value");
return std::numeric_limits<s64>::max();
}
if (static_cast<u64>(ns.count()) > MAX_VALUE_TO_MULTIPLY) {
LOG_DEBUG(Core_Timing, "Time very big, do rounding");
return Hardware::BASE_CLOCK_RATE * (ns.count() / 1000000000);
}
return (Hardware::BASE_CLOCK_RATE * ns.count()) / 1000000000;
const u128 temporal = Common::Multiply64Into128(ns.count(), Hardware::BASE_CLOCK_RATE);
return Common::Divide128On32(temporal, static_cast<u32>(1000000000)).first;
}
u64 msToClockCycles(std::chrono::milliseconds ns) {
const u128 temp = Common::Multiply64Into128(ns.count(), Hardware::CNTFREQ);
return Common::Divide128On32(temp, 1000).first;
}
u64 usToClockCycles(std::chrono::microseconds ns) {
const u128 temp = Common::Multiply64Into128(ns.count(), Hardware::CNTFREQ);
return Common::Divide128On32(temp, 1000000).first;
}
u64 nsToClockCycles(std::chrono::nanoseconds ns) {
const u128 temp = Common::Multiply64Into128(ns.count(), Hardware::CNTFREQ);
return Common::Divide128On32(temp, 1000000000).first;
}
u64 CpuCyclesToClockCycles(u64 ticks) {
@@ -54,4 +62,22 @@ u64 CpuCyclesToClockCycles(u64 ticks) {
return Common::Divide128On32(temporal, static_cast<u32>(Hardware::BASE_CLOCK_RATE)).first;
}
std::chrono::milliseconds CyclesToMs(s64 cycles) {
const u128 temporal = Common::Multiply64Into128(cycles, 1000);
u64 ms = Common::Divide128On32(temporal, static_cast<u32>(Hardware::BASE_CLOCK_RATE)).first;
return std::chrono::milliseconds(ms);
}
std::chrono::nanoseconds CyclesToNs(s64 cycles) {
const u128 temporal = Common::Multiply64Into128(cycles, 1000000000);
u64 ns = Common::Divide128On32(temporal, static_cast<u32>(Hardware::BASE_CLOCK_RATE)).first;
return std::chrono::nanoseconds(ns);
}
std::chrono::microseconds CyclesToUs(s64 cycles) {
const u128 temporal = Common::Multiply64Into128(cycles, 1000000);
u64 us = Common::Divide128On32(temporal, static_cast<u32>(Hardware::BASE_CLOCK_RATE)).first;
return std::chrono::microseconds(us);
}
} // namespace Core::Timing
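The new *ToClockCycles helpers above widen to 128 bits before dividing because ns.count() * CNTFREQ can exceed 64 bits. Assuming a compiler with unsigned __int128 (GCC/Clang), the same idea can be sketched as below; yuzu itself uses its portable Multiply64Into128/Divide128On32 helpers instead:

    #include <cstdint>

    // counter_frequency_hz is caller-supplied; no particular hardware value is assumed.
    std::uint64_t NsToClockCycles(std::uint64_t ns, std::uint64_t counter_frequency_hz) {
        const unsigned __int128 product =
            static_cast<unsigned __int128>(ns) * counter_frequency_hz;
        return static_cast<std::uint64_t>(product / 1'000'000'000u);
    }

For scale: with a base clock on the order of 1 GHz, the naive 64-bit product ns * rate wraps after roughly 2^64 / 10^9 ≈ 18.4 seconds of emulated time, which is why the direct multiplications above are guarded with MAX_VALUE_TO_MULTIPLY checks.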

View File

@@ -13,18 +13,12 @@ namespace Core::Timing {
s64 msToCycles(std::chrono::milliseconds ms);
s64 usToCycles(std::chrono::microseconds us);
s64 nsToCycles(std::chrono::nanoseconds ns);
inline std::chrono::milliseconds CyclesToMs(s64 cycles) {
return std::chrono::milliseconds(cycles * 1000 / Hardware::BASE_CLOCK_RATE);
}
inline std::chrono::nanoseconds CyclesToNs(s64 cycles) {
return std::chrono::nanoseconds(cycles * 1000000000 / Hardware::BASE_CLOCK_RATE);
}
inline std::chrono::microseconds CyclesToUs(s64 cycles) {
return std::chrono::microseconds(cycles * 1000000 / Hardware::BASE_CLOCK_RATE);
}
u64 msToClockCycles(std::chrono::milliseconds ns);
u64 usToClockCycles(std::chrono::microseconds ns);
u64 nsToClockCycles(std::chrono::nanoseconds ns);
std::chrono::milliseconds CyclesToMs(s64 cycles);
std::chrono::nanoseconds CyclesToNs(s64 cycles);
std::chrono::microseconds CyclesToUs(s64 cycles);
u64 CpuCyclesToClockCycles(u64 ticks);
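These declarations are all fixed-ratio conversions: CyclesToNs(cycles) is cycles * 10^9 / BASE_CLOCK_RATE, and msToClockCycles(ms) is ms * CNTFREQ / 1000, with the intermediate product carried in 128 bits as shown in the .cpp above. As a purely illustrative example, assuming a counter frequency of 19,200,000 Hz, msToClockCycles(10 ms) works out to 10 * 19,200,000 / 1,000 = 192,000 clock cycles; the actual CNTFREQ value comes from core/hardware_properties.h, included above.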

View File

@@ -2,80 +2,372 @@
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/fiber.h"
#include "common/microprofile.h"
#include "common/thread.h"
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/core_manager.h"
#include "core/core_timing.h"
#include "core/cpu_manager.h"
#include "core/gdbstub/gdbstub.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/physical_core.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/thread.h"
#include "video_core/gpu.h"
namespace Core {
CpuManager::CpuManager(System& system) : system{system} {}
CpuManager::~CpuManager() = default;
void CpuManager::ThreadStart(CpuManager& cpu_manager, std::size_t core) {
cpu_manager.RunThread(core);
}
void CpuManager::Initialize() {
for (std::size_t index = 0; index < core_managers.size(); ++index) {
core_managers[index] = std::make_unique<CoreManager>(system, index);
running_mode = true;
if (is_multicore) {
for (std::size_t core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
core_data[core].host_thread =
std::make_unique<std::thread>(ThreadStart, std::ref(*this), core);
}
} else {
core_data[0].host_thread = std::make_unique<std::thread>(ThreadStart, std::ref(*this), 0);
}
}
void CpuManager::Shutdown() {
for (auto& cpu_core : core_managers) {
cpu_core.reset();
running_mode = false;
Pause(false);
if (is_multicore) {
for (std::size_t core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
core_data[core].host_thread->join();
core_data[core].host_thread.reset();
}
} else {
core_data[0].host_thread->join();
core_data[0].host_thread.reset();
}
}
CoreManager& CpuManager::GetCoreManager(std::size_t index) {
return *core_managers.at(index);
}
const CoreManager& CpuManager::GetCoreManager(std::size_t index) const {
return *core_managers.at(index);
}
CoreManager& CpuManager::GetCurrentCoreManager() {
// Otherwise, use single-threaded mode active_core variable
return *core_managers[active_core];
}
const CoreManager& CpuManager::GetCurrentCoreManager() const {
// Otherwise, use single-threaded mode active_core variable
return *core_managers[active_core];
}
std::function<void(void*)> CpuManager::GetGuestThreadStartFunc() {
return std::function<void(void*)>(GuestThreadFunction);
}
std::function<void(void*)> CpuManager::GetIdleThreadStartFunc() {
return std::function<void(void*)>(IdleThreadFunction);
}
std::function<void(void*)> CpuManager::GetSuspendThreadStartFunc() {
return std::function<void(void*)>(SuspendThreadFunction);
}
void CpuManager::GuestThreadFunction(void* cpu_manager_) {
CpuManager* cpu_manager = static_cast<CpuManager*>(cpu_manager_);
if (cpu_manager->is_multicore) {
cpu_manager->MultiCoreRunGuestThread();
} else {
cpu_manager->SingleCoreRunGuestThread();
}
}
void CpuManager::RunLoop(bool tight_loop) {
if (GDBStub::IsServerEnabled()) {
GDBStub::HandlePacket();
void CpuManager::GuestRewindFunction(void* cpu_manager_) {
CpuManager* cpu_manager = static_cast<CpuManager*>(cpu_manager_);
if (cpu_manager->is_multicore) {
cpu_manager->MultiCoreRunGuestLoop();
} else {
cpu_manager->SingleCoreRunGuestLoop();
}
}
// If the loop is halted and we want to step, use a tiny (1) number of instructions to
// execute. Otherwise, get out of the loop function.
if (GDBStub::GetCpuHaltFlag()) {
if (GDBStub::GetCpuStepFlag()) {
tight_loop = false;
} else {
return;
void CpuManager::IdleThreadFunction(void* cpu_manager_) {
CpuManager* cpu_manager = static_cast<CpuManager*>(cpu_manager_);
if (cpu_manager->is_multicore) {
cpu_manager->MultiCoreRunIdleThread();
} else {
cpu_manager->SingleCoreRunIdleThread();
}
}
void CpuManager::SuspendThreadFunction(void* cpu_manager_) {
CpuManager* cpu_manager = static_cast<CpuManager*>(cpu_manager_);
if (cpu_manager->is_multicore) {
cpu_manager->MultiCoreRunSuspendThread();
} else {
cpu_manager->SingleCoreRunSuspendThread();
}
}
void* CpuManager::GetStartFuncParamater() {
return static_cast<void*>(this);
}
///////////////////////////////////////////////////////////////////////////////
/// MultiCore ///
///////////////////////////////////////////////////////////////////////////////
void CpuManager::MultiCoreRunGuestThread() {
auto& kernel = system.Kernel();
{
auto& sched = kernel.CurrentScheduler();
sched.OnThreadStart();
}
MultiCoreRunGuestLoop();
}
void CpuManager::MultiCoreRunGuestLoop() {
auto& kernel = system.Kernel();
auto* thread = kernel.CurrentScheduler().GetCurrentThread();
while (true) {
auto* physical_core = &kernel.CurrentPhysicalCore();
auto& arm_interface = thread->ArmInterface();
system.EnterDynarmicProfile();
while (!physical_core->IsInterrupted()) {
arm_interface.Run();
physical_core = &kernel.CurrentPhysicalCore();
}
system.ExitDynarmicProfile();
arm_interface.ClearExclusiveState();
auto& scheduler = kernel.CurrentScheduler();
scheduler.TryDoContextSwitch();
}
}
void CpuManager::MultiCoreRunIdleThread() {
auto& kernel = system.Kernel();
while (true) {
auto& physical_core = kernel.CurrentPhysicalCore();
physical_core.Idle();
auto& scheduler = kernel.CurrentScheduler();
scheduler.TryDoContextSwitch();
}
}
void CpuManager::MultiCoreRunSuspendThread() {
auto& kernel = system.Kernel();
{
auto& sched = kernel.CurrentScheduler();
sched.OnThreadStart();
}
while (true) {
auto core = kernel.GetCurrentHostThreadID();
auto& scheduler = kernel.CurrentScheduler();
Kernel::Thread* current_thread = scheduler.GetCurrentThread();
Common::Fiber::YieldTo(current_thread->GetHostContext(), core_data[core].host_context);
ASSERT(scheduler.ContextSwitchPending());
ASSERT(core == kernel.GetCurrentHostThreadID());
scheduler.TryDoContextSwitch();
}
}
void CpuManager::MultiCorePause(bool paused) {
if (!paused) {
bool all_not_barrier = false;
while (!all_not_barrier) {
all_not_barrier = true;
for (std::size_t core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
all_not_barrier &=
!core_data[core].is_running.load() && core_data[core].initialized.load();
}
}
}
auto& core_timing = system.CoreTiming();
core_timing.ResetRun();
bool keep_running{};
do {
keep_running = false;
for (active_core = 0; active_core < NUM_CPU_CORES; ++active_core) {
core_timing.SwitchContext(active_core);
if (core_timing.CanCurrentContextRun()) {
core_managers[active_core]->RunLoop(tight_loop);
}
keep_running |= core_timing.CanCurrentContextRun();
for (std::size_t core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
core_data[core].enter_barrier->Set();
}
} while (keep_running);
if (GDBStub::IsServerEnabled()) {
GDBStub::SetCpuStepFlag(false);
if (paused_state.load()) {
bool all_barrier = false;
while (!all_barrier) {
all_barrier = true;
for (std::size_t core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
all_barrier &=
core_data[core].is_paused.load() && core_data[core].initialized.load();
}
}
for (std::size_t core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
core_data[core].exit_barrier->Set();
}
}
} else {
/// Wait until all cores are paused.
bool all_barrier = false;
while (!all_barrier) {
all_barrier = true;
for (std::size_t core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
all_barrier &=
core_data[core].is_paused.load() && core_data[core].initialized.load();
}
}
/// Don't release the barrier
}
paused_state = paused;
}
///////////////////////////////////////////////////////////////////////////////
/// SingleCore ///
///////////////////////////////////////////////////////////////////////////////
void CpuManager::SingleCoreRunGuestThread() {
auto& kernel = system.Kernel();
{
auto& sched = kernel.CurrentScheduler();
sched.OnThreadStart();
}
SingleCoreRunGuestLoop();
}
void CpuManager::SingleCoreRunGuestLoop() {
auto& kernel = system.Kernel();
auto* thread = kernel.CurrentScheduler().GetCurrentThread();
while (true) {
auto* physical_core = &kernel.CurrentPhysicalCore();
auto& arm_interface = thread->ArmInterface();
system.EnterDynarmicProfile();
if (!physical_core->IsInterrupted()) {
arm_interface.Run();
physical_core = &kernel.CurrentPhysicalCore();
}
system.ExitDynarmicProfile();
thread->SetPhantomMode(true);
system.CoreTiming().Advance();
thread->SetPhantomMode(false);
arm_interface.ClearExclusiveState();
PreemptSingleCore();
auto& scheduler = kernel.Scheduler(current_core);
scheduler.TryDoContextSwitch();
}
}
void CpuManager::SingleCoreRunIdleThread() {
auto& kernel = system.Kernel();
while (true) {
auto& physical_core = kernel.CurrentPhysicalCore();
PreemptSingleCore(false);
system.CoreTiming().AddTicks(1000U);
idle_count++;
auto& scheduler = physical_core.Scheduler();
scheduler.TryDoContextSwitch();
}
}
void CpuManager::SingleCoreRunSuspendThread() {
auto& kernel = system.Kernel();
{
auto& sched = kernel.CurrentScheduler();
sched.OnThreadStart();
}
while (true) {
auto core = kernel.GetCurrentHostThreadID();
auto& scheduler = kernel.CurrentScheduler();
Kernel::Thread* current_thread = scheduler.GetCurrentThread();
Common::Fiber::YieldTo(current_thread->GetHostContext(), core_data[0].host_context);
ASSERT(scheduler.ContextSwitchPending());
ASSERT(core == kernel.GetCurrentHostThreadID());
scheduler.TryDoContextSwitch();
}
}
void CpuManager::PreemptSingleCore(bool from_running_enviroment) {
std::size_t old_core = current_core;
auto& scheduler = system.Kernel().Scheduler(old_core);
Kernel::Thread* current_thread = scheduler.GetCurrentThread();
if (idle_count >= 4 || from_running_enviroment) {
if (!from_running_enviroment) {
system.CoreTiming().Idle();
idle_count = 0;
}
current_thread->SetPhantomMode(true);
system.CoreTiming().Advance();
current_thread->SetPhantomMode(false);
}
current_core.store((current_core + 1) % Core::Hardware::NUM_CPU_CORES);
system.CoreTiming().ResetTicks();
scheduler.Unload();
auto& next_scheduler = system.Kernel().Scheduler(current_core);
Common::Fiber::YieldTo(current_thread->GetHostContext(), next_scheduler.ControlContext());
/// May have changed scheduler
auto& current_scheduler = system.Kernel().Scheduler(current_core);
current_scheduler.Reload();
auto* currrent_thread2 = current_scheduler.GetCurrentThread();
if (!currrent_thread2->IsIdleThread()) {
idle_count = 0;
}
}
void CpuManager::SingleCorePause(bool paused) {
if (!paused) {
bool all_not_barrier = false;
while (!all_not_barrier) {
all_not_barrier = !core_data[0].is_running.load() && core_data[0].initialized.load();
}
core_data[0].enter_barrier->Set();
if (paused_state.load()) {
bool all_barrier = false;
while (!all_barrier) {
all_barrier = core_data[0].is_paused.load() && core_data[0].initialized.load();
}
core_data[0].exit_barrier->Set();
}
} else {
/// Wait until all cores are paused.
bool all_barrier = false;
while (!all_barrier) {
all_barrier = core_data[0].is_paused.load() && core_data[0].initialized.load();
}
/// Don't release the barrier
}
paused_state = paused;
}
void CpuManager::Pause(bool paused) {
if (is_multicore) {
MultiCorePause(paused);
} else {
SingleCorePause(paused);
}
}
void CpuManager::RunThread(std::size_t core) {
/// Initialization
system.RegisterCoreThread(core);
std::string name;
if (is_multicore) {
name = "yuzu:CoreCPUThread_" + std::to_string(core);
} else {
name = "yuzu:CPUThread";
}
MicroProfileOnThreadCreate(name.c_str());
Common::SetCurrentThreadName(name.c_str());
Common::SetCurrentThreadPriority(Common::ThreadPriority::High);
auto& data = core_data[core];
data.enter_barrier = std::make_unique<Common::Event>();
data.exit_barrier = std::make_unique<Common::Event>();
data.host_context = Common::Fiber::ThreadToFiber();
data.is_running = false;
data.initialized = true;
const bool sc_sync = !is_async_gpu && !is_multicore;
bool sc_sync_first_use = sc_sync;
/// Running
while (running_mode) {
data.is_running = false;
data.enter_barrier->Wait();
if (sc_sync_first_use) {
system.GPU().ObtainContext();
sc_sync_first_use = false;
}
auto& scheduler = system.Kernel().CurrentScheduler();
Kernel::Thread* current_thread = scheduler.GetCurrentThread();
data.is_running = true;
Common::Fiber::YieldTo(data.host_context, current_thread->GetHostContext());
data.is_running = false;
data.is_paused = true;
data.exit_barrier->Wait();
data.is_paused = false;
}
/// Time to cleanup
data.host_context->Exit();
data.enter_barrier.reset();
data.exit_barrier.reset();
data.initialized = false;
}
} // namespace Core
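The pause paths above are a per-core, two-event handshake: the worker thread parks on enter_barrier, flips is_running/is_paused around the fiber switch, and the pausing thread spins on those flags before releasing the matching barrier. A reduced single-worker sketch of that handshake; the Event class is a stand-in for Common::Event and the initialized flag is omitted:

    #include <atomic>
    #include <condition_variable>
    #include <mutex>

    // Minimal auto-reset event, standing in for Common::Event.
    class Event {
    public:
        void Set() {
            std::lock_guard lk{m};
            signaled = true;
            cv.notify_one();
        }
        void Wait() {
            std::unique_lock lk{m};
            cv.wait(lk, [this] { return signaled; });
            signaled = false;
        }
    private:
        std::mutex m;
        std::condition_variable cv;
        bool signaled = false;
    };

    struct CoreData {
        Event enter_barrier;
        Event exit_barrier;
        std::atomic<bool> is_running{false};
        std::atomic<bool> is_paused{false};
    };

    // Worker loop, mirroring RunThread(): park, run a slice, park again.
    void WorkerLoop(CoreData& data, const std::atomic<bool>& running_mode) {
        while (running_mode) {
            data.enter_barrier.Wait();
            data.is_running = true;
            // ... run guest code until interrupted ...
            data.is_running = false;
            data.is_paused = true;
            data.exit_barrier.Wait();
            data.is_paused = false;
        }
    }

    // Controller side, mirroring SingleCorePause().
    void Pause(CoreData& data, bool paused, std::atomic<bool>& paused_state) {
        if (!paused) {
            while (data.is_running.load()) {
            }
            data.enter_barrier.Set();
            if (paused_state.load()) {
                while (!data.is_paused.load()) {
                }
                data.exit_barrier.Set();
            }
        } else {
            while (!data.is_paused.load()) {
            }
            // the worker stays parked on exit_barrier until the next unpause
        }
        paused_state = paused;
    }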

View File

@@ -5,12 +5,19 @@
#pragma once
#include <array>
#include <atomic>
#include <functional>
#include <memory>
#include <thread>
#include "core/hardware_properties.h"
namespace Common {
class Event;
class Fiber;
} // namespace Common
namespace Core {
class CoreManager;
class System;
class CpuManager {
@@ -24,24 +31,75 @@ public:
CpuManager& operator=(const CpuManager&) = delete;
CpuManager& operator=(CpuManager&&) = delete;
/// Sets whether emulation is multicore or single core; must be set before Initialize.
void SetMulticore(bool is_multicore) {
this->is_multicore = is_multicore;
}
/// Sets whether emulation is using an asynchronous GPU.
void SetAsyncGpu(bool is_async_gpu) {
this->is_async_gpu = is_async_gpu;
}
void Initialize();
void Shutdown();
CoreManager& GetCoreManager(std::size_t index);
const CoreManager& GetCoreManager(std::size_t index) const;
void Pause(bool paused);
CoreManager& GetCurrentCoreManager();
const CoreManager& GetCurrentCoreManager() const;
std::function<void(void*)> GetGuestThreadStartFunc();
std::function<void(void*)> GetIdleThreadStartFunc();
std::function<void(void*)> GetSuspendThreadStartFunc();
void* GetStartFuncParamater();
std::size_t GetActiveCoreIndex() const {
return active_core;
void PreemptSingleCore(bool from_running_enviroment = true);
std::size_t CurrentCore() const {
return current_core.load();
}
void RunLoop(bool tight_loop);
private:
std::array<std::unique_ptr<CoreManager>, Hardware::NUM_CPU_CORES> core_managers;
std::size_t active_core{}; ///< Active core, only used in single thread mode
static void GuestThreadFunction(void* cpu_manager);
static void GuestRewindFunction(void* cpu_manager);
static void IdleThreadFunction(void* cpu_manager);
static void SuspendThreadFunction(void* cpu_manager);
void MultiCoreRunGuestThread();
void MultiCoreRunGuestLoop();
void MultiCoreRunIdleThread();
void MultiCoreRunSuspendThread();
void MultiCorePause(bool paused);
void SingleCoreRunGuestThread();
void SingleCoreRunGuestLoop();
void SingleCoreRunIdleThread();
void SingleCoreRunSuspendThread();
void SingleCorePause(bool paused);
static void ThreadStart(CpuManager& cpu_manager, std::size_t core);
void RunThread(std::size_t core);
struct CoreData {
std::shared_ptr<Common::Fiber> host_context;
std::unique_ptr<Common::Event> enter_barrier;
std::unique_ptr<Common::Event> exit_barrier;
std::atomic<bool> is_running;
std::atomic<bool> is_paused;
std::atomic<bool> initialized;
std::unique_ptr<std::thread> host_thread;
};
std::atomic<bool> running_mode{};
std::atomic<bool> paused_state{};
std::array<CoreData, Core::Hardware::NUM_CPU_CORES> core_data{};
bool is_async_gpu{};
bool is_multicore{};
std::atomic<std::size_t> current_core{};
std::size_t preemption_count{};
std::size_t idle_count{};
static constexpr std::size_t max_cycle_runs = 5;
System& system;
};
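The header above replaces the old active_core member with an atomic current_core; in single-core mode one host thread services all emulated cores by rotating this index after every slice (see PreemptSingleCore in the .cpp). Reduced to its essence, the rotation is just:

    #include <atomic>
    #include <cstddef>

    constexpr std::size_t NUM_CPU_CORES = 4; // matches Core::Hardware::NUM_CPU_CORES

    std::atomic<std::size_t> current_core{0};

    // Advance to the next emulated core, wrapping around, as PreemptSingleCore() does.
    std::size_t RotateCore() {
        const std::size_t next = (current_core.load() + 1) % NUM_CPU_CORES;
        current_core.store(next);
        return next;
    }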

View File

@@ -223,7 +223,16 @@ bool operator<(const KeyIndex<KeyType>& lhs, const KeyIndex<KeyType>& rhs) {
class KeyManager {
public:
KeyManager();
static KeyManager& Instance() {
static KeyManager instance;
return instance;
}
KeyManager(const KeyManager&) = delete;
KeyManager& operator=(const KeyManager&) = delete;
KeyManager(KeyManager&&) = delete;
KeyManager& operator=(KeyManager&&) = delete;
bool HasKey(S128KeyType id, u64 field1 = 0, u64 field2 = 0) const;
bool HasKey(S256KeyType id, u64 field1 = 0, u64 field2 = 0) const;
@@ -257,6 +266,8 @@ public:
bool AddTicketPersonalized(Ticket raw);
private:
KeyManager();
std::map<KeyIndex<S128KeyType>, Key128> s128_keys;
std::map<KeyIndex<S256KeyType>, Key256> s256_keys;
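The KeyManager change above is the standard Meyers-singleton shape: a private constructor, a function-local static instance, and deleted copy/move operations so the only way to reach the object is Instance(). In isolation the pattern looks like this (Widget is a placeholder, not a yuzu type):

    class Widget {
    public:
        static Widget& Instance() {
            static Widget instance; // constructed on first use; thread-safe since C++11
            return instance;
        }

        Widget(const Widget&) = delete;
        Widget& operator=(const Widget&) = delete;
        Widget(Widget&&) = delete;
        Widget& operator=(Widget&&) = delete;

    private:
        Widget() = default; // private, so callers cannot create their own copies
    };

    // Call sites then hold a reference, as the KeyManager users below now do:
    // auto& widget = Widget::Instance();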

View File

@@ -79,7 +79,7 @@ VirtualDir BISFactory::OpenPartition(BisPartitionId id) const {
}
VirtualFile BISFactory::OpenPartitionStorage(BisPartitionId id) const {
Core::Crypto::KeyManager keys;
auto& keys = Core::Crypto::KeyManager::Instance();
Core::Crypto::PartitionDataManager pdm{
Core::System::GetInstance().GetFilesystem()->OpenDirectory(
FileUtil::GetUserPath(FileUtil::UserPath::SysDataDir), Mode::Read)};

View File

@@ -178,7 +178,7 @@ u32 XCI::GetSystemUpdateVersion() {
return 0;
for (const auto& file : update->GetFiles()) {
NCA nca{file, nullptr, 0, keys};
NCA nca{file, nullptr, 0};
if (nca.GetStatus() != Loader::ResultStatus::Success)
continue;
@@ -286,7 +286,7 @@ Loader::ResultStatus XCI::AddNCAFromPartition(XCIPartition part) {
continue;
}
auto nca = std::make_shared<NCA>(file, nullptr, 0, keys);
auto nca = std::make_shared<NCA>(file, nullptr, 0);
if (nca->IsUpdate()) {
continue;
}

View File

@@ -140,6 +140,6 @@ private:
u64 update_normal_partition_end;
Core::Crypto::KeyManager keys;
Core::Crypto::KeyManager& keys = Core::Crypto::KeyManager::Instance();
};
} // namespace FileSys

View File

@@ -118,9 +118,8 @@ static bool IsValidNCA(const NCAHeader& header) {
return header.magic == Common::MakeMagic('N', 'C', 'A', '3');
}
NCA::NCA(VirtualFile file_, VirtualFile bktr_base_romfs_, u64 bktr_base_ivfc_offset,
Core::Crypto::KeyManager keys_)
: file(std::move(file_)), bktr_base_romfs(std::move(bktr_base_romfs_)), keys(std::move(keys_)) {
NCA::NCA(VirtualFile file_, VirtualFile bktr_base_romfs_, u64 bktr_base_ivfc_offset)
: file(std::move(file_)), bktr_base_romfs(std::move(bktr_base_romfs_)) {
if (file == nullptr) {
status = Loader::ResultStatus::ErrorNullFile;
return;

View File

@@ -99,8 +99,7 @@ inline bool IsDirectoryLogoPartition(const VirtualDir& pfs) {
class NCA : public ReadOnlyVfsDirectory {
public:
explicit NCA(VirtualFile file, VirtualFile bktr_base_romfs = nullptr,
u64 bktr_base_ivfc_offset = 0,
Core::Crypto::KeyManager keys = Core::Crypto::KeyManager());
u64 bktr_base_ivfc_offset = 0);
~NCA() override;
Loader::ResultStatus GetStatus() const;
@@ -159,7 +158,7 @@ private:
bool encrypted = false;
bool is_update = false;
Core::Crypto::KeyManager keys;
Core::Crypto::KeyManager& keys = Core::Crypto::KeyManager::Instance();
};
} // namespace FileSys

View File

@@ -408,7 +408,7 @@ void RegisteredCache::ProcessFiles(const std::vector<NcaID>& ids) {
if (file == nullptr)
continue;
const auto nca = std::make_shared<NCA>(parser(file, id), nullptr, 0, keys);
const auto nca = std::make_shared<NCA>(parser(file, id), nullptr, 0);
if (nca->GetStatus() != Loader::ResultStatus::Success ||
nca->GetType() != NCAContentType::Meta) {
continue;
@@ -486,7 +486,7 @@ std::unique_ptr<NCA> RegisteredCache::GetEntry(u64 title_id, ContentRecordType t
const auto raw = GetEntryRaw(title_id, type);
if (raw == nullptr)
return nullptr;
return std::make_unique<NCA>(raw, nullptr, 0, keys);
return std::make_unique<NCA>(raw, nullptr, 0);
}
template <typename T>
@@ -865,7 +865,7 @@ std::unique_ptr<NCA> ManualContentProvider::GetEntry(u64 title_id, ContentRecord
const auto res = GetEntryRaw(title_id, type);
if (res == nullptr)
return nullptr;
return std::make_unique<NCA>(res, nullptr, 0, keys);
return std::make_unique<NCA>(res, nullptr, 0);
}
std::vector<ContentProviderEntry> ManualContentProvider::ListEntriesFilter(

View File

@@ -88,7 +88,7 @@ public:
protected:
// A single instance of KeyManager to be used by GetEntry()
Core::Crypto::KeyManager keys;
Core::Crypto::KeyManager& keys = Core::Crypto::KeyManager::Instance();
};
class PlaceholderCache {

View File

@@ -21,7 +21,7 @@
namespace FileSys {
namespace {
void SetTicketKeys(const std::vector<VirtualFile>& files) {
Core::Crypto::KeyManager keys;
auto& keys = Core::Crypto::KeyManager::Instance();
for (const auto& ticket_file : files) {
if (ticket_file == nullptr) {
@@ -285,7 +285,7 @@ void NSP::ReadNCAs(const std::vector<VirtualFile>& files) {
continue;
}
auto next_nca = std::make_shared<NCA>(std::move(next_file), nullptr, 0, keys);
auto next_nca = std::make_shared<NCA>(std::move(next_file), nullptr, 0);
if (next_nca->GetType() == NCAContentType::Program) {
program_status[cnmt.GetTitleID()] = next_nca->GetStatus();
}

View File

@@ -73,7 +73,7 @@ private:
std::map<u64, std::map<std::pair<TitleType, ContentRecordType>, std::shared_ptr<NCA>>> ncas;
std::vector<VirtualFile> ticket_files;
Core::Crypto::KeyManager keys;
Core::Crypto::KeyManager& keys = Core::Crypto::KeyManager::Instance();
VirtualFile romfs;
VirtualDir exefs;

View File

@@ -40,7 +40,7 @@ VirtualDir MiiModel() {
out->AddFile(std::make_shared<ArrayVfsFile<MiiModelData::SHAPE_MID.size()>>(
MiiModelData::SHAPE_MID, "ShapeMid.dat"));
return std::move(out);
return out;
}
} // namespace FileSys::SystemArchive

View File

@@ -23,7 +23,7 @@ VirtualFile PackBFTTF(const std::array<u8, Size>& data, const std::string& name)
std::vector<u8> bfttf(Size + sizeof(u64));
u64 offset = 0;
size_t offset = 0;
Service::NS::EncryptSharedFont(vec, bfttf, offset);
return std::make_shared<VectorVfsFile>(std::move(bfttf), name);
}

View File

@@ -62,6 +62,6 @@ private:
VirtualFile dec_file;
Core::Crypto::KeyManager keys;
Core::Crypto::KeyManager& keys = Core::Crypto::KeyManager::Instance();
};
} // namespace FileSys

View File

@@ -35,7 +35,6 @@
#include "common/swap.h"
#include "core/arm/arm_interface.h"
#include "core/core.h"
#include "core/core_manager.h"
#include "core/gdbstub/gdbstub.h"
#include "core/hle/kernel/memory/page_table.h"
#include "core/hle/kernel/process.h"

View File

@@ -42,6 +42,10 @@ struct EmuThreadHandle {
constexpr u32 invalid_handle = 0xFFFFFFFF;
return {invalid_handle, invalid_handle};
}
bool IsInvalid() const {
return (*this) == InvalidHandle();
}
};
} // namespace Core

View File

@@ -7,11 +7,15 @@
#include "common/assert.h"
#include "common/common_types.h"
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/hle/kernel/address_arbiter.h"
#include "core/hle/kernel/errors.h"
#include "core/hle/kernel/handle_table.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/kernel/time_manager.h"
#include "core/hle/result.h"
#include "core/memory.h"
@@ -20,6 +24,7 @@ namespace Kernel {
// Wake up num_to_wake (or all) threads in a vector.
void AddressArbiter::WakeThreads(const std::vector<std::shared_ptr<Thread>>& waiting_threads,
s32 num_to_wake) {
auto& time_manager = system.Kernel().TimeManager();
// Only process up to 'target' threads, unless 'target' is <= 0, in which case process
// them all.
std::size_t last = waiting_threads.size();
@@ -29,12 +34,10 @@ void AddressArbiter::WakeThreads(const std::vector<std::shared_ptr<Thread>>& wai
// Signal the waiting threads.
for (std::size_t i = 0; i < last; i++) {
ASSERT(waiting_threads[i]->GetStatus() == ThreadStatus::WaitArb);
waiting_threads[i]->SetWaitSynchronizationResult(RESULT_SUCCESS);
waiting_threads[i]->SetSynchronizationResults(nullptr, RESULT_SUCCESS);
RemoveThread(waiting_threads[i]);
waiting_threads[i]->SetArbiterWaitAddress(0);
waiting_threads[i]->WaitForArbitration(false);
waiting_threads[i]->ResumeFromWait();
system.PrepareReschedule(waiting_threads[i]->GetProcessorID());
}
}
@@ -56,6 +59,7 @@ ResultCode AddressArbiter::SignalToAddress(VAddr address, SignalType type, s32 v
}
ResultCode AddressArbiter::SignalToAddressOnly(VAddr address, s32 num_to_wake) {
SchedulerLock lock(system.Kernel());
const std::vector<std::shared_ptr<Thread>> waiting_threads =
GetThreadsWaitingOnAddress(address);
WakeThreads(waiting_threads, num_to_wake);
@@ -64,6 +68,7 @@ ResultCode AddressArbiter::SignalToAddressOnly(VAddr address, s32 num_to_wake) {
ResultCode AddressArbiter::IncrementAndSignalToAddressIfEqual(VAddr address, s32 value,
s32 num_to_wake) {
SchedulerLock lock(system.Kernel());
auto& memory = system.Memory();
// Ensure that we can write to the address.
@@ -71,16 +76,24 @@ ResultCode AddressArbiter::IncrementAndSignalToAddressIfEqual(VAddr address, s32
return ERR_INVALID_ADDRESS_STATE;
}
if (static_cast<s32>(memory.Read32(address)) != value) {
return ERR_INVALID_STATE;
}
const std::size_t current_core = system.CurrentCoreIndex();
auto& monitor = system.Monitor();
u32 current_value;
do {
current_value = monitor.ExclusiveRead32(current_core, address);
if (current_value != value) {
return ERR_INVALID_STATE;
}
current_value++;
} while (!monitor.ExclusiveWrite32(current_core, address, current_value));
memory.Write32(address, static_cast<u32>(value + 1));
return SignalToAddressOnly(address, num_to_wake);
}
ResultCode AddressArbiter::ModifyByWaitingCountAndSignalToAddressIfEqual(VAddr address, s32 value,
s32 num_to_wake) {
SchedulerLock lock(system.Kernel());
auto& memory = system.Memory();
// Ensure that we can write to the address.
@@ -92,29 +105,33 @@ ResultCode AddressArbiter::ModifyByWaitingCountAndSignalToAddressIfEqual(VAddr a
const std::vector<std::shared_ptr<Thread>> waiting_threads =
GetThreadsWaitingOnAddress(address);
// Determine the modified value depending on the waiting count.
const std::size_t current_core = system.CurrentCoreIndex();
auto& monitor = system.Monitor();
s32 updated_value;
if (num_to_wake <= 0) {
if (waiting_threads.empty()) {
updated_value = value + 1;
} else {
updated_value = value - 1;
}
} else {
if (waiting_threads.empty()) {
updated_value = value + 1;
} else if (waiting_threads.size() <= static_cast<u32>(num_to_wake)) {
updated_value = value - 1;
} else {
updated_value = value;
}
}
do {
updated_value = monitor.ExclusiveRead32(current_core, address);
if (static_cast<s32>(memory.Read32(address)) != value) {
return ERR_INVALID_STATE;
}
if (updated_value != value) {
return ERR_INVALID_STATE;
}
// Determine the modified value depending on the waiting count.
if (num_to_wake <= 0) {
if (waiting_threads.empty()) {
updated_value = value + 1;
} else {
updated_value = value - 1;
}
} else {
if (waiting_threads.empty()) {
updated_value = value + 1;
} else if (waiting_threads.size() <= static_cast<u32>(num_to_wake)) {
updated_value = value - 1;
} else {
updated_value = value;
}
}
} while (!monitor.ExclusiveWrite32(current_core, address, updated_value));
memory.Write32(address, static_cast<u32>(updated_value));
WakeThreads(waiting_threads, num_to_wake);
return RESULT_SUCCESS;
}
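Both signal paths above replace a plain read-compare-write with a retry loop on the exclusive monitor, so the comparison and the update are atomic with respect to guest atomics. The same shape expressed with std::atomic's compare-exchange, offered as a host-side analogy rather than the monitor API:

    #include <atomic>
    #include <cstdint>
    #include <optional>

    // Returns the new value on success, or std::nullopt when the address no longer holds
    // `expected` (the ERR_INVALID_STATE case above).
    std::optional<std::int32_t> IncrementIfEqual(std::atomic<std::int32_t>& address,
                                                 std::int32_t expected) {
        std::int32_t current = address.load();
        do {
            if (current != expected) {
                return std::nullopt;
            }
            // On failure, compare_exchange_weak reloads `current`, mirroring the
            // ExclusiveRead32 / ExclusiveWrite32 retry loop.
        } while (!address.compare_exchange_weak(current, current + 1));
        return current + 1;
    }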
@@ -136,60 +153,127 @@ ResultCode AddressArbiter::WaitForAddress(VAddr address, ArbitrationType type, s
ResultCode AddressArbiter::WaitForAddressIfLessThan(VAddr address, s32 value, s64 timeout,
bool should_decrement) {
auto& memory = system.Memory();
auto& kernel = system.Kernel();
Thread* current_thread = system.CurrentScheduler().GetCurrentThread();
// Ensure that we can read the address.
if (!memory.IsValidVirtualAddress(address)) {
return ERR_INVALID_ADDRESS_STATE;
Handle event_handle = InvalidHandle;
{
SchedulerLockAndSleep lock(kernel, event_handle, current_thread, timeout);
if (current_thread->IsPendingTermination()) {
lock.CancelSleep();
return ERR_THREAD_TERMINATING;
}
// Ensure that we can read the address.
if (!memory.IsValidVirtualAddress(address)) {
lock.CancelSleep();
return ERR_INVALID_ADDRESS_STATE;
}
s32 current_value = static_cast<s32>(memory.Read32(address));
if (current_value >= value) {
lock.CancelSleep();
return ERR_INVALID_STATE;
}
current_thread->SetSynchronizationResults(nullptr, RESULT_TIMEOUT);
s32 decrement_value;
const std::size_t current_core = system.CurrentCoreIndex();
auto& monitor = system.Monitor();
do {
current_value = static_cast<s32>(monitor.ExclusiveRead32(current_core, address));
if (should_decrement) {
decrement_value = current_value - 1;
} else {
decrement_value = current_value;
}
} while (
!monitor.ExclusiveWrite32(current_core, address, static_cast<u32>(decrement_value)));
// Short-circuit without rescheduling, if timeout is zero.
if (timeout == 0) {
lock.CancelSleep();
return RESULT_TIMEOUT;
}
current_thread->SetArbiterWaitAddress(address);
InsertThread(SharedFrom(current_thread));
current_thread->SetStatus(ThreadStatus::WaitArb);
current_thread->WaitForArbitration(true);
}
const s32 cur_value = static_cast<s32>(memory.Read32(address));
if (cur_value >= value) {
return ERR_INVALID_STATE;
if (event_handle != InvalidHandle) {
auto& time_manager = kernel.TimeManager();
time_manager.UnscheduleTimeEvent(event_handle);
}
if (should_decrement) {
memory.Write32(address, static_cast<u32>(cur_value - 1));
{
SchedulerLock lock(kernel);
if (current_thread->IsWaitingForArbitration()) {
RemoveThread(SharedFrom(current_thread));
current_thread->WaitForArbitration(false);
}
}
// Short-circuit without rescheduling, if timeout is zero.
if (timeout == 0) {
return RESULT_TIMEOUT;
}
return WaitForAddressImpl(address, timeout);
return current_thread->GetSignalingResult();
}
ResultCode AddressArbiter::WaitForAddressIfEqual(VAddr address, s32 value, s64 timeout) {
auto& memory = system.Memory();
// Ensure that we can read the address.
if (!memory.IsValidVirtualAddress(address)) {
return ERR_INVALID_ADDRESS_STATE;
}
// Only wait for the address if equal.
if (static_cast<s32>(memory.Read32(address)) != value) {
return ERR_INVALID_STATE;
}
// Short-circuit without rescheduling if timeout is zero.
if (timeout == 0) {
return RESULT_TIMEOUT;
}
return WaitForAddressImpl(address, timeout);
}
ResultCode AddressArbiter::WaitForAddressImpl(VAddr address, s64 timeout) {
auto& kernel = system.Kernel();
Thread* current_thread = system.CurrentScheduler().GetCurrentThread();
current_thread->SetArbiterWaitAddress(address);
InsertThread(SharedFrom(current_thread));
current_thread->SetStatus(ThreadStatus::WaitArb);
current_thread->InvalidateWakeupCallback();
current_thread->WakeAfterDelay(timeout);
system.PrepareReschedule(current_thread->GetProcessorID());
return RESULT_TIMEOUT;
Handle event_handle = InvalidHandle;
{
SchedulerLockAndSleep lock(kernel, event_handle, current_thread, timeout);
if (current_thread->IsPendingTermination()) {
lock.CancelSleep();
return ERR_THREAD_TERMINATING;
}
// Ensure that we can read the address.
if (!memory.IsValidVirtualAddress(address)) {
lock.CancelSleep();
return ERR_INVALID_ADDRESS_STATE;
}
s32 current_value = static_cast<s32>(memory.Read32(address));
if (current_value != value) {
lock.CancelSleep();
return ERR_INVALID_STATE;
}
// Short-circuit without rescheduling, if timeout is zero.
if (timeout == 0) {
lock.CancelSleep();
return RESULT_TIMEOUT;
}
current_thread->SetSynchronizationResults(nullptr, RESULT_TIMEOUT);
current_thread->SetArbiterWaitAddress(address);
InsertThread(SharedFrom(current_thread));
current_thread->SetStatus(ThreadStatus::WaitArb);
current_thread->WaitForArbitration(true);
}
if (event_handle != InvalidHandle) {
auto& time_manager = kernel.TimeManager();
time_manager.UnscheduleTimeEvent(event_handle);
}
{
SchedulerLock lock(kernel);
if (current_thread->IsWaitingForArbitration()) {
RemoveThread(SharedFrom(current_thread));
current_thread->WaitForArbitration(false);
}
}
return current_thread->GetSignalingResult();
}
void AddressArbiter::HandleWakeupThread(std::shared_ptr<Thread> thread) {
@@ -221,9 +305,9 @@ void AddressArbiter::RemoveThread(std::shared_ptr<Thread> thread) {
const auto iter = std::find_if(thread_list.cbegin(), thread_list.cend(),
[&thread](const auto& entry) { return thread == entry; });
ASSERT(iter != thread_list.cend());
thread_list.erase(iter);
if (iter != thread_list.cend()) {
thread_list.erase(iter);
}
}
std::vector<std::shared_ptr<Thread>> AddressArbiter::GetThreadsWaitingOnAddress(

View File

@@ -73,9 +73,6 @@ private:
/// Waits on an address if the value passed is equal to the argument value.
ResultCode WaitForAddressIfEqual(VAddr address, s32 value, s64 timeout);
// Waits on the given address with a timeout in nanoseconds
ResultCode WaitForAddressImpl(VAddr address, s64 timeout);
/// Wake up num_to_wake (or all) threads in a vector.
void WakeThreads(const std::vector<std::shared_ptr<Thread>>& waiting_threads, s32 num_to_wake);

View File

@@ -34,7 +34,7 @@ ResultVal<std::shared_ptr<ClientSession>> ClientPort::Connect() {
}
// Wake the threads waiting on the ServerPort
server_port->WakeupAllWaitingThreads();
server_port->Signal();
return MakeResult(std::move(client));
}

View File

@@ -12,6 +12,7 @@ namespace Kernel {
constexpr ResultCode ERR_MAX_CONNECTIONS_REACHED{ErrorModule::Kernel, 7};
constexpr ResultCode ERR_INVALID_CAPABILITY_DESCRIPTOR{ErrorModule::Kernel, 14};
constexpr ResultCode ERR_THREAD_TERMINATING{ErrorModule::Kernel, 59};
constexpr ResultCode ERR_INVALID_SIZE{ErrorModule::Kernel, 101};
constexpr ResultCode ERR_INVALID_ADDRESS{ErrorModule::Kernel, 102};
constexpr ResultCode ERR_OUT_OF_RESOURCES{ErrorModule::Kernel, 103};

View File

@@ -14,14 +14,17 @@
#include "common/common_types.h"
#include "common/logging/log.h"
#include "core/hle/ipc_helpers.h"
#include "core/hle/kernel/errors.h"
#include "core/hle/kernel/handle_table.h"
#include "core/hle/kernel/hle_ipc.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/readable_event.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/server_session.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/kernel/time_manager.h"
#include "core/hle/kernel/writable_event.h"
#include "core/memory.h"
@@ -46,15 +49,6 @@ std::shared_ptr<WritableEvent> HLERequestContext::SleepClientThread(
const std::string& reason, u64 timeout, WakeupCallback&& callback,
std::shared_ptr<WritableEvent> writable_event) {
// Put the client thread to sleep until the wait event is signaled or the timeout expires.
thread->SetWakeupCallback(
[context = *this, callback](ThreadWakeupReason reason, std::shared_ptr<Thread> thread,
std::shared_ptr<SynchronizationObject> object,
std::size_t index) mutable -> bool {
ASSERT(thread->GetStatus() == ThreadStatus::WaitHLEEvent);
callback(thread, context, reason);
context.WriteToOutgoingCommandBuffer(*thread);
return true;
});
if (!writable_event) {
// Create event if not provided
@@ -62,14 +56,26 @@ std::shared_ptr<WritableEvent> HLERequestContext::SleepClientThread(
writable_event = pair.writable;
}
const auto readable_event{writable_event->GetReadableEvent()};
writable_event->Clear();
thread->SetStatus(ThreadStatus::WaitHLEEvent);
thread->SetSynchronizationObjects({readable_event});
readable_event->AddWaitingThread(thread);
if (timeout > 0) {
thread->WakeAfterDelay(timeout);
{
Handle event_handle = InvalidHandle;
SchedulerLockAndSleep lock(kernel, event_handle, thread.get(), timeout);
thread->SetHLECallback(
[context = *this, callback](std::shared_ptr<Thread> thread) mutable -> bool {
ThreadWakeupReason reason = thread->GetSignalingResult() == RESULT_TIMEOUT
? ThreadWakeupReason::Timeout
: ThreadWakeupReason::Signal;
callback(thread, context, reason);
context.WriteToOutgoingCommandBuffer(*thread);
return true;
});
const auto readable_event{writable_event->GetReadableEvent()};
writable_event->Clear();
thread->SetHLESyncObject(readable_event.get());
thread->SetStatus(ThreadStatus::WaitHLEEvent);
thread->SetSynchronizationResults(nullptr, RESULT_TIMEOUT);
readable_event->AddWaitingThread(thread);
lock.Release();
thread->SetHLETimeEvent(event_handle);
}
is_thread_waiting = true;
@@ -282,18 +288,18 @@ ResultCode HLERequestContext::WriteToOutgoingCommandBuffer(Thread& thread) {
}
std::vector<u8> HLERequestContext::ReadBuffer(std::size_t buffer_index) const {
std::vector<u8> buffer;
std::vector<u8> buffer{};
const bool is_buffer_a{BufferDescriptorA().size() > buffer_index &&
BufferDescriptorA()[buffer_index].Size()};
if (is_buffer_a) {
ASSERT_MSG(BufferDescriptorA().size() > buffer_index,
"BufferDescriptorA invalid buffer_index {}", buffer_index);
ASSERT_OR_EXECUTE_MSG(BufferDescriptorA().size() > buffer_index, { return buffer; },
"BufferDescriptorA invalid buffer_index {}", buffer_index);
buffer.resize(BufferDescriptorA()[buffer_index].Size());
memory.ReadBlock(BufferDescriptorA()[buffer_index].Address(), buffer.data(), buffer.size());
} else {
ASSERT_MSG(BufferDescriptorX().size() > buffer_index,
"BufferDescriptorX invalid buffer_index {}", buffer_index);
ASSERT_OR_EXECUTE_MSG(BufferDescriptorX().size() > buffer_index, { return buffer; },
"BufferDescriptorX invalid buffer_index {}", buffer_index);
buffer.resize(BufferDescriptorX()[buffer_index].Size());
memory.ReadBlock(BufferDescriptorX()[buffer_index].Address(), buffer.data(), buffer.size());
}
@@ -318,16 +324,16 @@ std::size_t HLERequestContext::WriteBuffer(const void* buffer, std::size_t size,
}
if (is_buffer_b) {
ASSERT_MSG(BufferDescriptorB().size() > buffer_index,
"BufferDescriptorB invalid buffer_index {}", buffer_index);
ASSERT_MSG(BufferDescriptorB()[buffer_index].Size() >= size,
"BufferDescriptorB buffer_index {} is not large enough", buffer_index);
ASSERT_OR_EXECUTE_MSG(BufferDescriptorB().size() > buffer_index &&
BufferDescriptorB()[buffer_index].Size() >= size,
{ return 0; }, "BufferDescriptorB is invalid, index={}, size={}",
buffer_index, size);
memory.WriteBlock(BufferDescriptorB()[buffer_index].Address(), buffer, size);
} else {
ASSERT_MSG(BufferDescriptorC().size() > buffer_index,
"BufferDescriptorC invalid buffer_index {}", buffer_index);
ASSERT_MSG(BufferDescriptorC()[buffer_index].Size() >= size,
"BufferDescriptorC buffer_index {} is not large enough", buffer_index);
ASSERT_OR_EXECUTE_MSG(BufferDescriptorC().size() > buffer_index &&
BufferDescriptorC()[buffer_index].Size() >= size,
{ return 0; }, "BufferDescriptorC is invalid, index={}, size={}",
buffer_index, size);
memory.WriteBlock(BufferDescriptorC()[buffer_index].Address(), buffer, size);
}
@@ -338,16 +344,12 @@ std::size_t HLERequestContext::GetReadBufferSize(std::size_t buffer_index) const
const bool is_buffer_a{BufferDescriptorA().size() > buffer_index &&
BufferDescriptorA()[buffer_index].Size()};
if (is_buffer_a) {
ASSERT_MSG(BufferDescriptorA().size() > buffer_index,
"BufferDescriptorA invalid buffer_index {}", buffer_index);
ASSERT_MSG(BufferDescriptorA()[buffer_index].Size() > 0,
"BufferDescriptorA buffer_index {} is empty", buffer_index);
ASSERT_OR_EXECUTE_MSG(BufferDescriptorA().size() > buffer_index, { return 0; },
"BufferDescriptorA invalid buffer_index {}", buffer_index);
return BufferDescriptorA()[buffer_index].Size();
} else {
ASSERT_MSG(BufferDescriptorX().size() > buffer_index,
"BufferDescriptorX invalid buffer_index {}", buffer_index);
ASSERT_MSG(BufferDescriptorX()[buffer_index].Size() > 0,
"BufferDescriptorX buffer_index {} is empty", buffer_index);
ASSERT_OR_EXECUTE_MSG(BufferDescriptorX().size() > buffer_index, { return 0; },
"BufferDescriptorX invalid buffer_index {}", buffer_index);
return BufferDescriptorX()[buffer_index].Size();
}
}
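The buffer accessors above swap hard ASSERT_MSG calls for ASSERT_OR_EXECUTE_MSG, which logs the failure and runs a fallback statement (returning an empty buffer or 0) instead of aborting. A hypothetical reduction of such a macro; yuzu's real definition lives in its common assert header and may differ:

    #include <cstdio>

    // Log the failed condition, then execute the fallback block instead of aborting.
    #define ASSERT_OR_EXECUTE_MSG_SKETCH(cond, fallback, ...)                             \
        do {                                                                              \
            if (!(cond)) {                                                                \
                std::fprintf(stderr, "Assertion failed: " __VA_ARGS__);                   \
                std::fprintf(stderr, "\n");                                               \
                fallback                                                                  \
            }                                                                             \
        } while (0)

    // Usage mirroring GetReadBufferSize(): degrade to a size of 0 instead of crashing.
    // ASSERT_OR_EXECUTE_MSG_SKETCH(descriptors.size() > index, { return 0; },
    //                              "invalid buffer_index %zu", index);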
@@ -356,14 +358,15 @@ std::size_t HLERequestContext::GetWriteBufferSize(std::size_t buffer_index) cons
const bool is_buffer_b{BufferDescriptorB().size() > buffer_index &&
BufferDescriptorB()[buffer_index].Size()};
if (is_buffer_b) {
ASSERT_MSG(BufferDescriptorB().size() > buffer_index,
"BufferDescriptorB invalid buffer_index {}", buffer_index);
ASSERT_OR_EXECUTE_MSG(BufferDescriptorB().size() > buffer_index, { return 0; },
"BufferDescriptorB invalid buffer_index {}", buffer_index);
return BufferDescriptorB()[buffer_index].Size();
} else {
ASSERT_MSG(BufferDescriptorC().size() > buffer_index,
"BufferDescriptorC invalid buffer_index {}", buffer_index);
ASSERT_OR_EXECUTE_MSG(BufferDescriptorC().size() > buffer_index, { return 0; },
"BufferDescriptorC invalid buffer_index {}", buffer_index);
return BufferDescriptorC()[buffer_index].Size();
}
return 0;
}
std::string HLERequestContext::Description() const {

View File

@@ -2,6 +2,7 @@
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <array>
#include <atomic>
#include <bitset>
#include <functional>
@@ -13,11 +14,15 @@
#include "common/assert.h"
#include "common/logging/log.h"
#include "common/microprofile.h"
#include "common/thread.h"
#include "core/arm/arm_interface.h"
#include "core/arm/cpu_interrupt_handler.h"
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
#include "core/cpu_manager.h"
#include "core/device_memory.h"
#include "core/hardware_properties.h"
#include "core/hle/kernel/client_port.h"
@@ -39,85 +44,28 @@
#include "core/hle/result.h"
#include "core/memory.h"
MICROPROFILE_DEFINE(Kernel_SVC, "Kernel", "SVC", MP_RGB(70, 200, 70));
namespace Kernel {
/**
* Callback that will wake up the thread it was scheduled for
* @param thread_handle The handle of the thread that's been awoken
* @param cycles_late The number of CPU cycles that have passed since the desired wakeup time
*/
static void ThreadWakeupCallback(u64 thread_handle, [[maybe_unused]] s64 cycles_late) {
const auto proper_handle = static_cast<Handle>(thread_handle);
const auto& system = Core::System::GetInstance();
// Lock the global kernel mutex when we enter the kernel HLE.
std::lock_guard lock{HLE::g_hle_lock};
std::shared_ptr<Thread> thread =
system.Kernel().RetrieveThreadFromGlobalHandleTable(proper_handle);
if (thread == nullptr) {
LOG_CRITICAL(Kernel, "Callback fired for invalid thread {:08X}", proper_handle);
return;
}
bool resume = true;
if (thread->GetStatus() == ThreadStatus::WaitSynch ||
thread->GetStatus() == ThreadStatus::WaitHLEEvent) {
// Remove the thread from each of its waiting objects' waitlists
for (const auto& object : thread->GetSynchronizationObjects()) {
object->RemoveWaitingThread(thread);
}
thread->ClearSynchronizationObjects();
// Invoke the wakeup callback before clearing the wait objects
if (thread->HasWakeupCallback()) {
resume = thread->InvokeWakeupCallback(ThreadWakeupReason::Timeout, thread, nullptr, 0);
}
} else if (thread->GetStatus() == ThreadStatus::WaitMutex ||
thread->GetStatus() == ThreadStatus::WaitCondVar) {
thread->SetMutexWaitAddress(0);
thread->SetWaitHandle(0);
if (thread->GetStatus() == ThreadStatus::WaitCondVar) {
thread->GetOwnerProcess()->RemoveConditionVariableThread(thread);
thread->SetCondVarWaitAddress(0);
}
auto* const lock_owner = thread->GetLockOwner();
// Threads waking up by timeout from WaitProcessWideKey do not perform priority inheritance
// and don't have a lock owner unless SignalProcessWideKey was called first and the thread
// wasn't awakened due to the mutex already being acquired.
if (lock_owner != nullptr) {
lock_owner->RemoveMutexWaiter(thread);
}
}
if (thread->GetStatus() == ThreadStatus::WaitArb) {
auto& address_arbiter = thread->GetOwnerProcess()->GetAddressArbiter();
address_arbiter.HandleWakeupThread(thread);
}
if (resume) {
if (thread->GetStatus() == ThreadStatus::WaitCondVar ||
thread->GetStatus() == ThreadStatus::WaitArb) {
thread->SetWaitSynchronizationResult(RESULT_TIMEOUT);
}
thread->ResumeFromWait();
}
}
struct KernelCore::Impl {
explicit Impl(Core::System& system, KernelCore& kernel)
: global_scheduler{kernel}, synchronization{system}, time_manager{system}, system{system} {}
void SetMulticore(bool is_multicore) {
this->is_multicore = is_multicore;
}
void Initialize(KernelCore& kernel) {
Shutdown();
RegisterHostThread();
InitializePhysicalCores();
InitializeSystemResourceLimit(kernel);
InitializeMemoryLayout();
InitializeThreads();
InitializePreemption();
InitializePreemption(kernel);
InitializeSchedulers();
InitializeSuspendThreads();
}
void Shutdown() {
@@ -126,13 +74,26 @@ struct KernelCore::Impl {
next_user_process_id = Process::ProcessIDMin;
next_thread_id = 1;
for (std::size_t i = 0; i < Core::Hardware::NUM_CPU_CORES; i++) {
if (suspend_threads[i]) {
suspend_threads[i].reset();
}
}
for (std::size_t i = 0; i < cores.size(); i++) {
cores[i].Shutdown();
schedulers[i].reset();
}
cores.clear();
registered_core_threads.reset();
process_list.clear();
current_process = nullptr;
system_resource_limit = nullptr;
global_handle_table.Clear();
thread_wakeup_event_type = nullptr;
preemption_event = nullptr;
global_scheduler.Shutdown();
@@ -145,13 +106,21 @@ struct KernelCore::Impl {
cores.clear();
exclusive_monitor.reset();
host_thread_ids.clear();
}
void InitializePhysicalCores() {
exclusive_monitor =
Core::MakeExclusiveMonitor(system.Memory(), Core::Hardware::NUM_CPU_CORES);
for (std::size_t i = 0; i < Core::Hardware::NUM_CPU_CORES; i++) {
cores.emplace_back(system, i, *exclusive_monitor);
schedulers[i] = std::make_unique<Kernel::Scheduler>(system, i);
cores.emplace_back(system, i, *schedulers[i], interrupts[i]);
}
}
void InitializeSchedulers() {
for (std::size_t i = 0; i < Core::Hardware::NUM_CPU_CORES; i++) {
cores[i].Scheduler().Initialize();
}
}
@@ -173,15 +142,13 @@ struct KernelCore::Impl {
}
}
void InitializeThreads() {
thread_wakeup_event_type =
Core::Timing::CreateEvent("ThreadWakeupCallback", ThreadWakeupCallback);
}
void InitializePreemption() {
preemption_event =
Core::Timing::CreateEvent("PreemptionCallback", [this](u64 userdata, s64 cycles_late) {
global_scheduler.PreemptThreads();
void InitializePreemption(KernelCore& kernel) {
preemption_event = Core::Timing::CreateEvent(
"PreemptionCallback", [this, &kernel](u64 userdata, s64 cycles_late) {
{
SchedulerLock lock(kernel);
global_scheduler.PreemptThreads();
}
s64 time_interval = Core::Timing::msToCycles(std::chrono::milliseconds(10));
system.CoreTiming().ScheduleEvent(time_interval, preemption_event);
});
@@ -190,6 +157,20 @@ struct KernelCore::Impl {
system.CoreTiming().ScheduleEvent(time_interval, preemption_event);
}
void InitializeSuspendThreads() {
for (std::size_t i = 0; i < Core::Hardware::NUM_CPU_CORES; i++) {
std::string name = "Suspend Thread Id:" + std::to_string(i);
std::function<void(void*)> init_func =
system.GetCpuManager().GetSuspendThreadStartFunc();
void* init_func_parameter = system.GetCpuManager().GetStartFuncParamater();
ThreadType type =
static_cast<ThreadType>(THREADTYPE_KERNEL | THREADTYPE_HLE | THREADTYPE_SUSPEND);
auto thread_res = Thread::Create(system, type, name, 0, 0, 0, static_cast<u32>(i), 0,
nullptr, std::move(init_func), init_func_parameter);
suspend_threads[i] = std::move(thread_res).Unwrap();
}
}
void MakeCurrentProcess(Process* process) {
current_process = process;
@@ -197,15 +178,17 @@ struct KernelCore::Impl {
return;
}
for (auto& core : cores) {
core.SetIs64Bit(process->Is64BitProcess());
u32 core_id = GetCurrentHostThreadID();
if (core_id < Core::Hardware::NUM_CPU_CORES) {
system.Memory().SetCurrentPageTable(*process, core_id);
}
system.Memory().SetCurrentPageTable(*process);
}
void RegisterCoreThread(std::size_t core_id) {
std::unique_lock lock{register_thread_mutex};
if (!is_multicore) {
single_core_thread_id = std::this_thread::get_id();
}
const std::thread::id this_id = std::this_thread::get_id();
const auto it = host_thread_ids.find(this_id);
ASSERT(core_id < Core::Hardware::NUM_CPU_CORES);
@@ -219,12 +202,19 @@ struct KernelCore::Impl {
std::unique_lock lock{register_thread_mutex};
const std::thread::id this_id = std::this_thread::get_id();
const auto it = host_thread_ids.find(this_id);
ASSERT(it == host_thread_ids.end());
if (it != host_thread_ids.end()) {
return;
}
host_thread_ids[this_id] = registered_thread_ids++;
}
u32 GetCurrentHostThreadID() const {
const std::thread::id this_id = std::this_thread::get_id();
if (!is_multicore) {
if (single_core_thread_id == this_id) {
return static_cast<u32>(system.GetCpuManager().CurrentCore());
}
}
const auto it = host_thread_ids.find(this_id);
if (it == host_thread_ids.end()) {
return Core::INVALID_HOST_THREAD_ID;
@@ -240,7 +230,7 @@ struct KernelCore::Impl {
}
const Kernel::Scheduler& sched = cores[result.host_handle].Scheduler();
const Kernel::Thread* current = sched.GetCurrentThread();
if (current != nullptr) {
if (current != nullptr && !current->IsPhantomMode()) {
result.guest_handle = current->GetGlobalHandle();
} else {
result.guest_handle = InvalidHandle;
@@ -313,7 +303,6 @@ struct KernelCore::Impl {
std::shared_ptr<ResourceLimit> system_resource_limit;
std::shared_ptr<Core::Timing::EventType> thread_wakeup_event_type;
std::shared_ptr<Core::Timing::EventType> preemption_event;
// This is the kernel's handle table or supervisor handle table which
@@ -343,6 +332,15 @@ struct KernelCore::Impl {
std::shared_ptr<Kernel::SharedMemory> irs_shared_mem;
std::shared_ptr<Kernel::SharedMemory> time_shared_mem;
std::array<std::shared_ptr<Thread>, Core::Hardware::NUM_CPU_CORES> suspend_threads{};
std::array<Core::CPUInterruptHandler, Core::Hardware::NUM_CPU_CORES> interrupts{};
std::array<std::unique_ptr<Kernel::Scheduler>, Core::Hardware::NUM_CPU_CORES> schedulers{};
bool is_multicore{};
std::thread::id single_core_thread_id{};
std::array<u64, Core::Hardware::NUM_CPU_CORES> svc_ticks{};
// System context
Core::System& system;
};
@@ -352,6 +350,10 @@ KernelCore::~KernelCore() {
Shutdown();
}
void KernelCore::SetMulticore(bool is_multicore) {
impl->SetMulticore(is_multicore);
}
void KernelCore::Initialize() {
impl->Initialize(*this);
}
@@ -397,11 +399,11 @@ const Kernel::GlobalScheduler& KernelCore::GlobalScheduler() const {
}
Kernel::Scheduler& KernelCore::Scheduler(std::size_t id) {
return impl->cores[id].Scheduler();
return *impl->schedulers[id];
}
const Kernel::Scheduler& KernelCore::Scheduler(std::size_t id) const {
return impl->cores[id].Scheduler();
return *impl->schedulers[id];
}
Kernel::PhysicalCore& KernelCore::PhysicalCore(std::size_t id) {
@@ -412,6 +414,39 @@ const Kernel::PhysicalCore& KernelCore::PhysicalCore(std::size_t id) const {
return impl->cores[id];
}
Kernel::PhysicalCore& KernelCore::CurrentPhysicalCore() {
u32 core_id = impl->GetCurrentHostThreadID();
ASSERT(core_id < Core::Hardware::NUM_CPU_CORES);
return impl->cores[core_id];
}
const Kernel::PhysicalCore& KernelCore::CurrentPhysicalCore() const {
u32 core_id = impl->GetCurrentHostThreadID();
ASSERT(core_id < Core::Hardware::NUM_CPU_CORES);
return impl->cores[core_id];
}
Kernel::Scheduler& KernelCore::CurrentScheduler() {
u32 core_id = impl->GetCurrentHostThreadID();
ASSERT(core_id < Core::Hardware::NUM_CPU_CORES);
return *impl->schedulers[core_id];
}
const Kernel::Scheduler& KernelCore::CurrentScheduler() const {
u32 core_id = impl->GetCurrentHostThreadID();
ASSERT(core_id < Core::Hardware::NUM_CPU_CORES);
return *impl->schedulers[core_id];
}
std::array<Core::CPUInterruptHandler, Core::Hardware::NUM_CPU_CORES>& KernelCore::Interrupts() {
return impl->interrupts;
}
const std::array<Core::CPUInterruptHandler, Core::Hardware::NUM_CPU_CORES>& KernelCore::Interrupts()
const {
return impl->interrupts;
}
Kernel::Synchronization& KernelCore::Synchronization() {
return impl->synchronization;
}
@@ -437,15 +472,17 @@ const Core::ExclusiveMonitor& KernelCore::GetExclusiveMonitor() const {
}
void KernelCore::InvalidateAllInstructionCaches() {
for (std::size_t i = 0; i < impl->global_scheduler.CpuCoresCount(); i++) {
PhysicalCore(i).ArmInterface().ClearInstructionCache();
auto& threads = GlobalScheduler().GetThreadList();
for (auto& thread : threads) {
if (!thread->IsHLEThread()) {
auto& arm_interface = thread->ArmInterface();
arm_interface.ClearInstructionCache();
}
}
}
void KernelCore::PrepareReschedule(std::size_t id) {
if (id < impl->global_scheduler.CpuCoresCount()) {
impl->cores[id].Stop();
}
// TODO: Reimplement this
}
void KernelCore::AddNamedPort(std::string name, std::shared_ptr<ClientPort> port) {
@@ -481,10 +518,6 @@ u64 KernelCore::CreateNewUserProcessID() {
return impl->next_user_process_id++;
}
const std::shared_ptr<Core::Timing::EventType>& KernelCore::ThreadWakeupCallbackEventType() const {
return impl->thread_wakeup_event_type;
}
Kernel::HandleTable& KernelCore::GlobalHandleTable() {
return impl->global_handle_table;
}
@@ -557,4 +590,34 @@ const Kernel::SharedMemory& KernelCore::GetTimeSharedMem() const {
return *impl->time_shared_mem;
}
void KernelCore::Suspend(bool in_suspention) {
const bool should_suspend = exception_exited || in_suspention;
{
SchedulerLock lock(*this);
ThreadStatus status = should_suspend ? ThreadStatus::Ready : ThreadStatus::WaitSleep;
for (std::size_t i = 0; i < Core::Hardware::NUM_CPU_CORES; i++) {
impl->suspend_threads[i]->SetStatus(status);
}
}
}
bool KernelCore::IsMulticore() const {
return impl->is_multicore;
}
void KernelCore::ExceptionalExit() {
exception_exited = true;
Suspend(true);
}
void KernelCore::EnterSVCProfile() {
std::size_t core = impl->GetCurrentHostThreadID();
impl->svc_ticks[core] = MicroProfileEnter(MICROPROFILE_TOKEN(Kernel_SVC));
}
void KernelCore::ExitSVCProfile() {
std::size_t core = impl->GetCurrentHostThreadID();
MicroProfileLeave(MICROPROFILE_TOKEN(Kernel_SVC), impl->svc_ticks[core]);
}
} // namespace Kernel


@@ -4,15 +4,17 @@
#pragma once
#include <array>
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>
#include "core/hardware_properties.h"
#include "core/hle/kernel/memory/memory_types.h"
#include "core/hle/kernel/object.h"
namespace Core {
struct EmuThreadHandle;
class CPUInterruptHandler;
class ExclusiveMonitor;
class System;
} // namespace Core
@@ -65,6 +67,9 @@ public:
KernelCore(KernelCore&&) = delete;
KernelCore& operator=(KernelCore&&) = delete;
/// Sets if emulation is multicore or single core, must be set before Initialize
void SetMulticore(bool is_multicore);
/// Resets the kernel to a clean slate for use.
void Initialize();
@@ -110,6 +115,18 @@ public:
/// Gets an instance of the respective physical CPU core.
const Kernel::PhysicalCore& PhysicalCore(std::size_t id) const;
/// Gets the sole instance of the Scheduler at the current running core.
Kernel::Scheduler& CurrentScheduler();
/// Gets the sole instance of the Scheduler at the current running core.
const Kernel::Scheduler& CurrentScheduler() const;
/// Gets an instance of the current physical CPU core.
Kernel::PhysicalCore& CurrentPhysicalCore();
/// Gets an instance of the current physical CPU core.
const Kernel::PhysicalCore& CurrentPhysicalCore() const;
/// Gets an instance of the Synchronization Interface.
Kernel::Synchronization& Synchronization();
@@ -129,6 +146,10 @@ public:
const Core::ExclusiveMonitor& GetExclusiveMonitor() const;
std::array<Core::CPUInterruptHandler, Core::Hardware::NUM_CPU_CORES>& Interrupts();
const std::array<Core::CPUInterruptHandler, Core::Hardware::NUM_CPU_CORES>& Interrupts() const;
void InvalidateAllInstructionCaches();
/// Adds a port to the named port table
@@ -191,6 +212,18 @@ public:
/// Gets the shared memory object for Time services.
const Kernel::SharedMemory& GetTimeSharedMem() const;
/// Suspend/unsuspend the OS.
void Suspend(bool in_suspention);
/// Exceptionally exits the OS.
void ExceptionalExit();
bool IsMulticore() const;
void EnterSVCProfile();
void ExitSVCProfile();
private:
friend class Object;
friend class Process;
@@ -208,9 +241,6 @@ private:
/// Creates a new thread ID, incrementing the internal thread ID counter.
u64 CreateNewThreadID();
/// Retrieves the event type used for thread wakeup callbacks.
const std::shared_ptr<Core::Timing::EventType>& ThreadWakeupCallbackEventType() const;
/// Provides a reference to the global handle table.
Kernel::HandleTable& GlobalHandleTable();
@@ -219,6 +249,7 @@ private:
struct Impl;
std::unique_ptr<Impl> impl;
bool exception_exited{};
};
} // namespace Kernel


@@ -104,7 +104,7 @@ ResultCode MemoryManager::Allocate(PageLinkedList& page_list, std::size_t num_pa
// Ensure that we don't leave anything un-freed
auto group_guard = detail::ScopeExit([&] {
for (const auto& it : page_list.Nodes()) {
const auto min_num_pages{std::min(
const auto min_num_pages{std::min<size_t>(
it.GetNumPages(), (chosen_manager.GetEndAddress() - it.GetAddress()) / PageSize)};
chosen_manager.Free(it.GetAddress(), min_num_pages);
}
@@ -139,7 +139,6 @@ ResultCode MemoryManager::Allocate(PageLinkedList& page_list, std::size_t num_pa
}
// Only succeed if we allocated as many pages as we wanted
ASSERT(num_pages >= 0);
if (num_pages) {
return ERR_OUT_OF_MEMORY;
}
@@ -165,7 +164,7 @@ ResultCode MemoryManager::Free(PageLinkedList& page_list, std::size_t num_pages,
// Free all of the pages
for (const auto& it : page_list.Nodes()) {
const auto min_num_pages{std::min(
const auto min_num_pages{std::min<size_t>(
it.GetNumPages(), (chosen_manager.GetEndAddress() - it.GetAddress()) / PageSize)};
chosen_manager.Free(it.GetAddress(), min_num_pages);
}
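// A minimal sketch of the scope-exit idiom relied on above, assuming only the
// standard library; the real helper is detail::ScopeExit, whose exact interface
// may differ. The guard frees partially-allocated pages on any early return and
// is disarmed once the allocation fully succeeds.
#include <utility>

template <typename F>
class ScopeExitGuard {
public:
    explicit ScopeExitGuard(F&& func) : func{std::forward<F>(func)} {}
    ~ScopeExitGuard() {
        if (armed) {
            func(); // Runs the cleanup unless the caller cancelled it.
        }
    }
    void Cancel() {
        armed = false;
    }

private:
    F func;
    bool armed = true;
};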


@@ -34,8 +34,6 @@ static std::pair<std::shared_ptr<Thread>, u32> GetHighestPriorityMutexWaitingThr
if (thread->GetMutexWaitAddress() != mutex_addr)
continue;
ASSERT(thread->GetStatus() == ThreadStatus::WaitMutex);
++num_waiters;
if (highest_priority_thread == nullptr ||
thread->GetPriority() < highest_priority_thread->GetPriority()) {
@@ -49,6 +47,7 @@ static std::pair<std::shared_ptr<Thread>, u32> GetHighestPriorityMutexWaitingThr
/// Update the mutex owner field of all threads waiting on the mutex to point to the new owner.
static void TransferMutexOwnership(VAddr mutex_addr, std::shared_ptr<Thread> current_thread,
std::shared_ptr<Thread> new_owner) {
current_thread->RemoveMutexWaiter(new_owner);
const auto threads = current_thread->GetMutexWaitingThreads();
for (const auto& thread : threads) {
if (thread->GetMutexWaitAddress() != mutex_addr)
@@ -72,85 +71,100 @@ ResultCode Mutex::TryAcquire(VAddr address, Handle holding_thread_handle,
return ERR_INVALID_ADDRESS;
}
const auto& handle_table = system.Kernel().CurrentProcess()->GetHandleTable();
auto& kernel = system.Kernel();
std::shared_ptr<Thread> current_thread =
SharedFrom(system.CurrentScheduler().GetCurrentThread());
std::shared_ptr<Thread> holding_thread = handle_table.Get<Thread>(holding_thread_handle);
std::shared_ptr<Thread> requesting_thread = handle_table.Get<Thread>(requesting_thread_handle);
SharedFrom(kernel.CurrentScheduler().GetCurrentThread());
{
SchedulerLock lock(kernel);
// The mutex address must be 4-byte aligned
if ((address % sizeof(u32)) != 0) {
return ERR_INVALID_ADDRESS;
}
// TODO(Subv): It is currently unknown if it is possible to lock a mutex on behalf of another
// thread.
ASSERT(requesting_thread == current_thread);
const auto& handle_table = kernel.CurrentProcess()->GetHandleTable();
std::shared_ptr<Thread> holding_thread = handle_table.Get<Thread>(holding_thread_handle);
std::shared_ptr<Thread> requesting_thread =
handle_table.Get<Thread>(requesting_thread_handle);
const u32 addr_value = system.Memory().Read32(address);
// TODO(Subv): It is currently unknown if it is possible to lock a mutex on behalf of
// another thread.
ASSERT(requesting_thread == current_thread);
// If the mutex isn't being held, just return success.
if (addr_value != (holding_thread_handle | Mutex::MutexHasWaitersFlag)) {
return RESULT_SUCCESS;
current_thread->SetSynchronizationResults(nullptr, RESULT_SUCCESS);
const u32 addr_value = system.Memory().Read32(address);
// If the mutex isn't being held, just return success.
if (addr_value != (holding_thread_handle | Mutex::MutexHasWaitersFlag)) {
return RESULT_SUCCESS;
}
if (holding_thread == nullptr) {
return ERR_INVALID_HANDLE;
}
// Wait until the mutex is released
current_thread->SetMutexWaitAddress(address);
current_thread->SetWaitHandle(requesting_thread_handle);
current_thread->SetStatus(ThreadStatus::WaitMutex);
// Update the lock holder thread's priority to prevent priority inversion.
holding_thread->AddMutexWaiter(current_thread);
}
if (holding_thread == nullptr) {
LOG_ERROR(Kernel, "Holding thread does not exist! thread_handle={:08X}",
holding_thread_handle);
return ERR_INVALID_HANDLE;
{
SchedulerLock lock(kernel);
auto* owner = current_thread->GetLockOwner();
if (owner != nullptr) {
owner->RemoveMutexWaiter(current_thread);
}
}
// Wait until the mutex is released
current_thread->SetMutexWaitAddress(address);
current_thread->SetWaitHandle(requesting_thread_handle);
current_thread->SetStatus(ThreadStatus::WaitMutex);
current_thread->InvalidateWakeupCallback();
// Update the lock holder thread's priority to prevent priority inversion.
holding_thread->AddMutexWaiter(current_thread);
system.PrepareReschedule();
return RESULT_SUCCESS;
return current_thread->GetSignalingResult();
}
ResultCode Mutex::Release(VAddr address) {
std::pair<ResultCode, std::shared_ptr<Thread>> Mutex::Unlock(std::shared_ptr<Thread> owner,
VAddr address) {
// The mutex address must be 4-byte aligned
if ((address % sizeof(u32)) != 0) {
LOG_ERROR(Kernel, "Address is not 4-byte aligned! address={:016X}", address);
return ERR_INVALID_ADDRESS;
return {ERR_INVALID_ADDRESS, nullptr};
}
std::shared_ptr<Thread> current_thread =
SharedFrom(system.CurrentScheduler().GetCurrentThread());
auto [thread, num_waiters] = GetHighestPriorityMutexWaitingThread(current_thread, address);
// There are no more threads waiting for the mutex, release it completely.
if (thread == nullptr) {
auto [new_owner, num_waiters] = GetHighestPriorityMutexWaitingThread(owner, address);
if (new_owner == nullptr) {
system.Memory().Write32(address, 0);
return RESULT_SUCCESS;
return {RESULT_SUCCESS, nullptr};
}
// Transfer the ownership of the mutex from the previous owner to the new one.
TransferMutexOwnership(address, current_thread, thread);
u32 mutex_value = thread->GetWaitHandle();
TransferMutexOwnership(address, owner, new_owner);
u32 mutex_value = new_owner->GetWaitHandle();
if (num_waiters >= 2) {
// Notify the guest that there are still some threads waiting for the mutex
mutex_value |= Mutex::MutexHasWaitersFlag;
}
new_owner->SetSynchronizationResults(nullptr, RESULT_SUCCESS);
new_owner->SetLockOwner(nullptr);
new_owner->ResumeFromWait();
// Grant the mutex to the next waiting thread and resume it.
system.Memory().Write32(address, mutex_value);
ASSERT(thread->GetStatus() == ThreadStatus::WaitMutex);
thread->ResumeFromWait();
thread->SetLockOwner(nullptr);
thread->SetCondVarWaitAddress(0);
thread->SetMutexWaitAddress(0);
thread->SetWaitHandle(0);
thread->SetWaitSynchronizationResult(RESULT_SUCCESS);
system.PrepareReschedule();
return RESULT_SUCCESS;
return {RESULT_SUCCESS, new_owner};
}
ResultCode Mutex::Release(VAddr address) {
auto& kernel = system.Kernel();
SchedulerLock lock(kernel);
std::shared_ptr<Thread> current_thread =
SharedFrom(kernel.CurrentScheduler().GetCurrentThread());
auto [result, new_owner] = Unlock(current_thread, address);
if (result != RESULT_SUCCESS && new_owner != nullptr) {
new_owner->SetSynchronizationResults(nullptr, result);
}
return result;
}
} // namespace Kernel
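// Illustrative model of the guest mutex word manipulated above, assuming the
// layout implied by the diff: the 32-bit value at `address` stores the owning
// thread's handle, OR'd with Mutex::MutexHasWaitersFlag while other threads are
// still queued. The flag value below is an assumption for the sketch.
#include <cstddef>
#include <cstdint>

constexpr std::uint32_t kMutexHasWaitersFlag = 0x40000000;

std::uint32_t EncodeMutexValue(std::uint32_t owner_handle, std::size_t num_waiters) {
    if (owner_handle == 0) {
        return 0; // Released with nobody left waiting, as in Unlock() above.
    }
    std::uint32_t value = owner_handle;
    if (num_waiters >= 2) {
        value |= kMutexHasWaitersFlag; // The new owner still has waiters behind it.
    }
    return value;
}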


@@ -28,6 +28,10 @@ public:
ResultCode TryAcquire(VAddr address, Handle holding_thread_handle,
Handle requesting_thread_handle);
/// Unlocks a mutex for owner at address
std::pair<ResultCode, std::shared_ptr<Thread>> Unlock(std::shared_ptr<Thread> owner,
VAddr address);
/// Releases the mutex at the specified address.
ResultCode Release(VAddr address);


@@ -2,12 +2,15 @@
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/assert.h"
#include "common/logging/log.h"
#include "common/spin_lock.h"
#include "core/arm/arm_interface.h"
#ifdef ARCHITECTURE_x86_64
#include "core/arm/dynarmic/arm_dynarmic_32.h"
#include "core/arm/dynarmic/arm_dynarmic_64.h"
#endif
#include "core/arm/cpu_interrupt_handler.h"
#include "core/arm/exclusive_monitor.h"
#include "core/arm/unicorn/arm_unicorn.h"
#include "core/core.h"
@@ -17,50 +20,37 @@
namespace Kernel {
PhysicalCore::PhysicalCore(Core::System& system, std::size_t id,
Core::ExclusiveMonitor& exclusive_monitor)
: core_index{id} {
#ifdef ARCHITECTURE_x86_64
arm_interface_32 =
std::make_unique<Core::ARM_Dynarmic_32>(system, exclusive_monitor, core_index);
arm_interface_64 =
std::make_unique<Core::ARM_Dynarmic_64>(system, exclusive_monitor, core_index);
PhysicalCore::PhysicalCore(Core::System& system, std::size_t id, Kernel::Scheduler& scheduler,
Core::CPUInterruptHandler& interrupt_handler)
: interrupt_handler{interrupt_handler}, core_index{id}, scheduler{scheduler} {
#else
using Core::ARM_Unicorn;
arm_interface_32 = std::make_unique<ARM_Unicorn>(system, ARM_Unicorn::Arch::AArch32);
arm_interface_64 = std::make_unique<ARM_Unicorn>(system, ARM_Unicorn::Arch::AArch64);
LOG_WARNING(Core, "CPU JIT requested, but Dynarmic not available");
#endif
scheduler = std::make_unique<Kernel::Scheduler>(system, core_index);
guard = std::make_unique<Common::SpinLock>();
}
PhysicalCore::~PhysicalCore() = default;
void PhysicalCore::Run() {
arm_interface->Run();
arm_interface->ClearExclusiveState();
}
void PhysicalCore::Step() {
arm_interface->Step();
}
void PhysicalCore::Stop() {
arm_interface->PrepareReschedule();
void PhysicalCore::Idle() {
interrupt_handler.AwaitInterrupt();
}
void PhysicalCore::Shutdown() {
scheduler->Shutdown();
scheduler.Shutdown();
}
void PhysicalCore::SetIs64Bit(bool is_64_bit) {
if (is_64_bit) {
arm_interface = arm_interface_64.get();
} else {
arm_interface = arm_interface_32.get();
}
bool PhysicalCore::IsInterrupted() const {
return interrupt_handler.IsInterrupted();
}
void PhysicalCore::Interrupt() {
guard->lock();
interrupt_handler.SetInterrupt(true);
guard->unlock();
}
void PhysicalCore::ClearInterrupt() {
guard->lock();
interrupt_handler.SetInterrupt(false);
guard->unlock();
}
} // namespace Kernel


@@ -7,12 +7,17 @@
#include <cstddef>
#include <memory>
namespace Common {
class SpinLock;
}
namespace Kernel {
class Scheduler;
} // namespace Kernel
namespace Core {
class ARM_Interface;
class CPUInterruptHandler;
class ExclusiveMonitor;
class System;
} // namespace Core
@@ -21,7 +26,8 @@ namespace Kernel {
class PhysicalCore {
public:
PhysicalCore(Core::System& system, std::size_t id, Core::ExclusiveMonitor& exclusive_monitor);
PhysicalCore(Core::System& system, std::size_t id, Kernel::Scheduler& scheduler,
Core::CPUInterruptHandler& interrupt_handler);
~PhysicalCore();
PhysicalCore(const PhysicalCore&) = delete;
@@ -30,24 +36,19 @@ public:
PhysicalCore(PhysicalCore&&) = default;
PhysicalCore& operator=(PhysicalCore&&) = default;
/// Execute current jit state
void Run();
/// Execute a single instruction in current jit.
void Step();
/// Stop JIT execution/exit
void Stop();
void Idle();
/// Interrupt this physical core.
void Interrupt();
/// Clear this core's interrupt
void ClearInterrupt();
/// Check if this core is interrupted
bool IsInterrupted() const;
// Shutdown this physical core.
void Shutdown();
Core::ARM_Interface& ArmInterface() {
return *arm_interface;
}
const Core::ARM_Interface& ArmInterface() const {
return *arm_interface;
}
bool IsMainCore() const {
return core_index == 0;
}
@@ -61,21 +62,18 @@ public:
}
Kernel::Scheduler& Scheduler() {
return *scheduler;
return scheduler;
}
const Kernel::Scheduler& Scheduler() const {
return *scheduler;
return scheduler;
}
void SetIs64Bit(bool is_64_bit);
private:
Core::CPUInterruptHandler& interrupt_handler;
std::size_t core_index;
std::unique_ptr<Core::ARM_Interface> arm_interface_32;
std::unique_ptr<Core::ARM_Interface> arm_interface_64;
std::unique_ptr<Kernel::Scheduler> scheduler;
Core::ARM_Interface* arm_interface{};
Kernel::Scheduler& scheduler;
std::unique_ptr<Common::SpinLock> guard;
};
} // namespace Kernel


@@ -22,6 +22,7 @@
#include "core/hle/kernel/resource_limit.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/lock.h"
#include "core/memory.h"
#include "core/settings.h"
@@ -30,14 +31,15 @@ namespace {
/**
* Sets up the primary application thread
*
* @param system The system instance to create the main thread under.
* @param owner_process The parent process for the main thread
* @param kernel The kernel instance to create the main thread under.
* @param priority The priority to give the main thread
*/
void SetupMainThread(Process& owner_process, KernelCore& kernel, u32 priority, VAddr stack_top) {
void SetupMainThread(Core::System& system, Process& owner_process, u32 priority, VAddr stack_top) {
const VAddr entry_point = owner_process.PageTable().GetCodeRegionStart();
auto thread_res = Thread::Create(kernel, "main", entry_point, priority, 0,
owner_process.GetIdealCore(), stack_top, owner_process);
ThreadType type = THREADTYPE_USER;
auto thread_res = Thread::Create(system, type, "main", entry_point, priority, 0,
owner_process.GetIdealCore(), stack_top, &owner_process);
std::shared_ptr<Thread> thread = std::move(thread_res).Unwrap();
@@ -48,8 +50,12 @@ void SetupMainThread(Process& owner_process, KernelCore& kernel, u32 priority, V
thread->GetContext32().cpu_registers[1] = thread_handle;
thread->GetContext64().cpu_registers[1] = thread_handle;
auto& kernel = system.Kernel();
// Threads are dormant by default; wake up the main thread so it runs when the scheduler fires
thread->ResumeFromWait();
{
SchedulerLock lock{kernel};
thread->SetStatus(ThreadStatus::Ready);
}
}
} // Anonymous namespace
@@ -132,7 +138,8 @@ std::shared_ptr<ResourceLimit> Process::GetResourceLimit() const {
u64 Process::GetTotalPhysicalMemoryAvailable() const {
const u64 capacity{resource_limit->GetCurrentResourceValue(ResourceType::PhysicalMemory) +
page_table->GetTotalHeapSize() + image_size + main_thread_stack_size};
page_table->GetTotalHeapSize() + GetSystemResourceSize() + image_size +
main_thread_stack_size};
if (capacity < memory_usage_capacity) {
return capacity;
@@ -146,7 +153,8 @@ u64 Process::GetTotalPhysicalMemoryAvailableWithoutSystemResource() const {
}
u64 Process::GetTotalPhysicalMemoryUsed() const {
return image_size + main_thread_stack_size + page_table->GetTotalHeapSize();
return image_size + main_thread_stack_size + page_table->GetTotalHeapSize() +
GetSystemResourceSize();
}
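// A worked illustration of the accounting change above (sizes are made up): the
// system resource region now counts toward both the available and used totals.
#include <cstdint>

std::uint64_t TotalPhysicalMemoryUsed(std::uint64_t image, std::uint64_t stack,
                                      std::uint64_t heap, std::uint64_t system_resource) {
    // e.g. 0x2000000 + 0x100000 + 0x6000000 + 0x1000000 = 0x9100000 bytes.
    return image + stack + heap + system_resource;
}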
u64 Process::GetTotalPhysicalMemoryUsedWithoutSystemResource() const {
@@ -180,7 +188,6 @@ void Process::RemoveConditionVariableThread(std::shared_ptr<Thread> thread) {
}
++it;
}
UNREACHABLE();
}
std::vector<std::shared_ptr<Thread>> Process::GetConditionVariableThreads(
@@ -205,6 +212,7 @@ void Process::UnregisterThread(const Thread* thread) {
}
ResultCode Process::ClearSignalState() {
SchedulerLock lock(system.Kernel());
if (status == ProcessStatus::Exited) {
LOG_ERROR(Kernel, "called on a terminated process instance.");
return ERR_INVALID_STATE;
@@ -292,7 +300,7 @@ void Process::Run(s32 main_thread_priority, u64 stack_size) {
ChangeStatus(ProcessStatus::Running);
SetupMainThread(*this, kernel, main_thread_priority, main_thread_stack_top);
SetupMainThread(system, *this, main_thread_priority, main_thread_stack_top);
resource_limit->Reserve(ResourceType::Threads, 1);
resource_limit->Reserve(ResourceType::PhysicalMemory, main_thread_stack_size);
}
@@ -338,6 +346,7 @@ static auto FindTLSPageWithAvailableSlots(std::vector<TLSPage>& tls_pages) {
}
VAddr Process::CreateTLSRegion() {
SchedulerLock lock(system.Kernel());
if (auto tls_page_iter{FindTLSPageWithAvailableSlots(tls_pages)};
tls_page_iter != tls_pages.cend()) {
return *tls_page_iter->ReserveSlot();
@@ -368,6 +377,7 @@ VAddr Process::CreateTLSRegion() {
}
void Process::FreeTLSRegion(VAddr tls_address) {
SchedulerLock lock(system.Kernel());
const VAddr aligned_address = Common::AlignDown(tls_address, Core::Memory::PAGE_SIZE);
auto iter =
std::find_if(tls_pages.begin(), tls_pages.end(), [aligned_address](const auto& page) {
@@ -382,6 +392,7 @@ void Process::FreeTLSRegion(VAddr tls_address) {
}
void Process::LoadModule(CodeSet code_set, VAddr base_addr) {
std::lock_guard lock{HLE::g_hle_lock};
const auto ReprotectSegment = [&](const CodeSet::Segment& segment,
Memory::MemoryPermission permission) {
page_table->SetCodeMemoryPermission(segment.addr + base_addr, segment.size, permission);


@@ -6,8 +6,10 @@
#include "common/assert.h"
#include "common/logging/log.h"
#include "core/hle/kernel/errors.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/readable_event.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/thread.h"
namespace Kernel {
@@ -37,6 +39,7 @@ void ReadableEvent::Clear() {
}
ResultCode ReadableEvent::Reset() {
SchedulerLock lock(kernel);
if (!is_signaled) {
LOG_TRACE(Kernel, "Handle is not signaled! object_id={}, object_type={}, object_name={}",
GetObjectId(), GetTypeName(), GetName());


@@ -11,11 +11,15 @@
#include <utility>
#include "common/assert.h"
#include "common/bit_util.h"
#include "common/fiber.h"
#include "common/logging/log.h"
#include "core/arm/arm_interface.h"
#include "core/core.h"
#include "core/core_timing.h"
#include "core/cpu_manager.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/physical_core.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/time_manager.h"
@@ -27,103 +31,151 @@ GlobalScheduler::GlobalScheduler(KernelCore& kernel) : kernel{kernel} {}
GlobalScheduler::~GlobalScheduler() = default;
void GlobalScheduler::AddThread(std::shared_ptr<Thread> thread) {
global_list_guard.lock();
thread_list.push_back(std::move(thread));
global_list_guard.unlock();
}
void GlobalScheduler::RemoveThread(std::shared_ptr<Thread> thread) {
global_list_guard.lock();
thread_list.erase(std::remove(thread_list.begin(), thread_list.end(), thread),
thread_list.end());
global_list_guard.unlock();
}
void GlobalScheduler::UnloadThread(std::size_t core) {
Scheduler& sched = kernel.Scheduler(core);
sched.UnloadThread();
}
void GlobalScheduler::SelectThread(std::size_t core) {
u32 GlobalScheduler::SelectThreads() {
ASSERT(is_locked);
const auto update_thread = [](Thread* thread, Scheduler& sched) {
if (thread != sched.selected_thread.get()) {
sched.guard.lock();
if (thread != sched.selected_thread_set.get()) {
if (thread == nullptr) {
++sched.idle_selection_count;
}
sched.selected_thread = SharedFrom(thread);
sched.selected_thread_set = SharedFrom(thread);
}
sched.is_context_switch_pending = sched.selected_thread != sched.current_thread;
const bool reschedule_pending =
sched.is_context_switch_pending || (sched.selected_thread_set != sched.current_thread);
sched.is_context_switch_pending = reschedule_pending;
std::atomic_thread_fence(std::memory_order_seq_cst);
sched.guard.unlock();
return reschedule_pending;
};
Scheduler& sched = kernel.Scheduler(core);
Thread* current_thread = nullptr;
if (!is_reselection_pending.load()) {
return 0;
}
std::array<Thread*, Core::Hardware::NUM_CPU_CORES> top_threads{};
u32 idle_cores{};
// Step 1: Get top thread in schedule queue.
current_thread = scheduled_queue[core].empty() ? nullptr : scheduled_queue[core].front();
if (current_thread) {
update_thread(current_thread, sched);
return;
}
// Step 2: Try selecting a suggested thread.
Thread* winner = nullptr;
std::set<s32> sug_cores;
for (auto thread : suggested_queue[core]) {
s32 this_core = thread->GetProcessorID();
Thread* thread_on_core = nullptr;
if (this_core >= 0) {
thread_on_core = scheduled_queue[this_core].front();
for (u32 core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
Thread* top_thread =
scheduled_queue[core].empty() ? nullptr : scheduled_queue[core].front();
if (top_thread != nullptr) {
// TODO(Blinkhawk): Implement Thread Pinning
} else {
idle_cores |= (1ul << core);
}
if (this_core < 0 || thread != thread_on_core) {
winner = thread;
break;
}
sug_cores.insert(this_core);
top_threads[core] = top_thread;
}
// If we got a suggested thread, select it; otherwise do a second pass.
if (winner && winner->GetPriority() > 2) {
if (winner->IsRunning()) {
UnloadThread(static_cast<u32>(winner->GetProcessorID()));
}
TransferToCore(winner->GetPriority(), static_cast<s32>(core), winner);
update_thread(winner, sched);
return;
}
// Step 3: Select a suggested thread from another core
for (auto& src_core : sug_cores) {
auto it = scheduled_queue[src_core].begin();
it++;
if (it != scheduled_queue[src_core].end()) {
Thread* thread_on_core = scheduled_queue[src_core].front();
Thread* to_change = *it;
if (thread_on_core->IsRunning() || to_change->IsRunning()) {
UnloadThread(static_cast<u32>(src_core));
while (idle_cores != 0) {
u32 core_id = Common::CountTrailingZeroes32(idle_cores);
if (!suggested_queue[core_id].empty()) {
std::array<s32, Core::Hardware::NUM_CPU_CORES> migration_candidates{};
std::size_t num_candidates = 0;
auto iter = suggested_queue[core_id].begin();
Thread* suggested = nullptr;
// Step 2: Try selecting a suggested thread.
while (iter != suggested_queue[core_id].end()) {
suggested = *iter;
iter++;
s32 suggested_core_id = suggested->GetProcessorID();
Thread* top_thread =
suggested_core_id >= 0 ? top_threads[suggested_core_id] : nullptr;
if (top_thread != suggested) {
if (top_thread != nullptr &&
top_thread->GetPriority() < THREADPRIO_MAX_CORE_MIGRATION) {
suggested = nullptr;
break;
// The core's top thread has too high a priority to allow core migration; cancel
}
TransferToCore(suggested->GetPriority(), static_cast<s32>(core_id), suggested);
break;
}
suggested = nullptr;
migration_candidates[num_candidates++] = suggested_core_id;
}
TransferToCore(thread_on_core->GetPriority(), static_cast<s32>(core), thread_on_core);
current_thread = thread_on_core;
break;
// Step 3: Select a suggested thread from another core
if (suggested == nullptr) {
for (std::size_t i = 0; i < num_candidates; i++) {
s32 candidate_core = migration_candidates[i];
suggested = top_threads[candidate_core];
auto it = scheduled_queue[candidate_core].begin();
it++;
Thread* next = it != scheduled_queue[candidate_core].end() ? *it : nullptr;
if (next != nullptr) {
TransferToCore(suggested->GetPriority(), static_cast<s32>(core_id),
suggested);
top_threads[candidate_core] = next;
break;
} else {
suggested = nullptr;
}
}
}
top_threads[core_id] = suggested;
}
idle_cores &= ~(1ul << core_id);
}
u32 cores_needing_context_switch{};
for (u32 core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
Scheduler& sched = kernel.Scheduler(core);
ASSERT(top_threads[core] == nullptr || top_threads[core]->GetProcessorID() == core);
if (update_thread(top_threads[core], sched)) {
cores_needing_context_switch |= (1ul << core);
}
}
update_thread(current_thread, sched);
return cores_needing_context_switch;
}
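// A heavily simplified model of SelectThreads() above: plain vectors stand in
// for the priority-ordered MultiLevelQueue, thread pinning and the suggested
// queue migration (steps 2 and 3) are omitted, and threads are just ints.
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr std::size_t kNumCores = 4;

// Returns a bitmask of cores whose selected thread changed and therefore need
// a context switch, mirroring the return value of SelectThreads().
std::uint32_t SelectTopThreads(const std::array<std::vector<int>, kNumCores>& scheduled,
                               std::array<int, kNumCores>& selected) {
    std::uint32_t cores_needing_switch = 0;
    for (std::size_t core = 0; core < kNumCores; ++core) {
        // Step 1: take the front of this core's queue, or -1 if it is idle.
        const int top = scheduled[core].empty() ? -1 : scheduled[core].front();
        if (top != selected[core]) {
            selected[core] = top;
            cores_needing_switch |= 1u << core;
        }
    }
    return cores_needing_switch;
}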
bool GlobalScheduler::YieldThread(Thread* yielding_thread) {
ASSERT(is_locked);
// Note: caller should use critical section, etc.
if (!yielding_thread->IsRunnable()) {
// Normally this case shouldn't happen except for SetThreadActivity.
is_reselection_pending.store(true, std::memory_order_release);
return false;
}
const u32 core_id = static_cast<u32>(yielding_thread->GetProcessorID());
const u32 priority = yielding_thread->GetPriority();
// Yield the thread
const Thread* const winner = scheduled_queue[core_id].front(priority);
ASSERT_MSG(yielding_thread == winner, "Thread yielding without being in front");
scheduled_queue[core_id].yield(priority);
Reschedule(priority, core_id, yielding_thread);
const Thread* const winner = scheduled_queue[core_id].front();
if (kernel.GetCurrentHostThreadID() != core_id) {
is_reselection_pending.store(true, std::memory_order_release);
}
return AskForReselectionOrMarkRedundant(yielding_thread, winner);
}
bool GlobalScheduler::YieldThreadAndBalanceLoad(Thread* yielding_thread) {
ASSERT(is_locked);
// Note: caller should check if !thread.IsSchedulerOperationRedundant and use critical section,
// etc.
if (!yielding_thread->IsRunnable()) {
// Normally this case shouldn't happen except for SetThreadActivity.
is_reselection_pending.store(true, std::memory_order_release);
return false;
}
const u32 core_id = static_cast<u32>(yielding_thread->GetProcessorID());
const u32 priority = yielding_thread->GetPriority();
// Yield the thread
ASSERT_MSG(yielding_thread == scheduled_queue[core_id].front(priority),
"Thread yielding without being in front");
scheduled_queue[core_id].yield(priority);
Reschedule(priority, core_id, yielding_thread);
std::array<Thread*, Core::Hardware::NUM_CPU_CORES> current_threads;
for (std::size_t i = 0; i < current_threads.size(); i++) {
@@ -153,21 +205,28 @@ bool GlobalScheduler::YieldThreadAndBalanceLoad(Thread* yielding_thread) {
if (winner != nullptr) {
if (winner != yielding_thread) {
if (winner->IsRunning()) {
UnloadThread(static_cast<u32>(winner->GetProcessorID()));
}
TransferToCore(winner->GetPriority(), s32(core_id), winner);
}
} else {
winner = next_thread;
}
if (kernel.GetCurrentHostThreadID() != core_id) {
is_reselection_pending.store(true, std::memory_order_release);
}
return AskForReselectionOrMarkRedundant(yielding_thread, winner);
}
bool GlobalScheduler::YieldThreadAndWaitForLoadBalancing(Thread* yielding_thread) {
ASSERT(is_locked);
// Note: caller should check if !thread.IsSchedulerOperationRedundant and use critical section,
// etc.
if (!yielding_thread->IsRunnable()) {
// Normally this case shouldn't happen except for SetThreadActivity.
is_reselection_pending.store(true, std::memory_order_release);
return false;
}
Thread* winner = nullptr;
const u32 core_id = static_cast<u32>(yielding_thread->GetProcessorID());
@@ -195,25 +254,31 @@ bool GlobalScheduler::YieldThreadAndWaitForLoadBalancing(Thread* yielding_thread
}
if (winner != nullptr) {
if (winner != yielding_thread) {
if (winner->IsRunning()) {
UnloadThread(static_cast<u32>(winner->GetProcessorID()));
}
TransferToCore(winner->GetPriority(), static_cast<s32>(core_id), winner);
}
} else {
winner = yielding_thread;
}
} else {
winner = scheduled_queue[core_id].front();
}
if (kernel.GetCurrentHostThreadID() != core_id) {
is_reselection_pending.store(true, std::memory_order_release);
}
return AskForReselectionOrMarkRedundant(yielding_thread, winner);
}
void GlobalScheduler::PreemptThreads() {
ASSERT(is_locked);
for (std::size_t core_id = 0; core_id < Core::Hardware::NUM_CPU_CORES; core_id++) {
const u32 priority = preemption_priorities[core_id];
if (scheduled_queue[core_id].size(priority) > 0) {
scheduled_queue[core_id].front(priority)->IncrementYieldCount();
if (scheduled_queue[core_id].size(priority) > 1) {
scheduled_queue[core_id].front(priority)->IncrementYieldCount();
}
scheduled_queue[core_id].yield(priority);
if (scheduled_queue[core_id].size(priority) > 1) {
scheduled_queue[core_id].front(priority)->IncrementYieldCount();
@@ -247,9 +312,6 @@ void GlobalScheduler::PreemptThreads() {
}
if (winner != nullptr) {
if (winner->IsRunning()) {
UnloadThread(static_cast<u32>(winner->GetProcessorID()));
}
TransferToCore(winner->GetPriority(), s32(core_id), winner);
current_thread =
winner->GetPriority() <= current_thread->GetPriority() ? winner : current_thread;
@@ -280,9 +342,6 @@ void GlobalScheduler::PreemptThreads() {
}
if (winner != nullptr) {
if (winner->IsRunning()) {
UnloadThread(static_cast<u32>(winner->GetProcessorID()));
}
TransferToCore(winner->GetPriority(), s32(core_id), winner);
current_thread = winner;
}
@@ -292,34 +351,65 @@ void GlobalScheduler::PreemptThreads() {
}
}
void GlobalScheduler::EnableInterruptAndSchedule(u32 cores_pending_reschedule,
Core::EmuThreadHandle global_thread) {
u32 current_core = global_thread.host_handle;
bool must_context_switch = global_thread.guest_handle != InvalidHandle &&
(current_core < Core::Hardware::NUM_CPU_CORES);
while (cores_pending_reschedule != 0) {
u32 core = Common::CountTrailingZeroes32(cores_pending_reschedule);
ASSERT(core < Core::Hardware::NUM_CPU_CORES);
if (!must_context_switch || core != current_core) {
auto& phys_core = kernel.PhysicalCore(core);
phys_core.Interrupt();
} else {
must_context_switch = true;
}
cores_pending_reschedule &= ~(1ul << core);
}
if (must_context_switch) {
auto& core_scheduler = kernel.CurrentScheduler();
kernel.ExitSVCProfile();
core_scheduler.TryDoContextSwitch();
kernel.EnterSVCProfile();
}
}
void GlobalScheduler::Suggest(u32 priority, std::size_t core, Thread* thread) {
ASSERT(is_locked);
suggested_queue[core].add(thread, priority);
}
void GlobalScheduler::Unsuggest(u32 priority, std::size_t core, Thread* thread) {
ASSERT(is_locked);
suggested_queue[core].remove(thread, priority);
}
void GlobalScheduler::Schedule(u32 priority, std::size_t core, Thread* thread) {
ASSERT(is_locked);
ASSERT_MSG(thread->GetProcessorID() == s32(core), "Thread must be assigned to this core.");
scheduled_queue[core].add(thread, priority);
}
void GlobalScheduler::SchedulePrepend(u32 priority, std::size_t core, Thread* thread) {
ASSERT(is_locked);
ASSERT_MSG(thread->GetProcessorID() == s32(core), "Thread must be assigned to this core.");
scheduled_queue[core].add(thread, priority, false);
}
void GlobalScheduler::Reschedule(u32 priority, std::size_t core, Thread* thread) {
ASSERT(is_locked);
scheduled_queue[core].remove(thread, priority);
scheduled_queue[core].add(thread, priority);
}
void GlobalScheduler::Unschedule(u32 priority, std::size_t core, Thread* thread) {
ASSERT(is_locked);
scheduled_queue[core].remove(thread, priority);
}
void GlobalScheduler::TransferToCore(u32 priority, s32 destination_core, Thread* thread) {
ASSERT(is_locked);
const bool schedulable = thread->GetPriority() < THREADPRIO_COUNT;
const s32 source_core = thread->GetProcessorID();
if (source_core == destination_core || !schedulable) {
@@ -349,6 +439,108 @@ bool GlobalScheduler::AskForReselectionOrMarkRedundant(Thread* current_thread,
}
}
void GlobalScheduler::AdjustSchedulingOnStatus(Thread* thread, u32 old_flags) {
if (old_flags == thread->scheduling_state) {
return;
}
ASSERT(is_locked);
if (old_flags == static_cast<u32>(ThreadSchedStatus::Runnable)) {
// In this case the thread was running and is now pausing/exiting
if (thread->processor_id >= 0) {
Unschedule(thread->current_priority, static_cast<u32>(thread->processor_id), thread);
}
for (u32 core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
if (core != static_cast<u32>(thread->processor_id) &&
((thread->affinity_mask >> core) & 1) != 0) {
Unsuggest(thread->current_priority, core, thread);
}
}
} else if (thread->scheduling_state == static_cast<u32>(ThreadSchedStatus::Runnable)) {
// The thread is now set to running from being stopped
if (thread->processor_id >= 0) {
Schedule(thread->current_priority, static_cast<u32>(thread->processor_id), thread);
}
for (u32 core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
if (core != static_cast<u32>(thread->processor_id) &&
((thread->affinity_mask >> core) & 1) != 0) {
Suggest(thread->current_priority, core, thread);
}
}
}
SetReselectionPending();
}
void GlobalScheduler::AdjustSchedulingOnPriority(Thread* thread, u32 old_priority) {
if (thread->scheduling_state != static_cast<u32>(ThreadSchedStatus::Runnable)) {
return;
}
ASSERT(is_locked);
if (thread->processor_id >= 0) {
Unschedule(old_priority, static_cast<u32>(thread->processor_id), thread);
}
for (u32 core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
if (core != static_cast<u32>(thread->processor_id) &&
((thread->affinity_mask >> core) & 1) != 0) {
Unsuggest(old_priority, core, thread);
}
}
if (thread->processor_id >= 0) {
if (thread == kernel.CurrentScheduler().GetCurrentThread()) {
SchedulePrepend(thread->current_priority, static_cast<u32>(thread->processor_id),
thread);
} else {
Schedule(thread->current_priority, static_cast<u32>(thread->processor_id), thread);
}
}
for (u32 core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
if (core != static_cast<u32>(thread->processor_id) &&
((thread->affinity_mask >> core) & 1) != 0) {
Suggest(thread->current_priority, core, thread);
}
}
thread->IncrementYieldCount();
SetReselectionPending();
}
void GlobalScheduler::AdjustSchedulingOnAffinity(Thread* thread, u64 old_affinity_mask,
s32 old_core) {
if (thread->scheduling_state != static_cast<u32>(ThreadSchedStatus::Runnable) ||
thread->current_priority >= THREADPRIO_COUNT) {
return;
}
ASSERT(is_locked);
for (u32 core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
if (((old_affinity_mask >> core) & 1) != 0) {
if (core == static_cast<u32>(old_core)) {
Unschedule(thread->current_priority, core, thread);
} else {
Unsuggest(thread->current_priority, core, thread);
}
}
}
for (u32 core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
if (((thread->affinity_mask >> core) & 1) != 0) {
if (core == static_cast<u32>(thread->processor_id)) {
Schedule(thread->current_priority, core, thread);
} else {
Suggest(thread->current_priority, core, thread);
}
}
}
thread->IncrementYieldCount();
SetReselectionPending();
}
void GlobalScheduler::Shutdown() {
for (std::size_t core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
scheduled_queue[core].clear();
@@ -359,10 +551,12 @@ void GlobalScheduler::Shutdown() {
void GlobalScheduler::Lock() {
Core::EmuThreadHandle current_thread = kernel.GetCurrentEmuThreadID();
ASSERT(!current_thread.IsInvalid());
if (current_thread == current_owner) {
++scope_lock;
} else {
inner_lock.lock();
is_locked = true;
current_owner = current_thread;
ASSERT(current_owner != Core::EmuThreadHandle::InvalidHandle());
scope_lock = 1;
@@ -374,17 +568,18 @@ void GlobalScheduler::Unlock() {
ASSERT(scope_lock > 0);
return;
}
for (std::size_t i = 0; i < Core::Hardware::NUM_CPU_CORES; i++) {
SelectThread(i);
}
u32 cores_pending_reschedule = SelectThreads();
Core::EmuThreadHandle leaving_thread = current_owner;
current_owner = Core::EmuThreadHandle::InvalidHandle();
scope_lock = 1;
is_locked = false;
inner_lock.unlock();
// TODO(Blinkhawk): Setup the interrupts and change context on current core.
EnableInterruptAndSchedule(cores_pending_reschedule, leaving_thread);
}
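// A minimal model of the re-entrant lock pattern used by Lock()/Unlock() above:
// the owning thread may nest Lock() calls, and only the outermost Unlock()
// triggers thread selection. EmuThreadHandle is reduced to a plain int here.
#include <mutex>

class RecursiveSchedulerLock {
public:
    void Lock(int thread_id) {
        if (thread_id == owner) {
            ++depth; // Re-entrant acquisition by the current owner.
            return;
        }
        inner.lock();
        owner = thread_id;
        depth = 1;
    }

    // Returns true when this call actually released the lock, i.e. when the
    // caller should now select threads and interrupt the affected cores.
    bool Unlock(int thread_id) {
        if (thread_id != owner || --depth > 0) {
            return false;
        }
        owner = -1;
        inner.unlock();
        return true;
    }

private:
    std::mutex inner;
    int owner = -1;
    int depth = 0;
};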
Scheduler::Scheduler(Core::System& system, std::size_t core_id)
: system{system}, core_id{core_id} {}
Scheduler::Scheduler(Core::System& system, std::size_t core_id) : system(system), core_id(core_id) {
switch_fiber = std::make_shared<Common::Fiber>(std::function<void(void*)>(OnSwitch), this);
}
Scheduler::~Scheduler() = default;
@@ -393,56 +588,128 @@ bool Scheduler::HaveReadyThreads() const {
}
Thread* Scheduler::GetCurrentThread() const {
return current_thread.get();
if (current_thread) {
return current_thread.get();
}
return idle_thread.get();
}
Thread* Scheduler::GetSelectedThread() const {
return selected_thread.get();
}
void Scheduler::SelectThreads() {
system.GlobalScheduler().SelectThread(core_id);
}
u64 Scheduler::GetLastContextSwitchTicks() const {
return last_context_switch_time;
}
void Scheduler::TryDoContextSwitch() {
auto& phys_core = system.Kernel().CurrentPhysicalCore();
if (phys_core.IsInterrupted()) {
phys_core.ClearInterrupt();
}
guard.lock();
if (is_context_switch_pending) {
SwitchContext();
} else {
guard.unlock();
}
}
void Scheduler::UnloadThread() {
Thread* const previous_thread = GetCurrentThread();
Process* const previous_process = system.Kernel().CurrentProcess();
void Scheduler::OnThreadStart() {
SwitchContextStep2();
}
UpdateLastContextSwitchTime(previous_thread, previous_process);
// Save context for previous thread
if (previous_thread) {
system.ArmInterface(core_id).SaveContext(previous_thread->GetContext32());
system.ArmInterface(core_id).SaveContext(previous_thread->GetContext64());
// Save the TPIDR_EL0 system register in case it was modified.
previous_thread->SetTPIDR_EL0(system.ArmInterface(core_id).GetTPIDR_EL0());
if (previous_thread->GetStatus() == ThreadStatus::Running) {
// This is only the case when a reschedule is triggered without the current thread
// yielding execution (i.e. an event triggered, system core time-sliced, etc)
previous_thread->SetStatus(ThreadStatus::Ready);
void Scheduler::Unload() {
Thread* thread = current_thread.get();
if (thread) {
thread->SetContinuousOnSVC(false);
thread->last_running_ticks = system.CoreTiming().GetCPUTicks();
thread->SetIsRunning(false);
if (!thread->IsHLEThread() && !thread->HasExited()) {
Core::ARM_Interface& cpu_core = thread->ArmInterface();
cpu_core.SaveContext(thread->GetContext32());
cpu_core.SaveContext(thread->GetContext64());
// Save the TPIDR_EL0 system register in case it was modified.
thread->SetTPIDR_EL0(cpu_core.GetTPIDR_EL0());
cpu_core.ClearExclusiveState();
}
previous_thread->SetIsRunning(false);
thread->context_guard.unlock();
}
current_thread = nullptr;
}
void Scheduler::Reload() {
Thread* thread = current_thread.get();
if (thread) {
ASSERT_MSG(thread->GetSchedulingStatus() == ThreadSchedStatus::Runnable,
"Thread must be runnable.");
// Cancel any outstanding wakeup events for this thread
thread->SetIsRunning(true);
thread->SetWasRunning(false);
thread->last_running_ticks = system.CoreTiming().GetCPUTicks();
auto* const thread_owner_process = thread->GetOwnerProcess();
if (thread_owner_process != nullptr) {
system.Kernel().MakeCurrentProcess(thread_owner_process);
}
if (!thread->IsHLEThread()) {
Core::ARM_Interface& cpu_core = thread->ArmInterface();
cpu_core.LoadContext(thread->GetContext32());
cpu_core.LoadContext(thread->GetContext64());
cpu_core.SetTlsAddress(thread->GetTLSAddress());
cpu_core.SetTPIDR_EL0(thread->GetTPIDR_EL0());
cpu_core.ChangeProcessorID(this->core_id);
cpu_core.ClearExclusiveState();
}
}
}
void Scheduler::SwitchContextStep2() {
Thread* previous_thread = current_thread_prev.get();
Thread* new_thread = selected_thread.get();
// Load context of new thread
Process* const previous_process =
previous_thread != nullptr ? previous_thread->GetOwnerProcess() : nullptr;
if (new_thread) {
ASSERT_MSG(new_thread->GetSchedulingStatus() == ThreadSchedStatus::Runnable,
"Thread must be runnable.");
// Cancel any outstanding wakeup events for this thread
new_thread->SetIsRunning(true);
new_thread->last_running_ticks = system.CoreTiming().GetCPUTicks();
new_thread->SetWasRunning(false);
auto* const thread_owner_process = current_thread->GetOwnerProcess();
if (thread_owner_process != nullptr) {
system.Kernel().MakeCurrentProcess(thread_owner_process);
}
if (!new_thread->IsHLEThread()) {
Core::ARM_Interface& cpu_core = new_thread->ArmInterface();
cpu_core.LoadContext(new_thread->GetContext32());
cpu_core.LoadContext(new_thread->GetContext64());
cpu_core.SetTlsAddress(new_thread->GetTLSAddress());
cpu_core.SetTPIDR_EL0(new_thread->GetTPIDR_EL0());
cpu_core.ChangeProcessorID(this->core_id);
cpu_core.ClearExclusiveState();
}
}
TryDoContextSwitch();
}
void Scheduler::SwitchContext() {
Thread* const previous_thread = GetCurrentThread();
Thread* const new_thread = GetSelectedThread();
current_thread_prev = current_thread;
selected_thread = selected_thread_set;
Thread* previous_thread = current_thread_prev.get();
Thread* new_thread = selected_thread.get();
current_thread = selected_thread;
is_context_switch_pending = false;
if (new_thread == previous_thread) {
guard.unlock();
return;
}
@@ -452,51 +719,75 @@ void Scheduler::SwitchContext() {
// Save context for previous thread
if (previous_thread) {
system.ArmInterface(core_id).SaveContext(previous_thread->GetContext32());
system.ArmInterface(core_id).SaveContext(previous_thread->GetContext64());
// Save the TPIDR_EL0 system register in case it was modified.
previous_thread->SetTPIDR_EL0(system.ArmInterface(core_id).GetTPIDR_EL0());
if (previous_thread->GetStatus() == ThreadStatus::Running) {
// This is only the case when a reschedule is triggered without the current thread
// yielding execution (i.e. an event triggered, system core time-sliced, etc)
previous_thread->SetStatus(ThreadStatus::Ready);
if (new_thread != nullptr && new_thread->IsSuspendThread()) {
previous_thread->SetWasRunning(true);
}
previous_thread->SetContinuousOnSVC(false);
previous_thread->last_running_ticks = system.CoreTiming().GetCPUTicks();
previous_thread->SetIsRunning(false);
if (!previous_thread->IsHLEThread() && !previous_thread->HasExited()) {
Core::ARM_Interface& cpu_core = previous_thread->ArmInterface();
cpu_core.SaveContext(previous_thread->GetContext32());
cpu_core.SaveContext(previous_thread->GetContext64());
// Save the TPIDR_EL0 system register in case it was modified.
previous_thread->SetTPIDR_EL0(cpu_core.GetTPIDR_EL0());
cpu_core.ClearExclusiveState();
}
previous_thread->context_guard.unlock();
}
// Load context of new thread
if (new_thread) {
ASSERT_MSG(new_thread->GetProcessorID() == s32(this->core_id),
"Thread must be assigned to this core.");
ASSERT_MSG(new_thread->GetStatus() == ThreadStatus::Ready,
"Thread must be ready to become running.");
// Cancel any outstanding wakeup events for this thread
new_thread->CancelWakeupTimer();
current_thread = SharedFrom(new_thread);
new_thread->SetStatus(ThreadStatus::Running);
new_thread->SetIsRunning(true);
auto* const thread_owner_process = current_thread->GetOwnerProcess();
if (previous_process != thread_owner_process) {
system.Kernel().MakeCurrentProcess(thread_owner_process);
}
system.ArmInterface(core_id).LoadContext(new_thread->GetContext32());
system.ArmInterface(core_id).LoadContext(new_thread->GetContext64());
system.ArmInterface(core_id).SetTlsAddress(new_thread->GetTLSAddress());
system.ArmInterface(core_id).SetTPIDR_EL0(new_thread->GetTPIDR_EL0());
std::shared_ptr<Common::Fiber>* old_context;
if (previous_thread != nullptr) {
old_context = &previous_thread->GetHostContext();
} else {
current_thread = nullptr;
// Note: We do not reset the current process and current page table when idling because
// technically we haven't changed processes, our threads are just paused.
old_context = &idle_thread->GetHostContext();
}
guard.unlock();
Common::Fiber::YieldTo(*old_context, switch_fiber);
/// When a thread wakes up, execution may have moved to a different scheduler on another core.
auto& next_scheduler = system.Kernel().CurrentScheduler();
next_scheduler.SwitchContextStep2();
}
void Scheduler::OnSwitch(void* this_scheduler) {
Scheduler* sched = static_cast<Scheduler*>(this_scheduler);
sched->SwitchToCurrent();
}
void Scheduler::SwitchToCurrent() {
while (true) {
guard.lock();
selected_thread = selected_thread_set;
current_thread = selected_thread;
is_context_switch_pending = false;
guard.unlock();
while (!is_context_switch_pending) {
if (current_thread != nullptr && !current_thread->IsHLEThread()) {
current_thread->context_guard.lock();
if (!current_thread->IsRunnable()) {
current_thread->context_guard.unlock();
break;
}
if (current_thread->GetProcessorID() != core_id) {
current_thread->context_guard.unlock();
break;
}
}
std::shared_ptr<Common::Fiber>* next_context;
if (current_thread != nullptr) {
next_context = &current_thread->GetHostContext();
} else {
next_context = &idle_thread->GetHostContext();
}
Common::Fiber::YieldTo(switch_fiber, *next_context);
}
}
}
void Scheduler::UpdateLastContextSwitchTime(Thread* thread, Process* process) {
const u64 prev_switch_ticks = last_context_switch_time;
const u64 most_recent_switch_ticks = system.CoreTiming().GetTicks();
const u64 most_recent_switch_ticks = system.CoreTiming().GetCPUTicks();
const u64 update_ticks = most_recent_switch_ticks - prev_switch_ticks;
if (thread != nullptr) {
@@ -510,6 +801,16 @@ void Scheduler::UpdateLastContextSwitchTime(Thread* thread, Process* process) {
last_context_switch_time = most_recent_switch_ticks;
}
void Scheduler::Initialize() {
std::string name = "Idle Thread Id:" + std::to_string(core_id);
std::function<void(void*)> init_func = system.GetCpuManager().GetIdleThreadStartFunc();
void* init_func_parameter = system.GetCpuManager().GetStartFuncParamater();
ThreadType type = static_cast<ThreadType>(THREADTYPE_KERNEL | THREADTYPE_HLE | THREADTYPE_IDLE);
auto thread_res = Thread::Create(system, type, name, 0, 64, 0, static_cast<u32>(core_id), 0,
nullptr, std::move(init_func), init_func_parameter);
idle_thread = std::move(thread_res).Unwrap();
}
void Scheduler::Shutdown() {
current_thread = nullptr;
selected_thread = nullptr;
@@ -538,4 +839,13 @@ SchedulerLockAndSleep::~SchedulerLockAndSleep() {
time_manager.ScheduleTimeEvent(event_handle, time_task, nanoseconds);
}
void SchedulerLockAndSleep::Release() {
if (sleep_cancelled) {
return;
}
auto& time_manager = kernel.TimeManager();
time_manager.ScheduleTimeEvent(event_handle, time_task, nanoseconds);
sleep_cancelled = true;
}
} // namespace Kernel


@@ -11,9 +11,14 @@
#include "common/common_types.h"
#include "common/multi_level_queue.h"
#include "common/spin_lock.h"
#include "core/hardware_properties.h"
#include "core/hle/kernel/thread.h"
namespace Common {
class Fiber;
}
namespace Core {
class ARM_Interface;
class System;
@@ -41,41 +46,17 @@ public:
return thread_list;
}
/**
* Add a thread to the suggested queue of a cpu core. Suggested threads may be
* picked if no thread is scheduled to run on the core.
*/
void Suggest(u32 priority, std::size_t core, Thread* thread);
/// Notify the scheduler a thread's status has changed.
void AdjustSchedulingOnStatus(Thread* thread, u32 old_flags);
/// Notify the scheduler a thread's priority has changed.
void AdjustSchedulingOnPriority(Thread* thread, u32 old_priority);
/// Notify the scheduler a thread's core and/or affinity mask has changed.
void AdjustSchedulingOnAffinity(Thread* thread, u64 old_affinity_mask, s32 old_core);
/**
* Remove a thread from the suggested queue of a cpu core. Suggested threads may be
* picked if no thread is scheduled to run on the core.
*/
void Unsuggest(u32 priority, std::size_t core, Thread* thread);
/**
* Add a thread to the scheduling queue of a cpu core. The thread is added at the
* back of the queue in its priority level.
*/
void Schedule(u32 priority, std::size_t core, Thread* thread);
/**
* Add a thread to the scheduling queue of a cpu core. The thread is added at the
* front of the queue in its priority level.
*/
void SchedulePrepend(u32 priority, std::size_t core, Thread* thread);
/// Reschedule an already scheduled thread based on a new priority
void Reschedule(u32 priority, std::size_t core, Thread* thread);
/// Unschedules a thread.
void Unschedule(u32 priority, std::size_t core, Thread* thread);
/// Selects a core and forces it to unload its current thread's context
void UnloadThread(std::size_t core);
/**
* Takes care of selecting the new scheduled thread in three steps:
* Takes care of selecting the new scheduled threads in three steps:
*
* 1. First a thread is selected from the top of the priority queue. If no thread
* is obtained then we move to step two, else we are done.
@@ -85,8 +66,10 @@ public:
*
* 3. Third, if no suggested thread is found, we do a second pass and pick a running
* thread in another core and swap it with its current thread.
*
* returns the cores needing scheduling.
*/
void SelectThread(std::size_t core);
u32 SelectThreads();
bool HaveReadyThreads(std::size_t core_id) const {
return !scheduled_queue[core_id].empty();
@@ -149,6 +132,40 @@ private:
/// Unlocks the scheduler, reselects threads, interrupts cores for rescheduling
/// and reschedules current core if needed.
void Unlock();
void EnableInterruptAndSchedule(u32 cores_pending_reschedule,
Core::EmuThreadHandle global_thread);
/**
* Add a thread to the suggested queue of a cpu core. Suggested threads may be
* picked if no thread is scheduled to run on the core.
*/
void Suggest(u32 priority, std::size_t core, Thread* thread);
/**
* Remove a thread from the suggested queue of a cpu core. Suggested threads may be
* picked if no thread is scheduled to run on the core.
*/
void Unsuggest(u32 priority, std::size_t core, Thread* thread);
/**
* Add a thread to the scheduling queue of a cpu core. The thread is added at the
* back of the queue in its priority level.
*/
void Schedule(u32 priority, std::size_t core, Thread* thread);
/**
* Add a thread to the scheduling queue of a cpu core. The thread is added at the
* front of the queue in its priority level.
*/
void SchedulePrepend(u32 priority, std::size_t core, Thread* thread);
/// Reschedule an already scheduled thread based on a new priority
void Reschedule(u32 priority, std::size_t core, Thread* thread);
/// Unschedules a thread.
void Unschedule(u32 priority, std::size_t core, Thread* thread);
/**
* Transfers a thread into a specific core. If the destination_core is -1
* it will be unscheduled from its source core and added into its suggested
@@ -170,10 +187,13 @@ private:
std::array<u32, Core::Hardware::NUM_CPU_CORES> preemption_priorities = {59, 59, 59, 62};
/// Scheduler lock mechanisms.
std::mutex inner_lock{}; // TODO(Blinkhawk): Replace for a SpinLock
bool is_locked{};
Common::SpinLock inner_lock{};
std::atomic<s64> scope_lock{};
Core::EmuThreadHandle current_owner{Core::EmuThreadHandle::InvalidHandle()};
Common::SpinLock global_list_guard{};
/// Lists all thread ids that aren't deleted/etc.
std::vector<std::shared_ptr<Thread>> thread_list;
KernelCore& kernel;
@@ -190,11 +210,11 @@ public:
/// Reschedules to the next available thread (call after current thread is suspended)
void TryDoContextSwitch();
/// Unloads currently running thread
void UnloadThread();
/// Selects the threads at the top of the scheduling multilist.
void SelectThreads();
/// The next two are for single-core only.
/// Unload current thread before preempting core.
void Unload();
/// Reload current thread after core preemption.
void Reload();
/// Gets the current running thread
Thread* GetCurrentThread() const;
@@ -209,15 +229,30 @@ public:
return is_context_switch_pending;
}
void Initialize();
/// Shuts down the scheduler.
void Shutdown();
void OnThreadStart();
std::shared_ptr<Common::Fiber>& ControlContext() {
return switch_fiber;
}
const std::shared_ptr<Common::Fiber>& ControlContext() const {
return switch_fiber;
}
private:
friend class GlobalScheduler;
/// Switches the CPU's active thread context to that of the specified thread
void SwitchContext();
/// When a thread wakes up, it must run this through its new scheduler
void SwitchContextStep2();
/**
* Called on every context switch to update the internal timestamp
* This also updates the running time ticks for the given thread and
@@ -231,14 +266,24 @@ private:
*/
void UpdateLastContextSwitchTime(Thread* thread, Process* process);
static void OnSwitch(void* this_scheduler);
void SwitchToCurrent();
std::shared_ptr<Thread> current_thread = nullptr;
std::shared_ptr<Thread> selected_thread = nullptr;
std::shared_ptr<Thread> current_thread_prev = nullptr;
std::shared_ptr<Thread> selected_thread_set = nullptr;
std::shared_ptr<Thread> idle_thread = nullptr;
std::shared_ptr<Common::Fiber> switch_fiber = nullptr;
Core::System& system;
u64 last_context_switch_time = 0;
u64 idle_selection_count = 0;
const std::size_t core_id;
Common::SpinLock guard{};
bool is_context_switch_pending = false;
};
@@ -261,6 +306,8 @@ public:
sleep_cancelled = true;
}
void Release();
private:
Handle& event_handle;
Thread* time_task;


@@ -17,6 +17,7 @@
#include "core/hle/kernel/hle_ipc.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/server_session.h"
#include "core/hle/kernel/session.h"
#include "core/hle/kernel/thread.h"
@@ -168,9 +169,12 @@ ResultCode ServerSession::CompleteSyncRequest() {
}
// Some service requests require the thread to block
if (!context.IsThreadWaiting()) {
context.GetThread().ResumeFromWait();
context.GetThread().SetWaitSynchronizationResult(result);
{
SchedulerLock lock(kernel);
if (!context.IsThreadWaiting()) {
context.GetThread().ResumeFromWait();
context.GetThread().SetSynchronizationResults(nullptr, result);
}
}
request_queue.Pop();
@@ -180,8 +184,10 @@ ResultCode ServerSession::CompleteSyncRequest() {
ResultCode ServerSession::HandleSyncRequest(std::shared_ptr<Thread> thread,
Core::Memory::Memory& memory) {
Core::System::GetInstance().CoreTiming().ScheduleEvent(20000, request_event, {});
return QueueSyncRequest(std::move(thread), memory);
ResultCode result = QueueSyncRequest(std::move(thread), memory);
const u64 delay = kernel.IsMulticore() ? 0U : 20000U;
Core::System::GetInstance().CoreTiming().ScheduleEvent(delay, request_event, {});
return result;
}
} // namespace Kernel
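// Sketch of the delay selection in HandleSyncRequest() above: HLE responses are
// delivered immediately in multicore mode and after a fixed tick delay in
// single-core mode (20000 is the value from the diff; treat it as illustrative).
#include <cstdint>

std::uint64_t ResponseDelayTicks(bool is_multicore) {
    return is_multicore ? 0U : 20000U;
}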


@@ -10,14 +10,15 @@
#include "common/alignment.h"
#include "common/assert.h"
#include "common/fiber.h"
#include "common/logging/log.h"
#include "common/microprofile.h"
#include "common/string_util.h"
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/core_manager.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
#include "core/cpu_manager.h"
#include "core/hle/kernel/address_arbiter.h"
#include "core/hle/kernel/client_port.h"
#include "core/hle/kernel/client_session.h"
@@ -27,6 +28,7 @@
#include "core/hle/kernel/memory/memory_block.h"
#include "core/hle/kernel/memory/page_table.h"
#include "core/hle/kernel/mutex.h"
#include "core/hle/kernel/physical_core.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/readable_event.h"
#include "core/hle/kernel/resource_limit.h"
@@ -37,6 +39,7 @@
#include "core/hle/kernel/svc_wrap.h"
#include "core/hle/kernel/synchronization.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/kernel/time_manager.h"
#include "core/hle/kernel/transfer_memory.h"
#include "core/hle/kernel/writable_event.h"
#include "core/hle/lock.h"
@@ -133,6 +136,7 @@ enum class ResourceLimitValueType {
ResultVal<s64> RetrieveResourceLimitValue(Core::System& system, Handle resource_limit,
u32 resource_type, ResourceLimitValueType value_type) {
std::lock_guard lock{HLE::g_hle_lock};
const auto type = static_cast<ResourceType>(resource_type);
if (!IsValidResourceType(type)) {
LOG_ERROR(Kernel_SVC, "Invalid resource limit type: '{}'", resource_type);
@@ -160,6 +164,7 @@ ResultVal<s64> RetrieveResourceLimitValue(Core::System& system, Handle resource_
/// Sets the process heap to a given size. It can both extend and shrink the heap.
static ResultCode SetHeapSize(Core::System& system, VAddr* heap_addr, u64 heap_size) {
std::lock_guard lock{HLE::g_hle_lock};
LOG_TRACE(Kernel_SVC, "called, heap_size=0x{:X}", heap_size);
// Size must be a multiple of 0x200000 (2MB) and be equal to or less than 8GB.
@@ -190,6 +195,7 @@ static ResultCode SetHeapSize32(Core::System& system, u32* heap_addr, u32 heap_s
static ResultCode SetMemoryAttribute(Core::System& system, VAddr address, u64 size, u32 mask,
u32 attribute) {
std::lock_guard lock{HLE::g_hle_lock};
LOG_DEBUG(Kernel_SVC,
"called, address=0x{:016X}, size=0x{:X}, mask=0x{:08X}, attribute=0x{:08X}", address,
size, mask, attribute);
@@ -226,8 +232,15 @@ static ResultCode SetMemoryAttribute(Core::System& system, VAddr address, u64 si
static_cast<Memory::MemoryAttribute>(attribute));
}
static ResultCode SetMemoryAttribute32(Core::System& system, u32 address, u32 size, u32 mask,
u32 attribute) {
return SetMemoryAttribute(system, static_cast<VAddr>(address), static_cast<std::size_t>(size),
mask, attribute);
}
/// Maps a memory range into a different range.
static ResultCode MapMemory(Core::System& system, VAddr dst_addr, VAddr src_addr, u64 size) {
std::lock_guard lock{HLE::g_hle_lock};
LOG_TRACE(Kernel_SVC, "called, dst_addr=0x{:X}, src_addr=0x{:X}, size=0x{:X}", dst_addr,
src_addr, size);
@@ -241,8 +254,14 @@ static ResultCode MapMemory(Core::System& system, VAddr dst_addr, VAddr src_addr
return page_table.Map(dst_addr, src_addr, size);
}
static ResultCode MapMemory32(Core::System& system, u32 dst_addr, u32 src_addr, u32 size) {
return MapMemory(system, static_cast<VAddr>(dst_addr), static_cast<VAddr>(src_addr),
static_cast<std::size_t>(size));
}
/// Unmaps a region that was previously mapped with svcMapMemory
static ResultCode UnmapMemory(Core::System& system, VAddr dst_addr, VAddr src_addr, u64 size) {
std::lock_guard lock{HLE::g_hle_lock};
LOG_TRACE(Kernel_SVC, "called, dst_addr=0x{:X}, src_addr=0x{:X}, size=0x{:X}", dst_addr,
src_addr, size);
@@ -256,9 +275,15 @@ static ResultCode UnmapMemory(Core::System& system, VAddr dst_addr, VAddr src_ad
return page_table.Unmap(dst_addr, src_addr, size);
}
static ResultCode UnmapMemory32(Core::System& system, u32 dst_addr, u32 src_addr, u32 size) {
return UnmapMemory(system, static_cast<VAddr>(dst_addr), static_cast<VAddr>(src_addr),
static_cast<std::size_t>(size));
}
/// Connects to an OS service given the port name; returns the port's handle through out_handle
static ResultCode ConnectToNamedPort(Core::System& system, Handle* out_handle,
VAddr port_name_address) {
std::lock_guard lock{HLE::g_hle_lock};
auto& memory = system.Memory();
if (!memory.IsValidVirtualAddress(port_name_address)) {
@@ -317,11 +342,30 @@ static ResultCode SendSyncRequest(Core::System& system, Handle handle) {
LOG_TRACE(Kernel_SVC, "called handle=0x{:08X}({})", handle, session->GetName());
auto thread = system.CurrentScheduler().GetCurrentThread();
thread->InvalidateWakeupCallback();
thread->SetStatus(ThreadStatus::WaitIPC);
system.PrepareReschedule(thread->GetProcessorID());
{
SchedulerLock lock(system.Kernel());
thread->InvalidateHLECallback();
thread->SetStatus(ThreadStatus::WaitIPC);
session->SendSyncRequest(SharedFrom(thread), system.Memory());
}
return session->SendSyncRequest(SharedFrom(thread), system.Memory());
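// If the service attached an HLE callback to the thread, cancel its pending time event (if
// any), detach it from the HLE sync object, and invoke the callback before reporting the
// thread's signaling result back to the guest.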
if (thread->HasHLECallback()) {
Handle event_handle = thread->GetHLETimeEvent();
if (event_handle != InvalidHandle) {
auto& time_manager = system.Kernel().TimeManager();
time_manager.UnscheduleTimeEvent(event_handle);
}
{
SchedulerLock lock(system.Kernel());
auto* sync_object = thread->GetHLESyncObject();
sync_object->RemoveWaitingThread(SharedFrom(thread));
}
thread->InvokeHLECallback(SharedFrom(thread));
}
return thread->GetSignalingResult();
}
static ResultCode SendSyncRequest32(Core::System& system, Handle handle) {
@@ -383,6 +427,15 @@ static ResultCode GetProcessId(Core::System& system, u64* process_id, Handle han
return ERR_INVALID_HANDLE;
}
static ResultCode GetProcessId32(Core::System& system, u32* process_id_low, u32* process_id_high,
Handle handle) {
u64 process_id{};
const auto result = GetProcessId(system, &process_id, handle);
*process_id_low = static_cast<u32>(process_id);
*process_id_high = static_cast<u32>(process_id >> 32);
return result;
}
/// Wait for the given handles to synchronize, timeout after the specified nanoseconds
static ResultCode WaitSynchronization(Core::System& system, Handle* index, VAddr handles_address,
u64 handle_count, s64 nano_seconds) {
@@ -447,10 +500,13 @@ static ResultCode CancelSynchronization(Core::System& system, Handle thread_hand
}
thread->CancelWait();
system.PrepareReschedule(thread->GetProcessorID());
return RESULT_SUCCESS;
}
static ResultCode CancelSynchronization32(Core::System& system, Handle thread_handle) {
return CancelSynchronization(system, thread_handle);
}
/// Attempts to lock a mutex, creating it if it does not already exist
static ResultCode ArbitrateLock(Core::System& system, Handle holding_thread_handle,
VAddr mutex_addr, Handle requesting_thread_handle) {
@@ -475,6 +531,12 @@ static ResultCode ArbitrateLock(Core::System& system, Handle holding_thread_hand
requesting_thread_handle);
}
static ResultCode ArbitrateLock32(Core::System& system, Handle holding_thread_handle,
u32 mutex_addr, Handle requesting_thread_handle) {
return ArbitrateLock(system, holding_thread_handle, static_cast<VAddr>(mutex_addr),
requesting_thread_handle);
}
/// Unlock a mutex
static ResultCode ArbitrateUnlock(Core::System& system, VAddr mutex_addr) {
LOG_TRACE(Kernel_SVC, "called mutex_addr=0x{:X}", mutex_addr);
@@ -494,6 +556,10 @@ static ResultCode ArbitrateUnlock(Core::System& system, VAddr mutex_addr) {
return current_process->GetMutex().Release(mutex_addr);
}
static ResultCode ArbitrateUnlock32(Core::System& system, u32 mutex_addr) {
return ArbitrateUnlock(system, static_cast<VAddr>(mutex_addr));
}
enum class BreakType : u32 {
Panic = 0,
AssertionFailed = 1,
@@ -594,6 +660,7 @@ static void Break(Core::System& system, u32 reason, u64 info1, u64 info2) {
info2, has_dumped_buffer ? std::make_optional(debug_buffer) : std::nullopt);
if (!break_reason.signal_debugger) {
SchedulerLock lock(system.Kernel());
LOG_CRITICAL(
Debug_Emulated,
"Emulated program broke execution! reason=0x{:016X}, info1=0x{:016X}, info2=0x{:016X}",
@@ -605,14 +672,16 @@ static void Break(Core::System& system, u32 reason, u64 info1, u64 info2) {
const auto thread_processor_id = current_thread->GetProcessorID();
system.ArmInterface(static_cast<std::size_t>(thread_processor_id)).LogBacktrace();
system.Kernel().CurrentProcess()->PrepareForTermination();
// Kill the current thread
system.Kernel().ExceptionalExit();
current_thread->Stop();
system.PrepareReschedule();
}
}
static void Break32(Core::System& system, u32 reason, u32 info1, u32 info2) {
Break(system, reason, static_cast<u64>(info1), static_cast<u64>(info2));
}
/// Used to output a message on a debug hardware unit - does nothing on a retail unit
static void OutputDebugString([[maybe_unused]] Core::System& system, VAddr address, u64 len) {
if (len == 0) {
@@ -627,6 +696,7 @@ static void OutputDebugString([[maybe_unused]] Core::System& system, VAddr addre
/// Gets system/memory information for the current process
static ResultCode GetInfo(Core::System& system, u64* result, u64 info_id, u64 handle,
u64 info_sub_id) {
std::lock_guard lock{HLE::g_hle_lock};
LOG_TRACE(Kernel_SVC, "called info_id=0x{:X}, info_sub_id=0x{:X}, handle=0x{:08X}", info_id,
info_sub_id, handle);
@@ -863,9 +933,9 @@ static ResultCode GetInfo(Core::System& system, u64* result, u64 info_id, u64 ha
if (same_thread && info_sub_id == 0xFFFFFFFFFFFFFFFF) {
const u64 thread_ticks = current_thread->GetTotalCPUTimeTicks();
out_ticks = thread_ticks + (core_timing.GetTicks() - prev_ctx_ticks);
out_ticks = thread_ticks + (core_timing.GetCPUTicks() - prev_ctx_ticks);
} else if (same_thread && info_sub_id == system.CurrentCoreIndex()) {
out_ticks = core_timing.GetTicks() - prev_ctx_ticks;
out_ticks = core_timing.GetCPUTicks() - prev_ctx_ticks;
}
*result = out_ticks;
@@ -892,6 +962,7 @@ static ResultCode GetInfo32(Core::System& system, u32* result_low, u32* result_h
/// Maps memory at a desired address
static ResultCode MapPhysicalMemory(Core::System& system, VAddr addr, u64 size) {
std::lock_guard lock{HLE::g_hle_lock};
LOG_DEBUG(Kernel_SVC, "called, addr=0x{:016X}, size=0x{:X}", addr, size);
if (!Common::Is4KBAligned(addr)) {
@@ -939,8 +1010,13 @@ static ResultCode MapPhysicalMemory(Core::System& system, VAddr addr, u64 size)
return page_table.MapPhysicalMemory(addr, size);
}
static ResultCode MapPhysicalMemory32(Core::System& system, u32 addr, u32 size) {
return MapPhysicalMemory(system, static_cast<VAddr>(addr), static_cast<std::size_t>(size));
}
/// Unmaps memory previously mapped via MapPhysicalMemory
static ResultCode UnmapPhysicalMemory(Core::System& system, VAddr addr, u64 size) {
std::lock_guard lock{HLE::g_hle_lock};
LOG_DEBUG(Kernel_SVC, "called, addr=0x{:016X}, size=0x{:X}", addr, size);
if (!Common::Is4KBAligned(addr)) {
@@ -988,6 +1064,10 @@ static ResultCode UnmapPhysicalMemory(Core::System& system, VAddr addr, u64 size
return page_table.UnmapPhysicalMemory(addr, size);
}
static ResultCode UnmapPhysicalMemory32(Core::System& system, u32 addr, u32 size) {
return UnmapPhysicalMemory(system, static_cast<VAddr>(addr), static_cast<std::size_t>(size));
}
/// Sets the thread activity
static ResultCode SetThreadActivity(Core::System& system, Handle handle, u32 activity) {
LOG_DEBUG(Kernel_SVC, "called, handle=0x{:08X}, activity=0x{:08X}", handle, activity);
@@ -1017,10 +1097,11 @@ static ResultCode SetThreadActivity(Core::System& system, Handle handle, u32 act
return ERR_BUSY;
}
thread->SetActivity(static_cast<ThreadActivity>(activity));
return thread->SetActivity(static_cast<ThreadActivity>(activity));
}
system.PrepareReschedule(thread->GetProcessorID());
return RESULT_SUCCESS;
static ResultCode SetThreadActivity32(Core::System& system, Handle handle, u32 activity) {
return SetThreadActivity(system, handle, activity);
}
/// Gets the thread context
@@ -1064,6 +1145,10 @@ static ResultCode GetThreadContext(Core::System& system, VAddr thread_context, H
return RESULT_SUCCESS;
}
static ResultCode GetThreadContext32(Core::System& system, u32 thread_context, Handle handle) {
return GetThreadContext(system, static_cast<VAddr>(thread_context), handle);
}
/// Gets the priority for the specified thread
static ResultCode GetThreadPriority(Core::System& system, u32* priority, Handle handle) {
LOG_TRACE(Kernel_SVC, "called");
@@ -1071,6 +1156,7 @@ static ResultCode GetThreadPriority(Core::System& system, u32* priority, Handle
const auto& handle_table = system.Kernel().CurrentProcess()->GetHandleTable();
const std::shared_ptr<Thread> thread = handle_table.Get<Thread>(handle);
if (!thread) {
*priority = 0;
LOG_ERROR(Kernel_SVC, "Thread handle does not exist, handle=0x{:08X}", handle);
return ERR_INVALID_HANDLE;
}
@@ -1105,18 +1191,26 @@ static ResultCode SetThreadPriority(Core::System& system, Handle handle, u32 pri
thread->SetPriority(priority);
system.PrepareReschedule(thread->GetProcessorID());
return RESULT_SUCCESS;
}
static ResultCode SetThreadPriority32(Core::System& system, Handle handle, u32 priority) {
return SetThreadPriority(system, handle, priority);
}
/// Get which CPU core is executing the current thread
static u32 GetCurrentProcessorNumber(Core::System& system) {
LOG_TRACE(Kernel_SVC, "called");
return system.CurrentScheduler().GetCurrentThread()->GetProcessorID();
return static_cast<u32>(system.CurrentPhysicalCore().CoreIndex());
}
static u32 GetCurrentProcessorNumber32(Core::System& system) {
return GetCurrentProcessorNumber(system);
}
static ResultCode MapSharedMemory(Core::System& system, Handle shared_memory_handle, VAddr addr,
u64 size, u32 permissions) {
std::lock_guard lock{HLE::g_hle_lock};
LOG_TRACE(Kernel_SVC,
"called, shared_memory_handle=0x{:X}, addr=0x{:X}, size=0x{:X}, permissions=0x{:08X}",
shared_memory_handle, addr, size, permissions);
@@ -1187,9 +1281,16 @@ static ResultCode MapSharedMemory(Core::System& system, Handle shared_memory_han
return shared_memory->Map(*current_process, addr, size, permission_type);
}
static ResultCode MapSharedMemory32(Core::System& system, Handle shared_memory_handle, u32 addr,
u32 size, u32 permissions) {
return MapSharedMemory(system, shared_memory_handle, static_cast<VAddr>(addr),
static_cast<std::size_t>(size), permissions);
}
static ResultCode QueryProcessMemory(Core::System& system, VAddr memory_info_address,
VAddr page_info_address, Handle process_handle,
VAddr address) {
std::lock_guard lock{HLE::g_hle_lock};
LOG_TRACE(Kernel_SVC, "called process=0x{:08X} address={:X}", process_handle, address);
const auto& handle_table = system.Kernel().CurrentProcess()->GetHandleTable();
std::shared_ptr<Process> process = handle_table.Get<Process>(process_handle);
@@ -1372,6 +1473,7 @@ static ResultCode UnmapProcessCodeMemory(Core::System& system, Handle process_ha
/// Exits the current process
static void ExitProcess(Core::System& system) {
auto* current_process = system.Kernel().CurrentProcess();
UNIMPLEMENTED();
LOG_INFO(Kernel_SVC, "Process {} exiting", current_process->GetProcessID());
ASSERT_MSG(current_process->GetStatus() == ProcessStatus::Running,
@@ -1381,8 +1483,10 @@ static void ExitProcess(Core::System& system) {
// Kill the current thread
system.CurrentScheduler().GetCurrentThread()->Stop();
}
system.PrepareReschedule();
static void ExitProcess32(Core::System& system) {
ExitProcess(system);
}
/// Creates a new thread
@@ -1428,9 +1532,10 @@ static ResultCode CreateThread(Core::System& system, Handle* out_handle, VAddr e
ASSERT(kernel.CurrentProcess()->GetResourceLimit()->Reserve(ResourceType::Threads, 1));
ThreadType type = THREADTYPE_USER;
CASCADE_RESULT(std::shared_ptr<Thread> thread,
Thread::Create(kernel, "", entry_point, priority, arg, processor_id, stack_top,
*current_process));
Thread::Create(system, type, "", entry_point, priority, arg, processor_id,
stack_top, current_process));
const auto new_thread_handle = current_process->GetHandleTable().Create(thread);
if (new_thread_handle.Failed()) {
@@ -1444,11 +1549,15 @@ static ResultCode CreateThread(Core::System& system, Handle* out_handle, VAddr e
thread->SetName(
fmt::format("thread[entry_point={:X}, handle={:X}]", entry_point, *new_thread_handle));
system.PrepareReschedule(thread->GetProcessorID());
return RESULT_SUCCESS;
}
static ResultCode CreateThread32(Core::System& system, Handle* out_handle, u32 priority,
u32 entry_point, u32 arg, u32 stack_top, s32 processor_id) {
return CreateThread(system, out_handle, static_cast<VAddr>(entry_point), static_cast<u64>(arg),
static_cast<VAddr>(stack_top), priority, processor_id);
}
/// Starts the thread for the provided handle
static ResultCode StartThread(Core::System& system, Handle thread_handle) {
LOG_DEBUG(Kernel_SVC, "called thread=0x{:08X}", thread_handle);
@@ -1463,13 +1572,11 @@ static ResultCode StartThread(Core::System& system, Handle thread_handle) {
ASSERT(thread->GetStatus() == ThreadStatus::Dormant);
thread->ResumeFromWait();
return thread->Start();
}
if (thread->GetStatus() == ThreadStatus::Ready) {
system.PrepareReschedule(thread->GetProcessorID());
}
return RESULT_SUCCESS;
static ResultCode StartThread32(Core::System& system, Handle thread_handle) {
return StartThread(system, thread_handle);
}
/// Called when a thread exits
@@ -1477,9 +1584,12 @@ static void ExitThread(Core::System& system) {
LOG_DEBUG(Kernel_SVC, "called, pc=0x{:08X}", system.CurrentArmInterface().GetPC());
auto* const current_thread = system.CurrentScheduler().GetCurrentThread();
current_thread->Stop();
system.GlobalScheduler().RemoveThread(SharedFrom(current_thread));
system.PrepareReschedule();
current_thread->Stop();
}
static void ExitThread32(Core::System& system) {
ExitThread(system);
}
/// Sleep the current thread
@@ -1498,15 +1608,21 @@ static void SleepThread(Core::System& system, s64 nanoseconds) {
if (nanoseconds <= 0) {
switch (static_cast<SleepType>(nanoseconds)) {
case SleepType::YieldWithoutLoadBalancing:
is_redundant = current_thread->YieldSimple();
case SleepType::YieldWithoutLoadBalancing: {
auto pair = current_thread->YieldSimple();
is_redundant = pair.second;
break;
case SleepType::YieldWithLoadBalancing:
is_redundant = current_thread->YieldAndBalanceLoad();
}
case SleepType::YieldWithLoadBalancing: {
auto pair = current_thread->YieldAndBalanceLoad();
is_redundant = pair.second;
break;
case SleepType::YieldAndWaitForLoadBalancing:
is_redundant = current_thread->YieldAndWaitForLoadBalancing();
}
case SleepType::YieldAndWaitForLoadBalancing: {
auto pair = current_thread->YieldAndWaitForLoadBalancing();
is_redundant = pair.second;
break;
}
default:
UNREACHABLE_MSG("Unimplemented sleep yield type '{:016X}'!", nanoseconds);
}
@@ -1514,13 +1630,18 @@ static void SleepThread(Core::System& system, s64 nanoseconds) {
current_thread->Sleep(nanoseconds);
}
if (is_redundant) {
// If the yield is redundant, the core is essentially idle. Some games keep a core idling
// while it is doing nothing, so we advance timing to avoid costly continuous calls.
system.CoreTiming().AddTicks(2000);
if (is_redundant && !system.Kernel().IsMulticore()) {
system.Kernel().ExitSVCProfile();
system.CoreTiming().AddTicks(1000U);
system.GetCpuManager().PreemptSingleCore();
system.Kernel().EnterSVCProfile();
}
system.PrepareReschedule(current_thread->GetProcessorID());
}
static void SleepThread32(Core::System& system, u32 nanoseconds_low, u32 nanoseconds_high) {
const s64 nanoseconds = static_cast<s64>(static_cast<u64>(nanoseconds_low) |
(static_cast<u64>(nanoseconds_high) << 32));
SleepThread(system, nanoseconds);
}
/// Wait process wide key atomic
@@ -1547,31 +1668,69 @@ static ResultCode WaitProcessWideKeyAtomic(Core::System& system, VAddr mutex_add
}
ASSERT(condition_variable_addr == Common::AlignDown(condition_variable_addr, 4));
auto& kernel = system.Kernel();
Handle event_handle;
Thread* current_thread = system.CurrentScheduler().GetCurrentThread();
auto* const current_process = system.Kernel().CurrentProcess();
const auto& handle_table = current_process->GetHandleTable();
std::shared_ptr<Thread> thread = handle_table.Get<Thread>(thread_handle);
ASSERT(thread);
{
SchedulerLockAndSleep lock(kernel, event_handle, current_thread, nano_seconds);
const auto& handle_table = current_process->GetHandleTable();
std::shared_ptr<Thread> thread = handle_table.Get<Thread>(thread_handle);
ASSERT(thread);
const auto release_result = current_process->GetMutex().Release(mutex_addr);
if (release_result.IsError()) {
return release_result;
current_thread->SetSynchronizationResults(nullptr, RESULT_TIMEOUT);
if (thread->IsPendingTermination()) {
lock.CancelSleep();
return ERR_THREAD_TERMINATING;
}
const auto release_result = current_process->GetMutex().Release(mutex_addr);
if (release_result.IsError()) {
lock.CancelSleep();
return release_result;
}
if (nano_seconds == 0) {
lock.CancelSleep();
return RESULT_TIMEOUT;
}
current_thread->SetCondVarWaitAddress(condition_variable_addr);
current_thread->SetMutexWaitAddress(mutex_addr);
current_thread->SetWaitHandle(thread_handle);
current_thread->SetStatus(ThreadStatus::WaitCondVar);
current_process->InsertConditionVariableThread(SharedFrom(current_thread));
}
Thread* current_thread = system.CurrentScheduler().GetCurrentThread();
current_thread->SetCondVarWaitAddress(condition_variable_addr);
current_thread->SetMutexWaitAddress(mutex_addr);
current_thread->SetWaitHandle(thread_handle);
current_thread->SetStatus(ThreadStatus::WaitCondVar);
current_thread->InvalidateWakeupCallback();
current_process->InsertConditionVariableThread(SharedFrom(current_thread));
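// The wait has ended (signal, timeout, or termination): cancel any pending timeout event and
// detach this thread from the mutex owner and the condition-variable list before returning
// the signaling result.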
if (event_handle != InvalidHandle) {
auto& time_manager = kernel.TimeManager();
time_manager.UnscheduleTimeEvent(event_handle);
}
current_thread->WakeAfterDelay(nano_seconds);
{
SchedulerLock lock(kernel);
auto* owner = current_thread->GetLockOwner();
if (owner != nullptr) {
owner->RemoveMutexWaiter(SharedFrom(current_thread));
}
current_process->RemoveConditionVariableThread(SharedFrom(current_thread));
}
// Note: Deliberately don't attempt to inherit the lock owner's priority.
system.PrepareReschedule(current_thread->GetProcessorID());
return RESULT_SUCCESS;
return current_thread->GetSignalingResult();
}
static ResultCode WaitProcessWideKeyAtomic32(Core::System& system, u32 mutex_addr,
u32 condition_variable_addr, Handle thread_handle,
u32 nanoseconds_low, u32 nanoseconds_high) {
const s64 nanoseconds =
static_cast<s64>(nanoseconds_low | (static_cast<u64>(nanoseconds_high) << 32));
return WaitProcessWideKeyAtomic(system, static_cast<VAddr>(mutex_addr),
static_cast<VAddr>(condition_variable_addr), thread_handle,
nanoseconds);
}
/// Signal process wide key
@@ -1582,7 +1741,9 @@ static void SignalProcessWideKey(Core::System& system, VAddr condition_variable_
ASSERT(condition_variable_addr == Common::AlignDown(condition_variable_addr, 4));
// Retrieve a list of all threads that are waiting for this condition variable.
auto* const current_process = system.Kernel().CurrentProcess();
auto& kernel = system.Kernel();
SchedulerLock lock(kernel);
auto* const current_process = kernel.CurrentProcess();
std::vector<std::shared_ptr<Thread>> waiting_threads =
current_process->GetConditionVariableThreads(condition_variable_addr);
@@ -1591,7 +1752,7 @@ static void SignalProcessWideKey(Core::System& system, VAddr condition_variable_
std::size_t last = waiting_threads.size();
if (target > 0)
last = std::min(waiting_threads.size(), static_cast<std::size_t>(target));
auto& time_manager = kernel.TimeManager();
for (std::size_t index = 0; index < last; ++index) {
auto& thread = waiting_threads[index];
@@ -1599,7 +1760,6 @@ static void SignalProcessWideKey(Core::System& system, VAddr condition_variable_
// Liberate the condition variable thread.
current_process->RemoveConditionVariableThread(thread);
thread->SetCondVarWaitAddress(0);
const std::size_t current_core = system.CurrentCoreIndex();
auto& monitor = system.Monitor();
@@ -1610,10 +1770,8 @@ static void SignalProcessWideKey(Core::System& system, VAddr condition_variable_
u32 update_val = 0;
const VAddr mutex_address = thread->GetMutexWaitAddress();
do {
monitor.SetExclusive(current_core, mutex_address);
// If the mutex is not yet acquired, acquire it.
mutex_val = memory.Read32(mutex_address);
mutex_val = monitor.ExclusiveRead32(current_core, mutex_address);
if (mutex_val != 0) {
update_val = mutex_val | Mutex::MutexHasWaitersFlag;
@@ -1621,33 +1779,28 @@ static void SignalProcessWideKey(Core::System& system, VAddr condition_variable_
update_val = thread->GetWaitHandle();
}
} while (!monitor.ExclusiveWrite32(current_core, mutex_address, update_val));
monitor.ClearExclusive();
if (mutex_val == 0) {
// We were able to acquire the mutex, resume this thread.
ASSERT(thread->GetStatus() == ThreadStatus::WaitCondVar);
thread->ResumeFromWait();
auto* const lock_owner = thread->GetLockOwner();
if (lock_owner != nullptr) {
lock_owner->RemoveMutexWaiter(thread);
}
thread->SetLockOwner(nullptr);
thread->SetMutexWaitAddress(0);
thread->SetWaitHandle(0);
thread->SetWaitSynchronizationResult(RESULT_SUCCESS);
system.PrepareReschedule(thread->GetProcessorID());
thread->SetSynchronizationResults(nullptr, RESULT_SUCCESS);
thread->ResumeFromWait();
} else {
// The mutex is already owned by some other thread, make this thread wait on it.
const Handle owner_handle = static_cast<Handle>(mutex_val & Mutex::MutexOwnerMask);
const auto& handle_table = system.Kernel().CurrentProcess()->GetHandleTable();
auto owner = handle_table.Get<Thread>(owner_handle);
ASSERT(owner);
ASSERT(thread->GetStatus() == ThreadStatus::WaitCondVar);
thread->InvalidateWakeupCallback();
thread->SetStatus(ThreadStatus::WaitMutex);
if (thread->GetStatus() == ThreadStatus::WaitCondVar) {
thread->SetStatus(ThreadStatus::WaitMutex);
}
owner->AddMutexWaiter(thread);
system.PrepareReschedule(thread->GetProcessorID());
}
}
}
@@ -1678,12 +1831,15 @@ static ResultCode WaitForAddress(Core::System& system, VAddr address, u32 type,
auto& address_arbiter = system.Kernel().CurrentProcess()->GetAddressArbiter();
const ResultCode result =
address_arbiter.WaitForAddress(address, arbitration_type, value, timeout);
if (result == RESULT_SUCCESS) {
system.PrepareReschedule();
}
return result;
}
static ResultCode WaitForAddress32(Core::System& system, u32 address, u32 type, s32 value,
u32 timeout_low, u32 timeout_high) {
s64 timeout = static_cast<s64>(timeout_low | (static_cast<u64>(timeout_high) << 32));
return WaitForAddress(system, static_cast<VAddr>(address), type, value, timeout);
}
// Signals to an address (via Address Arbiter)
static ResultCode SignalToAddress(Core::System& system, VAddr address, u32 type, s32 value,
s32 num_to_wake) {
@@ -1707,6 +1863,11 @@ static ResultCode SignalToAddress(Core::System& system, VAddr address, u32 type,
return address_arbiter.SignalToAddress(address, signal_type, value, num_to_wake);
}
static ResultCode SignalToAddress32(Core::System& system, u32 address, u32 type, s32 value,
s32 num_to_wake) {
return SignalToAddress(system, static_cast<VAddr>(address), type, value, num_to_wake);
}
static void KernelDebug([[maybe_unused]] Core::System& system,
[[maybe_unused]] u32 kernel_debug_type, [[maybe_unused]] u64 param1,
[[maybe_unused]] u64 param2, [[maybe_unused]] u64 param3) {
@@ -1725,14 +1886,21 @@ static u64 GetSystemTick(Core::System& system) {
auto& core_timing = system.CoreTiming();
// Returns the value of cntpct_el0 (https://switchbrew.org/wiki/SVC#svcGetSystemTick)
const u64 result{Core::Timing::CpuCyclesToClockCycles(system.CoreTiming().GetTicks())};
const u64 result{system.CoreTiming().GetClockTicks()};
// Advance time to defeat dumb games that busy-wait for the frame to end.
core_timing.AddTicks(400);
if (!system.Kernel().IsMulticore()) {
core_timing.AddTicks(400U);
}
return result;
}
static void GetSystemTick32(Core::System& system, u32* time_low, u32* time_high) {
u64 time = GetSystemTick(system);
*time_low = static_cast<u32>(time);
*time_high = static_cast<u32>(time >> 32);
}
/// Close a handle
static ResultCode CloseHandle(Core::System& system, Handle handle) {
LOG_TRACE(Kernel_SVC, "Closing handle 0x{:08X}", handle);
@@ -1765,9 +1933,14 @@ static ResultCode ResetSignal(Core::System& system, Handle handle) {
return ERR_INVALID_HANDLE;
}
static ResultCode ResetSignal32(Core::System& system, Handle handle) {
return ResetSignal(system, handle);
}
/// Creates a TransferMemory object
static ResultCode CreateTransferMemory(Core::System& system, Handle* handle, VAddr addr, u64 size,
u32 permissions) {
std::lock_guard lock{HLE::g_hle_lock};
LOG_DEBUG(Kernel_SVC, "called addr=0x{:X}, size=0x{:X}, perms=0x{:08X}", addr, size,
permissions);
@@ -1812,6 +1985,12 @@ static ResultCode CreateTransferMemory(Core::System& system, Handle* handle, VAd
return RESULT_SUCCESS;
}
static ResultCode CreateTransferMemory32(Core::System& system, Handle* handle, u32 addr, u32 size,
u32 permissions) {
return CreateTransferMemory(system, handle, static_cast<VAddr>(addr),
static_cast<std::size_t>(size), permissions);
}
static ResultCode GetThreadCoreMask(Core::System& system, Handle thread_handle, u32* core,
u64* mask) {
LOG_TRACE(Kernel_SVC, "called, handle=0x{:08X}", thread_handle);
@@ -1821,6 +2000,8 @@ static ResultCode GetThreadCoreMask(Core::System& system, Handle thread_handle,
if (!thread) {
LOG_ERROR(Kernel_SVC, "Thread handle does not exist, thread_handle=0x{:08X}",
thread_handle);
*core = 0;
*mask = 0;
return ERR_INVALID_HANDLE;
}
@@ -1830,6 +2011,15 @@ static ResultCode GetThreadCoreMask(Core::System& system, Handle thread_handle,
return RESULT_SUCCESS;
}
static ResultCode GetThreadCoreMask32(Core::System& system, Handle thread_handle, u32* core,
u32* mask_low, u32* mask_high) {
u64 mask{};
const auto result = GetThreadCoreMask(system, thread_handle, core, &mask);
*mask_high = static_cast<u32>(mask >> 32);
*mask_low = static_cast<u32>(mask);
return result;
}
static ResultCode SetThreadCoreMask(Core::System& system, Handle thread_handle, u32 core,
u64 affinity_mask) {
LOG_DEBUG(Kernel_SVC, "called, handle=0x{:08X}, core=0x{:X}, affinity_mask=0x{:016X}",
@@ -1861,7 +2051,7 @@ static ResultCode SetThreadCoreMask(Core::System& system, Handle thread_handle,
return ERR_INVALID_COMBINATION;
}
if (core < Core::NUM_CPU_CORES) {
if (core < Core::Hardware::NUM_CPU_CORES) {
if ((affinity_mask & (1ULL << core)) == 0) {
LOG_ERROR(Kernel_SVC,
"Core is not enabled for the current mask, core={}, mask={:016X}", core,
@@ -1883,11 +2073,14 @@ static ResultCode SetThreadCoreMask(Core::System& system, Handle thread_handle,
return ERR_INVALID_HANDLE;
}
system.PrepareReschedule(thread->GetProcessorID());
thread->ChangeCore(core, affinity_mask);
system.PrepareReschedule(thread->GetProcessorID());
return thread->SetCoreAndAffinityMask(core, affinity_mask);
}
return RESULT_SUCCESS;
static ResultCode SetThreadCoreMask32(Core::System& system, Handle thread_handle, u32 core,
u32 affinity_mask_low, u32 affinity_mask_high) {
const u64 affinity_mask =
static_cast<u64>(affinity_mask_low) | (static_cast<u64>(affinity_mask_high) << 32);
return SetThreadCoreMask(system, thread_handle, core, affinity_mask);
}
static ResultCode CreateEvent(Core::System& system, Handle* write_handle, Handle* read_handle) {
@@ -1918,6 +2111,10 @@ static ResultCode CreateEvent(Core::System& system, Handle* write_handle, Handle
return RESULT_SUCCESS;
}
static ResultCode CreateEvent32(Core::System& system, Handle* write_handle, Handle* read_handle) {
return CreateEvent(system, write_handle, read_handle);
}
static ResultCode ClearEvent(Core::System& system, Handle handle) {
LOG_TRACE(Kernel_SVC, "called, event=0x{:08X}", handle);
@@ -1939,6 +2136,10 @@ static ResultCode ClearEvent(Core::System& system, Handle handle) {
return ERR_INVALID_HANDLE;
}
static ResultCode ClearEvent32(Core::System& system, Handle handle) {
return ClearEvent(system, handle);
}
static ResultCode SignalEvent(Core::System& system, Handle handle) {
LOG_DEBUG(Kernel_SVC, "called. Handle=0x{:08X}", handle);
@@ -1951,10 +2152,13 @@ static ResultCode SignalEvent(Core::System& system, Handle handle) {
}
writable_event->Signal();
system.PrepareReschedule();
return RESULT_SUCCESS;
}
static ResultCode SignalEvent32(Core::System& system, Handle handle) {
return SignalEvent(system, handle);
}
static ResultCode GetProcessInfo(Core::System& system, u64* out, Handle process_handle, u32 type) {
LOG_DEBUG(Kernel_SVC, "called, handle=0x{:08X}, type=0x{:X}", process_handle, type);
@@ -1982,6 +2186,7 @@ static ResultCode GetProcessInfo(Core::System& system, u64* out, Handle process_
}
static ResultCode CreateResourceLimit(Core::System& system, Handle* out_handle) {
std::lock_guard lock{HLE::g_hle_lock};
LOG_DEBUG(Kernel_SVC, "called");
auto& kernel = system.Kernel();
@@ -2139,6 +2344,15 @@ static ResultCode GetThreadList(Core::System& system, u32* out_num_threads, VAdd
return RESULT_SUCCESS;
}
static ResultCode FlushProcessDataCache32(Core::System& system, Handle handle, u32 address,
u32 size) {
// Note(Blinkhawk): For emulation purposes, this is mostly a no-op, since all emulation runs
// at the same cache level on the host architecture, so the data cache does not need
// flushing.
LOG_DEBUG(Kernel_SVC, "called");
return RESULT_SUCCESS;
}
namespace {
struct FunctionDef {
using Func = void(Core::System&);
@@ -2153,57 +2367,57 @@ static const FunctionDef SVC_Table_32[] = {
{0x00, nullptr, "Unknown"},
{0x01, SvcWrap32<SetHeapSize32>, "SetHeapSize32"},
{0x02, nullptr, "Unknown"},
{0x03, nullptr, "SetMemoryAttribute32"},
{0x04, nullptr, "MapMemory32"},
{0x05, nullptr, "UnmapMemory32"},
{0x03, SvcWrap32<SetMemoryAttribute32>, "SetMemoryAttribute32"},
{0x04, SvcWrap32<MapMemory32>, "MapMemory32"},
{0x05, SvcWrap32<UnmapMemory32>, "UnmapMemory32"},
{0x06, SvcWrap32<QueryMemory32>, "QueryMemory32"},
{0x07, nullptr, "ExitProcess32"},
{0x08, nullptr, "CreateThread32"},
{0x09, nullptr, "StartThread32"},
{0x0a, nullptr, "ExitThread32"},
{0x0b, nullptr, "SleepThread32"},
{0x07, SvcWrap32<ExitProcess32>, "ExitProcess32"},
{0x08, SvcWrap32<CreateThread32>, "CreateThread32"},
{0x09, SvcWrap32<StartThread32>, "StartThread32"},
{0x0a, SvcWrap32<ExitThread32>, "ExitThread32"},
{0x0b, SvcWrap32<SleepThread32>, "SleepThread32"},
{0x0c, SvcWrap32<GetThreadPriority32>, "GetThreadPriority32"},
{0x0d, nullptr, "SetThreadPriority32"},
{0x0e, nullptr, "GetThreadCoreMask32"},
{0x0f, nullptr, "SetThreadCoreMask32"},
{0x10, nullptr, "GetCurrentProcessorNumber32"},
{0x11, nullptr, "SignalEvent32"},
{0x12, nullptr, "ClearEvent32"},
{0x13, nullptr, "MapSharedMemory32"},
{0x0d, SvcWrap32<SetThreadPriority32>, "SetThreadPriority32"},
{0x0e, SvcWrap32<GetThreadCoreMask32>, "GetThreadCoreMask32"},
{0x0f, SvcWrap32<SetThreadCoreMask32>, "SetThreadCoreMask32"},
{0x10, SvcWrap32<GetCurrentProcessorNumber32>, "GetCurrentProcessorNumber32"},
{0x11, SvcWrap32<SignalEvent32>, "SignalEvent32"},
{0x12, SvcWrap32<ClearEvent32>, "ClearEvent32"},
{0x13, SvcWrap32<MapSharedMemory32>, "MapSharedMemory32"},
{0x14, nullptr, "UnmapSharedMemory32"},
{0x15, nullptr, "CreateTransferMemory32"},
{0x15, SvcWrap32<CreateTransferMemory32>, "CreateTransferMemory32"},
{0x16, SvcWrap32<CloseHandle32>, "CloseHandle32"},
{0x17, nullptr, "ResetSignal32"},
{0x17, SvcWrap32<ResetSignal32>, "ResetSignal32"},
{0x18, SvcWrap32<WaitSynchronization32>, "WaitSynchronization32"},
{0x19, nullptr, "CancelSynchronization32"},
{0x1a, nullptr, "ArbitrateLock32"},
{0x1b, nullptr, "ArbitrateUnlock32"},
{0x1c, nullptr, "WaitProcessWideKeyAtomic32"},
{0x19, SvcWrap32<CancelSynchronization32>, "CancelSynchronization32"},
{0x1a, SvcWrap32<ArbitrateLock32>, "ArbitrateLock32"},
{0x1b, SvcWrap32<ArbitrateUnlock32>, "ArbitrateUnlock32"},
{0x1c, SvcWrap32<WaitProcessWideKeyAtomic32>, "WaitProcessWideKeyAtomic32"},
{0x1d, SvcWrap32<SignalProcessWideKey32>, "SignalProcessWideKey32"},
{0x1e, nullptr, "GetSystemTick32"},
{0x1e, SvcWrap32<GetSystemTick32>, "GetSystemTick32"},
{0x1f, SvcWrap32<ConnectToNamedPort32>, "ConnectToNamedPort32"},
{0x20, nullptr, "Unknown"},
{0x21, SvcWrap32<SendSyncRequest32>, "SendSyncRequest32"},
{0x22, nullptr, "SendSyncRequestWithUserBuffer32"},
{0x23, nullptr, "Unknown"},
{0x24, nullptr, "GetProcessId32"},
{0x24, SvcWrap32<GetProcessId32>, "GetProcessId32"},
{0x25, SvcWrap32<GetThreadId32>, "GetThreadId32"},
{0x26, nullptr, "Break32"},
{0x26, SvcWrap32<Break32>, "Break32"},
{0x27, nullptr, "OutputDebugString32"},
{0x28, nullptr, "Unknown"},
{0x29, SvcWrap32<GetInfo32>, "GetInfo32"},
{0x2a, nullptr, "Unknown"},
{0x2b, nullptr, "Unknown"},
{0x2c, nullptr, "MapPhysicalMemory32"},
{0x2d, nullptr, "UnmapPhysicalMemory32"},
{0x2c, SvcWrap32<MapPhysicalMemory32>, "MapPhysicalMemory32"},
{0x2d, SvcWrap32<UnmapPhysicalMemory32>, "UnmapPhysicalMemory32"},
{0x2e, nullptr, "Unknown"},
{0x2f, nullptr, "Unknown"},
{0x30, nullptr, "Unknown"},
{0x31, nullptr, "Unknown"},
{0x32, nullptr, "SetThreadActivity32"},
{0x33, nullptr, "GetThreadContext32"},
{0x34, nullptr, "WaitForAddress32"},
{0x35, nullptr, "SignalToAddress32"},
{0x32, SvcWrap32<SetThreadActivity32>, "SetThreadActivity32"},
{0x33, SvcWrap32<GetThreadContext32>, "GetThreadContext32"},
{0x34, SvcWrap32<WaitForAddress32>, "WaitForAddress32"},
{0x35, SvcWrap32<SignalToAddress32>, "SignalToAddress32"},
{0x36, nullptr, "Unknown"},
{0x37, nullptr, "Unknown"},
{0x38, nullptr, "Unknown"},
@@ -2219,7 +2433,7 @@ static const FunctionDef SVC_Table_32[] = {
{0x42, nullptr, "Unknown"},
{0x43, nullptr, "ReplyAndReceive32"},
{0x44, nullptr, "Unknown"},
{0x45, nullptr, "CreateEvent32"},
{0x45, SvcWrap32<CreateEvent32>, "CreateEvent32"},
{0x46, nullptr, "Unknown"},
{0x47, nullptr, "Unknown"},
{0x48, nullptr, "Unknown"},
@@ -2245,7 +2459,7 @@ static const FunctionDef SVC_Table_32[] = {
{0x5c, nullptr, "Unknown"},
{0x5d, nullptr, "Unknown"},
{0x5e, nullptr, "Unknown"},
{0x5F, nullptr, "FlushProcessDataCache32"},
{0x5F, SvcWrap32<FlushProcessDataCache32>, "FlushProcessDataCache32"},
{0x60, nullptr, "Unknown"},
{0x61, nullptr, "Unknown"},
{0x62, nullptr, "Unknown"},
@@ -2423,13 +2637,10 @@ static const FunctionDef* GetSVCInfo64(u32 func_num) {
return &SVC_Table_64[func_num];
}
MICROPROFILE_DEFINE(Kernel_SVC, "Kernel", "SVC", MP_RGB(70, 200, 70));
void Call(Core::System& system, u32 immediate) {
MICROPROFILE_SCOPE(Kernel_SVC);
// Lock the global kernel mutex when we enter the kernel HLE.
std::lock_guard lock{HLE::g_hle_lock};
system.ExitDynarmicProfile();
auto& kernel = system.Kernel();
kernel.EnterSVCProfile();
const FunctionDef* info = system.CurrentProcess()->Is64BitProcess() ? GetSVCInfo64(immediate)
: GetSVCInfo32(immediate);
@@ -2442,6 +2653,9 @@ void Call(Core::System& system, u32 immediate) {
} else {
LOG_CRITICAL(Kernel_SVC, "Unknown SVC function 0x{:X}", immediate);
}
kernel.ExitSVCProfile();
system.EnterDynarmicProfile();
}
} // namespace Kernel::Svc

View File

@@ -350,13 +350,50 @@ void SvcWrap64(Core::System& system) {
func(system, static_cast<u32>(Param(system, 0)), Param(system, 1), Param(system, 2));
}
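// Each SvcWrap32 overload below adapts one 32-bit SVC signature: guest arguments are read via
// Param()/Param32(), out-parameters are written back to the low ARM registers, and any
// ResultCode is returned through FuncReturn()/FuncReturn32().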
// Used by QueryMemory32
// Used by QueryMemory32, ArbitrateLock32
template <ResultCode func(Core::System&, u32, u32, u32)>
void SvcWrap32(Core::System& system) {
FuncReturn32(system,
func(system, Param32(system, 0), Param32(system, 1), Param32(system, 2)).raw);
}
// Used by Break32
template <void func(Core::System&, u32, u32, u32)>
void SvcWrap32(Core::System& system) {
func(system, Param32(system, 0), Param32(system, 1), Param32(system, 2));
}
// Used by ExitProcess32, ExitThread32
template <void func(Core::System&)>
void SvcWrap32(Core::System& system) {
func(system);
}
// Used by GetCurrentProcessorNumber32
template <u32 func(Core::System&)>
void SvcWrap32(Core::System& system) {
FuncReturn32(system, func(system));
}
// Used by SleepThread32
template <void func(Core::System&, u32, u32)>
void SvcWrap32(Core::System& system) {
func(system, Param32(system, 0), Param32(system, 1));
}
// Used by CreateThread32
template <ResultCode func(Core::System&, Handle*, u32, u32, u32, u32, s32)>
void SvcWrap32(Core::System& system) {
Handle param_1 = 0;
const u32 retval = func(system, &param_1, Param32(system, 0), Param32(system, 1),
Param32(system, 2), Param32(system, 3), Param32(system, 4))
.raw;
system.CurrentArmInterface().SetReg(1, param_1);
FuncReturn(system, retval);
}
// Used by GetInfo32
template <ResultCode func(Core::System&, u32*, u32*, u32, u32, u32, u32)>
void SvcWrap32(Core::System& system) {
@@ -393,18 +430,114 @@ void SvcWrap32(Core::System& system) {
FuncReturn(system, retval);
}
// Used by GetSystemTick32
template <void func(Core::System&, u32*, u32*)>
void SvcWrap32(Core::System& system) {
u32 param_1 = 0;
u32 param_2 = 0;
func(system, &param_1, &param_2);
system.CurrentArmInterface().SetReg(0, param_1);
system.CurrentArmInterface().SetReg(1, param_2);
}
// Used by CreateEvent32
template <ResultCode func(Core::System&, Handle*, Handle*)>
void SvcWrap32(Core::System& system) {
Handle param_1 = 0;
Handle param_2 = 0;
const u32 retval = func(system, &param_1, &param_2).raw;
system.CurrentArmInterface().SetReg(1, param_1);
system.CurrentArmInterface().SetReg(2, param_2);
FuncReturn(system, retval);
}
// Used by GetThreadId32
template <ResultCode func(Core::System&, Handle, u32*, u32*, u32*)>
void SvcWrap32(Core::System& system) {
u32 param_1 = 0;
u32 param_2 = 0;
u32 param_3 = 0;
const u32 retval = func(system, Param32(system, 2), &param_1, &param_2, &param_3).raw;
system.CurrentArmInterface().SetReg(1, param_1);
system.CurrentArmInterface().SetReg(2, param_2);
system.CurrentArmInterface().SetReg(3, param_3);
FuncReturn(system, retval);
}
// Used by SignalProcessWideKey32
template <void func(Core::System&, u32, s32)>
void SvcWrap32(Core::System& system) {
func(system, static_cast<u32>(Param(system, 0)), static_cast<s32>(Param(system, 1)));
}
// Used by SendSyncRequest32
// Used by SetThreadPriority32
template <ResultCode func(Core::System&, Handle, u32)>
void SvcWrap32(Core::System& system) {
const u32 retval =
func(system, static_cast<Handle>(Param(system, 0)), static_cast<u32>(Param(system, 1))).raw;
FuncReturn(system, retval);
}
// Used by SetThreadCoreMask32
template <ResultCode func(Core::System&, Handle, u32, u32, u32)>
void SvcWrap32(Core::System& system) {
const u32 retval =
func(system, static_cast<Handle>(Param(system, 0)), static_cast<u32>(Param(system, 1)),
static_cast<u32>(Param(system, 2)), static_cast<u32>(Param(system, 3)))
.raw;
FuncReturn(system, retval);
}
// Used by WaitProcessWideKeyAtomic32
template <ResultCode func(Core::System&, u32, u32, Handle, u32, u32)>
void SvcWrap32(Core::System& system) {
const u32 retval =
func(system, static_cast<u32>(Param(system, 0)), static_cast<u32>(Param(system, 1)),
static_cast<Handle>(Param(system, 2)), static_cast<u32>(Param(system, 3)),
static_cast<u32>(Param(system, 4)))
.raw;
FuncReturn(system, retval);
}
// Used by WaitForAddress32
template <ResultCode func(Core::System&, u32, u32, s32, u32, u32)>
void SvcWrap32(Core::System& system) {
const u32 retval = func(system, static_cast<u32>(Param(system, 0)),
static_cast<u32>(Param(system, 1)), static_cast<s32>(Param(system, 2)),
static_cast<u32>(Param(system, 3)), static_cast<u32>(Param(system, 4)))
.raw;
FuncReturn(system, retval);
}
// Used by SignalToAddress32
template <ResultCode func(Core::System&, u32, u32, s32, s32)>
void SvcWrap32(Core::System& system) {
const u32 retval =
func(system, static_cast<u32>(Param(system, 0)), static_cast<u32>(Param(system, 1)),
static_cast<s32>(Param(system, 2)), static_cast<s32>(Param(system, 3)))
.raw;
FuncReturn(system, retval);
}
// Used by SendSyncRequest32, ArbitrateUnlock32
template <ResultCode func(Core::System&, u32)>
void SvcWrap32(Core::System& system) {
FuncReturn(system, func(system, static_cast<u32>(Param(system, 0))).raw);
}
// Used by CreateTransferMemory32
template <ResultCode func(Core::System&, Handle*, u32, u32, u32)>
void SvcWrap32(Core::System& system) {
Handle handle = 0;
const u32 retval =
func(system, &handle, Param32(system, 1), Param32(system, 2), Param32(system, 3)).raw;
system.CurrentArmInterface().SetReg(1, handle);
FuncReturn(system, retval);
}
// Used by WaitSynchronization32
template <ResultCode func(Core::System&, u32, u32, s32, u32, Handle*)>
void SvcWrap32(Core::System& system) {

View File

@@ -10,78 +10,107 @@
#include "core/hle/kernel/synchronization.h"
#include "core/hle/kernel/synchronization_object.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/kernel/time_manager.h"
namespace Kernel {
/// Default thread wakeup callback for WaitSynchronization
static bool DefaultThreadWakeupCallback(ThreadWakeupReason reason, std::shared_ptr<Thread> thread,
std::shared_ptr<SynchronizationObject> object,
std::size_t index) {
ASSERT(thread->GetStatus() == ThreadStatus::WaitSynch);
if (reason == ThreadWakeupReason::Timeout) {
thread->SetWaitSynchronizationResult(RESULT_TIMEOUT);
return true;
}
ASSERT(reason == ThreadWakeupReason::Signal);
thread->SetWaitSynchronizationResult(RESULT_SUCCESS);
thread->SetWaitSynchronizationOutput(static_cast<u32>(index));
return true;
}
Synchronization::Synchronization(Core::System& system) : system{system} {}
void Synchronization::SignalObject(SynchronizationObject& obj) const {
auto& kernel = system.Kernel();
SchedulerLock lock(kernel);
auto& time_manager = kernel.TimeManager();
if (obj.IsSignaled()) {
obj.WakeupAllWaitingThreads();
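// Wake each waiting thread that is still paused: hand it a successful synchronization result
// and resume it, then clear the waiter list.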
for (auto thread : obj.GetWaitingThreads()) {
if (thread->GetSchedulingStatus() == ThreadSchedStatus::Paused) {
if (thread->GetStatus() != ThreadStatus::WaitHLEEvent) {
ASSERT(thread->GetStatus() == ThreadStatus::WaitSynch);
ASSERT(thread->IsWaitingSync());
}
thread->SetSynchronizationResults(&obj, RESULT_SUCCESS);
thread->ResumeFromWait();
}
}
obj.ClearWaitingThreads();
}
}
std::pair<ResultCode, Handle> Synchronization::WaitFor(
std::vector<std::shared_ptr<SynchronizationObject>>& sync_objects, s64 nano_seconds) {
auto& kernel = system.Kernel();
auto* const thread = system.CurrentScheduler().GetCurrentThread();
// Find the first object that is acquirable in the provided list of objects
const auto itr = std::find_if(sync_objects.begin(), sync_objects.end(),
[thread](const std::shared_ptr<SynchronizationObject>& object) {
return object->IsSignaled();
});
Handle event_handle = InvalidHandle;
{
SchedulerLockAndSleep lock(kernel, event_handle, thread, nano_seconds);
const auto itr =
std::find_if(sync_objects.begin(), sync_objects.end(),
[thread](const std::shared_ptr<SynchronizationObject>& object) {
return object->IsSignaled();
});
if (itr != sync_objects.end()) {
// We found a ready object, acquire it and set the result value
SynchronizationObject* object = itr->get();
object->Acquire(thread);
const u32 index = static_cast<s32>(std::distance(sync_objects.begin(), itr));
return {RESULT_SUCCESS, index};
if (itr != sync_objects.end()) {
// We found a ready object, acquire it and set the result value
SynchronizationObject* object = itr->get();
object->Acquire(thread);
const u32 index = static_cast<s32>(std::distance(sync_objects.begin(), itr));
lock.CancelSleep();
return {RESULT_SUCCESS, index};
}
if (nano_seconds == 0) {
lock.CancelSleep();
return {RESULT_TIMEOUT, InvalidHandle};
}
if (thread->IsPendingTermination()) {
lock.CancelSleep();
return {ERR_THREAD_TERMINATING, InvalidHandle};
}
if (thread->IsSyncCancelled()) {
thread->SetSyncCancelled(false);
lock.CancelSleep();
return {ERR_SYNCHRONIZATION_CANCELED, InvalidHandle};
}
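// No object was ready: register this thread as a waiter on every object and mark it as
// waiting; the timeout event is armed when the SchedulerLockAndSleep scope ends (unless
// CancelSleep() was called).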
for (auto& object : sync_objects) {
object->AddWaitingThread(SharedFrom(thread));
}
thread->SetSynchronizationObjects(&sync_objects);
thread->SetSynchronizationResults(nullptr, RESULT_TIMEOUT);
thread->SetStatus(ThreadStatus::WaitSynch);
thread->SetWaitingSync(true);
}
thread->SetWaitingSync(false);
if (event_handle != InvalidHandle) {
auto& time_manager = kernel.TimeManager();
time_manager.UnscheduleTimeEvent(event_handle);
}
// No objects were ready to be acquired, prepare to suspend the thread.
// If a timeout value of 0 was provided, just return the Timeout error code instead of
// suspending the thread.
if (nano_seconds == 0) {
return {RESULT_TIMEOUT, InvalidHandle};
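// After waking up, determine which object (if any) signaled this thread, detach it from all
// objects it was waiting on, and acquire the signaling object before returning its index
// alongside the signaling result.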
{
SchedulerLock lock(kernel);
ResultCode signaling_result = thread->GetSignalingResult();
SynchronizationObject* signaling_object = thread->GetSignalingObject();
thread->SetSynchronizationObjects(nullptr);
auto shared_thread = SharedFrom(thread);
for (auto& obj : sync_objects) {
obj->RemoveWaitingThread(shared_thread);
}
if (signaling_object != nullptr) {
const auto itr = std::find_if(
sync_objects.begin(), sync_objects.end(),
[signaling_object](const std::shared_ptr<SynchronizationObject>& object) {
return object.get() == signaling_object;
});
ASSERT(itr != sync_objects.end());
signaling_object->Acquire(thread);
const u32 index = static_cast<s32>(std::distance(sync_objects.begin(), itr));
return {signaling_result, index};
}
return {signaling_result, -1};
}
if (thread->IsSyncCancelled()) {
thread->SetSyncCancelled(false);
return {ERR_SYNCHRONIZATION_CANCELED, InvalidHandle};
}
for (auto& object : sync_objects) {
object->AddWaitingThread(SharedFrom(thread));
}
thread->SetSynchronizationObjects(std::move(sync_objects));
thread->SetStatus(ThreadStatus::WaitSynch);
// Create an event to wake the thread up after the specified nanosecond delay has passed
thread->WakeAfterDelay(nano_seconds);
thread->SetWakeupCallback(DefaultThreadWakeupCallback);
system.PrepareReschedule(thread->GetProcessorID());
return {RESULT_TIMEOUT, InvalidHandle};
}
} // namespace Kernel

View File

@@ -38,68 +38,8 @@ void SynchronizationObject::RemoveWaitingThread(std::shared_ptr<Thread> thread)
waiting_threads.erase(itr);
}
std::shared_ptr<Thread> SynchronizationObject::GetHighestPriorityReadyThread() const {
Thread* candidate = nullptr;
u32 candidate_priority = THREADPRIO_LOWEST + 1;
for (const auto& thread : waiting_threads) {
const ThreadStatus thread_status = thread->GetStatus();
// The list of waiting threads must not contain threads that are not waiting to be awakened.
ASSERT_MSG(thread_status == ThreadStatus::WaitSynch ||
thread_status == ThreadStatus::WaitHLEEvent,
"Inconsistent thread statuses in waiting_threads");
if (thread->GetPriority() >= candidate_priority)
continue;
if (ShouldWait(thread.get()))
continue;
candidate = thread.get();
candidate_priority = thread->GetPriority();
}
return SharedFrom(candidate);
}
void SynchronizationObject::WakeupWaitingThread(std::shared_ptr<Thread> thread) {
ASSERT(!ShouldWait(thread.get()));
if (!thread) {
return;
}
if (thread->IsSleepingOnWait()) {
for (const auto& object : thread->GetSynchronizationObjects()) {
ASSERT(!object->ShouldWait(thread.get()));
object->Acquire(thread.get());
}
} else {
Acquire(thread.get());
}
const std::size_t index = thread->GetSynchronizationObjectIndex(SharedFrom(this));
thread->ClearSynchronizationObjects();
thread->CancelWakeupTimer();
bool resume = true;
if (thread->HasWakeupCallback()) {
resume = thread->InvokeWakeupCallback(ThreadWakeupReason::Signal, thread, SharedFrom(this),
index);
}
if (resume) {
thread->ResumeFromWait();
kernel.PrepareReschedule(thread->GetProcessorID());
}
}
void SynchronizationObject::WakeupAllWaitingThreads() {
while (auto thread = GetHighestPriorityReadyThread()) {
WakeupWaitingThread(thread);
}
void SynchronizationObject::ClearWaitingThreads() {
waiting_threads.clear();
}
const std::vector<std::shared_ptr<Thread>>& SynchronizationObject::GetWaitingThreads() const {

View File

@@ -12,6 +12,7 @@
namespace Kernel {
class KernelCore;
class Synchronization;
class Thread;
/// Class that represents a Kernel object that a thread can be waiting on
@@ -49,24 +50,11 @@ public:
*/
void RemoveWaitingThread(std::shared_ptr<Thread> thread);
/**
* Wake up all threads waiting on this object that can be awoken, in priority order,
* and set the synchronization result and output of the thread.
*/
void WakeupAllWaitingThreads();
/**
* Wakes up a single thread waiting on this object.
* @param thread Thread that is waiting on this object to wakeup.
*/
void WakeupWaitingThread(std::shared_ptr<Thread> thread);
/// Obtains the highest priority thread that is ready to run from this object's waiting list.
std::shared_ptr<Thread> GetHighestPriorityReadyThread() const;
/// Get a const reference to the waiting threads list for debug use
const std::vector<std::shared_ptr<Thread>>& GetWaitingThreads() const;
void ClearWaitingThreads();
protected:
bool is_signaled{}; // Tells whether this sync object is signaled

View File

@@ -9,12 +9,21 @@
#include "common/assert.h"
#include "common/common_types.h"
#include "common/fiber.h"
#include "common/logging/log.h"
#include "common/thread_queue_list.h"
#include "core/arm/arm_interface.h"
#ifdef ARCHITECTURE_x86_64
#include "core/arm/dynarmic/arm_dynarmic_32.h"
#include "core/arm/dynarmic/arm_dynarmic_64.h"
#endif
#include "core/arm/cpu_interrupt_handler.h"
#include "core/arm/exclusive_monitor.h"
#include "core/arm/unicorn/arm_unicorn.h"
#include "core/core.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
#include "core/cpu_manager.h"
#include "core/hardware_properties.h"
#include "core/hle/kernel/errors.h"
#include "core/hle/kernel/handle_table.h"
@@ -23,6 +32,7 @@
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/kernel/time_manager.h"
#include "core/hle/result.h"
#include "core/memory.h"
@@ -44,46 +54,26 @@ Thread::Thread(KernelCore& kernel) : SynchronizationObject{kernel} {}
Thread::~Thread() = default;
void Thread::Stop() {
// Cancel any outstanding wakeup events for this thread
Core::System::GetInstance().CoreTiming().UnscheduleEvent(kernel.ThreadWakeupCallbackEventType(),
global_handle);
kernel.GlobalHandleTable().Close(global_handle);
global_handle = 0;
SetStatus(ThreadStatus::Dead);
Signal();
{
SchedulerLock lock(kernel);
SetStatus(ThreadStatus::Dead);
Signal();
kernel.GlobalHandleTable().Close(global_handle);
// Clean up any dangling references in objects that this thread was waiting for
for (auto& wait_object : wait_objects) {
wait_object->RemoveWaitingThread(SharedFrom(this));
if (owner_process) {
owner_process->UnregisterThread(this);
// Mark the TLS slot in the thread's page as free.
owner_process->FreeTLSRegion(tls_address);
}
arm_interface.reset();
has_exited = true;
}
wait_objects.clear();
owner_process->UnregisterThread(this);
// Mark the TLS slot in the thread's page as free.
owner_process->FreeTLSRegion(tls_address);
}
void Thread::WakeAfterDelay(s64 nanoseconds) {
// Don't schedule a wakeup if the thread wants to wait forever
if (nanoseconds == -1)
return;
// This function might be called from any thread so we have to be cautious and use the
// thread-safe version of ScheduleEvent.
const s64 cycles = Core::Timing::nsToCycles(std::chrono::nanoseconds{nanoseconds});
Core::System::GetInstance().CoreTiming().ScheduleEvent(
cycles, kernel.ThreadWakeupCallbackEventType(), global_handle);
}
void Thread::CancelWakeupTimer() {
Core::System::GetInstance().CoreTiming().UnscheduleEvent(kernel.ThreadWakeupCallbackEventType(),
global_handle);
global_handle = 0;
}
void Thread::ResumeFromWait() {
ASSERT_MSG(wait_objects.empty(), "Thread is waking up while waiting for objects");
SchedulerLock lock(kernel);
switch (status) {
case ThreadStatus::Paused:
case ThreadStatus::WaitSynch:
@@ -99,7 +89,7 @@ void Thread::ResumeFromWait() {
case ThreadStatus::Ready:
// The thread's wakeup callback must have already been cleared when the thread was first
// awoken.
ASSERT(wakeup_callback == nullptr);
ASSERT(hle_callback == nullptr);
// If the thread is waiting on multiple wait objects, it might be awoken more than once
// before actually resuming. We can ignore subsequent wakeups if the thread status has
// already been set to ThreadStatus::Ready.
@@ -115,24 +105,31 @@ void Thread::ResumeFromWait() {
return;
}
wakeup_callback = nullptr;
SetStatus(ThreadStatus::Ready);
}
if (activity == ThreadActivity::Paused) {
SetStatus(ThreadStatus::Paused);
return;
}
void Thread::OnWakeUp() {
SchedulerLock lock(kernel);
SetStatus(ThreadStatus::Ready);
}
ResultCode Thread::Start() {
SchedulerLock lock(kernel);
SetStatus(ThreadStatus::Ready);
return RESULT_SUCCESS;
}
void Thread::CancelWait() {
if (GetSchedulingStatus() != ThreadSchedStatus::Paused) {
SchedulerLock lock(kernel);
if (GetSchedulingStatus() != ThreadSchedStatus::Paused || !is_waiting_on_sync) {
is_sync_cancelled = true;
return;
}
// TODO(Blinkhawk): Implement cancel of server session
is_sync_cancelled = false;
SetWaitSynchronizationResult(ERR_SYNCHRONIZATION_CANCELED);
ResumeFromWait();
SetSynchronizationResults(nullptr, ERR_SYNCHRONIZATION_CANCELED);
SetStatus(ThreadStatus::Ready);
}
static void ResetThreadContext32(Core::ARM_Interface::ThreadContext32& context, u32 stack_top,
@@ -153,12 +150,29 @@ static void ResetThreadContext64(Core::ARM_Interface::ThreadContext64& context,
context.fpcr = 0;
}
ResultVal<std::shared_ptr<Thread>> Thread::Create(KernelCore& kernel, std::string name,
VAddr entry_point, u32 priority, u64 arg,
s32 processor_id, VAddr stack_top,
Process& owner_process) {
std::shared_ptr<Common::Fiber>& Thread::GetHostContext() {
return host_context;
}
ResultVal<std::shared_ptr<Thread>> Thread::Create(Core::System& system, ThreadType type_flags,
std::string name, VAddr entry_point, u32 priority,
u64 arg, s32 processor_id, VAddr stack_top,
Process* owner_process) {
std::function<void(void*)> init_func = system.GetCpuManager().GetGuestThreadStartFunc();
void* init_func_parameter = system.GetCpuManager().GetStartFuncParamater();
return Create(system, type_flags, name, entry_point, priority, arg, processor_id, stack_top,
owner_process, std::move(init_func), init_func_parameter);
}
ResultVal<std::shared_ptr<Thread>> Thread::Create(Core::System& system, ThreadType type_flags,
std::string name, VAddr entry_point, u32 priority,
u64 arg, s32 processor_id, VAddr stack_top,
Process* owner_process,
std::function<void(void*)>&& thread_start_func,
void* thread_start_parameter) {
auto& kernel = system.Kernel();
// Check if the priority is in range. Lowest priority -> highest priority id.
if (priority > THREADPRIO_LOWEST) {
if (priority > THREADPRIO_LOWEST && ((type_flags & THREADTYPE_IDLE) == 0)) {
LOG_ERROR(Kernel_SVC, "Invalid thread priority: {}", priority);
return ERR_INVALID_THREAD_PRIORITY;
}
@@ -168,11 +182,12 @@ ResultVal<std::shared_ptr<Thread>> Thread::Create(KernelCore& kernel, std::strin
return ERR_INVALID_PROCESSOR_ID;
}
auto& system = Core::System::GetInstance();
if (!system.Memory().IsValidVirtualAddress(owner_process, entry_point)) {
LOG_ERROR(Kernel_SVC, "(name={}): invalid entry {:016X}", name, entry_point);
// TODO (bunnei): Find the correct error code to use here
return RESULT_UNKNOWN;
if (owner_process) {
if (!system.Memory().IsValidVirtualAddress(*owner_process, entry_point)) {
LOG_ERROR(Kernel_SVC, "(name={}): invalid entry {:016X}", name, entry_point);
// TODO (bunnei): Find the correct error code to use here
return RESULT_UNKNOWN;
}
}
std::shared_ptr<Thread> thread = std::make_shared<Thread>(kernel);
@@ -183,51 +198,82 @@ ResultVal<std::shared_ptr<Thread>> Thread::Create(KernelCore& kernel, std::strin
thread->stack_top = stack_top;
thread->tpidr_el0 = 0;
thread->nominal_priority = thread->current_priority = priority;
thread->last_running_ticks = system.CoreTiming().GetTicks();
thread->last_running_ticks = 0;
thread->processor_id = processor_id;
thread->ideal_core = processor_id;
thread->affinity_mask = 1ULL << processor_id;
thread->wait_objects.clear();
thread->wait_objects = nullptr;
thread->mutex_wait_address = 0;
thread->condvar_wait_address = 0;
thread->wait_handle = 0;
thread->name = std::move(name);
thread->global_handle = kernel.GlobalHandleTable().Create(thread).Unwrap();
thread->owner_process = &owner_process;
auto& scheduler = kernel.GlobalScheduler();
scheduler.AddThread(thread);
thread->tls_address = thread->owner_process->CreateTLSRegion();
thread->owner_process = owner_process;
thread->type = type_flags;
if ((type_flags & THREADTYPE_IDLE) == 0) {
auto& scheduler = kernel.GlobalScheduler();
scheduler.AddThread(thread);
}
if (owner_process) {
thread->tls_address = thread->owner_process->CreateTLSRegion();
thread->owner_process->RegisterThread(thread.get());
} else {
thread->tls_address = 0;
}
// TODO(peachum): move to ScheduleThread() when scheduler is added so selected core is used
// to initialize the context
thread->arm_interface.reset();
if ((type_flags & THREADTYPE_HLE) == 0) {
#ifdef ARCHITECTURE_x86_64
if (owner_process && !owner_process->Is64BitProcess()) {
thread->arm_interface = std::make_unique<Core::ARM_Dynarmic_32>(
system, kernel.Interrupts(), kernel.IsMulticore(), kernel.GetExclusiveMonitor(),
processor_id);
} else {
thread->arm_interface = std::make_unique<Core::ARM_Dynarmic_64>(
system, kernel.Interrupts(), kernel.IsMulticore(), kernel.GetExclusiveMonitor(),
processor_id);
}
thread->owner_process->RegisterThread(thread.get());
ResetThreadContext32(thread->context_32, static_cast<u32>(stack_top),
static_cast<u32>(entry_point), static_cast<u32>(arg));
ResetThreadContext64(thread->context_64, stack_top, entry_point, arg);
#else
if (owner_process && !owner_process->Is64BitProcess()) {
thread->arm_interface = std::make_shared<Core::ARM_Unicorn>(
system, kernel.Interrupts(), kernel.IsMulticore(), ARM_Unicorn::Arch::AArch32,
processor_id);
} else {
thread->arm_interface = std::make_shared<Core::ARM_Unicorn>(
system, kernel.Interrupts(), kernel.IsMulticore(), ARM_Unicorn::Arch::AArch64,
processor_id);
}
LOG_WARNING(Core, "CPU JIT requested, but Dynarmic not available");
#endif
ResetThreadContext32(thread->context_32, static_cast<u32>(stack_top),
static_cast<u32>(entry_point), static_cast<u32>(arg));
ResetThreadContext64(thread->context_64, stack_top, entry_point, arg);
}
thread->host_context =
std::make_shared<Common::Fiber>(std::move(thread_start_func), thread_start_parameter);
return MakeResult<std::shared_ptr<Thread>>(std::move(thread));
}
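// A hypothetical call-site sketch for the new Create() overload, assuming "system", a guest
// "owner" Process*, and the entry/stack addresses are already available; error handling is
// elided and the ResultVal is unwrapped the same way the handle table result is above.
static std::shared_ptr<Thread> CreateGuestThreadSketch(Core::System& system, Process* owner,
                                                       VAddr entry_point, VAddr stack_top) {
    auto thread = Thread::Create(system, THREADTYPE_USER, "guest.thread", entry_point,
                                 THREADPRIO_DEFAULT, /*arg=*/0, /*processor_id=*/0, stack_top,
                                 owner)
                      .Unwrap();
    thread->Start(); // moves the thread to ThreadStatus::Ready under the scheduler lock
    return thread;
}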
void Thread::SetPriority(u32 priority) {
SchedulerLock lock(kernel);
ASSERT_MSG(priority <= THREADPRIO_LOWEST && priority >= THREADPRIO_HIGHEST,
"Invalid priority value.");
nominal_priority = priority;
UpdatePriority();
}
void Thread::SetWaitSynchronizationResult(ResultCode result) {
context_32.cpu_registers[0] = result.raw;
context_64.cpu_registers[0] = result.raw;
}
void Thread::SetWaitSynchronizationOutput(s32 output) {
context_32.cpu_registers[1] = output;
context_64.cpu_registers[1] = output;
void Thread::SetSynchronizationResults(SynchronizationObject* object, ResultCode result) {
signaling_object = object;
signaling_result = result;
}
s32 Thread::GetSynchronizationObjectIndex(std::shared_ptr<SynchronizationObject> object) const {
ASSERT_MSG(!wait_objects.empty(), "Thread is not waiting for anything");
const auto match = std::find(wait_objects.rbegin(), wait_objects.rend(), object);
return static_cast<s32>(std::distance(match, wait_objects.rend()) - 1);
ASSERT_MSG(!wait_objects->empty(), "Thread is not waiting for anything");
const auto match = std::find(wait_objects->rbegin(), wait_objects->rend(), object);
return static_cast<s32>(std::distance(match, wait_objects->rend()) - 1);
}
VAddr Thread::GetCommandBufferAddress() const {
@@ -236,6 +282,14 @@ VAddr Thread::GetCommandBufferAddress() const {
return GetTLSAddress() + command_header_offset;
}
Core::ARM_Interface& Thread::ArmInterface() {
return *arm_interface;
}
const Core::ARM_Interface& Thread::ArmInterface() const {
return *arm_interface;
}
void Thread::SetStatus(ThreadStatus new_status) {
if (new_status == status) {
return;
@@ -257,10 +311,6 @@ void Thread::SetStatus(ThreadStatus new_status) {
break;
}
if (status == ThreadStatus::Running) {
last_running_ticks = Core::System::GetInstance().CoreTiming().GetTicks();
}
status = new_status;
}
@@ -341,75 +391,116 @@ void Thread::UpdatePriority() {
lock_owner->UpdatePriority();
}
void Thread::ChangeCore(u32 core, u64 mask) {
SetCoreAndAffinityMask(core, mask);
}
bool Thread::AllSynchronizationObjectsReady() const {
return std::none_of(wait_objects.begin(), wait_objects.end(),
return std::none_of(wait_objects->begin(), wait_objects->end(),
[this](const std::shared_ptr<SynchronizationObject>& object) {
return object->ShouldWait(this);
});
}
bool Thread::InvokeWakeupCallback(ThreadWakeupReason reason, std::shared_ptr<Thread> thread,
std::shared_ptr<SynchronizationObject> object,
std::size_t index) {
ASSERT(wakeup_callback);
return wakeup_callback(reason, std::move(thread), std::move(object), index);
bool Thread::InvokeHLECallback(std::shared_ptr<Thread> thread) {
ASSERT(hle_callback);
return hle_callback(std::move(thread));
}
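// A hypothetical sketch of the service side that the HLE callback replaces the old wakeup
// callback for; the helper name and "timeout_event" are assumptions. Unlike WakeupCallback,
// HLECallback only receives the thread, so any per-wait state has to live in the lambda
// capture or in the thread's HLE time event / sync object set below.
static void RegisterHLEWaitSketch(std::shared_ptr<Thread> waiter, Handle timeout_event) {
    waiter->SetHLECallback([](std::shared_ptr<Thread> resumed) -> bool {
        // Return true once the wait is satisfied and the thread may resume execution.
        return true;
    });
    waiter->SetHLETimeEvent(timeout_event);
}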
void Thread::SetActivity(ThreadActivity value) {
activity = value;
ResultCode Thread::SetActivity(ThreadActivity value) {
SchedulerLock lock(kernel);
auto sched_status = GetSchedulingStatus();
if (sched_status != ThreadSchedStatus::Runnable && sched_status != ThreadSchedStatus::Paused) {
return ERR_INVALID_STATE;
}
if (IsPendingTermination()) {
return RESULT_SUCCESS;
}
if (value == ThreadActivity::Paused) {
// Set status if not waiting
if (status == ThreadStatus::Ready || status == ThreadStatus::Running) {
SetStatus(ThreadStatus::Paused);
kernel.PrepareReschedule(processor_id);
if ((pausing_state & static_cast<u32>(ThreadSchedFlags::ThreadPauseFlag)) != 0) {
return ERR_INVALID_STATE;
}
} else if (status == ThreadStatus::Paused) {
// Ready to reschedule
ResumeFromWait();
AddSchedulingFlag(ThreadSchedFlags::ThreadPauseFlag);
} else {
if ((pausing_state & static_cast<u32>(ThreadSchedFlags::ThreadPauseFlag)) == 0) {
return ERR_INVALID_STATE;
}
RemoveSchedulingFlag(ThreadSchedFlags::ThreadPauseFlag);
}
return RESULT_SUCCESS;
}
void Thread::Sleep(s64 nanoseconds) {
// Sleep current thread and check for next thread to schedule
SetStatus(ThreadStatus::WaitSleep);
ResultCode Thread::Sleep(s64 nanoseconds) {
Handle event_handle{};
{
SchedulerLockAndSleep lock(kernel, event_handle, this, nanoseconds);
SetStatus(ThreadStatus::WaitSleep);
}
// Create an event to wake the thread up after the specified nanosecond delay has passed
WakeAfterDelay(nanoseconds);
if (event_handle != InvalidHandle) {
auto& time_manager = kernel.TimeManager();
time_manager.UnscheduleTimeEvent(event_handle);
}
return RESULT_SUCCESS;
}
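// A hypothetical svcSleepThread-style sketch, assuming the handler has already resolved the
// current thread; the names are assumptions. With the SchedulerLockAndSleep pattern above,
// Sleep() both parks the thread and arms or cancels the time event itself, so a caller only
// forwards the ResultCode.
static ResultCode SleepCurrentThreadSketch(Thread* current, s64 nanoseconds) {
    return current->Sleep(nanoseconds);
}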
bool Thread::YieldSimple() {
auto& scheduler = kernel.GlobalScheduler();
return scheduler.YieldThread(this);
std::pair<ResultCode, bool> Thread::YieldSimple() {
bool is_redundant = false;
{
SchedulerLock lock(kernel);
is_redundant = kernel.GlobalScheduler().YieldThread(this);
}
return {RESULT_SUCCESS, is_redundant};
}
bool Thread::YieldAndBalanceLoad() {
auto& scheduler = kernel.GlobalScheduler();
return scheduler.YieldThreadAndBalanceLoad(this);
std::pair<ResultCode, bool> Thread::YieldAndBalanceLoad() {
bool is_redundant = false;
{
SchedulerLock lock(kernel);
is_redundant = kernel.GlobalScheduler().YieldThreadAndBalanceLoad(this);
}
return {RESULT_SUCCESS, is_redundant};
}
bool Thread::YieldAndWaitForLoadBalancing() {
auto& scheduler = kernel.GlobalScheduler();
return scheduler.YieldThreadAndWaitForLoadBalancing(this);
std::pair<ResultCode, bool> Thread::YieldAndWaitForLoadBalancing() {
bool is_redundant = false;
{
SchedulerLock lock(kernel);
is_redundant = kernel.GlobalScheduler().YieldThreadAndWaitForLoadBalancing(this);
}
return {RESULT_SUCCESS, is_redundant};
}
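// A hypothetical caller-side sketch: the yield variants now return both a ResultCode and a
// redundancy flag, so an SVC handler can forward the result while still knowing whether the
// yield actually changed anything. The handler name and "current" are assumptions.
static ResultCode YieldCurrentThreadSketch(Thread* current) {
    const auto [result, is_redundant] = current->YieldSimple();
    if (is_redundant) {
        // No other runnable thread was selected; the caller may choose to skip rescheduling.
    }
    return result;
}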
void Thread::AddSchedulingFlag(ThreadSchedFlags flag) {
const u32 old_state = scheduling_state;
pausing_state |= static_cast<u32>(flag);
const u32 base_scheduling = static_cast<u32>(GetSchedulingStatus());
scheduling_state = base_scheduling | pausing_state;
kernel.GlobalScheduler().AdjustSchedulingOnStatus(this, old_state);
}
void Thread::RemoveSchedulingFlag(ThreadSchedFlags flag) {
const u32 old_state = scheduling_state;
pausing_state &= ~static_cast<u32>(flag);
const u32 base_scheduling = static_cast<u32>(GetSchedulingStatus());
scheduling_state = base_scheduling | pausing_state;
kernel.GlobalScheduler().AdjustSchedulingOnStatus(this, old_state);
}
void Thread::SetSchedulingStatus(ThreadSchedStatus new_status) {
const u32 old_flags = scheduling_state;
const u32 old_state = scheduling_state;
scheduling_state = (scheduling_state & static_cast<u32>(ThreadSchedMasks::HighMask)) |
static_cast<u32>(new_status);
AdjustSchedulingOnStatus(old_flags);
kernel.GlobalScheduler().AdjustSchedulingOnStatus(this, old_state);
}
void Thread::SetCurrentPriority(u32 new_priority) {
const u32 old_priority = std::exchange(current_priority, new_priority);
AdjustSchedulingOnPriority(old_priority);
kernel.GlobalScheduler().AdjustSchedulingOnPriority(this, old_priority);
}
ResultCode Thread::SetCoreAndAffinityMask(s32 new_core, u64 new_affinity_mask) {
SchedulerLock lock(kernel);
const auto HighestSetCore = [](u64 mask, u32 max_cores) {
for (s32 core = static_cast<s32>(max_cores - 1); core >= 0; core--) {
if (((mask >> core) & 1) != 0) {
@@ -443,111 +534,12 @@ ResultCode Thread::SetCoreAndAffinityMask(s32 new_core, u64 new_affinity_mask) {
processor_id = ideal_core;
}
}
AdjustSchedulingOnAffinity(old_affinity_mask, old_core);
kernel.GlobalScheduler().AdjustSchedulingOnAffinity(this, old_affinity_mask, old_core);
}
}
return RESULT_SUCCESS;
}
void Thread::AdjustSchedulingOnStatus(u32 old_flags) {
if (old_flags == scheduling_state) {
return;
}
auto& scheduler = kernel.GlobalScheduler();
if (static_cast<ThreadSchedStatus>(old_flags & static_cast<u32>(ThreadSchedMasks::LowMask)) ==
ThreadSchedStatus::Runnable) {
// In this case the thread was running, now it's pausing/exiting
if (processor_id >= 0) {
scheduler.Unschedule(current_priority, static_cast<u32>(processor_id), this);
}
for (u32 core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
if (core != static_cast<u32>(processor_id) && ((affinity_mask >> core) & 1) != 0) {
scheduler.Unsuggest(current_priority, core, this);
}
}
} else if (GetSchedulingStatus() == ThreadSchedStatus::Runnable) {
// The thread is now set to running from being stopped
if (processor_id >= 0) {
scheduler.Schedule(current_priority, static_cast<u32>(processor_id), this);
}
for (u32 core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
if (core != static_cast<u32>(processor_id) && ((affinity_mask >> core) & 1) != 0) {
scheduler.Suggest(current_priority, core, this);
}
}
}
scheduler.SetReselectionPending();
}
void Thread::AdjustSchedulingOnPriority(u32 old_priority) {
if (GetSchedulingStatus() != ThreadSchedStatus::Runnable) {
return;
}
auto& scheduler = kernel.GlobalScheduler();
if (processor_id >= 0) {
scheduler.Unschedule(old_priority, static_cast<u32>(processor_id), this);
}
for (u32 core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
if (core != static_cast<u32>(processor_id) && ((affinity_mask >> core) & 1) != 0) {
scheduler.Unsuggest(old_priority, core, this);
}
}
// Add thread to the new priority queues.
Thread* current_thread = GetCurrentThread();
if (processor_id >= 0) {
if (current_thread == this) {
scheduler.SchedulePrepend(current_priority, static_cast<u32>(processor_id), this);
} else {
scheduler.Schedule(current_priority, static_cast<u32>(processor_id), this);
}
}
for (u32 core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
if (core != static_cast<u32>(processor_id) && ((affinity_mask >> core) & 1) != 0) {
scheduler.Suggest(current_priority, core, this);
}
}
scheduler.SetReselectionPending();
}
void Thread::AdjustSchedulingOnAffinity(u64 old_affinity_mask, s32 old_core) {
auto& scheduler = kernel.GlobalScheduler();
if (GetSchedulingStatus() != ThreadSchedStatus::Runnable ||
current_priority >= THREADPRIO_COUNT) {
return;
}
for (u32 core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
if (((old_affinity_mask >> core) & 1) != 0) {
if (core == static_cast<u32>(old_core)) {
scheduler.Unschedule(current_priority, core, this);
} else {
scheduler.Unsuggest(current_priority, core, this);
}
}
}
for (u32 core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
if (((affinity_mask >> core) & 1) != 0) {
if (core == static_cast<u32>(processor_id)) {
scheduler.Schedule(current_priority, core, this);
} else {
scheduler.Suggest(current_priority, core, this);
}
}
}
scheduler.SetReselectionPending();
}
////////////////////////////////////////////////////////////////////////////////////////////////////
/**


@@ -6,26 +6,47 @@
#include <functional>
#include <string>
#include <utility>
#include <vector>
#include "common/common_types.h"
#include "common/spin_lock.h"
#include "core/arm/arm_interface.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/synchronization_object.h"
#include "core/hle/result.h"
namespace Common {
class Fiber;
}
namespace Core {
class ARM_Interface;
class System;
} // namespace Core
namespace Kernel {
class GlobalScheduler;
class KernelCore;
class Process;
class Scheduler;
enum ThreadPriority : u32 {
THREADPRIO_HIGHEST = 0, ///< Highest thread priority
THREADPRIO_USERLAND_MAX = 24, ///< Highest thread priority for userland apps
THREADPRIO_DEFAULT = 44, ///< Default thread priority for userland apps
THREADPRIO_LOWEST = 63, ///< Lowest thread priority
THREADPRIO_COUNT = 64, ///< Total number of possible thread priorities.
THREADPRIO_HIGHEST = 0, ///< Highest thread priority
THREADPRIO_MAX_CORE_MIGRATION = 2, ///< Highest priority for a core migration
THREADPRIO_USERLAND_MAX = 24, ///< Highest thread priority for userland apps
THREADPRIO_DEFAULT = 44, ///< Default thread priority for userland apps
THREADPRIO_LOWEST = 63, ///< Lowest thread priority
THREADPRIO_COUNT = 64, ///< Total number of possible thread priorities.
};
enum ThreadType : u32 {
THREADTYPE_USER = 0x1,
THREADTYPE_KERNEL = 0x2,
THREADTYPE_HLE = 0x4,
THREADTYPE_IDLE = 0x8,
THREADTYPE_SUSPEND = 0x10,
};
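// ThreadType is used as a bit mask, so one thread can carry several type flags; a hypothetical
// kernel-owned suspend thread could be tagged with both flags below, and the IsIdleThread()
// style accessors in the class test individual bits.
constexpr u32 kSuspendThreadFlagsExample = THREADTYPE_KERNEL | THREADTYPE_SUSPEND;
static_assert((kSuspendThreadFlagsExample & THREADTYPE_IDLE) == 0,
              "example flags do not describe an idle thread");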
enum ThreadProcessorId : s32 {
@@ -107,26 +128,45 @@ public:
using ThreadSynchronizationObjects = std::vector<std::shared_ptr<SynchronizationObject>>;
using WakeupCallback =
std::function<bool(ThreadWakeupReason reason, std::shared_ptr<Thread> thread,
std::shared_ptr<SynchronizationObject> object, std::size_t index)>;
using HLECallback = std::function<bool(std::shared_ptr<Thread> thread)>;
/**
* Creates and returns a new thread. The new thread is immediately scheduled.
* @param kernel The kernel instance this thread will be created under.
* @param system The instance of the whole system
* @param name The friendly name desired for the thread
* @param entry_point The address at which the thread should start execution
* @param priority The thread's priority
* @param arg User data to pass to the thread
* @param processor_id The ID(s) of the processors on which the thread is desired to be run
* @param stack_top The address of the thread's stack top
* @param owner_process The parent process for the thread
* @param owner_process The parent process for the thread; if null, this is a kernel thread
* @return A shared pointer to the newly created thread
*/
static ResultVal<std::shared_ptr<Thread>> Create(KernelCore& kernel, std::string name,
VAddr entry_point, u32 priority, u64 arg,
s32 processor_id, VAddr stack_top,
Process& owner_process);
static ResultVal<std::shared_ptr<Thread>> Create(Core::System& system, ThreadType type_flags,
std::string name, VAddr entry_point,
u32 priority, u64 arg, s32 processor_id,
VAddr stack_top, Process* owner_process);
/**
* Creates and returns a new thread. The new thread is immediately scheduled.
* @param system The instance of the whole system
* @param name The friendly name desired for the thread
* @param entry_point The address at which the thread should start execution
* @param priority The thread's priority
* @param arg User data to pass to the thread
* @param processor_id The ID(s) of the processors on which the thread is desired to be run
* @param stack_top The address of the thread's stack top
* @param owner_process The parent process for the thread; if null, this is a kernel thread
* @param thread_start_func The function where the host context will start.
* @param thread_start_parameter The parameter which will be passed to the host context on init
* @return A shared pointer to the newly created thread
*/
static ResultVal<std::shared_ptr<Thread>> Create(Core::System& system, ThreadType type_flags,
std::string name, VAddr entry_point,
u32 priority, u64 arg, s32 processor_id,
VAddr stack_top, Process* owner_process,
std::function<void(void*)>&& thread_start_func,
void* thread_start_parameter);
std::string GetName() const override {
return name;
@@ -181,7 +221,7 @@ public:
void UpdatePriority();
/// Changes the core that the thread is running or scheduled to run on.
void ChangeCore(u32 core, u64 mask);
ResultCode SetCoreAndAffinityMask(s32 new_core, u64 new_affinity_mask);
/**
* Gets the thread's thread ID
@@ -194,6 +234,10 @@ public:
/// Resumes a thread from waiting
void ResumeFromWait();
void OnWakeUp();
ResultCode Start();
/// Cancels a waiting operation that this thread may or may not be within.
///
/// When the thread is within a waiting state, this will set the thread's
@@ -202,26 +246,19 @@ public:
///
void CancelWait();
/**
* Schedules an event to wake up the specified thread after the specified delay
* @param nanoseconds The time this thread will be allowed to sleep for
*/
void WakeAfterDelay(s64 nanoseconds);
void SetSynchronizationResults(SynchronizationObject* object, ResultCode result);
/// Cancel any outstanding wakeup events for this thread
void CancelWakeupTimer();
Core::ARM_Interface& ArmInterface();
/**
* Sets the result after the thread awakens (from svcWaitSynchronization)
* @param result Value to set to the returned result
*/
void SetWaitSynchronizationResult(ResultCode result);
const Core::ARM_Interface& ArmInterface() const;
/**
* Sets the output parameter value after the thread awakens (from svcWaitSynchronization)
* @param output Value to set to the output parameter
*/
void SetWaitSynchronizationOutput(s32 output);
SynchronizationObject* GetSignalingObject() const {
return signaling_object;
}
ResultCode GetSignalingResult() const {
return signaling_result;
}
/**
* Retrieves the index that this particular object occupies in the list of objects
@@ -269,11 +306,6 @@ public:
*/
VAddr GetCommandBufferAddress() const;
/// Returns whether this thread is waiting on objects from a WaitSynchronization call.
bool IsSleepingOnWait() const {
return status == ThreadStatus::WaitSynch;
}
ThreadContext32& GetContext32() {
return context_32;
}
@@ -290,6 +322,28 @@ public:
return context_64;
}
bool IsHLEThread() const {
return (type & THREADTYPE_HLE) != 0;
}
bool IsSuspendThread() const {
return (type & THREADTYPE_SUSPEND) != 0;
}
bool IsIdleThread() const {
return (type & THREADTYPE_IDLE) != 0;
}
bool WasRunning() const {
return was_running;
}
void SetWasRunning(bool value) {
was_running = value;
}
std::shared_ptr<Common::Fiber>& GetHostContext();
ThreadStatus GetStatus() const {
return status;
}
@@ -325,18 +379,18 @@ public:
}
const ThreadSynchronizationObjects& GetSynchronizationObjects() const {
return wait_objects;
return *wait_objects;
}
void SetSynchronizationObjects(ThreadSynchronizationObjects objects) {
wait_objects = std::move(objects);
void SetSynchronizationObjects(ThreadSynchronizationObjects* objects) {
wait_objects = objects;
}
void ClearSynchronizationObjects() {
for (const auto& waiting_object : wait_objects) {
for (const auto& waiting_object : *wait_objects) {
waiting_object->RemoveWaitingThread(SharedFrom(this));
}
wait_objects.clear();
wait_objects->clear();
}
/// Determines whether all the objects this thread is waiting on are ready.
@@ -386,26 +440,35 @@ public:
arb_wait_address = address;
}
bool HasWakeupCallback() const {
return wakeup_callback != nullptr;
bool HasHLECallback() const {
return hle_callback != nullptr;
}
void SetWakeupCallback(WakeupCallback callback) {
wakeup_callback = std::move(callback);
void SetHLECallback(HLECallback callback) {
hle_callback = std::move(callback);
}
void InvalidateWakeupCallback() {
SetWakeupCallback(nullptr);
void SetHLETimeEvent(Handle time_event) {
hle_time_event = time_event;
}
/**
* Invokes the thread's wakeup callback.
*
* @pre A valid wakeup callback has been set. Violating this precondition
* will cause an assertion to trigger.
*/
bool InvokeWakeupCallback(ThreadWakeupReason reason, std::shared_ptr<Thread> thread,
std::shared_ptr<SynchronizationObject> object, std::size_t index);
void SetHLESyncObject(SynchronizationObject* object) {
hle_object = object;
}
Handle GetHLETimeEvent() const {
return hle_time_event;
}
SynchronizationObject* GetHLESyncObject() const {
return hle_object;
}
void InvalidateHLECallback() {
SetHLECallback(nullptr);
}
bool InvokeHLECallback(std::shared_ptr<Thread> thread);
u32 GetIdealCore() const {
return ideal_core;
@@ -415,23 +478,19 @@ public:
return affinity_mask;
}
ThreadActivity GetActivity() const {
return activity;
}
void SetActivity(ThreadActivity value);
ResultCode SetActivity(ThreadActivity value);
/// Sleeps this thread for the given amount of nanoseconds.
void Sleep(s64 nanoseconds);
ResultCode Sleep(s64 nanoseconds);
/// Yields this thread without rebalancing loads.
bool YieldSimple();
std::pair<ResultCode, bool> YieldSimple();
/// Yields this thread and does a load rebalancing.
bool YieldAndBalanceLoad();
std::pair<ResultCode, bool> YieldAndBalanceLoad();
/// Yields this thread and if the core is left idle, loads are rebalanced
bool YieldAndWaitForLoadBalancing();
std::pair<ResultCode, bool> YieldAndWaitForLoadBalancing();
void IncrementYieldCount() {
yield_count++;
@@ -446,6 +505,10 @@ public:
static_cast<u32>(ThreadSchedMasks::LowMask));
}
bool IsRunnable() const {
return scheduling_state == static_cast<u32>(ThreadSchedStatus::Runnable);
}
bool IsRunning() const {
return is_running;
}
@@ -466,17 +529,67 @@ public:
return global_handle;
}
private:
void SetSchedulingStatus(ThreadSchedStatus new_status);
void SetCurrentPriority(u32 new_priority);
ResultCode SetCoreAndAffinityMask(s32 new_core, u64 new_affinity_mask);
bool IsWaitingForArbitration() const {
return waiting_for_arbitration;
}
void WaitForArbitration(bool set) {
waiting_for_arbitration = set;
}
bool IsWaitingSync() const {
return is_waiting_on_sync;
}
void SetWaitingSync(bool is_waiting) {
is_waiting_on_sync = is_waiting;
}
bool IsPendingTermination() const {
return will_be_terminated || GetSchedulingStatus() == ThreadSchedStatus::Exited;
}
bool IsPaused() const {
return pausing_state != 0;
}
bool IsContinuousOnSVC() const {
return is_continuous_on_svc;
}
void SetContinuousOnSVC(bool is_continuous) {
is_continuous_on_svc = is_continuous;
}
bool IsPhantomMode() const {
return is_phantom_mode;
}
void SetPhantomMode(bool phantom) {
is_phantom_mode = phantom;
}
bool HasExited() const {
return has_exited;
}
private:
friend class GlobalScheduler;
friend class Scheduler;
void SetSchedulingStatus(ThreadSchedStatus new_status);
void AddSchedulingFlag(ThreadSchedFlags flag);
void RemoveSchedulingFlag(ThreadSchedFlags flag);
void SetCurrentPriority(u32 new_priority);
void AdjustSchedulingOnStatus(u32 old_flags);
void AdjustSchedulingOnPriority(u32 old_priority);
void AdjustSchedulingOnAffinity(u64 old_affinity_mask, s32 old_core);
Common::SpinLock context_guard{};
ThreadContext32 context_32{};
ThreadContext64 context_64{};
std::unique_ptr<Core::ARM_Interface> arm_interface{};
std::shared_ptr<Common::Fiber> host_context{};
u64 thread_id = 0;
@@ -485,6 +598,8 @@ private:
VAddr entry_point = 0;
VAddr stack_top = 0;
ThreadType type;
/// Nominal thread priority, as set by the emulated application.
/// The nominal priority is the thread priority without priority
/// inheritance taken into account.
@@ -509,7 +624,10 @@ private:
/// Objects that the thread is waiting on, in the same order as they were
/// passed to WaitSynchronization.
ThreadSynchronizationObjects wait_objects;
ThreadSynchronizationObjects* wait_objects;
SynchronizationObject* signaling_object;
ResultCode signaling_result{RESULT_SUCCESS};
/// List of threads that are waiting for a mutex that is held by this thread.
MutexWaitingThreads wait_mutex_threads;
@@ -526,30 +644,39 @@ private:
/// If waiting for an AddressArbiter, this is the address being waited on.
VAddr arb_wait_address{0};
bool waiting_for_arbitration{};
/// Handle used as userdata to reference this object when inserting into the CoreTiming queue.
Handle global_handle = 0;
/// Callback that will be invoked when the thread is resumed from a waiting state. If the thread
/// was waiting via WaitSynchronization then the object will be the last object that became
/// available. In case of a timeout, the object will be nullptr.
WakeupCallback wakeup_callback;
/// Callback for HLE Events
HLECallback hle_callback;
Handle hle_time_event;
SynchronizationObject* hle_object;
Scheduler* scheduler = nullptr;
u32 ideal_core{0xFFFFFFFF};
u64 affinity_mask{0x1};
ThreadActivity activity = ThreadActivity::Normal;
s32 ideal_core_override = -1;
u64 affinity_mask_override = 0x1;
u32 affinity_override_count = 0;
u32 scheduling_state = 0;
u32 pausing_state = 0;
bool is_running = false;
bool is_waiting_on_sync = false;
bool is_sync_cancelled = false;
bool is_continuous_on_svc = false;
bool will_be_terminated = false;
bool is_phantom_mode = false;
bool has_exited = false;
bool was_running = false;
std::string name;
};

Some files were not shown because too many files have changed in this diff.