Compare commits

...

214 Commits

Author SHA1 Message Date
Lioncash
922d5187c4 CMakeLists: Update zstd to 1.5.0
zstd 1.5.0 brings numerous performance improvements to the library, as
can be seen here: https://github.com/facebook/zstd/releases/tag/v1.5.0
2021-05-21 13:24:11 -04:00
bunnei
5068279f23 Merge pull request #6248 from A-w-x/intelmesa
gl_device: Intel: Disable texture view formats workaround on mesa
2021-05-20 23:47:14 -07:00
bunnei
136e8e829f Merge pull request #6333 from Morph1984/swkbd-confirm-text
applets/swkbd: Send the correct text string on TextCheck::Confirm
2021-05-20 22:42:54 -07:00
bunnei
ea4e4b05e4 Merge pull request #6320 from Morph1984/get-pid
hle_ipc: Add a getter for PID
2021-05-20 21:40:03 -07:00
bunnei
7626ca3343 Merge pull request #6321 from lat9nq/per-game-cpu
configuration: Add CPU tab to game properties and slight per-game settings rework
2021-05-20 20:10:56 -07:00
lat9nq
5153d5387a configure_cpu: Simplify UpdateGroup
Co-authored-by: Ameer J <52414509+ameerj@users.noreply.github.com>
2021-05-20 01:11:56 -04:00
bunnei
b5d21cc1b1 Merge pull request #6297 from lioncash/common-conv
parent_of_member: Make sign conversion explicit in OffsetOfImpl()
2021-05-19 18:43:47 -07:00
bunnei
41b1f8d616 Merge pull request #6310 from german77/nanMotion
input_common: Sanitize motion data
2021-05-19 15:47:48 -07:00
lat9nq
12ef74456c configuration_shared: Drop unused function and template another
Drops an unused variant of ApplyPerGameSetting, and turns the QComboBox
variants of SetPerGameSetting into a template.

Co-authored-by: Ameer J <52414509+ameerj@users.noreply.github.com>
2021-05-19 16:00:48 -04:00
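For reference, a minimal sketch of what a templated QComboBox setter can look like; the stand-in types, the helper name, and the two-entry offset for the per-game "Use global setting" items are assumptions for illustration, not yuzu's actual configuration_shared code.

    // Stand-ins so the sketch is self-contained; the real code uses QComboBox
    // and yuzu's Settings::Setting.
    struct ComboBox {
        int current_index = 0;
        void setCurrentIndex(int index) { current_index = index; }
    };
    template <typename T>
    struct Setting {
        bool use_global = true;
        T value{};
        bool UsingGlobal() const { return use_global; }
        const T& GetValue() const { return value; }
    };

    // One template replaces the per-type overloads: index 0 is the assumed
    // "Use global setting" entry, and the concrete value follows the offset.
    template <typename T>
    void SetPerGameSetting(ComboBox& combobox, const Setting<T>& setting, int offset = 2) {
        combobox.setCurrentIndex(setting.UsingGlobal()
                                     ? 0
                                     : static_cast<int>(setting.GetValue()) + offset);
    }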
Morph
5396593b55 applets/swkbd: Send the correct text string on TextCheck::Confirm
Previously, the inline software keyboard's text string was being sent instead of the normal software keyboard's, leading to empty text being sent all the time.
2021-05-19 00:26:32 -04:00
bunnei
7d86a6ff02 Merge pull request #6317 from ameerj/fps-fix
perf_stats: Rework FPS counter to be more accurate
2021-05-18 19:56:29 -07:00
bunnei
61f293e5c9 Merge pull request #6337 from Morph1984/transfer-mem-size
KTransferMemory: Return size instead of size * PageSize in GetSize()
2021-05-18 11:00:34 -07:00
Morph
7f78b17e20 KTransferMemory: Return size instead of size * PageSize in GetSize()
size is already the size in bytes, so we do not need to multiply it by the page size.
2021-05-18 13:14:28 -04:00
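In other words, the stored value is already a byte count, so the getter returns it unchanged; a small sketch under assumed member names:

    #include <cstddef>

    constexpr std::size_t PageSize = 0x1000; // 4 KiB pages

    class KTransferMemory {
    public:
        explicit KTransferMemory(std::size_t size_in_bytes) : size{size_in_bytes} {}

        // Previously this returned size * PageSize, inflating the result by the
        // page size; size is already in bytes, so it is returned as-is.
        std::size_t GetSize() const {
            return size;
        }

    private:
        std::size_t size{};
    };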
bunnei
93bc59b62d Merge pull request #6322 from ameerj/fast-null-buffer
buffer_cache: Ensure null buffers cannot take the fast uniform bind path
2021-05-17 15:45:36 -07:00
lat9nq
339dc4f806 general: Demote custom_rtc to regular setting 2021-05-17 15:54:30 -04:00
Mat M
b462618ed7 Merge pull request #6328 from Morph1984/enforce-c4715
CMakeLists: Enforce C4715 on MSVC
2021-05-17 13:20:58 -04:00
bunnei
e8269fe3bc Merge pull request #6327 from Morph1984/duplicate_labels
configure_debug: Fix duplicate labels
2021-05-17 06:18:10 -07:00
Morph
d001687ca6 CMakeLists: Enforce C4715 on MSVC
This is similar to -Werror=return-type
2021-05-17 03:48:58 -04:00
Morph
cd6dcef5aa configure_debug: Fix duplicate labels
Duplicate labels were unintentionally introduced due to copy-paste. This silences the compilation warning produced by the presence of these duplicates.
2021-05-16 23:32:51 -04:00
bunnei
0a74d8490a Merge pull request #6326 from Morph1984/fix-version
yuzu/main: Fix version info in logging and about dialog
2021-05-16 20:09:54 -07:00
Morph
af69b48390 yuzu/main: Fix version info in logging and about dialog 2021-05-16 22:17:17 -04:00
bunnei
440eb840ea Merge pull request #6319 from Morph1984/no-install-base
main: Prevent installing base titles into NAND
2021-05-16 16:33:33 -07:00
Ameer J
bfe8816f7c Merge pull request #6324 from lat9nq/appimage-freeze
ci: linux: Freeze AppImage binaries
2021-05-16 14:43:02 -04:00
ameerj
acf22336ec buffer_cache: Ensure null buffers cannot take the fast uniform bind path
Fixes a crash in New Pokemon Snap
2021-05-16 07:43:40 -04:00
lat9nq
9ec26a805a ci: linux: Freeze AppImage binaries
A regression was introduced on May 13 by linuxdeploy that causes file
open dialogs to crash yuzu in the AppImage (likely this commit
1e28ee38fa174279defe70cdaadf2a552c80258c from
linuxdeploy/linuxdeploy-desktopfile). Instead of downloading the latest
version from each of the repos we use to build the AppImage, download the
ones hosted at yuzu-emu/ext-linux-bin. These are the same binaries we have
been using, verified to be working, and they won't update out from under
us.

This can eventually be moved into the container itself to remove the
need to download them at build time.
2021-05-16 05:07:49 -04:00
bunnei
d5131805ce Merge pull request #6284 from ameerj/shantae-fix
nvflinger: Create layers when they are queried but not found
2021-05-16 01:45:14 -07:00
bunnei
ad6e20cfde Merge pull request #6296 from lioncash/shadow-error
core: Make variable shadowing a compile-time error
2021-05-16 01:35:46 -07:00
bunnei
e8d2de1f99 Merge pull request #6307 from Morph1984/fix-response-push-size
nifm, ssl: Fix incorrect response sizes
2021-05-16 01:32:04 -07:00
Morph
a170aa16b6 main: Prevent installing base titles into NAND
Many users have been installing their base titles into NAND instead of adding them into the games list. This prevents users from installing any base titles and warns the user about the action.
2021-05-16 04:13:57 -04:00
Morph
049769a0c9 hle_ipc: unsigned -> u32
This is more concise and consistent with the rest of the codebase.
2021-05-16 04:11:00 -04:00
Morph
81a5ecdb18 hle_ipc: Add a getter for PID 2021-05-16 04:10:42 -04:00
Morph
9edfd88a8a Merge pull request #6293 from v1993/master
On Linux, build SDL2 from externals with HIDAPI support
2021-05-16 04:05:42 -04:00
Lioncash
9a07ed53eb core: Make variable shadowing a compile-time error
Now that we have most of core free of shadowing, we can enable the
warning as an error to catch anything that may remain and also
eliminate this class of logic bug entirely.
2021-05-16 03:43:16 -04:00
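For illustration, the class of logic bug this guards against, where an inner declaration silently hides an outer one (the flags involved are -Werror=shadow for GCC/Clang and /we4456-/we4459 for MSVC, both visible in the CMake diffs further down):

    #include <cstdio>

    int frame_count = 0; // outer variable

    void OnFrame(int delta) {
        int frame_count = delta; // shadows the outer variable; -Wshadow now rejects this
        frame_count += 1;        // updates only the local copy, not the counter above
        std::printf("local: %d\n", frame_count);
    }

    int main() {
        OnFrame(5);
        std::printf("outer: %d\n", frame_count); // still 0: the silent bug shadowing can hide
    }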
bunnei
06c410ee88 Merge pull request #6316 from ameerj/title-fix
main: Add running title's version to window name on EA/mainline
2021-05-15 22:40:35 -07:00
lat9nq
ab2677f0a1 configuration: Add CPU tab to game properties
Allows setting CPU accuracy to Accurate or Unsafe per-game, as well as
the accuracy options for Unsafe. Debug is not allowed here as a per-game
CPU accuracy.
2021-05-16 01:31:42 -04:00
bunnei
5a2b15bf75 Merge pull request #6299 from bunnei/ipc-improvements
Various improvements to IPC and session management
2021-05-15 22:30:21 -07:00
bunnei
a1138028a8 Merge pull request #6289 from ameerj/oob-blit
texture_cache: Handle out of bound texture blits
2021-05-15 21:32:37 -07:00
Morph
faaea00069 nifm, ssl: Fix incorrect response sizes 2021-05-16 00:20:48 -04:00
Morph
6c78c2ae38 Merge pull request #6244 from german77/sdlmotion
input_common: Implement SDL motion
2021-05-15 23:20:18 -04:00
lat9nq
4aac1ae4b1 configuration: Simplify applying per-game settings
Originally, every time we add a per-game setting, we'd have to guard for
it when setting it on the global config, and use a specific function to
do it for the per-game config.

This moves the global check into the ApplyPerGameSetting function so
that we can use it for changing both the global and per-game states.
Less work for the programmer.
2021-05-15 22:59:38 -04:00
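A rough sketch of the described pattern with stand-in types; in yuzu the helper lives in configuration_shared and works on Qt widgets, so the parameter shapes below are illustrative only.

    // Minimal stand-in for a setting carrying both a global and a per-game value.
    template <typename T>
    struct Setting {
        T global_value{};
        T custom_value{};
        bool use_global = true;
        void SetGlobal(bool to_global) { use_global = to_global; }
        void SetValue(const T& value) { (use_global ? global_value : custom_value) = value; }
    };

    // The global check now lives inside the helper, so the same call works from
    // the global configuration dialog and from a game's properties dialog.
    template <typename T>
    void ApplyPerGameSetting(Setting<T>* setting, bool is_global_config,
                             bool use_global_selected, const T& new_value) {
        if (is_global_config) {
            setting->global_value = new_value;   // global dialog: write the shared value
            return;
        }
        setting->SetGlobal(use_global_selected); // per-game dialog: inherit...
        if (!use_global_selected) {
            setting->SetValue(new_value);        // ...or store a per-game override
        }
    }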
lat9nq
59236b7d0f configuration_shared: Add some comments
Monke brain can't remember what all of these do a year later.
2021-05-15 22:07:20 -04:00
lat9nq
e169fdad4f general: Make CPU accuracy and related a Settings::Setting
Required to make CPU accuracy and unsafe settings available to use as a
per-game setting.
2021-05-15 20:46:48 -04:00
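The Setting wrapper referred to here pairs a global value with an optional per-game override; a simplified model of the interface the diffs below call into (GetValue, SetGlobal), with the implementation details assumed:

    // Simplified model of Settings::Setting<Type>; not the real implementation.
    template <typename Type>
    class Setting final {
    public:
        Type GetValue(bool need_global = false) const {
            return (use_global || need_global) ? global : custom;
        }
        void SetValue(const Type& value) {
            (use_global ? global : custom) = value;
        }
        void SetGlobal(bool to_global) {
            use_global = to_global;
        }
        bool UsingGlobal() const {
            return use_global;
        }

    private:
        bool use_global = true; // true: the shared value applies; false: a per-game override is active
        Type global{};
        Type custom{};
    };

With cpu_accuracy declared as a Setting<CPUAccuracy>, call sites read it through GetValue(), as the dynarmic diffs below show.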
ameerj
5bef54618a perf_stats: Rework FPS counter to be more accurate
The FPS counter was based on metrics in the nvdisp swapbuffers call. This metric would be accurate if the gpu thread/renderer were synchronous with the nvdisp service, but that's no longer the case.

This commit moves the frame counting responsibility onto the concrete renderers after their frame draw calls, resulting in more meaningful metrics.
The displayed FPS is now made up of the average framerate between the previous and most recent update, in order to avoid distracting FPS counter updates when framerate is oscillating between close values.

The status bar update frequency was also changed from 2 seconds to 500ms.
2021-05-15 20:34:20 -04:00
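A small sketch of the described averaging, assuming the concrete renderer bumps a frame counter after each present; the names are illustrative rather than yuzu's actual perf_stats API.

    #include <chrono>

    class PerfStats {
    public:
        // Called by the concrete renderer after it presents a frame.
        void EndGameFrame() {
            ++game_frames;
        }

        // Called on each status-bar refresh (now every 500 ms rather than every 2 s).
        // Reports the average framerate over the interval since the previous update,
        // which smooths out a counter that oscillates between close values.
        double GetAverageGameFps(std::chrono::nanoseconds now) {
            const double seconds = std::chrono::duration<double>(now - previous_update).count();
            const double fps = static_cast<double>(game_frames - previous_frames) / seconds;
            previous_update = now;
            previous_frames = game_frames;
            return fps;
        }

    private:
        unsigned game_frames = 0;
        unsigned previous_frames = 0;
        std::chrono::nanoseconds previous_update{};
    };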
ameerj
a3e68dce56 main: Add title's version to window name on EA/mainline
Fixes the missing title version number on EA/mainline builds which override the title bar string.
2021-05-15 16:55:30 -04:00
german77
f20f4587e6 input_common: Implement SDL motion 2021-05-15 08:56:58 -05:00
Ameer J
904584e4ba Merge pull request #6300 from Morph1984/mbedtls
externals: Update mbedtls to 8c88150ca
2021-05-13 23:11:32 -04:00
german77
fd7c273fab input_common: Sanitize motion data 2021-05-13 13:41:32 -05:00
Morph
0949e38263 Merge pull request #6306 from lat9nq/ffmpeg-untagged
externals: Checkout 79e8d17024 for FFmpeg
2021-05-13 04:59:48 -04:00
lat9nq
0ecb6c6647 externals: Checkout 79e8d17024 for FFmpeg
6b6b9e593d does not exist on FFmpeg master, and tag n4.3.1 requires
manually fetching all of FFmpeg's tags. `git` initially reports that the
commit does not exist, which can be confusing. Instead, check out the
commit immediately preceding n4.3.1 on their master branch.
2021-05-13 04:53:59 -04:00
bunnei
e12ee020e7 Merge pull request #6301 from Morph1984/ssl-ImportClientPki
ssl: Stub Import(Client/Server)Pki
2021-05-12 22:11:19 -07:00
Morph
c8707628f6 Merge pull request #6298 from Kewlan/toggled-show-add-on-refresh
configure_ui: Call RequestGameListUpdate when toggling "Show Add-Ons Column"
2021-05-12 21:06:04 -04:00
Morph
271f2e2d78 ssl: Stub Import(Client/Server)Pki
- Used in JUMP FORCE Deluxe Edition
2021-05-12 21:04:13 -04:00
Morph
5a042bdaa1 Merge pull request #6267 from german77/gestureRewrite
hid: Improve hardware accuracy of gestures
2021-05-12 09:17:23 -04:00
bunnei
eee302b9b9 common: tree: Avoid a nullptr dereference. 2021-05-11 15:40:20 -07:00
bunnei
12d569e483 hle: kernel: hle_ipc: Fix outgoing IPC response size calculation. 2021-05-11 12:27:43 -07:00
bunnei
fc086f93b2 WORKAROUND: temp. disable session resource limits while we work out issues 2021-05-11 10:51:39 -07:00
bunnei
f2c26443f8 WORKAROUND: Do not use slab heap while we track down issues with resource management. 2021-05-11 10:27:18 -07:00
bunnei
b9f543b29f audren 2021-05-11 10:24:53 -07:00
Morph
02547439b1 externals: Update mbedtls to 8c88150ca 2021-05-11 00:43:04 -04:00
bunnei
343d92a092 core: hle: ipc_helpers: Fix cast on raw_data_size calculation. 2021-05-10 20:34:38 -07:00
bunnei
2c1e119c4a hle: service: sm: Add TIPC support.
- Fixes our error checking of names as well.
2021-05-10 20:34:38 -07:00
bunnei
913971417e hle: kernel: hle_ipc: Improve IPC code and add initial support for TIPC.
- Fixes our move handles implementation to actually move objects.
- Simplifies the traditional IPC path.
2021-05-10 20:34:38 -07:00
bunnei
49c4c329f6 hle: service: sm: GetService: Reserve session resource when we create a KSession. 2021-05-10 20:34:38 -07:00
bunnei
21671d05a3 hle: service: Add support for dispatching TIPC requests. 2021-05-10 20:34:38 -07:00
bunnei
da25a59866 hle: service: Implement IPC::CommandType::Close.
- This was not actually closing sessions before.
2021-05-10 20:34:38 -07:00
bunnei
41928dfdda hle: service: sm: Use RegisterNamedService to register the service. 2021-05-10 20:34:38 -07:00
bunnei
934b2d8842 hle: service: sm: Improve Initialize implementation. 2021-05-10 20:34:38 -07:00
bunnei
f54ea749a4 hle: kernel: svc: Update ConnectToNamedPort to use new CreateNamedServicePort interface. 2021-05-10 20:34:38 -07:00
bunnei
c6de9657be hle: kernel: Implement named service ports using service interface factory.
- This allows us to create a new interface each time ConnectToNamedPort is called, removing the assumption that these are static.
2021-05-10 20:34:38 -07:00
bunnei
44c763f9c6 hle: kernel: KSession: Improve implementation of CloneCurrentObject. 2021-05-10 20:33:53 -07:00
bunnei
cfed6936f3 hle: service: sm: Increase point buffer size. 2021-05-10 15:43:42 -07:00
bunnei
9f44a44f2f hle: ipc_helpers: Reserve session resource when we create a KSession. 2021-05-10 15:42:46 -07:00
bunnei
75f23ad494 hle: kernel: KClientPort: Cleanup comment format. 2021-05-10 15:41:46 -07:00
bunnei
7a06037c5f hle: ipc: Add declarations for TIPC. 2021-05-10 15:05:10 -07:00
bunnei
ed25191ee6 hle: kernel: Further cleanup and add TIPC helpers. 2021-05-10 15:05:10 -07:00
bunnei
d08bd3e062 hle: ipc_helpers: Update IPC response generation for TIPC. 2021-05-10 15:05:10 -07:00
Kewlan
1b4331397b configure_ui: Call RequestGameListUpdate when toggling "Show Add-Ons Column" 2021-05-10 18:49:30 +02:00
Lioncash
0aff3ba2ff parent_of_member: Make sign conversion explicit in OffsetOfImpl()
Previously these conversions were implicit and causing quite a few
warnings on clang.
2021-05-10 08:07:33 -04:00
bunnei
ec50a9b5b9 Merge pull request #6291 from lioncash/kern-shadow
kernel: Eliminate variable shadowing
2021-05-09 20:15:00 -07:00
v1993
fa647cc0b9 Only build SDL2 subsystems that we use
While at it, use a better way to enable HIDAPI.
2021-05-10 01:11:54 +03:00
Morph
bb7d4ec3d3 Merge pull request #6294 from german77/kernelCleanup
kernel: Delete unused files
2021-05-09 12:22:44 -04:00
german77
0c1bb46f0a kernel: Delete unused files 2021-05-09 11:15:31 -05:00
v1993
648bef235e On Linux, build SDL2 from externals with HIDAPI support 2021-05-09 18:12:58 +03:00
Morph
f2b76284ed Merge pull request #6292 from lat9nq/sdl-trunk
externals: Update SDL to 107db2d8
2021-05-09 04:38:28 -04:00
lat9nq
b021e09fc0 externals: Use SDL2 statically
Building it as a shared library causes issues distributing it to an
AppImage, since linuxdeploy expects the executable to only dynamically
link to system libraries. Additionally, simply dynamically linking to a
library in the binary directory is bound to cause issues.

Solution is to use SDL's CMake switches and build it statically. We also
alias `SDL2` to `SDL2-static` on the external submodule for
compatibility with the rest of the project.
2021-05-09 02:38:46 -04:00
lat9nq
751cc687bb externals: Update SDL to 107db2d8
In light of 72a49c2bbc, the SDL submodule also needs to be updated. This
updates to the same commit used by the SDL package in ext-windows-bin.
2021-05-09 01:36:17 -04:00
Lioncash
2f62bae9e3 kernel: Eliminate variable shadowing
Now that the large kernel refactor is merged, we can eliminate the
remaining variable shadowing cases.
2021-05-08 12:33:26 -04:00
bunnei
72a49c2bbc Update SDL2 to SDL2-2.0.15-prerelease.
- Improves native Switch JoyCon/Pro Controller support.
2021-05-08 01:51:24 -07:00
bunnei
faa067f175 Merge pull request #6266 from bunnei/kautoobject-refactor
Kernel Rework: Migrate kernel objects to KAutoObject
2021-05-07 23:30:17 -07:00
ameerj
3671fd0a97 texture_cache: Handle out of bound texture blits
Some games interleave a texture blit using regions that are out of bounds. This addresses the interleaving to avoid out-of-bounds reads from the src texture.
2021-05-07 22:14:21 -04:00
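A simplified sketch of the kind of clamping this implies, assuming a one-dimensional region given as begin/end offsets; the real texture_cache handles both source and destination rectangles across multiple dimensions.

    #include <algorithm>
    #include <cstdint>

    struct Region {
        int32_t begin = 0;
        int32_t end = 0; // exclusive
    };

    // Clamp a blit region so reads never fall outside the source texture's extent.
    Region ClampBlitRegion(Region region, int32_t texture_size) {
        region.begin = std::clamp(region.begin, int32_t{0}, texture_size);
        region.end = std::clamp(region.end, int32_t{0}, texture_size);
        return region;
    }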
bunnei
8acf739b3f Merge pull request #6287 from lioncash/ldr-copy
ldr: Simplify memory copy within LoadNro()
2021-05-07 09:46:31 -07:00
Lioncash
8f638e81e9 ldr: Simplify memory copy within LoadNro()
We can use the dedicated memory function for performing copies instead
of reading into a temporary buffer and then immediately writing it back
out to memory.

Eliminates a bit of heap memory churn.
2021-05-06 19:18:14 -04:00
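In sketch form, the change replaces a read-into-buffer-then-write round trip with a single copy inside guest memory; the Memory stand-in and its method names below are assumptions modelled on the description, not the actual Core::Memory interface.

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct Memory { // stand-in for the emulated memory interface
        std::vector<std::uint8_t> backing = std::vector<std::uint8_t>(0x10000);
        void ReadBlock(std::size_t addr, void* dest, std::size_t size) {
            std::memcpy(dest, backing.data() + addr, size);
        }
        void WriteBlock(std::size_t addr, const void* src, std::size_t size) {
            std::memcpy(backing.data() + addr, src, size);
        }
        void CopyBlock(std::size_t dst, std::size_t src, std::size_t size) {
            std::memmove(backing.data() + dst, backing.data() + src, size);
        }
    };

    void CopyNroSegment(Memory& memory, std::size_t dst, std::size_t src, std::size_t size) {
        // Before: a temporary heap buffer was churned on every load.
        //   std::vector<std::uint8_t> buffer(size);
        //   memory.ReadBlock(src, buffer.data(), size);
        //   memory.WriteBlock(dst, buffer.data(), size);
        // After: copy directly between the two guest ranges.
        memory.CopyBlock(dst, src, size);
    }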
ameerj
da62e92784 nvflinger: Create layers when they are queried but not found
Fixes Shantae softlock on boot.
2021-05-06 11:20:52 -04:00
bunnei
d57b12193b hle: kernel: KPageTable: CanContain should not be constexpr. 2021-05-05 16:40:55 -07:00
bunnei
b805ee653f hle: kernel: Move slab resource counts to Kernel. 2021-05-05 16:40:54 -07:00
bunnei
d2c4dbde9e fixup! hle: kernel: Migrate KSharedMemory to KAutoObject. 2021-05-05 16:40:54 -07:00
bunnei
2c4615f3a6 fixup! hle: kernel: Migrate more of KThread to KAutoObject. 2021-05-05 16:40:54 -07:00
bunnei
a488b86e97 fixup! common: bit_util: Add BIT macro. 2021-05-05 16:40:54 -07:00
bunnei
510f71d871 fixup! hle: kernel: Ensure all kernel objects with KAutoObject are properly created. 2021-05-05 16:40:54 -07:00
bunnei
9f81221528 fixup! hle: kernel: Ensure all kernel objects with KAutoObject are properly created. 2021-05-05 16:40:54 -07:00
bunnei
eae107d0e9 kernel: svc: Remove unused RetrieveResourceLimitValue function. 2021-05-05 16:40:54 -07:00
bunnei
da22def511 hle: kernel: Fix un/sign mismatch errors with NUM_CPU_CORES. 2021-05-05 16:40:54 -07:00
bunnei
f23760b1e1 fixup! hle: kernel: Add initial impl. of slab setup. 2021-05-05 16:40:54 -07:00
bunnei
1e983b19df fixup! hle: kernel: Migrate to KHandleTable. 2021-05-05 16:40:54 -07:00
bunnei
ad5a5ef43f fixup! hle: kernel: Migrate more of KThread to KAutoObject. 2021-05-05 16:40:54 -07:00
bunnei
e02785be83 common: parent_of_member: Fix build for OffsetOf(). 2021-05-05 16:40:54 -07:00
bunnei
27a6ef64fd fixup! common: intrusive_red_black_tree: Disable static_assert that will not evaluate as constant on MSVC. 2021-05-05 16:40:54 -07:00
bunnei
9434603450 fixup! hle: kernel: Migrate KReadableEvent and KWritableEvent to KAutoObject. 2021-05-05 16:40:54 -07:00
bunnei
703d7aaab6 fixup! hle: kernel: Migrate to KHandleTable. 2021-05-05 16:40:54 -07:00
bunnei
9beb239634 fixup! hle: kernel: Add initial impl. of KLinkedList. 2021-05-05 16:40:54 -07:00
bunnei
2cdc7142b0 fixup! hle: kernel: Migrate to KHandleTable. 2021-05-05 16:40:54 -07:00
bunnei
34abe4a905 fixup! hle: kernel: Migrate KPort, KClientPort, and KServerPort to KAutoObject. 2021-05-05 16:40:54 -07:00
bunnei
f6d45b747e fixup! hle: kernel: Migrate KSession, KClientSession, and KServerSession to KAutoObject. 2021-05-05 16:40:53 -07:00
bunnei
1b074b8984 fixup! hle: kernel: Migrate KSession, KClientSession, and KServerSession to KAutoObject. 2021-05-05 16:40:53 -07:00
bunnei
50d2dc3b51 fixup! hle: kernel: Migrate KPort, KClientPort, and KServerPort to KAutoObject. 2021-05-05 16:40:53 -07:00
bunnei
d23f9f75ff fixup! hle: kernel: Migrate to KHandleTable. 2021-05-05 16:40:53 -07:00
bunnei
4356361faf fixup! hle: kernel: Add initial impl. of KAutoObjectWithListContainer. 2021-05-05 16:40:53 -07:00
bunnei
51aa5a5364 fixup! hle: kernel: Add initial impl. of KAutoObjectWithListContainer. 2021-05-05 16:40:53 -07:00
bunnei
25538db150 fixup! hle: kernel: Add initial impl. of KAutoObject. 2021-05-05 16:40:53 -07:00
bunnei
9bae3992e6 fixup! hle: kernel: Add initial impl. of KAutoObject. 2021-05-05 16:40:53 -07:00
bunnei
91d8657959 fixup! hle: kernel: Add initial impl. of slab setup. 2021-05-05 16:40:53 -07:00
bunnei
d3c166d4d5 common: Rename NON_COPYABLE/NON_MOVABLE with YUZU_ prefix. 2021-05-05 16:40:53 -07:00
bunnei
0536004d91 fixup! hle: kernel: Rename Process to KProcess. 2021-05-05 16:40:53 -07:00
bunnei
57f80c74b6 fixup! hle: kernel: Migrate to KHandleTable. 2021-05-05 16:40:53 -07:00
bunnei
caa11748c6 fixup! hle: kernel: Improve MapSharedMemory and implement UnmapSharedMemory. 2021-05-05 16:40:53 -07:00
bunnei
7866eb03bb hle: kernel: svc: ConnectToNamedPort: Use KHandleTable::Reserve. 2021-05-05 16:40:53 -07:00
bunnei
4b03e6e776 hle: kernel: Migrate to KHandleTable. 2021-05-05 16:40:53 -07:00
bunnei
8f5052a514 hle: kernel: KClassToken: Ensure class tokens are correct. 2021-05-05 16:40:53 -07:00
bunnei
0b27c721c9 hle: kernel: Improve MapSharedMemory and implement UnmapSharedMemory. 2021-05-05 16:40:52 -07:00
bunnei
2a7eff57a8 hle: kernel: Rename Process to KProcess. 2021-05-05 16:40:52 -07:00
bunnei
bf380b8584 hle: kernel: Remove deprecated Object class. 2021-05-05 16:40:52 -07:00
bunnei
864841eb9e hle: kernel: Do not shutdown twice on emulator close. 2021-05-05 16:40:52 -07:00
bunnei
39a8dba9a6 hle: kernel: Cleanup shutdown of persistent kernel objects. 2021-05-05 16:40:52 -07:00
bunnei
626f746971 hle: kernel: Migrate KPort, KClientPort, and KServerPort to KAutoObject. 2021-05-05 16:40:52 -07:00
bunnei
7a06864100 hle: kernel: Migrate KServerPort to KAutoObject. 2021-05-05 16:40:52 -07:00
bunnei
0297448fbc hle: kernel: Migrate KClientPort to KAutoObject. 2021-05-05 16:40:52 -07:00
bunnei
aa2844bcf9 hle: kernel: HandleTable: Remove deprecated APIs. 2021-05-05 16:40:52 -07:00
bunnei
b57c5a9b54 hle: kernel: Migrate KResourceLimit to KAutoObject. 2021-05-05 16:40:52 -07:00
bunnei
674122038a hle: kernel: svc: Migrate WaitSynchronization. 2021-05-05 16:40:51 -07:00
bunnei
126aaeb6d3 hle: kernel: svc: Use new handle table API for Process. 2021-05-05 16:40:51 -07:00
bunnei
c7d8b7421c hle: kernel: Migrate KTransferMemory to KAutoObject. 2021-05-05 16:40:51 -07:00
bunnei
7444963bbb hle: kernel: Migrate KSession, KClientSession, and KServerSession to KAutoObject. 2021-05-05 16:40:51 -07:00
bunnei
2cb6106523 hle: kernel: svc: Migrate GetThreadContext, GetThreadCoreMask. 2021-05-05 16:40:51 -07:00
bunnei
76a0814142 hle: kernel: svc: Migrate GetProcessId, CancelSynchronization, SetThreadActivity. 2021-05-05 16:40:51 -07:00
bunnei
84bb772003 hle: kernel: KThread: Remove incorrect resource release. 2021-05-05 16:40:51 -07:00
bunnei
269d233a94 hle: kernel: svc_results: Update naming. 2021-05-05 16:40:51 -07:00
bunnei
c2f6f2ba7a hle: kernel: KThread: Add missing resource hint release. 2021-05-05 16:40:51 -07:00
bunnei
2e8d6fe9a0 hle: kernel: Migrate KReadableEvent and KWritableEvent to KAutoObject. 2021-05-05 16:40:51 -07:00
bunnei
eba3bb9d21 hle: ipc_helpers: Add methods for copy/move references. 2021-05-05 16:40:51 -07:00
bunnei
cfa7b92563 hle: kernel: Move slab heaps to their own container. 2021-05-05 16:40:51 -07:00
bunnei
89edbe8aa2 hle: kernel: Refactor several threads/events/sharedmemory to use slab heaps. 2021-05-05 16:40:51 -07:00
bunnei
b6156e735c hle: kernel: Move slab heap management to KernelCore. 2021-05-05 16:40:51 -07:00
bunnei
ab704acab8 hle: kernel: Ensure all kernel objects with KAutoObject are properly created. 2021-05-05 16:40:51 -07:00
bunnei
722195cf70 hle: kernel: Use unique_ptr for suspend and dummy threads. 2021-05-05 16:40:50 -07:00
bunnei
addc0bf037 hle: kernel: Migrate KEvent to KAutoObject. 2021-05-05 16:40:50 -07:00
bunnei
086db71e94 hle: kernel: Migrate KSharedMemory to KAutoObject. 2021-05-05 16:40:50 -07:00
bunnei
7ccbdd4d8d hle: kernel: Migrate KProcess to KAutoObject. 2021-05-05 16:40:50 -07:00
bunnei
5e5933256b hle: kernel: Refactor IPC interfaces to not use std::shared_ptr. 2021-05-05 16:40:50 -07:00
bunnei
da7e9553de hle: kernel: Migrate more of KThread to KAutoObject. 2021-05-05 16:40:50 -07:00
bunnei
6fca1c82fd hle: kernel: svc: Migrate GetThreadPriority, StartThread, and ExitThread. 2021-05-05 16:40:50 -07:00
bunnei
de4746ff69 hle: kernel: svc: Migrate CreateThread. 2021-05-05 16:40:50 -07:00
bunnei
0eeecde67c hle: kernel: Migrate idle threads. 2021-05-05 16:40:50 -07:00
bunnei
479bd50b96 hle: kernel: Migrate KThread to KAutoObject. 2021-05-05 16:40:50 -07:00
bunnei
d3d0f2f451 hle: kernel: Add initial impl. of slab setup. 2021-05-05 16:40:50 -07:00
bunnei
34bed1ab41 hle: kernel: Refactor out various KThread std::shared_ptr usage. 2021-05-05 16:40:50 -07:00
bunnei
d9df63583f core: Defer CoreTiming initialization. 2021-05-05 16:40:50 -07:00
bunnei
3401676768 core: memory: Add a work-around to allocate and access kernel memory regions by vaddr. 2021-05-05 16:40:50 -07:00
bunnei
02c2b28cd0 common: common_funcs: Add Size helper function. 2021-05-05 16:40:49 -07:00
bunnei
66f2ad716b hle: kernel: Add initial impl. of KLinkedList. 2021-05-05 16:40:49 -07:00
bunnei
74120c5e3a common: bit_util: Add BIT macro. 2021-05-05 16:40:49 -07:00
bunnei
f93d939426 hle: kernel: Add initial impl. of KSlabAllocated. 2021-05-05 16:40:49 -07:00
bunnei
34ce1dd7c7 hle: kernel: Add initial impl. of KAutoObjectWithListContainer. 2021-05-05 16:40:49 -07:00
bunnei
b8751630e2 hle: kernel: Add initial impl. of KAutoObject. 2021-05-05 16:40:49 -07:00
bunnei
d9205f82b3 common: intrusive_red_black_tree: Disable static_assert that will not evaluate as constant on MSVC. 2021-05-05 16:40:49 -07:00
bunnei
b99fc70191 common: common_funcs: Add helper macros for non-copyable and non-moveable.
- Useful for scenarios where we do not want to inherit from NonCopyable.
2021-05-05 16:40:49 -07:00
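The macros themselves appear in the common_funcs.h diff further down; a usage sketch showing how a class opts out of copying and moving without inheriting from NonCopyable (the example class is hypothetical):

    #define YUZU_NON_COPYABLE(cls)                                                                 \
        cls(const cls&) = delete;                                                                  \
        cls& operator=(const cls&) = delete

    #define YUZU_NON_MOVEABLE(cls)                                                                 \
        cls(cls&&) = delete;                                                                       \
        cls& operator=(cls&&) = delete

    class KernelResource {
    public:
        YUZU_NON_COPYABLE(KernelResource);
        YUZU_NON_MOVEABLE(KernelResource);
        KernelResource() = default;
    };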
bunnei
260b841dc3 Merge pull request #6279 from ogniK5377/nvhost-prof
nvdrv: /dev/nvhost-prof-gpu for production
2021-05-05 16:16:13 -07:00
bunnei
0b7a03bd65 Update src/core/hle/service/nvdrv/interface.cpp
Co-authored-by: Ameer J <52414509+ameerj@users.noreply.github.com>
2021-05-05 16:16:02 -07:00
bunnei
860d73637e Merge pull request #6283 from lioncash/unused-fields
service: Remove unused class variables
2021-05-05 09:26:01 -07:00
german77
8c30ed6d09 hid: Improve hardware accuracy of gestures 2021-05-05 10:13:09 -05:00
Lioncash
cc47a6a9c2 service: Remove unused class variables
Prevents some warnings from occurring.
2021-05-05 01:32:28 -04:00
bunnei
403cf6be69 Merge pull request #6281 from lioncash/shadow-field
service: Resolve cases of member field shadowing
2021-05-04 19:51:08 -07:00
Lioncash
9e726a9250 service: Resolve cases of member field shadowing
Now all that remains is for kernel code to be 'shadow-free' and then
-Wshadow can be turned into an error.
2021-05-04 04:38:38 -04:00
bunnei
df51eb9bde Merge pull request #6278 from lioncash/misc-shadow
core: Resolve misc straggler cases of variable shadowing
2021-05-03 16:04:28 -07:00
bunnei
898aa5fb66 Merge pull request #6275 from german77/mousefocus
input_common: Release mouse buttons on out of focus
2021-05-03 12:50:19 -07:00
Lioncash
ebb64d5bf4 core: Resolve misc cases of variable shadowing
Resolves shadowing warnings that aren't in a particularly large
subsection of core. Brings us closer to turning -Wshadow into an error.

All that remains now is for cases in the kernel (left untouched for now
since a big change by bunnei is pending), and a few left over in the
service code (will be tackled next).
2021-05-03 01:19:13 -04:00
Chloe Marcec
7d257ce7bd nvdrv: /dev/nvhost-prof-gpu for production
While we're at it, we can fix the is_initialized error code.
This fixes the crashes in Shantae.
2021-05-03 14:39:03 +10:00
Morph
707ed72a3c Merge pull request #6277 from german77/touchsetting2
hid: Fix touch not initializing properly if disabled
2021-05-02 23:25:10 -04:00
german77
08d5bd36d8 hid: Fix touch not initializing properly if disabled 2021-05-02 21:27:15 -05:00
german77
6e81473574 input_common: Release mouse buttons on out of focus 2021-05-02 19:08:33 -05:00
bunnei
c17a59b58e Merge pull request #6269 from lioncash/file-shadow
file_sys: Resolve cases of variable shadowing
2021-05-02 15:12:07 -07:00
Morph
0d2d0844a5 Merge pull request #6263 from Kewlan/folder-swap-expand-state
game_list: Fix dir move up/down expand state
2021-05-02 07:43:45 -04:00
bunnei
01a57d4c8d Merge pull request #6265 from Morph1984/snap-save-fix
service: filesystem: Return proper error codes for CreateFile
2021-05-02 00:45:18 -07:00
Lioncash
1da72c7792 file_sys: Resolve cases of variable shadowing
Brings us closer to enabling -Wshadow as an error in the core code.
2021-05-02 02:59:57 -04:00
bunnei
54dc22285b Merge pull request #6245 from lat9nq/boost-only-config
cmake: Only config Boost during find_package
2021-05-01 22:21:05 -07:00
bunnei
03b3c5800b Merge pull request #6261 from Kewlan/game-list-filter-fix
game_list: Update filter results when removing directories
2021-05-01 09:14:10 -07:00
Morph
07934f0e87 Merge pull request #6264 from german77/touchsetting
hid: Disable touch if setting is not enabled
2021-05-01 11:28:28 -04:00
Morph
72b22fd433 service: filesystem: Return proper error codes for CreateFile
This improves the accuracy of CreateFile by returning the correct error codes on certain conditions (parent directory does not exist, path already exists).

This fixes saving and loading of existing saves in New Pokemon Snap.
2021-05-01 09:33:00 -04:00
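A hedged sketch of the implied check order (missing parent directory first, then an already-existing path); the result-code names and std::filesystem usage are placeholders, not yuzu's FS service code.

    #include <filesystem>
    #include <fstream>

    enum class FsResult {
        Success,
        PathDoesNotExist,   // the parent directory is missing
        PathAlreadyExists,  // a file or directory already occupies the path
    };

    // Illustrative only: return the specific error a game expects instead of a
    // generic failure, so titles such as New Pokemon Snap can create their saves.
    FsResult CreateFile(const std::filesystem::path& path) {
        const auto parent = path.parent_path();
        if (!parent.empty() && !std::filesystem::exists(parent)) {
            return FsResult::PathDoesNotExist;
        }
        if (std::filesystem::exists(path)) {
            return FsResult::PathAlreadyExists;
        }
        std::ofstream{path}; // create an empty file
        return FsResult::Success;
    }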
german77
1ed1dd3c89 Disable touch if setting is not enabled 2021-04-30 19:28:21 -05:00
bunnei
fa3ffff8de Merge pull request #6257 from Morph1984/fix-use-after-free-webapplet
applets/web: Fix a use-after-free when passing in the URL string
2021-04-30 14:48:32 -07:00
bunnei
aab57b7975 Merge pull request #6243 from german77/GCresetOrigin
input_common: Reset GC sticks center by measuring multiple packets
2021-04-30 12:02:05 -07:00
Kewlan
497ccfaedc game_list: Fix dir move up/down expand state 2021-04-30 12:18:38 +02:00
bunnei
922b0d9933 Merge pull request #6226 from german77/sevensix
hid: Implement SevenSixAxis and ConsoleSixAxisSensor
2021-04-29 22:06:57 -07:00
bunnei
bea6fca9a1 Merge pull request #6258 from Morph1984/config-conv
yuzu: config: Silence narrowing conversion warning on MSVC
2021-04-29 21:49:05 -07:00
Kewlan
fc84822266 game_list: Update filter results when removing directories 2021-04-30 00:05:23 +02:00
Morph
29a06ad393 yuzu: config: Silence narrowing conversion warning on MSVC 2021-04-28 22:42:56 -04:00
Ameer J
e1a196cfd7 Merge pull request #6259 from Morph1984/main-conv
yuzu: main: Silence type conversion warning on MSVC
2021-04-28 20:54:05 -04:00
Morph
0af182baa2 applets/web: Fix a use-after-free when passing in the URL string
The URL string was being deleted before it was used, leading to a use-after-free when it was accessed afterwards.

Fix this by taking the string by const ref to extend its lifetime, ensuring it doesn't get deleted before use.
2021-04-28 12:34:28 -04:00
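The fix is about argument lifetime; a contrived sketch of the difference between holding a pointer into a destroyed temporary and passing the temporary by const reference (names here are illustrative, not the applet's API):

    #include <iostream>
    #include <string>

    std::string BuildUrl() {
        return "https://example.invalid/applet";
    }

    void LaunchWebApplet(const std::string& url) {
        std::cout << url << '\n'; // the caller's argument outlives this call, so this is safe
    }

    int main() {
        // The dangerous pattern: the temporary returned by BuildUrl() dies at the
        // end of this statement, so the pointer dangles immediately.
        const char* dangling = BuildUrl().c_str();
        (void)dangling; // dereferencing it later would be a use-after-free

        // Passing by const reference instead keeps the temporary alive for the
        // full duration of the call.
        LaunchWebApplet(BuildUrl());
    }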
Morph
d95605cd24 yuzu: main: Silence type conversion warning on MSVC 2021-04-28 12:22:41 -04:00
bunnei
b096ec68cd Merge pull request #6250 from lioncash/loader-shadow
loader: Resolve instances of variable shadowing
2021-04-27 19:40:46 -07:00
Lioncash
724c19a307 loader: Resolve instances of variable shadowing
Eliminates variable shadowing cases across all the loaders to bring us
closer to enabling variable shadowing as an error in core.
2021-04-27 12:48:15 -04:00
german77
cfdec68d5a address comments 2021-04-26 22:07:16 -05:00
A-w-x
6a2084a204 gl_device: Intel: Disable texture view formats workaround on mesa 2021-04-26 18:14:10 +02:00
lat9nq
697a2c0018 cmake: Only config Boost during find_package
Without the CONFIG option, find_package will perform Module search. On
at least Linux Mint 20 (I'm unable to reproduce this on CentOS and Arch
Linux), my guess is that this causes CMake to find "dirty" modules that
modify the configuration state despite the Boost version being too
low/absent.

Use CONFIG to put CMake into pure Config mode and avoid Module search.
2021-04-25 21:02:39 -04:00
german77
c19ad21ae8 hid: Implement SevenSixAxis and ConsoleSixAxisSensor 2021-04-23 22:12:41 -05:00
385 changed files with 7474 additions and 4520 deletions

View File

@@ -30,10 +30,10 @@ make install DESTDIR=AppDir
rm -vf AppDir/usr/bin/yuzu-cmd AppDir/usr/bin/yuzu-tester
# Download tools needed to build an AppImage
wget -nc https://github.com/linuxdeploy/linuxdeploy/releases/download/continuous/linuxdeploy-x86_64.AppImage
wget -nc https://github.com/linuxdeploy/linuxdeploy-plugin-qt/releases/download/continuous/linuxdeploy-plugin-qt-x86_64.AppImage
wget -nc https://github.com/darealshinji/AppImageKit-checkrt/releases/download/continuous/AppRun-patched-x86_64
wget -nc https://github.com/darealshinji/AppImageKit-checkrt/releases/download/continuous/exec-x86_64.so
wget -nc https://github.com/yuzu-emu/ext-linux-bin/raw/main/appimage/linuxdeploy-x86_64.AppImage
wget -nc https://github.com/yuzu-emu/ext-linux-bin/raw/main/appimage/linuxdeploy-plugin-qt-x86_64.AppImage
wget -nc https://github.com/yuzu-emu/ext-linux-bin/raw/main/appimage/AppRun-patched-x86_64
wget -nc https://github.com/yuzu-emu/ext-linux-bin/raw/main/appimage/exec-x86_64.so
# Set executable bit
chmod 755 \
AppRun-patched-x86_64 \

View File

@@ -21,7 +21,7 @@ cp build/bin/yuzu "$DIR_NAME"
# Build an AppImage
cd build
wget -nc https://github.com/AppImage/AppImageKit/releases/download/continuous/appimagetool-x86_64.AppImage
wget -nc https://github.com/yuzu-emu/ext-linux-bin/raw/main/appimage/appimagetool-x86_64.AppImage
chmod 755 appimagetool-x86_64.AppImage
if [ "${RELEASE_NAME}" = "mainline" ]; then

View File

@@ -12,6 +12,8 @@ project(yuzu)
# OFF by default, but if ENABLE_SDL2 and MSVC are true then ON
option(ENABLE_SDL2 "Enable the SDL2 frontend" ON)
CMAKE_DEPENDENT_OPTION(YUZU_USE_BUNDLED_SDL2 "Download bundled SDL2 binaries" ON "ENABLE_SDL2;MSVC" OFF)
# On Linux, the system SDL2 is likely to lack HIDAPI support, which has drawbacks but is needed for SDL motion
CMAKE_DEPENDENT_OPTION(YUZU_ALLOW_SYSTEM_SDL2 "Try using system SDL2 before falling back to one from externals" NOT UNIX "ENABLE_SDL2" OFF)
option(ENABLE_QT "Enable the Qt frontend" ON)
option(ENABLE_QT_TRANSLATION "Enable translations for the Qt frontend" OFF)
@@ -172,7 +174,7 @@ macro(yuzu_find_packages)
"lz4 1.8 lz4/1.9.2"
"nlohmann_json 3.8 nlohmann_json/3.8.0"
"ZLIB 1.2 zlib/1.2.11"
"zstd 1.4 zstd/1.4.8"
"zstd 1.5 zstd/1.5.0"
# can't use opus until AVX check is fixed: https://github.com/yuzu-emu/yuzu/pull/4068
#"opus 1.3 opus/1.3.1"
)
@@ -202,7 +204,7 @@ macro(yuzu_find_packages)
endmacro()
if (NOT YUZU_USE_BUNDLED_BOOST)
find_package(Boost 1.73.0 COMPONENTS context headers QUIET)
find_package(Boost 1.73.0 CONFIG COMPONENTS context headers QUIET)
endif()
if (Boost_FOUND)
set(Boost_LIBRARIES Boost::boost)
@@ -224,7 +226,7 @@ elseif (${CMAKE_SYSTEM_NAME} STREQUAL "Linux" OR YUZU_USE_BUNDLED_BOOST)
download_bundled_external("boost/" ${Boost_EXT_NAME} "")
set(Boost_USE_DEBUG_RUNTIME FALSE)
set(Boost_USE_STATIC_LIBS ON)
find_package(Boost 1.75.0 REQUIRED COMPONENTS context headers PATHS ${Boost_PATH} NO_DEFAULT_PATH)
find_package(Boost 1.75.0 CONFIG REQUIRED COMPONENTS context headers PATHS ${Boost_PATH} NO_DEFAULT_PATH)
# Manually set the include dirs since the find_package sets it incorrectly
set(Boost_INCLUDE_DIRS ${Boost_PATH}/include CACHE PATH "Path to Boost headers" FORCE)
include_directories(SYSTEM "${Boost_INCLUDE_DIRS}")
@@ -274,7 +276,7 @@ if (ENABLE_SDL2)
if (YUZU_USE_BUNDLED_SDL2)
# Detect toolchain and platform
if ((MSVC_VERSION GREATER_EQUAL 1910 AND MSVC_VERSION LESS 1930) AND ARCHITECTURE_x86_64)
set(SDL2_VER "SDL2-2.0.14")
set(SDL2_VER "SDL2-2.0.15-prerelease")
else()
message(FATAL_ERROR "No bundled SDL2 binaries for your toolchain. Disable YUZU_USE_BUNDLED_SDL2 and provide your own.")
endif()
@@ -292,20 +294,24 @@ if (ENABLE_SDL2)
target_link_libraries(SDL2 INTERFACE "${SDL2_LIBRARY}")
target_include_directories(SDL2 INTERFACE "${SDL2_INCLUDE_DIR}")
else()
find_package(SDL2 2.0.14 QUIET)
if (YUZU_ALLOW_SYSTEM_SDL2)
find_package(SDL2 2.0.15 QUIET)
if (SDL2_FOUND)
# Some installations don't set SDL2_LIBRARIES
if("${SDL2_LIBRARIES}" STREQUAL "")
message(WARNING "SDL2_LIBRARIES wasn't set, manually setting to SDL2::SDL2")
set(SDL2_LIBRARIES "SDL2::SDL2")
if (SDL2_FOUND)
# Some installations don't set SDL2_LIBRARIES
if("${SDL2_LIBRARIES}" STREQUAL "")
message(WARNING "SDL2_LIBRARIES wasn't set, manually setting to SDL2::SDL2")
set(SDL2_LIBRARIES "SDL2::SDL2")
endif()
include_directories(SYSTEM ${SDL2_INCLUDE_DIRS})
add_library(SDL2 INTERFACE)
target_link_libraries(SDL2 INTERFACE "${SDL2_LIBRARIES}")
else()
message(STATUS "SDL2 2.0.15 or newer not found, falling back to externals.")
endif()
include_directories(SYSTEM ${SDL2_INCLUDE_DIRS})
add_library(SDL2 INTERFACE)
target_link_libraries(SDL2 INTERFACE "${SDL2_LIBRARIES}")
else()
message(STATUS "SDL2 2.0.14 or newer not found, falling back to externals.")
message(STATUS "Using SDL2 from externals.")
endif()
endif()
endif()

View File

@@ -47,7 +47,22 @@ target_include_directories(unicorn-headers INTERFACE ./unicorn/include)
# SDL2
if (NOT SDL2_FOUND AND ENABLE_SDL2)
# Yuzu itself needs: Events Joystick Haptic Sensor Timers
# Yuzu-cmd also needs: Video (depends on Loadso/Dlopen)
set(SDL_UNUSED_SUBSYSTEMS
Atomic Audio Render Power Threads
File CPUinfo Filesystem Locale)
foreach(_SUB ${SDL_UNUSED_SUBSYSTEMS})
string(TOUPPER ${_SUB} _OPT)
option(SDL_${_OPT} "" OFF)
endforeach()
set(SDL_STATIC ON)
set(SDL_SHARED OFF)
option(HIDAPI "" ON)
add_subdirectory(SDL EXCLUDE_FROM_ALL)
add_library(SDL2 ALIAS SDL2-static)
endif()
# SoundTouch

externals/SDL (vendored submodule updated)

View File

@@ -54,6 +54,7 @@ if (MSVC)
/we4547 # 'operator' : operator before comma has no effect; expected operator with side-effect
/we4549 # 'operator1': operator before comma has no effect; did you intend 'operator2'?
/we4555 # Expression has no effect; expected expression with side-effect
/we4715 # 'function': not all control paths return a value
/we4834 # Discarding return value of function with 'nodiscard' attribute
/we5038 # data member 'member1' will be initialized after data member 'member2'
)

View File

@@ -108,6 +108,14 @@ __declspec(dllimport) void __stdcall DebugBreak(void);
} \
}
#define YUZU_NON_COPYABLE(cls) \
cls(const cls&) = delete; \
cls& operator=(const cls&) = delete
#define YUZU_NON_MOVEABLE(cls) \
cls(cls&&) = delete; \
cls& operator=(cls&&) = delete
#define R_SUCCEEDED(res) (res.IsSuccess())
/// Evaluates an expression that returns a result, and returns the result if it would fail.
@@ -128,4 +136,19 @@ namespace Common {
return u32(a) | u32(b) << 8 | u32(c) << 16 | u32(d) << 24;
}
// std::size() does not support zero-size C arrays. We're fixing that.
template <class C>
constexpr auto Size(const C& c) -> decltype(c.size()) {
return std::size(c);
}
template <class C>
constexpr std::size_t Size(const C& c) {
if constexpr (sizeof(C) == 0) {
return 0;
} else {
return std::size(c);
}
}
} // namespace Common

View File

@@ -509,7 +509,6 @@ private:
private:
static constexpr TypedStorage<Derived> DerivedStorage = {};
static_assert(GetParent(GetNode(GetPointer(DerivedStorage))) == GetPointer(DerivedStorage));
};
template <auto T, class Derived = impl::GetParentType<T>>

View File

@@ -109,7 +109,8 @@ struct OffsetOfCalculator {
}
}
return (next - start) * sizeof(MemberType) + Offset;
return static_cast<ptrdiff_t>(static_cast<size_t>(next - start) * sizeof(MemberType) +
Offset);
}
static constexpr std::ptrdiff_t OffsetOf(MemberType ParentType::*member) {
@@ -133,27 +134,27 @@ template <auto MemberPtr>
using GetMemberType = typename GetMemberPointerTraits<decltype(MemberPtr)>::Member;
template <auto MemberPtr, typename RealParentType = GetParentType<MemberPtr>>
static inline std::ptrdiff_t OffsetOf = [] {
constexpr std::ptrdiff_t OffsetOf() {
using DeducedParentType = GetParentType<MemberPtr>;
using MemberType = GetMemberType<MemberPtr>;
static_assert(std::is_base_of<DeducedParentType, RealParentType>::value ||
std::is_same<RealParentType, DeducedParentType>::value);
return OffsetOfCalculator<RealParentType, MemberType>::OffsetOf(MemberPtr);
}();
};
} // namespace impl
template <auto MemberPtr, typename RealParentType = impl::GetParentType<MemberPtr>>
constexpr RealParentType& GetParentReference(impl::GetMemberType<MemberPtr>* member) {
std::ptrdiff_t Offset = impl::OffsetOf<MemberPtr, RealParentType>;
std::ptrdiff_t Offset = impl::OffsetOf<MemberPtr, RealParentType>();
return *static_cast<RealParentType*>(
static_cast<void*>(static_cast<uint8_t*>(static_cast<void*>(member)) - Offset));
}
template <auto MemberPtr, typename RealParentType = impl::GetParentType<MemberPtr>>
constexpr RealParentType const& GetParentReference(impl::GetMemberType<MemberPtr> const* member) {
std::ptrdiff_t Offset = impl::OffsetOf<MemberPtr, RealParentType>;
std::ptrdiff_t Offset = impl::OffsetOf<MemberPtr, RealParentType>();
return *static_cast<const RealParentType*>(static_cast<const void*>(
static_cast<const uint8_t*>(static_cast<const void*>(member)) - Offset));
}

View File

@@ -42,7 +42,7 @@ void LogSettings() {
log_setting("System_RegionIndex", values.region_index.GetValue());
log_setting("System_TimeZoneIndex", values.time_zone_index.GetValue());
log_setting("Core_UseMultiCore", values.use_multi_core.GetValue());
log_setting("CPU_Accuracy", values.cpu_accuracy);
log_setting("CPU_Accuracy", values.cpu_accuracy.GetValue());
log_setting("Renderer_UseResolutionFactor", values.resolution_factor.GetValue());
log_setting("Renderer_UseFrameLimit", values.use_frame_limit.GetValue());
log_setting("Renderer_FrameLimit", values.frame_limit.GetValue());
@@ -106,6 +106,12 @@ void RestoreGlobalState(bool is_powered_on) {
// Core
values.use_multi_core.SetGlobal(true);
// CPU
values.cpu_accuracy.SetGlobal(true);
values.cpuopt_unsafe_unfuse_fma.SetGlobal(true);
values.cpuopt_unsafe_reduce_fp_error.SetGlobal(true);
values.cpuopt_unsafe_inaccurate_nan.SetGlobal(true);
// Renderer
values.renderer_backend.SetGlobal(true);
values.vulkan_device.SetGlobal(true);
@@ -130,7 +136,6 @@ void RestoreGlobalState(bool is_powered_on) {
values.region_index.SetGlobal(true);
values.time_zone_index.SetGlobal(true);
values.rng_seed.SetGlobal(true);
values.custom_rtc.SetGlobal(true);
values.sound_index.SetGlobal(true);
// Controls

View File

@@ -115,7 +115,7 @@ struct Values {
Setting<bool> use_multi_core;
// Cpu
CPUAccuracy cpu_accuracy;
Setting<CPUAccuracy> cpu_accuracy;
bool cpuopt_page_tables;
bool cpuopt_block_linking;
@@ -126,9 +126,9 @@ struct Values {
bool cpuopt_misc_ir;
bool cpuopt_reduce_misalign_checks;
bool cpuopt_unsafe_unfuse_fma;
bool cpuopt_unsafe_reduce_fp_error;
bool cpuopt_unsafe_inaccurate_nan;
Setting<bool> cpuopt_unsafe_unfuse_fma;
Setting<bool> cpuopt_unsafe_reduce_fp_error;
Setting<bool> cpuopt_unsafe_inaccurate_nan;
// Renderer
Setting<RendererBackend> renderer_backend;
@@ -157,7 +157,7 @@ struct Values {
// System
Setting<std::optional<u32>> rng_seed;
// Measured in seconds since epoch
Setting<std::optional<std::chrono::seconds>> custom_rtc;
std::optional<std::chrono::seconds> custom_rtc;
// Set on game boot, reset on stop. Seconds difference between current time and `custom_rtc`
std::chrono::seconds custom_rtc_differential;

View File

@@ -322,7 +322,7 @@ void RB_INSERT_COLOR(RBHead<Node>* head, Node* elm) {
template <typename Node>
void RB_REMOVE_COLOR(RBHead<Node>* head, Node* parent, Node* elm) {
Node* tmp;
while ((elm == nullptr || RB_IS_BLACK(elm)) && elm != head->Root()) {
while ((elm == nullptr || RB_IS_BLACK(elm)) && elm != head->Root() && parent != nullptr) {
if (RB_LEFT(parent) == elm) {
tmp = RB_RIGHT(parent);
if (RB_IS_RED(tmp)) {

View File

@@ -144,31 +144,40 @@ add_library(core STATIC
hle/kernel/board/nintendo/nx/k_system_control.cpp
hle/kernel/board/nintendo/nx/k_system_control.h
hle/kernel/board/nintendo/nx/secure_monitor.h
hle/kernel/client_port.cpp
hle/kernel/client_port.h
hle/kernel/client_session.cpp
hle/kernel/client_session.h
hle/kernel/code_set.cpp
hle/kernel/code_set.h
hle/kernel/svc_results.h
hle/kernel/global_scheduler_context.cpp
hle/kernel/global_scheduler_context.h
hle/kernel/handle_table.cpp
hle/kernel/handle_table.h
hle/kernel/hle_ipc.cpp
hle/kernel/hle_ipc.h
hle/kernel/init/init_slab_setup.cpp
hle/kernel/init/init_slab_setup.h
hle/kernel/k_address_arbiter.cpp
hle/kernel/k_address_arbiter.h
hle/kernel/k_address_space_info.cpp
hle/kernel/k_address_space_info.h
hle/kernel/k_auto_object.cpp
hle/kernel/k_auto_object.h
hle/kernel/k_auto_object_container.cpp
hle/kernel/k_auto_object_container.h
hle/kernel/k_affinity_mask.h
hle/kernel/k_class_token.cpp
hle/kernel/k_class_token.h
hle/kernel/k_client_port.cpp
hle/kernel/k_client_port.h
hle/kernel/k_client_session.cpp
hle/kernel/k_client_session.h
hle/kernel/k_condition_variable.cpp
hle/kernel/k_condition_variable.h
hle/kernel/k_event.cpp
hle/kernel/k_event.h
hle/kernel/k_handle_table.cpp
hle/kernel/k_handle_table.h
hle/kernel/k_light_condition_variable.h
hle/kernel/k_light_lock.cpp
hle/kernel/k_light_lock.h
hle/kernel/k_linked_list.h
hle/kernel/k_memory_block.h
hle/kernel/k_memory_block_manager.cpp
hle/kernel/k_memory_block_manager.h
@@ -185,7 +194,11 @@ add_library(core STATIC
hle/kernel/k_page_linked_list.h
hle/kernel/k_page_table.cpp
hle/kernel/k_page_table.h
hle/kernel/k_port.cpp
hle/kernel/k_port.h
hle/kernel/k_priority_queue.h
hle/kernel/k_process.cpp
hle/kernel/k_process.h
hle/kernel/k_readable_event.cpp
hle/kernel/k_readable_event.h
hle/kernel/k_resource_limit.cpp
@@ -196,6 +209,12 @@ add_library(core STATIC
hle/kernel/k_scoped_lock.h
hle/kernel/k_scoped_resource_reservation.h
hle/kernel/k_scoped_scheduler_lock_and_sleep.h
hle/kernel/k_server_port.cpp
hle/kernel/k_server_port.h
hle/kernel/k_server_session.cpp
hle/kernel/k_server_session.h
hle/kernel/k_session.cpp
hle/kernel/k_session.h
hle/kernel/k_shared_memory.cpp
hle/kernel/k_shared_memory.h
hle/kernel/k_slab_heap.h
@@ -208,28 +227,21 @@ add_library(core STATIC
hle/kernel/k_thread.h
hle/kernel/k_thread_queue.h
hle/kernel/k_trace.h
hle/kernel/k_transfer_memory.cpp
hle/kernel/k_transfer_memory.h
hle/kernel/k_writable_event.cpp
hle/kernel/k_writable_event.h
hle/kernel/kernel.cpp
hle/kernel/kernel.h
hle/kernel/memory_types.h
hle/kernel/object.cpp
hle/kernel/object.h
hle/kernel/physical_core.cpp
hle/kernel/physical_core.h
hle/kernel/physical_memory.h
hle/kernel/process.cpp
hle/kernel/process.h
hle/kernel/process_capability.cpp
hle/kernel/process_capability.h
hle/kernel/server_port.cpp
hle/kernel/server_port.h
hle/kernel/server_session.cpp
hle/kernel/server_session.h
hle/kernel/service_thread.cpp
hle/kernel/service_thread.h
hle/kernel/session.cpp
hle/kernel/session.h
hle/kernel/slab_helpers.h
hle/kernel/svc.cpp
hle/kernel/svc.h
hle/kernel/svc_common.h
@@ -237,8 +249,6 @@ add_library(core STATIC
hle/kernel/svc_wrap.h
hle/kernel/time_manager.cpp
hle/kernel/time_manager.h
hle/kernel/transfer_memory.cpp
hle/kernel/transfer_memory.h
hle/lock.cpp
hle/lock.h
hle/result.h
@@ -393,6 +403,8 @@ add_library(core STATIC
hle/service/hid/xcd.cpp
hle/service/hid/xcd.h
hle/service/hid/errors.h
hle/service/hid/controllers/console_sixaxis.cpp
hle/service/hid/controllers/console_sixaxis.h
hle/service/hid/controllers/controller_base.cpp
hle/service/hid/controllers/controller_base.h
hle/service/hid/controllers/debug_pad.cpp
@@ -639,20 +651,17 @@ endif()
if (MSVC)
target_compile_options(core PRIVATE
# 'expression' : signed/unsigned mismatch
/we4018
# 'argument' : conversion from 'type1' to 'type2', possible loss of data (floating-point)
/we4244
# 'conversion' : conversion from 'type1' to 'type2', signed/unsigned mismatch
/we4245
# 'operator': conversion from 'type1:field_bits' to 'type2:field_bits', possible loss of data
/we4254
# 'var' : conversion from 'size_t' to 'type', possible loss of data
/we4267
# 'context' : truncation from 'type1' to 'type2'
/we4305
# 'function' : not all control paths return a value
/we4715
/we4018 # 'expression' : signed/unsigned mismatch
/we4244 # 'argument' : conversion from 'type1' to 'type2', possible loss of data (floating-point)
/we4245 # 'conversion' : conversion from 'type1' to 'type2', signed/unsigned mismatch
/we4254 # 'operator': conversion from 'type1:field_bits' to 'type2:field_bits', possible loss of data
/we4267 # 'var' : conversion from 'size_t' to 'type', possible loss of data
/we4305 # 'context' : truncation from 'type1' to 'type2'
/we4456 # Declaration of 'identifier' hides previous local declaration
/we4457 # Declaration of 'identifier' hides function parameter
/we4458 # Declaration of 'identifier' hides class member
/we4459 # Declaration of 'identifier' hides global declaration
/we4715 # 'function' : not all control paths return a value
)
else()
target_compile_options(core PRIVATE
@@ -660,6 +669,7 @@ else()
-Werror=ignored-qualifiers
-Werror=implicit-fallthrough
-Werror=sign-compare
-Werror=shadow
$<$<CXX_COMPILER_ID:GNU>:-Werror=class-memaccess>
$<$<CXX_COMPILER_ID:GNU>:-Werror=unused-but-set-parameter>

View File

@@ -24,7 +24,7 @@ namespace Core {
class DynarmicCallbacks32 : public Dynarmic::A32::UserCallbacks {
public:
explicit DynarmicCallbacks32(ARM_Dynarmic_32& parent) : parent(parent) {}
explicit DynarmicCallbacks32(ARM_Dynarmic_32& parent_) : parent{parent_} {}
u8 MemoryRead8(u32 vaddr) override {
return parent.system.Memory().Read8(vaddr);
@@ -142,7 +142,7 @@ std::shared_ptr<Dynarmic::A32::Jit> ARM_Dynarmic_32::MakeJit(Common::PageTable*
config.far_code_offset = 256 * 1024 * 1024;
// Safe optimizations
if (Settings::values.cpu_accuracy == Settings::CPUAccuracy::DebugMode) {
if (Settings::values.cpu_accuracy.GetValue() == Settings::CPUAccuracy::DebugMode) {
if (!Settings::values.cpuopt_page_tables) {
config.page_table = nullptr;
}
@@ -170,15 +170,15 @@ std::shared_ptr<Dynarmic::A32::Jit> ARM_Dynarmic_32::MakeJit(Common::PageTable*
}
// Unsafe optimizations
if (Settings::values.cpu_accuracy == Settings::CPUAccuracy::Unsafe) {
if (Settings::values.cpu_accuracy.GetValue() == Settings::CPUAccuracy::Unsafe) {
config.unsafe_optimizations = true;
if (Settings::values.cpuopt_unsafe_unfuse_fma) {
if (Settings::values.cpuopt_unsafe_unfuse_fma.GetValue()) {
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_UnfuseFMA;
}
if (Settings::values.cpuopt_unsafe_reduce_fp_error) {
if (Settings::values.cpuopt_unsafe_reduce_fp_error.GetValue()) {
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_ReducedErrorFP;
}
if (Settings::values.cpuopt_unsafe_inaccurate_nan) {
if (Settings::values.cpuopt_unsafe_inaccurate_nan.GetValue()) {
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_InaccurateNaN;
}
}
@@ -198,13 +198,13 @@ void ARM_Dynarmic_32::Step() {
jit->Step();
}
ARM_Dynarmic_32::ARM_Dynarmic_32(System& system, CPUInterrupts& interrupt_handlers,
bool uses_wall_clock, ExclusiveMonitor& exclusive_monitor,
std::size_t core_index)
: ARM_Interface{system, interrupt_handlers, uses_wall_clock},
ARM_Dynarmic_32::ARM_Dynarmic_32(System& system_, CPUInterrupts& interrupt_handlers_,
bool uses_wall_clock_, ExclusiveMonitor& exclusive_monitor_,
std::size_t core_index_)
: ARM_Interface{system_, interrupt_handlers_, uses_wall_clock_},
cb(std::make_unique<DynarmicCallbacks32>(*this)),
cp15(std::make_shared<DynarmicCP15>(*this)), core_index{core_index},
exclusive_monitor{dynamic_cast<DynarmicExclusiveMonitor&>(exclusive_monitor)},
cp15(std::make_shared<DynarmicCP15>(*this)), core_index{core_index_},
exclusive_monitor{dynamic_cast<DynarmicExclusiveMonitor&>(exclusive_monitor_)},
jit(MakeJit(nullptr)) {}
ARM_Dynarmic_32::~ARM_Dynarmic_32() = default;

View File

@@ -29,8 +29,8 @@ class System;
class ARM_Dynarmic_32 final : public ARM_Interface {
public:
ARM_Dynarmic_32(System& system, CPUInterrupts& interrupt_handlers, bool uses_wall_clock,
ExclusiveMonitor& exclusive_monitor, std::size_t core_index);
ARM_Dynarmic_32(System& system_, CPUInterrupts& interrupt_handlers_, bool uses_wall_clock_,
ExclusiveMonitor& exclusive_monitor_, std::size_t core_index_);
~ARM_Dynarmic_32() override;
void SetPC(u64 pc) override;

View File

@@ -16,8 +16,8 @@
#include "core/core.h"
#include "core/core_timing.h"
#include "core/hardware_properties.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/svc.h"
#include "core/memory.h"
@@ -27,7 +27,7 @@ using Vector = Dynarmic::A64::Vector;
class DynarmicCallbacks64 : public Dynarmic::A64::UserCallbacks {
public:
explicit DynarmicCallbacks64(ARM_Dynarmic_64& parent) : parent(parent) {}
explicit DynarmicCallbacks64(ARM_Dynarmic_64& parent_) : parent{parent_} {}
u8 MemoryRead8(u64 vaddr) override {
return parent.system.Memory().Read8(vaddr);
@@ -182,7 +182,7 @@ std::shared_ptr<Dynarmic::A64::Jit> ARM_Dynarmic_64::MakeJit(Common::PageTable*
config.far_code_offset = 256 * 1024 * 1024;
// Safe optimizations
if (Settings::values.cpu_accuracy == Settings::CPUAccuracy::DebugMode) {
if (Settings::values.cpu_accuracy.GetValue() == Settings::CPUAccuracy::DebugMode) {
if (!Settings::values.cpuopt_page_tables) {
config.page_table = nullptr;
}
@@ -210,15 +210,15 @@ std::shared_ptr<Dynarmic::A64::Jit> ARM_Dynarmic_64::MakeJit(Common::PageTable*
}
// Unsafe optimizations
if (Settings::values.cpu_accuracy == Settings::CPUAccuracy::Unsafe) {
if (Settings::values.cpu_accuracy.GetValue() == Settings::CPUAccuracy::Unsafe) {
config.unsafe_optimizations = true;
if (Settings::values.cpuopt_unsafe_unfuse_fma) {
if (Settings::values.cpuopt_unsafe_unfuse_fma.GetValue()) {
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_UnfuseFMA;
}
if (Settings::values.cpuopt_unsafe_reduce_fp_error) {
if (Settings::values.cpuopt_unsafe_reduce_fp_error.GetValue()) {
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_ReducedErrorFP;
}
if (Settings::values.cpuopt_unsafe_inaccurate_nan) {
if (Settings::values.cpuopt_unsafe_inaccurate_nan.GetValue()) {
config.optimizations |= Dynarmic::OptimizationFlag::Unsafe_InaccurateNaN;
}
}
@@ -238,12 +238,12 @@ void ARM_Dynarmic_64::Step() {
cb->InterpreterFallback(jit->GetPC(), 1);
}
ARM_Dynarmic_64::ARM_Dynarmic_64(System& system, CPUInterrupts& interrupt_handlers,
bool uses_wall_clock, ExclusiveMonitor& exclusive_monitor,
std::size_t core_index)
: ARM_Interface{system, interrupt_handlers, uses_wall_clock},
cb(std::make_unique<DynarmicCallbacks64>(*this)), core_index{core_index},
exclusive_monitor{dynamic_cast<DynarmicExclusiveMonitor&>(exclusive_monitor)},
ARM_Dynarmic_64::ARM_Dynarmic_64(System& system_, CPUInterrupts& interrupt_handlers_,
bool uses_wall_clock_, ExclusiveMonitor& exclusive_monitor_,
std::size_t core_index_)
: ARM_Interface{system_, interrupt_handlers_, uses_wall_clock_},
cb(std::make_unique<DynarmicCallbacks64>(*this)), core_index{core_index_},
exclusive_monitor{dynamic_cast<DynarmicExclusiveMonitor&>(exclusive_monitor_)},
jit(MakeJit(nullptr, 48)) {}
ARM_Dynarmic_64::~ARM_Dynarmic_64() = default;

View File

@@ -26,8 +26,8 @@ class System;
class ARM_Dynarmic_64 final : public ARM_Interface {
public:
ARM_Dynarmic_64(System& system, CPUInterrupts& interrupt_handlers, bool uses_wall_clock,
ExclusiveMonitor& exclusive_monitor, std::size_t core_index);
ARM_Dynarmic_64(System& system_, CPUInterrupts& interrupt_handlers_, bool uses_wall_clock_,
ExclusiveMonitor& exclusive_monitor_, std::size_t core_index_);
~ARM_Dynarmic_64() override;
void SetPC(u64 pc) override;

View File

@@ -94,12 +94,11 @@ CallbackOrAccessOneWord DynarmicCP15::CompileGetOneWord(bool two, unsigned opc1,
CallbackOrAccessTwoWords DynarmicCP15::CompileGetTwoWords(bool two, unsigned opc, CoprocReg CRm) {
if (!two && opc == 0 && CRm == CoprocReg::C14) {
// CNTPCT
const auto callback = static_cast<u64 (*)(Dynarmic::A32::Jit*, void*, u32, u32)>(
[](Dynarmic::A32::Jit*, void* arg, u32, u32) -> u64 {
ARM_Dynarmic_32& parent = *(ARM_Dynarmic_32*)arg;
return parent.system.CoreTiming().GetClockTicks();
});
return Dynarmic::A32::Coprocessor::Callback{callback, (void*)&parent};
const auto callback = [](Dynarmic::A32::Jit*, void* arg, u32, u32) -> u64 {
const auto& parent_arg = *static_cast<ARM_Dynarmic_32*>(arg);
return parent_arg.system.CoreTiming().GetClockTicks();
};
return Callback{callback, &parent};
}
LOG_CRITICAL(Core_ARM, "CP15: mrrc{} p15, {}, <Rt>, <Rt2>, {}", two ? "2" : "", opc, CRm);

View File

@@ -18,7 +18,7 @@ class DynarmicCP15 final : public Dynarmic::A32::Coprocessor {
public:
using CoprocReg = Dynarmic::A32::CoprocReg;
explicit DynarmicCP15(ARM_Dynarmic_32& parent) : parent(parent) {}
explicit DynarmicCP15(ARM_Dynarmic_32& parent_) : parent{parent_} {}
std::optional<Callback> CompileInternalOperation(bool two, unsigned opc1, CoprocReg CRd,
CoprocReg CRn, CoprocReg CRm,

View File

@@ -9,8 +9,8 @@
namespace Core {
DynarmicExclusiveMonitor::DynarmicExclusiveMonitor(Memory::Memory& memory, std::size_t core_count)
: monitor(core_count), memory{memory} {}
DynarmicExclusiveMonitor::DynarmicExclusiveMonitor(Memory::Memory& memory_, std::size_t core_count_)
: monitor{core_count_}, memory{memory_} {}
DynarmicExclusiveMonitor::~DynarmicExclusiveMonitor() = default;

View File

@@ -22,7 +22,7 @@ namespace Core {
class DynarmicExclusiveMonitor final : public ExclusiveMonitor {
public:
explicit DynarmicExclusiveMonitor(Memory::Memory& memory, std::size_t core_count);
explicit DynarmicExclusiveMonitor(Memory::Memory& memory_, std::size_t core_count_);
~DynarmicExclusiveMonitor() override;
u8 ExclusiveRead8(std::size_t core_index, VAddr addr) override;

View File

@@ -27,12 +27,12 @@
#include "core/file_sys/vfs_concat.h"
#include "core/file_sys/vfs_real.h"
#include "core/hardware_interrupt_manager.h"
#include "core/hle/kernel/client_port.h"
#include "core/hle/kernel/k_client_port.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/physical_core.h"
#include "core/hle/kernel/process.h"
#include "core/hle/service/am/applets/applets.h"
#include "core/hle/service/apm/controller.h"
#include "core/hle/service/filesystem/filesystem.h"
@@ -166,14 +166,14 @@ struct System::Impl {
cpu_manager.SetAsyncGpu(is_async_gpu);
core_timing.SetMulticore(is_multicore);
core_timing.Initialize([&system]() { system.RegisterHostThread(); });
kernel.Initialize();
cpu_manager.Initialize();
core_timing.Initialize([&system]() { system.RegisterHostThread(); });
const auto current_time = std::chrono::duration_cast<std::chrono::seconds>(
std::chrono::system_clock::now().time_since_epoch());
Settings::values.custom_rtc_differential =
Settings::values.custom_rtc.GetValue().value_or(current_time) - current_time;
Settings::values.custom_rtc.value_or(current_time) - current_time;
// Create a default fs if one doesn't already exist.
if (virtual_filesystem == nullptr)
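The custom_rtc change above stores only the difference between the user-chosen clock and the host clock at boot. A minimal sketch of that differential computation, assuming the setting is a plain std::optional of std::chrono::seconds (variable names here are illustrative):

// Sketch of the RTC-differential idea: remember how far a user-chosen clock
// deviates from the host clock; with no custom RTC the offset is zero.
#include <chrono>
#include <iostream>
#include <optional>

int main() {
    using namespace std::chrono;

    const std::optional<seconds> custom_rtc = seconds{1'600'000'000}; // user setting, may be empty

    const auto current_time =
        duration_cast<seconds>(system_clock::now().time_since_epoch());

    const seconds custom_rtc_differential = custom_rtc.value_or(current_time) - current_time;

    std::cout << "offset: " << custom_rtc_differential.count() << " s\n";
}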
@@ -233,8 +233,11 @@ struct System::Impl {
}
telemetry_session->AddInitialInfo(*app_loader, fs_controller, *content_provider);
auto main_process =
Kernel::Process::Create(system, "main", Kernel::Process::ProcessType::Userland);
auto main_process = Kernel::KProcess::Create(system.Kernel());
ASSERT(Kernel::KProcess::Initialize(main_process, system, "main",
Kernel::KProcess::ProcessType::Userland)
.IsSuccess());
main_process->Open();
const auto [load_result, load_parameters] = app_loader->Load(*main_process, system);
if (load_result != Loader::ResultStatus::Success) {
LOG_CRITICAL(Core, "Failed to load ROM (Error {})!", load_result);
@@ -244,7 +247,7 @@ struct System::Impl {
static_cast<u32>(load_result));
}
AddGlueRegistrationForProcess(*app_loader, *main_process);
kernel.MakeCurrentProcess(main_process.get());
kernel.MakeCurrentProcess(main_process);
kernel.InitializeCores();
// Initialize cheat engine
@@ -286,7 +289,8 @@ struct System::Impl {
telemetry_session->AddField(performance, "Shutdown_EmulationSpeed",
perf_results.emulation_speed * 100.0);
telemetry_session->AddField(performance, "Shutdown_Framerate", perf_results.game_fps);
telemetry_session->AddField(performance, "Shutdown_Framerate",
perf_results.average_game_fps);
telemetry_session->AddField(performance, "Shutdown_Frametime",
perf_results.frametime * 1000.0);
telemetry_session->AddField(performance, "Mean_Frametime_MS",
@@ -311,6 +315,7 @@ struct System::Impl {
gpu_core.reset();
perf_stats.reset();
kernel.Shutdown();
memory.Reset();
applet_manager.ClearAll();
LOG_DEBUG(Core, "Shutdown OK");
@@ -322,7 +327,7 @@ struct System::Impl {
return app_loader->ReadTitle(out);
}
void AddGlueRegistrationForProcess(Loader::AppLoader& loader, Kernel::Process& process) {
void AddGlueRegistrationForProcess(Loader::AppLoader& loader, Kernel::KProcess& process) {
std::vector<u8> nacp_data;
FileSys::NACP nacp;
if (loader.ReadControlData(nacp) == Loader::ResultStatus::Success) {
@@ -513,7 +518,7 @@ const Kernel::GlobalSchedulerContext& System::GlobalSchedulerContext() const {
return impl->kernel.GlobalSchedulerContext();
}
Kernel::Process* System::CurrentProcess() {
Kernel::KProcess* System::CurrentProcess() {
return impl->kernel.CurrentProcess();
}
@@ -525,7 +530,7 @@ const Core::DeviceMemory& System::DeviceMemory() const {
return *impl->device_memory;
}
const Kernel::Process* System::CurrentProcess() const {
const Kernel::KProcess* System::CurrentProcess() const {
return impl->kernel.CurrentProcess();
}

View File

@@ -12,7 +12,6 @@
#include "common/common_types.h"
#include "core/file_sys/vfs_types.h"
#include "core/hle/kernel/object.h"
namespace Core::Frontend {
class EmuWindow;
@@ -29,7 +28,7 @@ namespace Kernel {
class GlobalSchedulerContext;
class KernelCore;
class PhysicalCore;
class Process;
class KProcess;
class KScheduler;
} // namespace Kernel
@@ -264,10 +263,10 @@ public:
[[nodiscard]] const Core::DeviceMemory& DeviceMemory() const;
/// Provides a pointer to the current process
[[nodiscard]] Kernel::Process* CurrentProcess();
[[nodiscard]] Kernel::KProcess* CurrentProcess();
/// Provides a constant pointer to the current process.
[[nodiscard]] const Kernel::Process* CurrentProcess() const;
[[nodiscard]] const Kernel::KProcess* CurrentProcess() const;
/// Provides a reference to the core timing instance.
[[nodiscard]] Timing::CoreTiming& CoreTiming();

View File

@@ -133,8 +133,8 @@ void CoreTiming::UnscheduleEvent(const std::shared_ptr<EventType>& event_type,
}
}
void CoreTiming::AddTicks(u64 ticks) {
this->ticks += ticks;
void CoreTiming::AddTicks(u64 ticks_to_add) {
ticks += ticks_to_add;
downcount -= static_cast<s64>(ticks);
}

View File

@@ -102,7 +102,7 @@ public:
/// We only permit one event of each type in the queue at a time.
void RemoveEvent(const std::shared_ptr<EventType>& event_type);
void AddTicks(u64 ticks);
void AddTicks(u64 ticks_to_add);
void ResetTicks();

View File

@@ -18,7 +18,7 @@
namespace Core {
CpuManager::CpuManager(System& system) : system{system} {}
CpuManager::CpuManager(System& system_) : system{system_} {}
CpuManager::~CpuManager() = default;
void CpuManager::ThreadStart(CpuManager& cpu_manager, std::size_t core) {

View File

@@ -25,7 +25,7 @@ class System;
class CpuManager {
public:
explicit CpuManager(System& system);
explicit CpuManager(System& system_);
CpuManager(const CpuManager&) = delete;
CpuManager(CpuManager&&) = delete;
@@ -35,13 +35,13 @@ public:
CpuManager& operator=(CpuManager&&) = delete;
/// Sets if emulation is multicore or single core, must be set before Initialize
void SetMulticore(bool is_multicore) {
this->is_multicore = is_multicore;
void SetMulticore(bool is_multi) {
is_multicore = is_multi;
}
/// Sets if emulation is using an asynchronous GPU.
void SetAsyncGpu(bool is_async_gpu) {
this->is_async_gpu = is_async_gpu;
void SetAsyncGpu(bool is_async) {
is_async_gpu = is_async;
}
void Initialize();

View File

@@ -10,8 +10,8 @@
namespace Core::Crypto {
CTREncryptionLayer::CTREncryptionLayer(FileSys::VirtualFile base_, Key128 key_,
std::size_t base_offset)
: EncryptionLayer(std::move(base_)), base_offset(base_offset), cipher(key_, Mode::CTR) {}
std::size_t base_offset_)
: EncryptionLayer(std::move(base_)), base_offset(base_offset_), cipher(key_, Mode::CTR) {}
std::size_t CTREncryptionLayer::Read(u8* data, std::size_t length, std::size_t offset) const {
if (length == 0)

View File

@@ -17,7 +17,7 @@ class CTREncryptionLayer : public EncryptionLayer {
public:
using IVData = std::array<u8, 16>;
CTREncryptionLayer(FileSys::VirtualFile base, Key128 key, std::size_t base_offset);
CTREncryptionLayer(FileSys::VirtualFile base_, Key128 key_, std::size_t base_offset_);
std::size_t Read(u8* data, std::size_t length, std::size_t offset) const override;

View File

@@ -458,7 +458,7 @@ static std::array<u8, size> operator^(const std::array<u8, size>& lhs,
const std::array<u8, size>& rhs) {
std::array<u8, size> out;
std::transform(lhs.begin(), lhs.end(), rhs.begin(), out.begin(),
[](u8 lhs, u8 rhs) { return u8(lhs ^ rhs); });
[](u8 lhs_elem, u8 rhs_elem) { return u8(lhs_elem ^ rhs_elem); });
return out;
}
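The hunk above only renames the lambda parameters so they no longer shadow the enclosing lhs/rhs arrays; the element-wise XOR itself is unchanged. A self-contained sketch of that operator over fixed-size byte arrays:

// Element-wise XOR of two equally sized byte arrays via std::transform.
#include <algorithm>
#include <array>
#include <cstdint>
#include <iostream>

template <std::size_t Size>
std::array<std::uint8_t, Size> operator^(const std::array<std::uint8_t, Size>& lhs,
                                         const std::array<std::uint8_t, Size>& rhs) {
    std::array<std::uint8_t, Size> out{};
    std::transform(lhs.begin(), lhs.end(), rhs.begin(), out.begin(),
                   [](std::uint8_t lhs_elem, std::uint8_t rhs_elem) {
                       return static_cast<std::uint8_t>(lhs_elem ^ rhs_elem);
                   });
    return out;
}

int main() {
    const std::array<std::uint8_t, 4> a{0xAA, 0xBB, 0xCC, 0xDD};
    const std::array<std::uint8_t, 4> b{0x0F, 0x0F, 0x0F, 0x0F};
    const auto c = a ^ b;
    std::cout << std::hex << static_cast<int>(c[0]) << '\n'; // a5
}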

View File

@@ -176,26 +176,30 @@ u64 XCI::GetProgramTitleID() const {
u32 XCI::GetSystemUpdateVersion() {
const auto update = GetPartition(XCIPartition::Update);
if (update == nullptr)
if (update == nullptr) {
return 0;
}
for (const auto& file : update->GetFiles()) {
NCA nca{file, nullptr, 0};
for (const auto& update_file : update->GetFiles()) {
NCA nca{update_file, nullptr, 0};
if (nca.GetStatus() != Loader::ResultStatus::Success)
if (nca.GetStatus() != Loader::ResultStatus::Success) {
continue;
}
if (nca.GetType() == NCAContentType::Meta && nca.GetTitleId() == 0x0100000000000816) {
const auto dir = nca.GetSubdirectories()[0];
const auto cnmt = dir->GetFile("SystemUpdate_0100000000000816.cnmt");
if (cnmt == nullptr)
if (cnmt == nullptr) {
continue;
}
CNMT cnmt_data{cnmt};
const auto metas = cnmt_data.GetMetaRecords();
if (metas.empty())
if (metas.empty()) {
continue;
}
return metas[0].title_version;
}
@@ -262,8 +266,8 @@ VirtualDir XCI::ConcatenatedPseudoDirectory() {
if (part == nullptr)
continue;
for (const auto& file : part->GetFiles())
out->AddFile(file);
for (const auto& part_file : part->GetFiles())
out->AddFile(part_file);
}
return out;
@@ -283,12 +287,12 @@ Loader::ResultStatus XCI::AddNCAFromPartition(XCIPartition part) {
return Loader::ResultStatus::ErrorXCIMissingPartition;
}
for (const VirtualFile& file : partition->GetFiles()) {
if (file->GetExtension() != "nca") {
for (const VirtualFile& partition_file : partition->GetFiles()) {
if (partition_file->GetExtension() != "nca") {
continue;
}
auto nca = std::make_shared<NCA>(file, nullptr, 0);
auto nca = std::make_shared<NCA>(partition_file, nullptr, 0);
if (nca->IsUpdate()) {
continue;
}

View File

@@ -5,6 +5,7 @@
#include <algorithm>
#include <cstring>
#include <optional>
#include <ranges>
#include <utility>
#include "common/logging/log.h"
@@ -136,12 +137,11 @@ NCA::NCA(VirtualFile file_, VirtualFile bktr_base_romfs_, u64 bktr_base_ivfc_off
return;
}
has_rights_id = std::any_of(header.rights_id.begin(), header.rights_id.end(),
[](char c) { return c != '\0'; });
has_rights_id = std::ranges::any_of(header.rights_id, [](char c) { return c != '\0'; });
const std::vector<NCASectionHeader> sections = ReadSectionHeaders();
is_update = std::any_of(sections.begin(), sections.end(), [](const NCASectionHeader& header) {
return header.raw.header.crypto_type == NCASectionCryptoType::BKTR;
is_update = std::ranges::any_of(sections, [](const NCASectionHeader& nca_header) {
return nca_header.raw.header.crypto_type == NCASectionCryptoType::BKTR;
});
if (!ReadSections(sections, bktr_base_ivfc_offset)) {
@@ -202,8 +202,9 @@ bool NCA::HandlePotentialHeaderDecryption() {
std::vector<NCASectionHeader> NCA::ReadSectionHeaders() const {
const std::ptrdiff_t number_sections =
std::count_if(std::begin(header.section_tables), std::end(header.section_tables),
[](NCASectionTableEntry entry) { return entry.media_offset > 0; });
std::ranges::count_if(header.section_tables, [](const NCASectionTableEntry& entry) {
return entry.media_offset > 0;
});
std::vector<NCASectionHeader> sections(number_sections);
const auto length_sections = SECTION_HEADER_SIZE * number_sections;
@@ -312,11 +313,11 @@ bool NCA::ReadRomFSSection(const NCASectionHeader& section, const NCASectionTabl
}
std::vector<RelocationBucket> relocation_buckets(relocation_buckets_raw.size());
std::transform(relocation_buckets_raw.begin(), relocation_buckets_raw.end(),
relocation_buckets.begin(), &ConvertRelocationBucketRaw);
std::ranges::transform(relocation_buckets_raw, relocation_buckets.begin(),
&ConvertRelocationBucketRaw);
std::vector<SubsectionBucket> subsection_buckets(subsection_buckets_raw.size());
std::transform(subsection_buckets_raw.begin(), subsection_buckets_raw.end(),
subsection_buckets.begin(), &ConvertSubsectionBucketRaw);
std::ranges::transform(subsection_buckets_raw, subsection_buckets.begin(),
&ConvertSubsectionBucketRaw);
u32 ctr_low;
std::memcpy(&ctr_low, section.raw.section_ctr.data(), sizeof(ctr_low));
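The content-archive changes above swap iterator-pair algorithms for their std::ranges counterparts, which take the container directly. A small C++20 sketch contrasting the two forms on a plain vector (not the NCA types):

// std::any_of / count_if with iterator pairs vs. the std::ranges forms that
// take the whole container; both pairs produce identical results.
#include <algorithm>
#include <iostream>
#include <ranges>
#include <vector>

int main() {
    const std::vector<int> values{0, 3, 0, 7, 0};

    const bool any_classic =
        std::any_of(values.begin(), values.end(), [](int v) { return v != 0; });
    const bool any_ranges = std::ranges::any_of(values, [](int v) { return v != 0; });

    const auto count_classic =
        std::count_if(values.begin(), values.end(), [](int v) { return v != 0; });
    const auto count_ranges = std::ranges::count_if(values, [](int v) { return v != 0; });

    std::cout << any_classic << ' ' << any_ranges << ' '
              << count_classic << ' ' << count_ranges << '\n'; // 1 1 2 2
}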

View File

@@ -9,6 +9,7 @@
namespace FileSys {
constexpr ResultCode ERROR_PATH_NOT_FOUND{ErrorModule::FS, 1};
constexpr ResultCode ERROR_PATH_ALREADY_EXISTS{ErrorModule::FS, 2};
constexpr ResultCode ERROR_ENTITY_NOT_FOUND{ErrorModule::FS, 1002};
constexpr ResultCode ERROR_SD_CARD_NOT_FOUND{ErrorModule::FS, 2001};
constexpr ResultCode ERROR_OUT_OF_BOUNDS{ErrorModule::FS, 3005};

View File

@@ -126,16 +126,17 @@ static u64 romfs_get_hash_table_count(u64 num_entries) {
return count;
}
void RomFSBuildContext::VisitDirectory(VirtualDir root_romfs, VirtualDir ext,
void RomFSBuildContext::VisitDirectory(VirtualDir root_romfs, VirtualDir ext_dir,
std::shared_ptr<RomFSBuildDirectoryContext> parent) {
std::vector<std::shared_ptr<RomFSBuildDirectoryContext>> child_dirs;
VirtualDir dir;
if (parent->path_len == 0)
if (parent->path_len == 0) {
dir = root_romfs;
else
} else {
dir = root_romfs->GetDirectoryRelative(parent->path);
}
const auto entries = dir->GetEntries();
@@ -147,8 +148,9 @@ void RomFSBuildContext::VisitDirectory(VirtualDir root_romfs, VirtualDir ext,
child->path_len = child->cur_path_ofs + static_cast<u32>(kv.first.size());
child->path = parent->path + "/" + kv.first;
if (ext != nullptr && ext->GetFileRelative(child->path + ".stub") != nullptr)
if (ext_dir != nullptr && ext_dir->GetFileRelative(child->path + ".stub") != nullptr) {
continue;
}
// Sanity check on path_len
ASSERT(child->path_len < FS_MAX_PATH);
@@ -163,21 +165,20 @@ void RomFSBuildContext::VisitDirectory(VirtualDir root_romfs, VirtualDir ext,
child->path_len = child->cur_path_ofs + static_cast<u32>(kv.first.size());
child->path = parent->path + "/" + kv.first;
if (ext != nullptr && ext->GetFileRelative(child->path + ".stub") != nullptr)
if (ext_dir != nullptr && ext_dir->GetFileRelative(child->path + ".stub") != nullptr) {
continue;
}
// Sanity check on path_len
ASSERT(child->path_len < FS_MAX_PATH);
child->source = root_romfs->GetFileRelative(child->path);
if (ext != nullptr) {
const auto ips = ext->GetFileRelative(child->path + ".ips");
if (ips != nullptr) {
auto patched = PatchIPS(child->source, ips);
if (patched != nullptr)
if (ext_dir != nullptr) {
if (const auto ips = ext_dir->GetFileRelative(child->path + ".ips")) {
if (auto patched = PatchIPS(child->source, ips)) {
child->source = std::move(patched);
}
}
}
@@ -188,7 +189,7 @@ void RomFSBuildContext::VisitDirectory(VirtualDir root_romfs, VirtualDir ext,
}
for (auto& child : child_dirs) {
this->VisitDirectory(root_romfs, ext, child);
this->VisitDirectory(root_romfs, ext_dir, child);
}
}
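The IPS lookup above now uses C++17 if-with-initializer statements, so the source file is replaced only when both the patch lookup and the patch application succeed. A minimal sketch of the idiom; FindPatch and ApplyPatch are hypothetical stand-ins for the VFS and PatchIPS calls:

// C++17 if-with-initializer: declare, test, and use a value in one statement.
#include <iostream>
#include <memory>
#include <string>

std::shared_ptr<std::string> FindPatch(bool exists) {
    return exists ? std::make_shared<std::string>("patch") : nullptr;
}

std::shared_ptr<std::string> ApplyPatch(const std::shared_ptr<std::string>& patch) {
    return patch ? std::make_shared<std::string>(*patch + "ed data") : nullptr;
}

int main() {
    std::shared_ptr<std::string> source = std::make_shared<std::string>("original data");

    // The declarations live only inside the if; `source` changes only when
    // both the lookup and the patch application succeed.
    if (const auto ips = FindPatch(true)) {
        if (auto patched = ApplyPatch(ips)) {
            source = std::move(patched);
        }
    }

    std::cout << *source << '\n'; // "patched data"
}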

View File

@@ -59,7 +59,7 @@ private:
u64 file_hash_table_size = 0;
u64 file_partition_size = 0;
void VisitDirectory(VirtualDir filesys, VirtualDir ext,
void VisitDirectory(VirtualDir filesys, VirtualDir ext_dir,
std::shared_ptr<RomFSBuildDirectoryContext> parent);
bool AddDirectory(std::shared_ptr<RomFSBuildDirectoryContext> parent_dir_ctx,

View File

@@ -39,10 +39,10 @@ CNMT::CNMT(VirtualFile file) {
}
}
CNMT::CNMT(CNMTHeader header, OptionalHeader opt_header, std::vector<ContentRecord> content_records,
std::vector<MetaRecord> meta_records)
: header(std::move(header)), opt_header(std::move(opt_header)),
content_records(std::move(content_records)), meta_records(std::move(meta_records)) {}
CNMT::CNMT(CNMTHeader header_, OptionalHeader opt_header_,
std::vector<ContentRecord> content_records_, std::vector<MetaRecord> meta_records_)
: header(std::move(header_)), opt_header(std::move(opt_header_)),
content_records(std::move(content_records_)), meta_records(std::move(meta_records_)) {}
CNMT::~CNMT() = default;

View File

@@ -87,8 +87,8 @@ static_assert(sizeof(CNMTHeader) == 0x20, "CNMTHeader has incorrect size.");
class CNMT {
public:
explicit CNMT(VirtualFile file);
CNMT(CNMTHeader header, OptionalHeader opt_header, std::vector<ContentRecord> content_records,
std::vector<MetaRecord> meta_records);
CNMT(CNMTHeader header_, OptionalHeader opt_header_,
std::vector<ContentRecord> content_records_, std::vector<MetaRecord> meta_records_);
~CNMT();
u64 GetTitleID() const;

View File

@@ -83,11 +83,14 @@ BKTR::~BKTR() = default;
std::size_t BKTR::Read(u8* data, std::size_t length, std::size_t offset) const {
// Read out of bounds.
if (offset >= relocation.size)
if (offset >= relocation.size) {
return 0;
const auto relocation = GetRelocationEntry(offset);
const auto section_offset = offset - relocation.address_patch + relocation.address_source;
const auto bktr_read = relocation.from_patch;
}
const auto relocation_entry = GetRelocationEntry(offset);
const auto section_offset =
offset - relocation_entry.address_patch + relocation_entry.address_source;
const auto bktr_read = relocation_entry.from_patch;
const auto next_relocation = GetNextRelocationEntry(offset);
@@ -106,15 +109,16 @@ std::size_t BKTR::Read(u8* data, std::size_t length, std::size_t offset) const {
return bktr_romfs->Read(data, length, section_offset);
}
const auto subsection = GetSubsectionEntry(section_offset);
const auto subsection_entry = GetSubsectionEntry(section_offset);
Core::Crypto::AESCipher<Core::Crypto::Key128> cipher(key, Core::Crypto::Mode::CTR);
// Calculate AES IV
std::array<u8, 16> iv{};
auto subsection_ctr = subsection.ctr;
auto subsection_ctr = subsection_entry.ctr;
auto offset_iv = section_offset + base_offset;
for (std::size_t i = 0; i < section_ctr.size(); ++i)
for (std::size_t i = 0; i < section_ctr.size(); ++i) {
iv[i] = section_ctr[0x8 - i - 1];
}
offset_iv >>= 4;
for (std::size_t i = 0; i < sizeof(u64); ++i) {
iv[0xF - i] = static_cast<u8>(offset_iv & 0xFF);

View File

@@ -12,6 +12,7 @@
#include "common/logging/log.h"
#include "core/crypto/key_manager.h"
#include "core/file_sys/card_image.h"
#include "core/file_sys/common_funcs.h"
#include "core/file_sys/content_archive.h"
#include "core/file_sys/nca_metadata.h"
#include "core/file_sys/registered_cache.h"
@@ -281,14 +282,14 @@ NcaID PlaceholderCache::Generate() {
return out;
}
VirtualFile RegisteredCache::OpenFileOrDirectoryConcat(const VirtualDir& dir,
VirtualFile RegisteredCache::OpenFileOrDirectoryConcat(const VirtualDir& open_dir,
std::string_view path) const {
const auto file = dir->GetFileRelative(path);
const auto file = open_dir->GetFileRelative(path);
if (file != nullptr) {
return file;
}
const auto nca_dir = dir->GetDirectoryRelative(path);
const auto nca_dir = open_dir->GetDirectoryRelative(path);
if (nca_dir == nullptr) {
return nullptr;
}
@@ -431,13 +432,15 @@ void RegisteredCache::ProcessFiles(const std::vector<NcaID>& ids) {
}
void RegisteredCache::AccumulateYuzuMeta() {
const auto dir = this->dir->GetSubdirectory("yuzu_meta");
if (dir == nullptr)
const auto meta_dir = dir->GetSubdirectory("yuzu_meta");
if (meta_dir == nullptr) {
return;
}
for (const auto& file : dir->GetFiles()) {
if (file->GetExtension() != "cnmt")
for (const auto& file : meta_dir->GetFiles()) {
if (file->GetExtension() != "cnmt") {
continue;
}
CNMT cnmt(file);
yuzu_meta.insert_or_assign(cnmt.GetTitleID(), std::move(cnmt));
@@ -445,8 +448,10 @@ void RegisteredCache::AccumulateYuzuMeta() {
}
void RegisteredCache::Refresh() {
if (dir == nullptr)
if (dir == nullptr) {
return;
}
const auto ids = AccumulateFiles();
ProcessFiles(ids);
AccumulateYuzuMeta();
@@ -566,7 +571,7 @@ InstallResult RegisteredCache::InstallEntry(const NSP& nsp, bool overwrite_if_ex
}
const auto meta_id_raw = (*meta_iter)->GetName().substr(0, 32);
const auto meta_id = Common::HexStringToArray<16>(meta_id_raw);
const auto meta_id_data = Common::HexStringToArray<16>(meta_id_raw);
if ((*meta_iter)->GetSubdirectories().empty()) {
LOG_ERROR(Loader,
@@ -588,10 +593,16 @@ InstallResult RegisteredCache::InstallEntry(const NSP& nsp, bool overwrite_if_ex
const CNMT cnmt(cnmt_file);
const auto title_id = cnmt.GetTitleID();
const auto version = cnmt.GetTitleVersion();
if (title_id == GetBaseTitleID(title_id) && version == 0) {
return InstallResult::ErrorBaseInstall;
}
const auto result = RemoveExistingEntry(title_id);
// Install Metadata File
const auto res = RawInstallNCA(**meta_iter, copy, overwrite_if_exists, meta_id);
const auto res = RawInstallNCA(**meta_iter, copy, overwrite_if_exists, meta_id_data);
if (res != InstallResult::Success) {
return res;
}
@@ -741,15 +752,15 @@ InstallResult RegisteredCache::RawInstallNCA(const NCA& nca, const VfsCopyFuncti
bool RegisteredCache::RawInstallYuzuMeta(const CNMT& cnmt) {
// Reasoning behind this method can be found in the comment for InstallEntry, NCA overload.
const auto dir = this->dir->CreateDirectoryRelative("yuzu_meta");
const auto meta_dir = dir->CreateDirectoryRelative("yuzu_meta");
const auto filename = GetCNMTName(cnmt.GetType(), cnmt.GetTitleID());
if (dir->GetFile(filename) == nullptr) {
auto out = dir->CreateFile(filename);
if (meta_dir->GetFile(filename) == nullptr) {
auto out = meta_dir->CreateFile(filename);
const auto buffer = cnmt.Serialize();
out->Resize(buffer.size());
out->WriteBytes(buffer);
} else {
auto out = dir->GetFile(filename);
auto out = meta_dir->GetFile(filename);
CNMT old_cnmt(out);
// Returns true on change
if (old_cnmt.UnionRecords(cnmt)) {

View File

@@ -38,6 +38,7 @@ enum class InstallResult {
ErrorAlreadyExists,
ErrorCopyFailed,
ErrorMetaFailed,
ErrorBaseInstall,
};
struct ContentProviderEntry {
@@ -182,7 +183,7 @@ private:
void AccumulateYuzuMeta();
std::optional<NcaID> GetNcaIDFromMetadata(u64 title_id, ContentRecordType type) const;
VirtualFile GetFileAtID(NcaID id) const;
VirtualFile OpenFileOrDirectoryConcat(const VirtualDir& dir, std::string_view path) const;
VirtualFile OpenFileOrDirectoryConcat(const VirtualDir& open_dir, std::string_view path) const;
InstallResult RawInstallNCA(const NCA& nca, const VfsCopyFunction& copy,
bool overwrite_if_exists, std::optional<NcaID> override_id = {});
bool RawInstallYuzuMeta(const CNMT& cnmt);

View File

@@ -13,7 +13,7 @@
#include "core/file_sys/patch_manager.h"
#include "core/file_sys/registered_cache.h"
#include "core/file_sys/romfs_factory.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/service/filesystem/filesystem.h"
#include "core/loader/loader.h"
@@ -33,8 +33,8 @@ RomFSFactory::RomFSFactory(Loader::AppLoader& app_loader, ContentProvider& provi
RomFSFactory::~RomFSFactory() = default;
void RomFSFactory::SetPackedUpdate(VirtualFile update_raw) {
this->update_raw = std::move(update_raw);
void RomFSFactory::SetPackedUpdate(VirtualFile update_raw_file) {
update_raw = std::move(update_raw_file);
}
ResultVal<VirtualFile> RomFSFactory::OpenCurrentProcess(u64 current_process_title_id) const {

View File

@@ -40,7 +40,7 @@ public:
Service::FileSystem::FileSystemController& controller);
~RomFSFactory();
void SetPackedUpdate(VirtualFile update_raw);
void SetPackedUpdate(VirtualFile update_raw_file);
[[nodiscard]] ResultVal<VirtualFile> OpenCurrentProcess(u64 current_process_title_id) const;
[[nodiscard]] ResultVal<VirtualFile> OpenPatchedRomFS(u64 title_id,
ContentRecordType type) const;

View File

@@ -9,7 +9,7 @@
#include "core/core.h"
#include "core/file_sys/savedata_factory.h"
#include "core/file_sys/vfs.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/k_process.h"
namespace FileSys {
@@ -170,26 +170,30 @@ std::string SaveDataFactory::GetFullPath(Core::System& system, SaveDataSpaceId s
SaveDataSize SaveDataFactory::ReadSaveDataSize(SaveDataType type, u64 title_id,
u128 user_id) const {
const auto path = GetFullPath(system, SaveDataSpaceId::NandUser, type, title_id, user_id, 0);
const auto dir = GetOrCreateDirectoryRelative(this->dir, path);
const auto relative_dir = GetOrCreateDirectoryRelative(dir, path);
const auto size_file = dir->GetFile(SAVE_DATA_SIZE_FILENAME);
if (size_file == nullptr || size_file->GetSize() < sizeof(SaveDataSize))
const auto size_file = relative_dir->GetFile(SAVE_DATA_SIZE_FILENAME);
if (size_file == nullptr || size_file->GetSize() < sizeof(SaveDataSize)) {
return {0, 0};
}
SaveDataSize out;
if (size_file->ReadObject(&out) != sizeof(SaveDataSize))
if (size_file->ReadObject(&out) != sizeof(SaveDataSize)) {
return {0, 0};
}
return out;
}
void SaveDataFactory::WriteSaveDataSize(SaveDataType type, u64 title_id, u128 user_id,
SaveDataSize new_value) const {
const auto path = GetFullPath(system, SaveDataSpaceId::NandUser, type, title_id, user_id, 0);
const auto dir = GetOrCreateDirectoryRelative(this->dir, path);
const auto relative_dir = GetOrCreateDirectoryRelative(dir, path);
const auto size_file = dir->CreateFile(SAVE_DATA_SIZE_FILENAME);
if (size_file == nullptr)
const auto size_file = relative_dir->CreateFile(SAVE_DATA_SIZE_FILENAME);
if (size_file == nullptr) {
return;
}
size_file->Resize(sizeof(SaveDataSize));
size_file->WriteObject(new_value);

View File

@@ -20,8 +20,8 @@
namespace FileSys {
NSP::NSP(VirtualFile file_, std::size_t program_index)
: file(std::move(file_)), program_index(program_index), status{Loader::ResultStatus::Success},
NSP::NSP(VirtualFile file_, std::size_t program_index_)
: file(std::move(file_)), program_index(program_index_), status{Loader::ResultStatus::Success},
pfs(std::make_shared<PartitionFilesystem>(file)), keys{Core::Crypto::KeyManager::Instance()} {
if (pfs->GetStatus() != Loader::ResultStatus::Success) {
status = pfs->GetStatus();
@@ -232,15 +232,15 @@ void NSP::SetTicketKeys(const std::vector<VirtualFile>& files) {
void NSP::InitializeExeFSAndRomFS(const std::vector<VirtualFile>& files) {
exefs = pfs;
const auto romfs_iter = std::find_if(files.begin(), files.end(), [](const VirtualFile& file) {
return file->GetName().rfind(".romfs") != std::string::npos;
const auto iter = std::find_if(files.begin(), files.end(), [](const VirtualFile& entry) {
return entry->GetName().rfind(".romfs") != std::string::npos;
});
if (romfs_iter == files.end()) {
if (iter == files.end()) {
return;
}
romfs = *romfs_iter;
romfs = *iter;
}
void NSP::ReadNCAs(const std::vector<VirtualFile>& files) {

View File

@@ -27,7 +27,7 @@ enum class ContentRecordType : u8;
class NSP : public ReadOnlyVfsDirectory {
public:
explicit NSP(VirtualFile file, std::size_t program_index = 0);
explicit NSP(VirtualFile file_, std::size_t program_index_ = 0);
~NSP() override;
Loader::ResultStatus GetStatus() const;

View File

@@ -23,8 +23,8 @@ static bool VerifyConcatenationMapContinuity(const std::multimap<u64, VirtualFil
return map.begin()->first == 0;
}
ConcatenatedVfsFile::ConcatenatedVfsFile(std::vector<VirtualFile> files_, std::string name)
: name(std::move(name)) {
ConcatenatedVfsFile::ConcatenatedVfsFile(std::vector<VirtualFile> files_, std::string name_)
: name(std::move(name_)) {
std::size_t next_offset = 0;
for (const auto& file : files_) {
files.emplace(next_offset, file);
@@ -32,8 +32,8 @@ ConcatenatedVfsFile::ConcatenatedVfsFile(std::vector<VirtualFile> files_, std::s
}
}
ConcatenatedVfsFile::ConcatenatedVfsFile(std::multimap<u64, VirtualFile> files_, std::string name)
: files(std::move(files_)), name(std::move(name)) {
ConcatenatedVfsFile::ConcatenatedVfsFile(std::multimap<u64, VirtualFile> files_, std::string name_)
: files(std::move(files_)), name(std::move(name_)) {
ASSERT(VerifyConcatenationMapContinuity(files));
}
@@ -136,7 +136,7 @@ std::size_t ConcatenatedVfsFile::Write(const u8* data, std::size_t length, std::
return 0;
}
bool ConcatenatedVfsFile::Rename(std::string_view name) {
bool ConcatenatedVfsFile::Rename(std::string_view new_name) {
return false;
}

View File

@@ -14,8 +14,8 @@ namespace FileSys {
// Class that wraps multiple vfs files and concatenates them, making reads seamless. Currently
// read-only.
class ConcatenatedVfsFile : public VfsFile {
ConcatenatedVfsFile(std::vector<VirtualFile> files, std::string name);
ConcatenatedVfsFile(std::multimap<u64, VirtualFile> files, std::string name);
explicit ConcatenatedVfsFile(std::vector<VirtualFile> files, std::string name_);
explicit ConcatenatedVfsFile(std::multimap<u64, VirtualFile> files, std::string name_);
public:
~ConcatenatedVfsFile() override;
@@ -36,7 +36,7 @@ public:
bool IsReadable() const override;
std::size_t Read(u8* data, std::size_t length, std::size_t offset) const override;
std::size_t Write(const u8* data, std::size_t length, std::size_t offset) override;
bool Rename(std::string_view name) override;
bool Rename(std::string_view new_name) override;
private:
// Maps starting offset to file -- more efficient.

View File

@@ -8,8 +8,8 @@
namespace FileSys {
LayeredVfsDirectory::LayeredVfsDirectory(std::vector<VirtualDir> dirs, std::string name)
: dirs(std::move(dirs)), name(std::move(name)) {}
LayeredVfsDirectory::LayeredVfsDirectory(std::vector<VirtualDir> dirs_, std::string name_)
: dirs(std::move(dirs_)), name(std::move(name_)) {}
LayeredVfsDirectory::~LayeredVfsDirectory() = default;
@@ -45,12 +45,12 @@ VirtualDir LayeredVfsDirectory::GetDirectoryRelative(std::string_view path) cons
return MakeLayeredDirectory(std::move(out));
}
VirtualFile LayeredVfsDirectory::GetFile(std::string_view name) const {
return GetFileRelative(name);
VirtualFile LayeredVfsDirectory::GetFile(std::string_view file_name) const {
return GetFileRelative(file_name);
}
VirtualDir LayeredVfsDirectory::GetSubdirectory(std::string_view name) const {
return GetDirectoryRelative(name);
VirtualDir LayeredVfsDirectory::GetSubdirectory(std::string_view subdir_name) const {
return GetDirectoryRelative(subdir_name);
}
std::string LayeredVfsDirectory::GetFullPath() const {
@@ -105,24 +105,24 @@ VirtualDir LayeredVfsDirectory::GetParentDirectory() const {
return dirs[0]->GetParentDirectory();
}
VirtualDir LayeredVfsDirectory::CreateSubdirectory(std::string_view name) {
VirtualDir LayeredVfsDirectory::CreateSubdirectory(std::string_view subdir_name) {
return nullptr;
}
VirtualFile LayeredVfsDirectory::CreateFile(std::string_view name) {
VirtualFile LayeredVfsDirectory::CreateFile(std::string_view file_name) {
return nullptr;
}
bool LayeredVfsDirectory::DeleteSubdirectory(std::string_view name) {
bool LayeredVfsDirectory::DeleteSubdirectory(std::string_view subdir_name) {
return false;
}
bool LayeredVfsDirectory::DeleteFile(std::string_view name) {
bool LayeredVfsDirectory::DeleteFile(std::string_view file_name) {
return false;
}
bool LayeredVfsDirectory::Rename(std::string_view name_) {
name = name_;
bool LayeredVfsDirectory::Rename(std::string_view new_name) {
name = new_name;
return true;
}

View File

@@ -13,7 +13,7 @@ namespace FileSys {
// one and falling back to the one after. The highest priority directory (overwrites all others)
// should be element 0 in the dirs vector.
class LayeredVfsDirectory : public VfsDirectory {
LayeredVfsDirectory(std::vector<VirtualDir> dirs, std::string name);
explicit LayeredVfsDirectory(std::vector<VirtualDir> dirs_, std::string name_);
public:
~LayeredVfsDirectory() override;
@@ -23,8 +23,8 @@ public:
VirtualFile GetFileRelative(std::string_view path) const override;
VirtualDir GetDirectoryRelative(std::string_view path) const override;
VirtualFile GetFile(std::string_view name) const override;
VirtualDir GetSubdirectory(std::string_view name) const override;
VirtualFile GetFile(std::string_view file_name) const override;
VirtualDir GetSubdirectory(std::string_view subdir_name) const override;
std::string GetFullPath() const override;
std::vector<VirtualFile> GetFiles() const override;
@@ -33,11 +33,11 @@ public:
bool IsReadable() const override;
std::string GetName() const override;
VirtualDir GetParentDirectory() const override;
VirtualDir CreateSubdirectory(std::string_view name) override;
VirtualFile CreateFile(std::string_view name) override;
bool DeleteSubdirectory(std::string_view name) override;
bool DeleteFile(std::string_view name) override;
bool Rename(std::string_view name) override;
VirtualDir CreateSubdirectory(std::string_view subdir_name) override;
VirtualFile CreateFile(std::string_view file_name) override;
bool DeleteSubdirectory(std::string_view subdir_name) override;
bool DeleteFile(std::string_view file_name) override;
bool Rename(std::string_view new_name) override;
private:
std::vector<VirtualDir> dirs;

View File

@@ -3,7 +3,16 @@
// Refer to the license.txt file included.
#include <string>
#ifdef __GNUC__
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wshadow"
#endif
#include <zip.h>
#ifdef __GNUC__
#pragma GCC diagnostic pop
#endif
#include "common/logging/backend.h"
#include "core/file_sys/vfs.h"
#include "core/file_sys/vfs_libzip.h"
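The pragmas above suppress -Wshadow only while the third-party <zip.h> header is being included, then restore the previous warning state. The same push/ignored/pop pattern as a compilable sketch, applied here to a local snippet that may otherwise trip -Wshadow; in practice the pragmas usually bracket an external #include:

// Silence one warning for a bounded region, then restore the project's settings.
#include <iostream>

int value = 1; // file-scope name shadowed below

#ifdef __GNUC__
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wshadow"
#endif
int Shadowing() {
    const auto add_one = [](int value) { return value + 1; }; // shadows ::value
    return add_one(41);
}
#ifdef __GNUC__
#pragma GCC diagnostic pop
#endif

int main() {
    std::cout << Shadowing() << '\n'; // 42
}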

View File

@@ -84,8 +84,8 @@ std::size_t OffsetVfsFile::WriteBytes(const std::vector<u8>& data, std::size_t r
return file->Write(data.data(), TrimToFit(data.size(), r_offset), offset + r_offset);
}
bool OffsetVfsFile::Rename(std::string_view name) {
return file->Rename(name);
bool OffsetVfsFile::Rename(std::string_view new_name) {
return file->Rename(new_name);
}
std::size_t OffsetVfsFile::GetOffset() const {

View File

@@ -35,7 +35,7 @@ public:
bool WriteByte(u8 data, std::size_t offset) override;
std::size_t WriteBytes(const std::vector<u8>& data, std::size_t offset) override;
bool Rename(std::string_view name) override;
bool Rename(std::string_view new_name) override;
std::size_t GetOffset() const;

View File

@@ -358,16 +358,16 @@ RealVfsDirectory::RealVfsDirectory(RealVfsFilesystem& base_, const std::string&
RealVfsDirectory::~RealVfsDirectory() = default;
VirtualFile RealVfsDirectory::GetFileRelative(std::string_view path) const {
const auto full_path = FS::SanitizePath(this->path + DIR_SEP + std::string(path));
VirtualFile RealVfsDirectory::GetFileRelative(std::string_view relative_path) const {
const auto full_path = FS::SanitizePath(path + DIR_SEP + std::string(relative_path));
if (!FS::Exists(full_path) || FS::IsDirectory(full_path)) {
return nullptr;
}
return base.OpenFile(full_path, perms);
}
VirtualDir RealVfsDirectory::GetDirectoryRelative(std::string_view path) const {
const auto full_path = FS::SanitizePath(this->path + DIR_SEP + std::string(path));
VirtualDir RealVfsDirectory::GetDirectoryRelative(std::string_view relative_path) const {
const auto full_path = FS::SanitizePath(path + DIR_SEP + std::string(relative_path));
if (!FS::Exists(full_path) || !FS::IsDirectory(full_path)) {
return nullptr;
}
@@ -382,13 +382,13 @@ VirtualDir RealVfsDirectory::GetSubdirectory(std::string_view name) const {
return GetDirectoryRelative(name);
}
VirtualFile RealVfsDirectory::CreateFileRelative(std::string_view path) {
const auto full_path = FS::SanitizePath(this->path + DIR_SEP + std::string(path));
VirtualFile RealVfsDirectory::CreateFileRelative(std::string_view relative_path) {
const auto full_path = FS::SanitizePath(path + DIR_SEP + std::string(relative_path));
return base.CreateFile(full_path, perms);
}
VirtualDir RealVfsDirectory::CreateDirectoryRelative(std::string_view path) {
const auto full_path = FS::SanitizePath(this->path + DIR_SEP + std::string(path));
VirtualDir RealVfsDirectory::CreateDirectoryRelative(std::string_view relative_path) {
const auto full_path = FS::SanitizePath(path + DIR_SEP + std::string(relative_path));
return base.CreateDirectory(full_path, perms);
}

View File

@@ -79,12 +79,12 @@ class RealVfsDirectory : public VfsDirectory {
public:
~RealVfsDirectory() override;
VirtualFile GetFileRelative(std::string_view path) const override;
VirtualDir GetDirectoryRelative(std::string_view path) const override;
VirtualFile GetFileRelative(std::string_view relative_path) const override;
VirtualDir GetDirectoryRelative(std::string_view relative_path) const override;
VirtualFile GetFile(std::string_view name) const override;
VirtualDir GetSubdirectory(std::string_view name) const override;
VirtualFile CreateFileRelative(std::string_view path) override;
VirtualDir CreateDirectoryRelative(std::string_view path) override;
VirtualFile CreateFileRelative(std::string_view relative_path) override;
VirtualDir CreateDirectoryRelative(std::string_view relative_path) override;
bool DeleteSubdirectoryRecursive(std::string_view name) override;
std::vector<VirtualFile> GetFiles() const override;
std::vector<VirtualDir> GetSubdirectories() const override;

View File

@@ -14,9 +14,9 @@ namespace FileSys {
class StaticVfsFile : public VfsFile {
public:
explicit StaticVfsFile(u8 value, std::size_t size = 0, std::string name = "",
VirtualDir parent = nullptr)
: value{value}, size{size}, name{std::move(name)}, parent{std::move(parent)} {}
explicit StaticVfsFile(u8 value_, std::size_t size_ = 0, std::string name_ = "",
VirtualDir parent_ = nullptr)
: value{value_}, size{size_}, name{std::move(name_)}, parent{std::move(parent_)} {}
std::string GetName() const override {
return name;

View File

@@ -7,8 +7,8 @@
#include "core/file_sys/vfs_vector.h"
namespace FileSys {
VectorVfsFile::VectorVfsFile(std::vector<u8> initial_data, std::string name, VirtualDir parent)
: data(std::move(initial_data)), parent(std::move(parent)), name(std::move(name)) {}
VectorVfsFile::VectorVfsFile(std::vector<u8> initial_data, std::string name_, VirtualDir parent_)
: data(std::move(initial_data)), parent(std::move(parent_)), name(std::move(name_)) {}
VectorVfsFile::~VectorVfsFile() = default;
@@ -103,12 +103,12 @@ static bool FindAndRemoveVectorElement(std::vector<T>& vec, std::string_view nam
return true;
}
bool VectorVfsDirectory::DeleteSubdirectory(std::string_view name) {
return FindAndRemoveVectorElement(dirs, name);
bool VectorVfsDirectory::DeleteSubdirectory(std::string_view subdir_name) {
return FindAndRemoveVectorElement(dirs, subdir_name);
}
bool VectorVfsDirectory::DeleteFile(std::string_view name) {
return FindAndRemoveVectorElement(files, name);
bool VectorVfsDirectory::DeleteFile(std::string_view file_name) {
return FindAndRemoveVectorElement(files, file_name);
}
bool VectorVfsDirectory::Rename(std::string_view name_) {
@@ -116,11 +116,11 @@ bool VectorVfsDirectory::Rename(std::string_view name_) {
return true;
}
VirtualDir VectorVfsDirectory::CreateSubdirectory(std::string_view name) {
VirtualDir VectorVfsDirectory::CreateSubdirectory(std::string_view subdir_name) {
return nullptr;
}
VirtualFile VectorVfsDirectory::CreateFile(std::string_view name) {
VirtualFile VectorVfsDirectory::CreateFile(std::string_view file_name) {
return nullptr;
}
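The renames above feed FindAndRemoveVectorElement, whose find-and-erase shape is worth seeing on its own. A standalone sketch under the assumption that elements expose a GetName() accessor; the Entry type is hypothetical:

// Find an element by name and erase it, reporting whether anything was removed.
#include <algorithm>
#include <iostream>
#include <string>
#include <string_view>
#include <vector>

struct Entry {
    std::string name;
    std::string GetName() const { return name; }
};

template <typename T>
bool FindAndRemoveVectorElement(std::vector<T>& vec, std::string_view name) {
    const auto iter = std::find_if(vec.begin(), vec.end(),
                                   [name](const T& entry) { return entry.GetName() == name; });
    if (iter == vec.end()) {
        return false;
    }
    vec.erase(iter);
    return true;
}

int main() {
    std::vector<Entry> files{{"a.bin"}, {"b.bin"}};
    std::cout << FindAndRemoveVectorElement(files, "b.bin") << ' ' << files.size() << '\n'; // 1 1
}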

View File

@@ -75,8 +75,8 @@ std::shared_ptr<ArrayVfsFile<Size>> MakeArrayFile(const std::array<u8, Size>& da
// An implementation of VfsFile that is backed by a vector optionally supplied upon construction
class VectorVfsFile : public VfsFile {
public:
explicit VectorVfsFile(std::vector<u8> initial_data = {}, std::string name = "",
VirtualDir parent = nullptr);
explicit VectorVfsFile(std::vector<u8> initial_data = {}, std::string name_ = "",
VirtualDir parent_ = nullptr);
~VectorVfsFile() override;
std::string GetName() const override;
@@ -112,11 +112,11 @@ public:
bool IsReadable() const override;
std::string GetName() const override;
VirtualDir GetParentDirectory() const override;
bool DeleteSubdirectory(std::string_view name) override;
bool DeleteFile(std::string_view name) override;
bool DeleteSubdirectory(std::string_view subdir_name) override;
bool DeleteFile(std::string_view file_name) override;
bool Rename(std::string_view name) override;
VirtualDir CreateSubdirectory(std::string_view name) override;
VirtualFile CreateFile(std::string_view name) override;
VirtualDir CreateSubdirectory(std::string_view subdir_name) override;
VirtualFile CreateFile(std::string_view file_name) override;
virtual void AddFile(VirtualFile file);
virtual void AddDirectory(VirtualDir dir);

View File

@@ -12,7 +12,7 @@ WebBrowserApplet::~WebBrowserApplet() = default;
DefaultWebBrowserApplet::~DefaultWebBrowserApplet() = default;
void DefaultWebBrowserApplet::OpenLocalWebPage(
std::string_view local_url, std::function<void()> extract_romfs_callback,
const std::string& local_url, std::function<void()> extract_romfs_callback,
std::function<void(Service::AM::Applets::WebExitReason, std::string)> callback) const {
LOG_WARNING(Service_AM, "(STUBBED) called, backend requested to open local web page at {}",
local_url);
@@ -21,7 +21,7 @@ void DefaultWebBrowserApplet::OpenLocalWebPage(
}
void DefaultWebBrowserApplet::OpenExternalWebPage(
std::string_view external_url,
const std::string& external_url,
std::function<void(Service::AM::Applets::WebExitReason, std::string)> callback) const {
LOG_WARNING(Service_AM, "(STUBBED) called, backend requested to open external web page at {}",
external_url);

View File

@@ -16,11 +16,11 @@ public:
virtual ~WebBrowserApplet();
virtual void OpenLocalWebPage(
std::string_view local_url, std::function<void()> extract_romfs_callback,
const std::string& local_url, std::function<void()> extract_romfs_callback,
std::function<void(Service::AM::Applets::WebExitReason, std::string)> callback) const = 0;
virtual void OpenExternalWebPage(
std::string_view external_url,
const std::string& external_url,
std::function<void(Service::AM::Applets::WebExitReason, std::string)> callback) const = 0;
};
@@ -28,11 +28,12 @@ class DefaultWebBrowserApplet final : public WebBrowserApplet {
public:
~DefaultWebBrowserApplet() override;
void OpenLocalWebPage(std::string_view local_url, std::function<void()> extract_romfs_callback,
void OpenLocalWebPage(const std::string& local_url,
std::function<void()> extract_romfs_callback,
std::function<void(Service::AM::Applets::WebExitReason, std::string)>
callback) const override;
void OpenExternalWebPage(std::string_view external_url,
void OpenExternalWebPage(const std::string& external_url,
std::function<void(Service::AM::Applets::WebExitReason, std::string)>
callback) const override;
};

View File

@@ -26,7 +26,7 @@ public:
private:
class Device : public Input::TouchDevice {
public:
explicit Device(std::weak_ptr<TouchState>&& touch_state) : touch_state(touch_state) {}
explicit Device(std::weak_ptr<TouchState>&& touch_state_) : touch_state(touch_state_) {}
Input::TouchStatus GetStatus() const override {
if (auto state = touch_state.lock()) {
std::lock_guard guard{state->mutex};
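The touch device above holds a std::weak_ptr to shared state and only reads it when lock() still yields a live object, under the state's own mutex. A minimal sketch of that observe-without-owning pattern with simplified types:

// weak_ptr::lock() yields a shared_ptr only while the state is alive;
// otherwise a default status is returned.
#include <iostream>
#include <memory>
#include <mutex>
#include <utility>

struct TouchState {
    std::mutex mutex;
    float x = 0.5f;
    float y = 0.25f;
};

class Device {
public:
    explicit Device(std::weak_ptr<TouchState>&& touch_state_) : touch_state(touch_state_) {}

    std::pair<float, float> GetStatus() const {
        if (auto state = touch_state.lock()) {
            std::lock_guard guard{state->mutex};
            return {state->x, state->y};
        }
        return {}; // owner already destroyed
    }

private:
    std::weak_ptr<TouchState> touch_state;
};

int main() {
    auto state = std::make_shared<TouchState>();
    Device device{std::weak_ptr<TouchState>{state}};
    std::cout << device.GetStatus().first << '\n'; // 0.5
    state.reset();
    std::cout << device.GetStatus().first << '\n'; // 0 (state expired)
}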

View File

@@ -11,6 +11,7 @@
#include <utility>
#include "common/logging/log.h"
#include "common/param_package.h"
#include "common/quaternion.h"
#include "common/vector_math.h"
namespace Input {
@@ -143,9 +144,10 @@ using VibrationDevice = InputDevice<u8>;
/**
* A motion status is an object that returns a tuple of accelerometer state vector,
* gyroscope state vector, rotation state vector and orientation state matrix.
* gyroscope state vector, rotation state vector, orientation state matrix and quaternion state
* vector.
*
* For both vectors:
* For both 3D vectors:
* x+ is the same direction as RIGHT on D-pad.
* y+ is normal to the touch screen, pointing outward.
* z+ is the same direction as UP on D-pad.
@@ -164,9 +166,13 @@ using VibrationDevice = InputDevice<u8>;
* x vector
* y vector
* z vector
*
* For quaternion state vector
* xyz vector
* w float
*/
using MotionStatus = std::tuple<Common::Vec3<float>, Common::Vec3<float>, Common::Vec3<float>,
std::array<Common::Vec3f, 3>>;
std::array<Common::Vec3f, 3>, Common::Quaternion<f32>>;
/**
* A motion device is an input device that returns a motion status object
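MotionStatus now carries a quaternion alongside the accelerometer, gyroscope and rotation vectors and the orientation matrix. A sketch of packing and unpacking such a tuple with structured bindings; Vec3 and Quaternion below are simplified stand-ins for the Common types:

// Pack a motion sample into a tuple and unpack it with structured bindings.
#include <array>
#include <iostream>
#include <tuple>

struct Vec3 {
    float x{}, y{}, z{};
};

struct Quaternion {
    Vec3 xyz{};  // xyz vector
    float w{1.0f}; // w float
};

using MotionStatus = std::tuple<Vec3, Vec3, Vec3, std::array<Vec3, 3>, Quaternion>;

MotionStatus MakeSample() {
    const Vec3 accel{0.0f, -1.0f, 0.0f};                 // example accelerometer reading
    const Vec3 gyro{};                                    // example gyroscope reading
    const Vec3 rotation{};                                // accumulated rotation
    const std::array<Vec3, 3> orientation{Vec3{1, 0, 0}, Vec3{0, 1, 0}, Vec3{0, 0, 1}};
    const Quaternion quat{};                              // identity orientation
    return {accel, gyro, rotation, orientation, quat};
}

int main() {
    const auto [accel, gyro, rotation, orientation, quat] = MakeSample();
    std::cout << accel.y << ' ' << quat.w << '\n'; // -1 1
}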

View File

@@ -32,7 +32,8 @@ enum class CommandType : u32 {
Control = 5,
RequestWithContext = 6,
ControlWithContext = 7,
Unspecified,
TIPC_Close = 15,
TIPC_CommandRegion = 16, // Start of TIPC commands, this is an offset.
};
struct CommandHeader {
@@ -57,6 +58,20 @@ struct CommandHeader {
BitField<10, 4, BufferDescriptorCFlag> buf_c_descriptor_flags;
BitField<31, 1, u32> enable_handle_descriptor;
};
bool IsTipc() const {
return type.Value() >= CommandType::TIPC_CommandRegion;
}
bool IsCloseCommand() const {
switch (type.Value()) {
case CommandType::Close:
case CommandType::TIPC_Close:
return true;
default:
return false;
}
}
};
static_assert(sizeof(CommandHeader) == 8, "CommandHeader size is incorrect");
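The new TIPC entries treat any command value at or past TIPC_CommandRegion as a TIPC command, and Close/TIPC_Close both count as close commands. A standalone sketch of that classification on a plain enum (the real header uses a BitField-based CommandHeader):

// Classify command values against the TIPC region threshold.
#include <iostream>

enum class CommandType : unsigned {
    Close = 2,
    Request = 4,
    TIPC_Close = 15,
    TIPC_CommandRegion = 16, // first TIPC command value
};

constexpr bool IsTipc(CommandType type) {
    return static_cast<unsigned>(type) >= static_cast<unsigned>(CommandType::TIPC_CommandRegion);
}

constexpr bool IsCloseCommand(CommandType type) {
    return type == CommandType::Close || type == CommandType::TIPC_Close;
}

int main() {
    std::cout << IsTipc(CommandType::Request) << ' '             // 0
              << IsTipc(static_cast<CommandType>(20)) << ' '     // 1 (inside TIPC region)
              << IsCloseCommand(CommandType::TIPC_Close) << '\n'; // 1
}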

View File

@@ -13,12 +13,11 @@
#include "common/assert.h"
#include "common/common_types.h"
#include "core/hle/ipc.h"
#include "core/hle/kernel/client_port.h"
#include "core/hle/kernel/client_session.h"
#include "core/hle/kernel/hle_ipc.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/server_session.h"
#include "core/hle/kernel/session.h"
#include "core/hle/kernel/k_client_port.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/k_resource_limit.h"
#include "core/hle/kernel/k_session.h"
#include "core/hle/result.h"
namespace IPC {
@@ -29,19 +28,19 @@ class RequestHelperBase {
protected:
Kernel::HLERequestContext* context = nullptr;
u32* cmdbuf;
ptrdiff_t index = 0;
u32 index = 0;
public:
explicit RequestHelperBase(u32* command_buffer) : cmdbuf(command_buffer) {}
explicit RequestHelperBase(Kernel::HLERequestContext& context)
: context(&context), cmdbuf(context.CommandBuffer()) {}
explicit RequestHelperBase(Kernel::HLERequestContext& ctx)
: context(&ctx), cmdbuf(ctx.CommandBuffer()) {}
void Skip(u32 size_in_words, bool set_to_null) {
if (set_to_null) {
memset(cmdbuf + index, 0, size_in_words * sizeof(u32));
}
index += static_cast<ptrdiff_t>(size_in_words);
index += size_in_words;
}
/**
@@ -54,11 +53,11 @@ public:
}
u32 GetCurrentOffset() const {
return static_cast<u32>(index);
return index;
}
void SetCurrentOffset(u32 offset) {
index = static_cast<ptrdiff_t>(offset);
index = offset;
}
};
@@ -72,64 +71,79 @@ public:
AlwaysMoveHandles = 1,
};
explicit ResponseBuilder(Kernel::HLERequestContext& context, u32 normal_params_size,
u32 num_handles_to_copy = 0, u32 num_objects_to_move = 0,
explicit ResponseBuilder(Kernel::HLERequestContext& ctx, u32 normal_params_size_,
u32 num_handles_to_copy_ = 0, u32 num_objects_to_move_ = 0,
Flags flags = Flags::None)
: RequestHelperBase(context), normal_params_size(normal_params_size),
num_handles_to_copy(num_handles_to_copy),
num_objects_to_move(num_objects_to_move), kernel{context.kernel} {
: RequestHelperBase(ctx), normal_params_size(normal_params_size_),
num_handles_to_copy(num_handles_to_copy_),
num_objects_to_move(num_objects_to_move_), kernel{ctx.kernel} {
memset(cmdbuf, 0, sizeof(u32) * IPC::COMMAND_BUFFER_LENGTH);
context.ClearIncomingObjects();
ctx.ClearIncomingObjects();
IPC::CommandHeader header{};
// The entire size of the raw data section in u32 units, including the 16 bytes of mandatory
// padding.
u64 raw_data_size = sizeof(IPC::DataPayloadHeader) / 4 + 4 + normal_params_size;
u32 raw_data_size = ctx.IsTipc()
? normal_params_size - 1
: sizeof(IPC::DataPayloadHeader) / 4 + 4 + normal_params_size;
u32 num_handles_to_move{};
u32 num_domain_objects{};
const bool always_move_handles{
(static_cast<u32>(flags) & static_cast<u32>(Flags::AlwaysMoveHandles)) != 0};
if (!context.Session()->IsDomain() || always_move_handles) {
if (!ctx.Session()->IsDomain() || always_move_handles) {
num_handles_to_move = num_objects_to_move;
} else {
num_domain_objects = num_objects_to_move;
}
if (context.Session()->IsDomain()) {
raw_data_size += sizeof(DomainMessageHeader) / 4 + num_domain_objects;
if (ctx.Session()->IsDomain()) {
raw_data_size += static_cast<u32>(sizeof(DomainMessageHeader) / 4 + num_domain_objects);
}
if (ctx.IsTipc()) {
header.type.Assign(ctx.GetCommandType());
}
ctx.data_size = static_cast<u32>(raw_data_size);
header.data_size.Assign(static_cast<u32>(raw_data_size));
if (num_handles_to_copy || num_handles_to_move) {
if (num_handles_to_copy != 0 || num_handles_to_move != 0) {
header.enable_handle_descriptor.Assign(1);
}
PushRaw(header);
if (header.enable_handle_descriptor) {
IPC::HandleDescriptorHeader handle_descriptor_header{};
handle_descriptor_header.num_handles_to_copy.Assign(num_handles_to_copy);
handle_descriptor_header.num_handles_to_copy.Assign(num_handles_to_copy_);
handle_descriptor_header.num_handles_to_move.Assign(num_handles_to_move);
PushRaw(handle_descriptor_header);
ctx.handles_offset = index;
Skip(num_handles_to_copy + num_handles_to_move, true);
}
AlignWithPadding();
if (!ctx.IsTipc()) {
AlignWithPadding();
if (context.Session()->IsDomain() && context.HasDomainMessageHeader()) {
IPC::DomainMessageHeader domain_header{};
domain_header.num_objects = num_domain_objects;
PushRaw(domain_header);
if (ctx.Session()->IsDomain() && ctx.HasDomainMessageHeader()) {
IPC::DomainMessageHeader domain_header{};
domain_header.num_objects = num_domain_objects;
PushRaw(domain_header);
}
IPC::DataPayloadHeader data_payload_header{};
data_payload_header.magic = Common::MakeMagic('S', 'F', 'C', 'O');
PushRaw(data_payload_header);
}
IPC::DataPayloadHeader data_payload_header{};
data_payload_header.magic = Common::MakeMagic('S', 'F', 'C', 'O');
PushRaw(data_payload_header);
data_payload_index = index;
datapayload_index = index;
ctx.data_payload_offset = index;
ctx.domain_offset = index + raw_data_size / 4;
}
template <class T>
@@ -137,9 +151,14 @@ public:
if (context->Session()->IsDomain()) {
context->AddDomainObject(std::move(iface));
} else {
auto [client, server] = Kernel::Session::Create(kernel, iface->GetServiceName());
context->AddMoveObject(std::move(client));
iface->ClientConnected(std::move(server));
// kernel.CurrentProcess()->GetResourceLimit()->Reserve(
// Kernel::LimitableResource::Sessions, 1);
auto* session = Kernel::KSession::Create(kernel);
session->Initialize(nullptr, iface->GetServiceName());
context->AddMoveObject(&session->GetClientSession());
iface->ClientConnected(&session->GetServerSession());
}
}
@@ -153,7 +172,7 @@ public:
const std::size_t num_move_objects = context->NumMoveObjects();
ASSERT_MSG(!num_domain_objects || !num_move_objects,
"cannot move normal handles and domain objects");
ASSERT_MSG((index - datapayload_index) == normal_params_size,
ASSERT_MSG((index - data_payload_index) == normal_params_size,
"normal_params_size value is incorrect");
ASSERT_MSG((num_domain_objects + num_move_objects) == num_objects_to_move,
"num_objects_to_move value is incorrect");
@@ -215,23 +234,29 @@ public:
void PushRaw(const T& value);
template <typename... O>
void PushMoveObjects(std::shared_ptr<O>... pointers);
void PushMoveObjects(O*... pointers);
template <typename... O>
void PushCopyObjects(std::shared_ptr<O>... pointers);
void PushMoveObjects(O&... pointers);
template <typename... O>
void PushCopyObjects(O*... pointers);
template <typename... O>
void PushCopyObjects(O&... pointers);
private:
u32 normal_params_size{};
u32 num_handles_to_copy{};
u32 num_objects_to_move{}; ///< Domain objects or move handles, context dependent
std::ptrdiff_t datapayload_index{};
u32 data_payload_index{};
Kernel::KernelCore& kernel;
};
/// Push ///
inline void ResponseBuilder::PushImpl(s32 value) {
cmdbuf[index++] = static_cast<u32>(value);
cmdbuf[index++] = value;
}
inline void ResponseBuilder::PushImpl(u32 value) {
@@ -301,18 +326,34 @@ void ResponseBuilder::Push(const First& first_value, const Other&... other_value
}
template <typename... O>
inline void ResponseBuilder::PushCopyObjects(std::shared_ptr<O>... pointers) {
inline void ResponseBuilder::PushCopyObjects(O*... pointers) {
auto objects = {pointers...};
for (auto& object : objects) {
context->AddCopyObject(std::move(object));
context->AddCopyObject(object);
}
}
template <typename... O>
inline void ResponseBuilder::PushMoveObjects(std::shared_ptr<O>... pointers) {
inline void ResponseBuilder::PushCopyObjects(O&... pointers) {
auto objects = {&pointers...};
for (auto& object : objects) {
context->AddCopyObject(object);
}
}
template <typename... O>
inline void ResponseBuilder::PushMoveObjects(O*... pointers) {
auto objects = {pointers...};
for (auto& object : objects) {
context->AddMoveObject(std::move(object));
context->AddMoveObject(object);
}
}
template <typename... O>
inline void ResponseBuilder::PushMoveObjects(O&... pointers) {
auto objects = {&pointers...};
for (auto& object : objects) {
context->AddMoveObject(object);
}
}
@@ -320,9 +361,9 @@ class RequestParser : public RequestHelperBase {
public:
explicit RequestParser(u32* command_buffer) : RequestHelperBase(command_buffer) {}
explicit RequestParser(Kernel::HLERequestContext& context) : RequestHelperBase(context) {
ASSERT_MSG(context.GetDataPayloadOffset(), "context is incomplete");
Skip(context.GetDataPayloadOffset(), false);
explicit RequestParser(Kernel::HLERequestContext& ctx) : RequestHelperBase(ctx) {
ASSERT_MSG(ctx.GetDataPayloadOffset(), "context is incomplete");
Skip(ctx.GetDataPayloadOffset(), false);
// Skip the u64 command id, it's already stored in the context
static constexpr u32 CommandIdSize = 2;
Skip(CommandIdSize, false);
@@ -359,12 +400,6 @@ public:
template <typename T>
T PopRaw();
template <typename T>
std::shared_ptr<T> GetMoveObject(std::size_t index);
template <typename T>
std::shared_ptr<T> GetCopyObject(std::size_t index);
template <class T>
std::shared_ptr<T> PopIpcInterface() {
ASSERT(context->Session()->IsDomain());
@@ -469,14 +504,4 @@ void RequestParser::Pop(First& first_value, Other&... other_values) {
Pop(other_values...);
}
template <typename T>
std::shared_ptr<T> RequestParser::GetMoveObject(std::size_t index) {
return context->GetMoveObject<T>(index);
}
template <typename T>
std::shared_ptr<T> RequestParser::GetCopyObject(std::size_t index) {
return context->GetCopyObject<T>(index);
}
} // namespace IPC
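Throughout the ResponseBuilder rework, the write position is tracked as a u32 word index, and the words pushed after the payload header must match normal_params_size. A simplified, self-contained sketch of that bookkeeping; WordWriter is a toy stand-in, not the real IPC layout:

// Push u32 words, remember where the payload starts, and verify the payload
// length afterwards (mirrors the index / data_payload_index / normal_params_size
// relationship, not the actual wire format).
#include <cassert>
#include <cstdint>
#include <iostream>
#include <vector>

class WordWriter {
public:
    explicit WordWriter(std::size_t capacity) : buffer(capacity, 0) {}

    void Push(std::uint32_t value) {
        buffer.at(index++) = value;
    }

    void MarkPayloadStart() {
        data_payload_index = index;
    }

    void ValidatePayloadSize(std::uint32_t normal_params_size) const {
        // Analogous to the ASSERT_MSG((index - data_payload_index) == normal_params_size) check.
        assert(index - data_payload_index == normal_params_size);
    }

private:
    std::vector<std::uint32_t> buffer;
    std::uint32_t index = 0;
    std::uint32_t data_payload_index = 0;
};

int main() {
    WordWriter writer(16);
    writer.Push(0xDEADBEEF); // stand-in header word
    writer.MarkPayloadStart();
    writer.Push(0);  // stand-in result code
    writer.Push(42); // one response value
    writer.ValidatePayloadSize(2);
    std::cout << "payload size matches\n";
}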

View File

@@ -1,47 +0,0 @@
// Copyright 2016 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/hle/kernel/client_port.h"
#include "core/hle/kernel/client_session.h"
#include "core/hle/kernel/hle_ipc.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/server_port.h"
#include "core/hle/kernel/session.h"
#include "core/hle/kernel/svc_results.h"
namespace Kernel {
ClientPort::ClientPort(KernelCore& kernel) : Object{kernel} {}
ClientPort::~ClientPort() = default;
std::shared_ptr<ServerPort> ClientPort::GetServerPort() const {
return server_port;
}
ResultVal<std::shared_ptr<ClientSession>> ClientPort::Connect() {
if (active_sessions >= max_sessions) {
return ResultMaxConnectionsReached;
}
active_sessions++;
auto [client, server] = Kernel::Session::Create(kernel, name);
if (server_port->HasHLEHandler()) {
server_port->GetHLEHandler()->ClientConnected(std::move(server));
} else {
server_port->AppendPendingSession(std::move(server));
}
return MakeResult(std::move(client));
}
void ClientPort::ConnectionClosed() {
if (active_sessions == 0) {
return;
}
--active_sessions;
}
} // namespace Kernel

View File

@@ -1,63 +0,0 @@
// Copyright 2016 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <memory>
#include <string>
#include "common/common_types.h"
#include "core/hle/kernel/object.h"
#include "core/hle/result.h"
namespace Kernel {
class ClientSession;
class KernelCore;
class ServerPort;
class ClientPort final : public Object {
public:
explicit ClientPort(KernelCore& kernel);
~ClientPort() override;
friend class ServerPort;
std::string GetTypeName() const override {
return "ClientPort";
}
std::string GetName() const override {
return name;
}
static constexpr HandleType HANDLE_TYPE = HandleType::ClientPort;
HandleType GetHandleType() const override {
return HANDLE_TYPE;
}
std::shared_ptr<ServerPort> GetServerPort() const;
/**
* Creates a new Session pair, adds the created ServerSession to the associated ServerPort's
* list of pending sessions, and signals the ServerPort, causing any threads
* waiting on it to awake.
* @returns ClientSession The client endpoint of the created Session pair, or error code.
*/
ResultVal<std::shared_ptr<ClientSession>> Connect();
/**
* Signifies that a previously active connection has been closed,
* decreasing the total number of active connections to this port.
*/
void ConnectionClosed();
void Finalize() override {}
private:
std::shared_ptr<ServerPort> server_port; ///< ServerPort associated with this client port.
u32 max_sessions = 0; ///< Maximum number of simultaneous sessions the port can have
u32 active_sessions = 0; ///< Number of currently open sessions to this port
std::string name; ///< Name of client port (optional)
};
} // namespace Kernel

View File

@@ -1,53 +0,0 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/hle/kernel/client_session.h"
#include "core/hle/kernel/hle_ipc.h"
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/server_session.h"
#include "core/hle/kernel/session.h"
#include "core/hle/kernel/svc_results.h"
#include "core/hle/result.h"
namespace Kernel {
ClientSession::ClientSession(KernelCore& kernel) : KSynchronizationObject{kernel} {}
ClientSession::~ClientSession() {
// This destructor will be called automatically when the last ClientSession handle is closed by
// the emulated application.
if (parent->Server()) {
parent->Server()->ClientDisconnected();
}
}
bool ClientSession::IsSignaled() const {
UNIMPLEMENTED();
return true;
}
ResultVal<std::shared_ptr<ClientSession>> ClientSession::Create(KernelCore& kernel,
std::shared_ptr<Session> parent,
std::string name) {
std::shared_ptr<ClientSession> client_session{std::make_shared<ClientSession>(kernel)};
client_session->name = std::move(name);
client_session->parent = std::move(parent);
return MakeResult(std::move(client_session));
}
ResultCode ClientSession::SendSyncRequest(std::shared_ptr<KThread> thread,
Core::Memory::Memory& memory,
Core::Timing::CoreTiming& core_timing) {
// Keep ServerSession alive until we're done working with it.
if (!parent->Server()) {
return ResultSessionClosedByRemote;
}
// Signal the server session that new data is available
return parent->Server()->HandleSyncRequest(std::move(thread), memory, core_timing);
}
} // namespace Kernel

View File

@@ -1,68 +0,0 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <memory>
#include <string>
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/result.h"
union ResultCode;
namespace Core::Memory {
class Memory;
}
namespace Core::Timing {
class CoreTiming;
}
namespace Kernel {
class KernelCore;
class Session;
class KThread;
class ClientSession final : public KSynchronizationObject {
public:
explicit ClientSession(KernelCore& kernel);
~ClientSession() override;
friend class Session;
std::string GetTypeName() const override {
return "ClientSession";
}
std::string GetName() const override {
return name;
}
static constexpr HandleType HANDLE_TYPE = HandleType::ClientSession;
HandleType GetHandleType() const override {
return HANDLE_TYPE;
}
ResultCode SendSyncRequest(std::shared_ptr<KThread> thread, Core::Memory::Memory& memory,
Core::Timing::CoreTiming& core_timing);
bool IsSignaled() const override;
void Finalize() override {}
private:
static ResultVal<std::shared_ptr<ClientSession>> Create(KernelCore& kernel,
std::shared_ptr<Session> parent,
std::string name = "Unknown");
/// The parent session, which links to the server endpoint.
std::shared_ptr<Session> parent;
/// Name of the client session (optional)
std::string name;
};
} // namespace Kernel

View File

@@ -12,17 +12,17 @@
namespace Kernel {
GlobalSchedulerContext::GlobalSchedulerContext(KernelCore& kernel)
: kernel{kernel}, scheduler_lock{kernel} {}
GlobalSchedulerContext::GlobalSchedulerContext(KernelCore& kernel_)
: kernel{kernel_}, scheduler_lock{kernel_} {}
GlobalSchedulerContext::~GlobalSchedulerContext() = default;
void GlobalSchedulerContext::AddThread(std::shared_ptr<KThread> thread) {
void GlobalSchedulerContext::AddThread(KThread* thread) {
std::scoped_lock lock{global_list_guard};
thread_list.push_back(std::move(thread));
thread_list.push_back(thread);
}
void GlobalSchedulerContext::RemoveThread(std::shared_ptr<KThread> thread) {
void GlobalSchedulerContext::RemoveThread(KThread* thread) {
std::scoped_lock lock{global_list_guard};
thread_list.erase(std::remove(thread_list.begin(), thread_list.end(), thread),
thread_list.end());

View File

@@ -34,17 +34,17 @@ class GlobalSchedulerContext final {
public:
using LockType = KAbstractSchedulerLock<KScheduler>;
explicit GlobalSchedulerContext(KernelCore& kernel);
explicit GlobalSchedulerContext(KernelCore& kernel_);
~GlobalSchedulerContext();
/// Adds a new thread to the scheduler
void AddThread(std::shared_ptr<KThread> thread);
void AddThread(KThread* thread);
/// Removes a thread from the scheduler
void RemoveThread(std::shared_ptr<KThread> thread);
void RemoveThread(KThread* thread);
/// Returns a list of all threads managed by the scheduler
[[nodiscard]] const std::vector<std::shared_ptr<KThread>>& GetThreadList() const {
[[nodiscard]] const std::vector<KThread*>& GetThreadList() const {
return thread_list;
}
@@ -79,7 +79,7 @@ private:
LockType scheduler_lock;
/// Lists all thread ids that aren't deleted/etc.
std::vector<std::shared_ptr<KThread>> thread_list;
std::vector<KThread*> thread_list;
Common::SpinLock global_list_guard{};
};

View File

@@ -1,131 +0,0 @@
// Copyright 2014 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <utility>
#include "common/assert.h"
#include "common/logging/log.h"
#include "core/core.h"
#include "core/hle/kernel/handle_table.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/svc_results.h"
namespace Kernel {
namespace {
constexpr u16 GetSlot(Handle handle) {
return static_cast<u16>(handle >> 15);
}
constexpr u16 GetGeneration(Handle handle) {
return static_cast<u16>(handle & 0x7FFF);
}
} // Anonymous namespace
HandleTable::HandleTable(KernelCore& kernel) : kernel{kernel} {
Clear();
}
HandleTable::~HandleTable() = default;
ResultCode HandleTable::SetSize(s32 handle_table_size) {
if (static_cast<u32>(handle_table_size) > MAX_COUNT) {
LOG_ERROR(Kernel, "Handle table size {} is greater than {}", handle_table_size, MAX_COUNT);
return ResultOutOfMemory;
}
// Values less than or equal to zero indicate to use the maximum allowable
// size for the handle table in the actual kernel, so we ignore the given
// value in that case, since we assume this by default unless this function
// is called.
if (handle_table_size > 0) {
table_size = static_cast<u16>(handle_table_size);
}
return RESULT_SUCCESS;
}
ResultVal<Handle> HandleTable::Create(std::shared_ptr<Object> obj) {
DEBUG_ASSERT(obj != nullptr);
const u16 slot = next_free_slot;
if (slot >= table_size) {
LOG_ERROR(Kernel, "Unable to allocate Handle, too many slots in use.");
return ResultHandleTableFull;
}
next_free_slot = generations[slot];
const u16 generation = next_generation++;
// Overflow count so it fits in the 15 bits dedicated to the generation in the handle.
// Horizon OS uses zero to represent an invalid handle, so skip to 1.
if (next_generation >= (1 << 15)) {
next_generation = 1;
}
generations[slot] = generation;
objects[slot] = std::move(obj);
Handle handle = generation | (slot << 15);
return MakeResult<Handle>(handle);
}
ResultVal<Handle> HandleTable::Duplicate(Handle handle) {
std::shared_ptr<Object> object = GetGeneric(handle);
if (object == nullptr) {
LOG_ERROR(Kernel, "Tried to duplicate invalid handle: {:08X}", handle);
return ResultInvalidHandle;
}
return Create(std::move(object));
}
ResultCode HandleTable::Close(Handle handle) {
if (!IsValid(handle)) {
LOG_ERROR(Kernel, "Handle is not valid! handle={:08X}", handle);
return ResultInvalidHandle;
}
const u16 slot = GetSlot(handle);
if (objects[slot].use_count() == 1) {
objects[slot]->Finalize();
}
objects[slot] = nullptr;
generations[slot] = next_free_slot;
next_free_slot = slot;
return RESULT_SUCCESS;
}
bool HandleTable::IsValid(Handle handle) const {
const std::size_t slot = GetSlot(handle);
const u16 generation = GetGeneration(handle);
return slot < table_size && objects[slot] != nullptr && generations[slot] == generation;
}
std::shared_ptr<Object> HandleTable::GetGeneric(Handle handle) const {
if (handle == CurrentThread) {
return SharedFrom(kernel.CurrentScheduler()->GetCurrentThread());
} else if (handle == CurrentProcess) {
return SharedFrom(kernel.CurrentProcess());
}
if (!IsValid(handle)) {
return nullptr;
}
return objects[GetSlot(handle)];
}
void HandleTable::Clear() {
for (u16 i = 0; i < table_size; ++i) {
generations[i] = static_cast<u16>(i + 1);
objects[i] = nullptr;
}
next_free_slot = 0;
}
} // namespace Kernel
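To make the bit layout concrete, here is a small self-contained restatement of the same encoding (constants mirror GetSlot/GetGeneration/Create above; the sample values are made up):

#include <cassert>
#include <cstdint>

using Handle = uint32_t;

// Slot lives in bits 31:15, generation in bits 14:0, exactly as above.
constexpr Handle MakeHandle(uint16_t slot, uint16_t generation) {
    return static_cast<Handle>(generation) | (static_cast<Handle>(slot) << 15);
}
constexpr uint16_t SlotOf(Handle handle) {
    return static_cast<uint16_t>(handle >> 15);
}
constexpr uint16_t GenerationOf(Handle handle) {
    return static_cast<uint16_t>(handle & 0x7FFF);
}

int main() {
    const Handle first = MakeHandle(/*slot=*/3, /*generation=*/1);
    assert(SlotOf(first) == 3 && GenerationOf(first) == 1);
    // If slot 3 is later reused with generation 2, the old handle no longer
    // matches the generation stored in the table and is rejected as stale.
    const Handle reused = MakeHandle(3, 2);
    assert(GenerationOf(reused) != GenerationOf(first));
    return 0;
}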

View File

@@ -1,144 +0,0 @@
// Copyright 2014 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <array>
#include <cstddef>
#include <memory>
#include "common/common_types.h"
#include "core/hle/kernel/object.h"
#include "core/hle/result.h"
namespace Kernel {
class KernelCore;
enum KernelHandle : Handle {
InvalidHandle = 0,
CurrentThread = 0xFFFF8000,
CurrentProcess = 0xFFFF8001,
};
/**
* This class allows the creation of Handles, which are references to objects that can be tested
* for validity and looked up. Here they are used to pass references to kernel objects to/from the
 * emulated process. It has been designed so that it follows the same handle format and has
* approximately the same restrictions as the handle manager in the CTR-OS.
*
* Handles contain two sub-fields: a slot index (bits 31:15) and a generation value (bits 14:0).
* The slot index is used to index into the arrays in this class to access the data corresponding
* to the Handle.
*
* To prevent accidental use of a freed Handle whose slot has already been reused, a global counter
* is kept and incremented every time a Handle is created. This is the Handle's "generation". The
* value of the counter is stored into the Handle as well as in the handle table (in the
* "generations" array). When looking up a handle, the Handle's generation must match with the
* value stored on the class, otherwise the Handle is considered invalid.
*
* To find free slots when allocating a Handle without needing to scan the entire object array, the
* generations field of unallocated slots is re-purposed as a linked list of indices to free slots.
* When a Handle is created, an index is popped off the list and used for the new Handle. When it
* is destroyed, it is again pushed onto the list to be re-used by the next allocation. It is
* likely that this allocation strategy differs from the one used in CTR-OS, but this hasn't been
* verified and isn't likely to cause any problems.
*/
class HandleTable final : NonCopyable {
public:
/// This is the maximum limit of handles allowed per process in Horizon
static constexpr std::size_t MAX_COUNT = 1024;
explicit HandleTable(KernelCore& kernel);
~HandleTable();
/**
* Sets the number of handles that may be in use at one time
* for this handle table.
*
* @param handle_table_size The desired size to limit the handle table to.
*
* @returns an error code indicating if initialization was successful.
* If initialization was not successful, then ERR_OUT_OF_MEMORY
* will be returned.
*
* @pre handle_table_size must be within the range [0, 1024]
*/
ResultCode SetSize(s32 handle_table_size);
/**
* Allocates a handle for the given object.
* @return The created Handle or one of the following errors:
* - `ERR_HANDLE_TABLE_FULL`: the maximum number of handles has been exceeded.
*/
ResultVal<Handle> Create(std::shared_ptr<Object> obj);
/**
* Returns a new handle that points to the same object as the passed in handle.
* @return The duplicated Handle or one of the following errors:
* - `ERR_INVALID_HANDLE`: an invalid handle was passed in.
* - Any errors returned by `Create()`.
*/
ResultVal<Handle> Duplicate(Handle handle);
/**
* Closes a handle, removing it from the table and decreasing the object's ref-count.
* @return `RESULT_SUCCESS` or one of the following errors:
* - `ERR_INVALID_HANDLE`: an invalid handle was passed in.
*/
ResultCode Close(Handle handle);
/// Checks if a handle is valid and points to an existing object.
bool IsValid(Handle handle) const;
/**
* Looks up a handle.
* @return Pointer to the looked-up object, or `nullptr` if the handle is not valid.
*/
std::shared_ptr<Object> GetGeneric(Handle handle) const;
/**
* Looks up a handle while verifying its type.
* @return Pointer to the looked-up object, or `nullptr` if the handle is not valid or its
* type differs from the requested one.
*/
template <class T>
std::shared_ptr<T> Get(Handle handle) const {
return DynamicObjectCast<T>(GetGeneric(handle));
}
/// Closes all handles held in this table.
void Clear();
private:
/// Stores the Object referenced by the handle or null if the slot is empty.
std::array<std::shared_ptr<Object>, MAX_COUNT> objects;
/**
* The value of `next_generation` when the handle was created, used to check for validity. For
* empty slots, contains the index of the next free slot in the list.
*/
std::array<u16, MAX_COUNT> generations;
/**
* The limited size of the handle table. This can be specified by process
* capabilities in order to restrict the overall number of handles that
* can be created in a process instance
*/
u16 table_size = static_cast<u16>(MAX_COUNT);
/**
* Global counter of the number of created handles. Stored in `generations` when a handle is
* created, and wraps around to 1 when it hits 0x8000.
*/
u16 next_generation = 1;
/// Head of the free slots linked list.
u16 next_free_slot = 0;
/// Underlying kernel instance that this handle table operates under.
KernelCore& kernel;
};
} // namespace Kernel
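The free-slot linked list described in the class comment can likewise be sketched in isolation (field names follow the header; the driver at the bottom is illustrative only):

#include <array>
#include <cassert>
#include <cstdint>

// Reduced model of only the free-list bookkeeping.
struct SlotFreeList {
    static constexpr std::size_t MAX_COUNT = 8;
    std::array<uint16_t, MAX_COUNT> generations{};
    uint16_t next_free_slot = 0;

    SlotFreeList() {
        // As in Clear(): every empty slot stores the index of the next free slot.
        for (uint16_t i = 0; i < MAX_COUNT; ++i) {
            generations[i] = static_cast<uint16_t>(i + 1);
        }
    }

    uint16_t Pop() {
        const uint16_t slot = next_free_slot;
        next_free_slot = generations[slot]; // advance the list head
        return slot;
    }

    void Push(uint16_t slot) {
        generations[slot] = next_free_slot; // old head becomes the next link
        next_free_slot = slot;
    }
};

int main() {
    SlotFreeList list;
    const uint16_t a = list.Pop(); // 0
    const uint16_t b = list.Pop(); // 1
    list.Push(a);                  // slot 0 is handed out again before slot 2
    assert(list.Pop() == a);
    assert(b == 1);
    return 0;
}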

View File

@@ -14,17 +14,16 @@
#include "common/common_types.h"
#include "common/logging/log.h"
#include "core/hle/ipc_helpers.h"
#include "core/hle/kernel/handle_table.h"
#include "core/hle/kernel/hle_ipc.h"
#include "core/hle/kernel/k_handle_table.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/k_readable_event.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/k_scoped_scheduler_lock_and_sleep.h"
#include "core/hle/kernel/k_server_session.h"
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/k_writable_event.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/server_session.h"
#include "core/hle/kernel/svc_results.h"
#include "core/hle/kernel/time_manager.h"
#include "core/memory.h"
@@ -35,33 +34,28 @@ SessionRequestHandler::SessionRequestHandler() = default;
SessionRequestHandler::~SessionRequestHandler() = default;
void SessionRequestHandler::ClientConnected(std::shared_ptr<ServerSession> server_session) {
server_session->SetHleHandler(shared_from_this());
connected_sessions.push_back(std::move(server_session));
void SessionRequestHandler::ClientConnected(KServerSession* session) {
session->SetHleHandler(shared_from_this());
}
void SessionRequestHandler::ClientDisconnected(
const std::shared_ptr<ServerSession>& server_session) {
server_session->SetHleHandler(nullptr);
boost::range::remove_erase(connected_sessions, server_session);
void SessionRequestHandler::ClientDisconnected(KServerSession* session) {
session->SetHleHandler(nullptr);
}
HLERequestContext::HLERequestContext(KernelCore& kernel, Core::Memory::Memory& memory,
std::shared_ptr<ServerSession> server_session,
std::shared_ptr<KThread> thread)
: server_session(std::move(server_session)),
thread(std::move(thread)), kernel{kernel}, memory{memory} {
HLERequestContext::HLERequestContext(KernelCore& kernel_, Core::Memory::Memory& memory_,
KServerSession* server_session_, KThread* thread_)
: server_session(server_session_), thread(thread_), kernel{kernel_}, memory{memory_} {
cmd_buf[0] = 0;
}
HLERequestContext::~HLERequestContext() = default;
void HLERequestContext::ParseCommandBuffer(const HandleTable& handle_table, u32_le* src_cmdbuf,
void HLERequestContext::ParseCommandBuffer(const KHandleTable& handle_table, u32_le* src_cmdbuf,
bool incoming) {
IPC::RequestParser rp(src_cmdbuf);
command_header = rp.PopRaw<IPC::CommandHeader>();
if (command_header->type == IPC::CommandType::Close) {
if (command_header->IsCloseCommand()) {
// Close does not populate the rest of the IPC header
return;
}
@@ -70,19 +64,19 @@ void HLERequestContext::ParseCommandBuffer(const HandleTable& handle_table, u32_
if (command_header->enable_handle_descriptor) {
handle_descriptor_header = rp.PopRaw<IPC::HandleDescriptorHeader>();
if (handle_descriptor_header->send_current_pid) {
rp.Skip(2, false);
pid = rp.Pop<u64>();
}
if (incoming) {
// Populate the object lists with the data in the IPC request.
for (u32 handle = 0; handle < handle_descriptor_header->num_handles_to_copy; ++handle) {
const u32 copy_handle{rp.Pop<Handle>()};
copy_handles.push_back(copy_handle);
copy_objects.push_back(handle_table.GetGeneric(copy_handle));
copy_objects.push_back(handle_table.GetObject(copy_handle).GetPointerUnsafe());
}
for (u32 handle = 0; handle < handle_descriptor_header->num_handles_to_move; ++handle) {
const u32 move_handle{rp.Pop<Handle>()};
move_handles.push_back(move_handle);
move_objects.push_back(handle_table.GetGeneric(move_handle));
move_objects.push_back(handle_table.GetObject(move_handle).GetPointerUnsafe());
}
} else {
// For responses we just ignore the handles, they're empty and will be populated when
@@ -92,52 +86,56 @@ void HLERequestContext::ParseCommandBuffer(const HandleTable& handle_table, u32_
}
}
for (unsigned i = 0; i < command_header->num_buf_x_descriptors; ++i) {
for (u32 i = 0; i < command_header->num_buf_x_descriptors; ++i) {
buffer_x_desciptors.push_back(rp.PopRaw<IPC::BufferDescriptorX>());
}
for (unsigned i = 0; i < command_header->num_buf_a_descriptors; ++i) {
for (u32 i = 0; i < command_header->num_buf_a_descriptors; ++i) {
buffer_a_desciptors.push_back(rp.PopRaw<IPC::BufferDescriptorABW>());
}
for (unsigned i = 0; i < command_header->num_buf_b_descriptors; ++i) {
for (u32 i = 0; i < command_header->num_buf_b_descriptors; ++i) {
buffer_b_desciptors.push_back(rp.PopRaw<IPC::BufferDescriptorABW>());
}
for (unsigned i = 0; i < command_header->num_buf_w_descriptors; ++i) {
for (u32 i = 0; i < command_header->num_buf_w_descriptors; ++i) {
buffer_w_desciptors.push_back(rp.PopRaw<IPC::BufferDescriptorABW>());
}
buffer_c_offset = rp.GetCurrentOffset() + command_header->data_size;
const auto buffer_c_offset = rp.GetCurrentOffset() + command_header->data_size;
// Padding to align to 16 bytes
rp.AlignWithPadding();
if (!command_header->IsTipc()) {
// Padding to align to 16 bytes
rp.AlignWithPadding();
if (Session()->IsDomain() && ((command_header->type == IPC::CommandType::Request ||
command_header->type == IPC::CommandType::RequestWithContext) ||
!incoming)) {
// If this is an incoming message, only CommandType "Request" has a domain header
// All outgoing domain messages have the domain header, if only incoming has it
if (incoming || domain_message_header) {
domain_message_header = rp.PopRaw<IPC::DomainMessageHeader>();
} else {
if (Session()->IsDomain()) {
LOG_WARNING(IPC, "Domain request has no DomainMessageHeader!");
if (Session()->IsDomain() &&
((command_header->type == IPC::CommandType::Request ||
command_header->type == IPC::CommandType::RequestWithContext) ||
!incoming)) {
// If this is an incoming message, only CommandType "Request" has a domain header
// All outgoing domain messages have the domain header, if only incoming has it
if (incoming || domain_message_header) {
domain_message_header = rp.PopRaw<IPC::DomainMessageHeader>();
} else {
if (Session()->IsDomain()) {
LOG_WARNING(IPC, "Domain request has no DomainMessageHeader!");
}
}
}
}
data_payload_header = rp.PopRaw<IPC::DataPayloadHeader>();
data_payload_header = rp.PopRaw<IPC::DataPayloadHeader>();
data_payload_offset = rp.GetCurrentOffset();
data_payload_offset = rp.GetCurrentOffset();
if (domain_message_header && domain_message_header->command ==
IPC::DomainMessageHeader::CommandType::CloseVirtualHandle) {
// CloseVirtualHandle command does not have SFC* or any data
return;
}
if (domain_message_header &&
domain_message_header->command ==
IPC::DomainMessageHeader::CommandType::CloseVirtualHandle) {
// CloseVirtualHandle command does not have SFC* or any data
return;
}
if (incoming) {
ASSERT(data_payload_header->magic == Common::MakeMagic('S', 'F', 'C', 'I'));
} else {
ASSERT(data_payload_header->magic == Common::MakeMagic('S', 'F', 'C', 'O'));
if (incoming) {
ASSERT(data_payload_header->magic == Common::MakeMagic('S', 'F', 'C', 'I'));
} else {
ASSERT(data_payload_header->magic == Common::MakeMagic('S', 'F', 'C', 'O'));
}
}
rp.SetCurrentOffset(buffer_c_offset);
@@ -150,14 +148,14 @@ void HLERequestContext::ParseCommandBuffer(const HandleTable& handle_table, u32_
IPC::CommandHeader::BufferDescriptorCFlag::OneDescriptor) {
buffer_c_desciptors.push_back(rp.PopRaw<IPC::BufferDescriptorC>());
} else {
unsigned num_buf_c_descriptors =
static_cast<unsigned>(command_header->buf_c_descriptor_flags.Value()) - 2;
u32 num_buf_c_descriptors =
static_cast<u32>(command_header->buf_c_descriptor_flags.Value()) - 2;
// This is used to detect possible underflows, in case something is broken
// with the two ifs above and the flags value is == 0 || == 1.
ASSERT(num_buf_c_descriptors < 14);
for (unsigned i = 0; i < num_buf_c_descriptors; ++i) {
for (u32 i = 0; i < num_buf_c_descriptors; ++i) {
buffer_c_desciptors.push_back(rp.PopRaw<IPC::BufferDescriptorC>());
}
}
@@ -169,87 +167,70 @@ void HLERequestContext::ParseCommandBuffer(const HandleTable& handle_table, u32_
rp.Skip(1, false); // The command is actually an u64, but we don't use the high part.
}
ResultCode HLERequestContext::PopulateFromIncomingCommandBuffer(const HandleTable& handle_table,
ResultCode HLERequestContext::PopulateFromIncomingCommandBuffer(const KHandleTable& handle_table,
u32_le* src_cmdbuf) {
ParseCommandBuffer(handle_table, src_cmdbuf, true);
if (command_header->type == IPC::CommandType::Close) {
if (command_header->IsCloseCommand()) {
// Close does not populate the rest of the IPC header
return RESULT_SUCCESS;
}
// The data_size already includes the payload header, the padding and the domain header.
std::size_t size = data_payload_offset + command_header->data_size -
sizeof(IPC::DataPayloadHeader) / sizeof(u32) - 4;
if (domain_message_header)
size -= sizeof(IPC::DomainMessageHeader) / sizeof(u32);
std::copy_n(src_cmdbuf, size, cmd_buf.begin());
std::copy_n(src_cmdbuf, IPC::COMMAND_BUFFER_LENGTH, cmd_buf.begin());
return RESULT_SUCCESS;
}
ResultCode HLERequestContext::WriteToOutgoingCommandBuffer(KThread& thread) {
auto& owner_process = *thread.GetOwnerProcess();
ResultCode HLERequestContext::WriteToOutgoingCommandBuffer(KThread& requesting_thread) {
auto current_offset = handles_offset;
auto& owner_process = *requesting_thread.GetOwnerProcess();
auto& handle_table = owner_process.GetHandleTable();
std::array<u32, IPC::COMMAND_BUFFER_LENGTH> dst_cmdbuf;
memory.ReadBlock(owner_process, thread.GetTLSAddress(), dst_cmdbuf.data(),
dst_cmdbuf.size() * sizeof(u32));
// The header was already built in the internal command buffer. Attempt to parse it to verify
// the integrity and then copy it over to the target command buffer.
ParseCommandBuffer(handle_table, cmd_buf.data(), false);
// The data_size already includes the payload header, the padding and the domain header.
std::size_t size = data_payload_offset + command_header->data_size -
sizeof(IPC::DataPayloadHeader) / sizeof(u32) - 4;
if (domain_message_header)
size -= sizeof(IPC::DomainMessageHeader) / sizeof(u32);
std::size_t size{};
std::copy_n(cmd_buf.begin(), size, dst_cmdbuf.data());
if (command_header->enable_handle_descriptor) {
ASSERT_MSG(!move_objects.empty() || !copy_objects.empty(),
"Handle descriptor bit set but no handles to translate");
// We write the translated handles at a specific offset in the command buffer, this space
// was already reserved when writing the header.
std::size_t current_offset =
(sizeof(IPC::CommandHeader) + sizeof(IPC::HandleDescriptorHeader)) / sizeof(u32);
ASSERT_MSG(!handle_descriptor_header->send_current_pid, "Sending PID is not implemented");
ASSERT(copy_objects.size() == handle_descriptor_header->num_handles_to_copy);
ASSERT(move_objects.size() == handle_descriptor_header->num_handles_to_move);
// We don't make a distinction between copy and move handles when translating since HLE
// services don't deal with handles directly. However, the guest applications might check
// for specific values in each of these descriptors.
for (auto& object : copy_objects) {
ASSERT(object != nullptr);
dst_cmdbuf[current_offset++] = handle_table.Create(object).Unwrap();
}
for (auto& object : move_objects) {
ASSERT(object != nullptr);
dst_cmdbuf[current_offset++] = handle_table.Create(object).Unwrap();
if (IsTipc()) {
size = cmd_buf.size();
} else {
size = data_payload_offset + data_size - sizeof(IPC::DataPayloadHeader) / sizeof(u32) - 4;
if (Session()->IsDomain()) {
size -= sizeof(IPC::DomainMessageHeader) / sizeof(u32);
}
}
// TODO(Subv): Translate the X/A/B/W buffers.
for (auto& object : copy_objects) {
Handle handle{};
if (object) {
R_TRY(handle_table.Add(&handle, object));
}
cmd_buf[current_offset++] = handle;
}
for (auto& object : move_objects) {
Handle handle{};
if (object) {
R_TRY(handle_table.Add(&handle, object));
if (Session()->IsDomain() && domain_message_header) {
ASSERT(domain_message_header->num_objects == domain_objects.size());
// Write the domain objects to the command buffer, these go after the raw untranslated data.
// TODO(Subv): This completely ignores C buffers.
std::size_t domain_offset = size - domain_message_header->num_objects;
// Close our reference to the object, as it is being moved to the caller.
object->Close();
}
cmd_buf[current_offset++] = handle;
}
// Write the domain objects to the command buffer, these go after the raw untranslated data.
// TODO(Subv): This completely ignores C buffers.
if (Session()->IsDomain()) {
current_offset = domain_offset - static_cast<u32>(domain_objects.size());
for (const auto& object : domain_objects) {
server_session->AppendDomainRequestHandler(object);
dst_cmdbuf[domain_offset++] =
cmd_buf[current_offset++] =
static_cast<u32_le>(server_session->NumDomainRequestHandlers());
}
}
// Copy the translated command buffer back into the thread's command buffer area.
memory.WriteBlock(owner_process, thread.GetTLSAddress(), dst_cmdbuf.data(),
dst_cmdbuf.size() * sizeof(u32));
memory.WriteBlock(owner_process, requesting_thread.GetTLSAddress(), cmd_buf.data(),
size * sizeof(u32));
return RESULT_SUCCESS;
}

View File

@@ -16,7 +16,8 @@
#include "common/concepts.h"
#include "common/swap.h"
#include "core/hle/ipc.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/k_auto_object.h"
#include "core/hle/kernel/svc_common.h"
union ResultCode;
@@ -35,13 +36,14 @@ class ServiceFrameworkBase;
namespace Kernel {
class Domain;
class HandleTable;
class HLERequestContext;
class KernelCore;
class Process;
class ServerSession;
class KHandleTable;
class KProcess;
class KServerSession;
class KThread;
class KReadableEvent;
class KSession;
class KWritableEvent;
enum class ThreadWakeupReason;
@@ -64,27 +66,22 @@ public:
* this request (ServerSession, Originator thread, Translated command buffer, etc).
* @returns ResultCode the result code of the translate operation.
*/
virtual ResultCode HandleSyncRequest(Kernel::HLERequestContext& context) = 0;
virtual ResultCode HandleSyncRequest(Kernel::KServerSession& session,
Kernel::HLERequestContext& context) = 0;
/**
* Signals that a client has just connected to this HLE handler and keeps the
* associated ServerSession alive for the duration of the connection.
* @param server_session Owning pointer to the ServerSession associated with the connection.
*/
void ClientConnected(std::shared_ptr<ServerSession> server_session);
void ClientConnected(KServerSession* session);
/**
* Signals that a client has just disconnected from this HLE handler and releases the
* associated ServerSession.
* @param server_session ServerSession associated with the connection.
*/
void ClientDisconnected(const std::shared_ptr<ServerSession>& server_session);
protected:
/// List of sessions that are connected to this handler.
/// A ServerSession whose server endpoint is an HLE implementation is kept alive by this list
/// for the duration of the connection.
std::vector<std::shared_ptr<ServerSession>> connected_sessions;
void ClientDisconnected(KServerSession* session);
};
/**
@@ -109,8 +106,7 @@ protected:
class HLERequestContext {
public:
explicit HLERequestContext(KernelCore& kernel, Core::Memory::Memory& memory,
std::shared_ptr<ServerSession> session,
std::shared_ptr<KThread> thread);
KServerSession* session, KThread* thread);
~HLERequestContext();
/// Returns a pointer to the IPC command buffer for this request.
@@ -122,26 +118,43 @@ public:
* Returns the session through which this request was made. This can be used as a map key to
* access per-client data on services.
*/
const std::shared_ptr<Kernel::ServerSession>& Session() const {
Kernel::KServerSession* Session() {
return server_session;
}
/// Populates this context with data from the requesting process/thread.
ResultCode PopulateFromIncomingCommandBuffer(const HandleTable& handle_table,
ResultCode PopulateFromIncomingCommandBuffer(const KHandleTable& handle_table,
u32_le* src_cmdbuf);
/// Writes data from this context back to the requesting process/thread.
ResultCode WriteToOutgoingCommandBuffer(KThread& thread);
ResultCode WriteToOutgoingCommandBuffer(KThread& requesting_thread);
u32_le GetHipcCommand() const {
return command;
}
u32_le GetTipcCommand() const {
return static_cast<u32_le>(command_header->type.Value()) -
static_cast<u32_le>(IPC::CommandType::TIPC_CommandRegion);
}
u32_le GetCommand() const {
return command;
return command_header->IsTipc() ? GetTipcCommand() : GetHipcCommand();
}
bool IsTipc() const {
return command_header->IsTipc();
}
IPC::CommandType GetCommandType() const {
return command_header->type;
}
unsigned GetDataPayloadOffset() const {
u64 GetPID() const {
return pid;
}
u32 GetDataPayloadOffset() const {
return data_payload_offset;
}
@@ -218,22 +231,12 @@ public:
return move_handles.at(index);
}
template <typename T>
std::shared_ptr<T> GetCopyObject(std::size_t index) {
return DynamicObjectCast<T>(copy_objects.at(index));
void AddMoveObject(KAutoObject* object) {
move_objects.emplace_back(object);
}
template <typename T>
std::shared_ptr<T> GetMoveObject(std::size_t index) {
return DynamicObjectCast<T>(move_objects.at(index));
}
void AddMoveObject(std::shared_ptr<Object> object) {
move_objects.emplace_back(std::move(object));
}
void AddCopyObject(std::shared_ptr<Object> object) {
copy_objects.emplace_back(std::move(object));
void AddCopyObject(KAutoObject* object) {
copy_objects.emplace_back(object);
}
void AddDomainObject(std::shared_ptr<SessionRequestHandler> object) {
@@ -276,10 +279,6 @@ public:
return *thread;
}
const KThread& GetThread() const {
return *thread;
}
bool IsThreadWaiting() const {
return is_thread_waiting;
}
@@ -287,16 +286,17 @@ public:
private:
friend class IPC::ResponseBuilder;
void ParseCommandBuffer(const HandleTable& handle_table, u32_le* src_cmdbuf, bool incoming);
void ParseCommandBuffer(const KHandleTable& handle_table, u32_le* src_cmdbuf, bool incoming);
std::array<u32, IPC::COMMAND_BUFFER_LENGTH> cmd_buf;
std::shared_ptr<Kernel::ServerSession> server_session;
std::shared_ptr<KThread> thread;
Kernel::KServerSession* server_session{};
KThread* thread;
// TODO(yuriks): Check common usage of this and optimize size accordingly
boost::container::small_vector<Handle, 8> move_handles;
boost::container::small_vector<Handle, 8> copy_handles;
boost::container::small_vector<std::shared_ptr<Object>, 8> move_objects;
boost::container::small_vector<std::shared_ptr<Object>, 8> copy_objects;
boost::container::small_vector<KAutoObject*, 8> move_objects;
boost::container::small_vector<KAutoObject*, 8> copy_objects;
boost::container::small_vector<std::shared_ptr<SessionRequestHandler>, 8> domain_objects;
std::optional<IPC::CommandHeader> command_header;
@@ -309,9 +309,12 @@ private:
std::vector<IPC::BufferDescriptorABW> buffer_w_desciptors;
std::vector<IPC::BufferDescriptorC> buffer_c_desciptors;
unsigned data_payload_offset{};
unsigned buffer_c_offset{};
u32_le command{};
u64 pid{};
u32 data_payload_offset{};
u32 handles_offset{};
u32 domain_offset{};
u32 data_size{};
std::vector<std::shared_ptr<SessionRequestHandler>> domain_request_handlers;
bool is_thread_waiting{};

View File

@@ -0,0 +1,192 @@
// Copyright 2021 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/alignment.h"
#include "common/assert.h"
#include "common/common_funcs.h"
#include "common/common_types.h"
#include "core/core.h"
#include "core/hardware_properties.h"
#include "core/hle/kernel/init/init_slab_setup.h"
#include "core/hle/kernel/k_event.h"
#include "core/hle/kernel/k_memory_layout.h"
#include "core/hle/kernel/k_memory_manager.h"
#include "core/hle/kernel/k_port.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/k_resource_limit.h"
#include "core/hle/kernel/k_session.h"
#include "core/hle/kernel/k_shared_memory.h"
#include "core/hle/kernel/k_system_control.h"
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/k_transfer_memory.h"
#include "core/hle/kernel/memory_types.h"
#include "core/memory.h"
namespace Kernel::Init {
#define SLAB_COUNT(CLASS) kernel.SlabResourceCounts().num_##CLASS
#define FOREACH_SLAB_TYPE(HANDLER, ...) \
HANDLER(KProcess, (SLAB_COUNT(KProcess)), ##__VA_ARGS__) \
HANDLER(KThread, (SLAB_COUNT(KThread)), ##__VA_ARGS__) \
HANDLER(KEvent, (SLAB_COUNT(KEvent)), ##__VA_ARGS__) \
HANDLER(KPort, (SLAB_COUNT(KPort)), ##__VA_ARGS__) \
HANDLER(KSharedMemory, (SLAB_COUNT(KSharedMemory)), ##__VA_ARGS__) \
HANDLER(KTransferMemory, (SLAB_COUNT(KTransferMemory)), ##__VA_ARGS__) \
HANDLER(KSession, (SLAB_COUNT(KSession)), ##__VA_ARGS__) \
HANDLER(KResourceLimit, (SLAB_COUNT(KResourceLimit)), ##__VA_ARGS__)
namespace {
#define DEFINE_SLAB_TYPE_ENUM_MEMBER(NAME, COUNT, ...) KSlabType_##NAME,
enum KSlabType : u32 {
FOREACH_SLAB_TYPE(DEFINE_SLAB_TYPE_ENUM_MEMBER) KSlabType_Count,
};
#undef DEFINE_SLAB_TYPE_ENUM_MEMBER
// Constexpr counts.
constexpr size_t SlabCountKProcess = 80;
constexpr size_t SlabCountKThread = 800;
constexpr size_t SlabCountKEvent = 700;
constexpr size_t SlabCountKInterruptEvent = 100;
constexpr size_t SlabCountKPort = 256 + 0x20; // Extra 0x20 ports over Nintendo for homebrew.
constexpr size_t SlabCountKSharedMemory = 80;
constexpr size_t SlabCountKTransferMemory = 200;
constexpr size_t SlabCountKCodeMemory = 10;
constexpr size_t SlabCountKDeviceAddressSpace = 300;
constexpr size_t SlabCountKSession = 933;
constexpr size_t SlabCountKLightSession = 100;
constexpr size_t SlabCountKObjectName = 7;
constexpr size_t SlabCountKResourceLimit = 5;
constexpr size_t SlabCountKDebug = Core::Hardware::NUM_CPU_CORES;
constexpr size_t SlabCountKAlpha = 1;
constexpr size_t SlabCountKBeta = 6;
constexpr size_t SlabCountExtraKThread = 160;
template <typename T>
VAddr InitializeSlabHeap(Core::System& system, KMemoryLayout& memory_layout, VAddr address,
size_t num_objects) {
const size_t size = Common::AlignUp(sizeof(T) * num_objects, alignof(void*));
VAddr start = Common::AlignUp(address, alignof(T));
if (size > 0) {
const KMemoryRegion* region = memory_layout.FindVirtual(start + size - 1);
ASSERT(region != nullptr);
ASSERT(region->IsDerivedFrom(KMemoryRegionType_KernelSlab));
T::InitializeSlabHeap(system.Kernel(), system.Memory().GetKernelBuffer(start, size), size);
}
return start + size;
}
} // namespace
KSlabResourceCounts KSlabResourceCounts::CreateDefault() {
return {
.num_KProcess = SlabCountKProcess,
.num_KThread = SlabCountKThread,
.num_KEvent = SlabCountKEvent,
.num_KInterruptEvent = SlabCountKInterruptEvent,
.num_KPort = SlabCountKPort,
.num_KSharedMemory = SlabCountKSharedMemory,
.num_KTransferMemory = SlabCountKTransferMemory,
.num_KCodeMemory = SlabCountKCodeMemory,
.num_KDeviceAddressSpace = SlabCountKDeviceAddressSpace,
.num_KSession = SlabCountKSession,
.num_KLightSession = SlabCountKLightSession,
.num_KObjectName = SlabCountKObjectName,
.num_KResourceLimit = SlabCountKResourceLimit,
.num_KDebug = SlabCountKDebug,
.num_KAlpha = SlabCountKAlpha,
.num_KBeta = SlabCountKBeta,
};
}
void InitializeSlabResourceCounts(KernelCore& kernel) {
kernel.SlabResourceCounts() = KSlabResourceCounts::CreateDefault();
if (KSystemControl::Init::ShouldIncreaseThreadResourceLimit()) {
kernel.SlabResourceCounts().num_KThread += SlabCountExtraKThread;
}
}
size_t CalculateTotalSlabHeapSize(const KernelCore& kernel) {
size_t size = 0;
#define ADD_SLAB_SIZE(NAME, COUNT, ...) \
{ \
size += alignof(NAME); \
size += Common::AlignUp(sizeof(NAME) * (COUNT), alignof(void*)); \
};
// Add the size required for each slab.
FOREACH_SLAB_TYPE(ADD_SLAB_SIZE)
#undef ADD_SLAB_SIZE
// Add the reserved size.
size += KernelSlabHeapGapsSize;
return size;
}
void InitializeSlabHeaps(Core::System& system, KMemoryLayout& memory_layout) {
auto& kernel = system.Kernel();
// Get the start of the slab region, since that's where we'll be working.
VAddr address = memory_layout.GetSlabRegionAddress();
// Initialize slab type array to be in sorted order.
std::array<KSlabType, KSlabType_Count> slab_types;
for (size_t i = 0; i < slab_types.size(); i++) {
slab_types[i] = static_cast<KSlabType>(i);
}
// N shuffles the slab type array with the following simple algorithm.
for (size_t i = 0; i < slab_types.size(); i++) {
const size_t rnd = KSystemControl::GenerateRandomRange(0, slab_types.size() - 1);
std::swap(slab_types[i], slab_types[rnd]);
}
// Create an array to represent the gaps between the slabs.
const size_t total_gap_size = KernelSlabHeapGapsSize;
std::array<size_t, slab_types.size()> slab_gaps;
for (size_t i = 0; i < slab_gaps.size(); i++) {
// Note: This is an off-by-one error from Nintendo's intention, because GenerateRandomRange
// is inclusive. However, Nintendo also has the off-by-one error, and it's "harmless", so we
// will include it ourselves.
slab_gaps[i] = KSystemControl::GenerateRandomRange(0, total_gap_size);
}
// Sort the array, so that we can treat differences between values as offsets to the starts of
// slabs.
for (size_t i = 1; i < slab_gaps.size(); i++) {
for (size_t j = i; j > 0 && slab_gaps[j - 1] > slab_gaps[j]; j--) {
std::swap(slab_gaps[j], slab_gaps[j - 1]);
}
}
for (size_t i = 0; i < slab_types.size(); i++) {
// Add the random gap to the address.
address += (i == 0) ? slab_gaps[0] : slab_gaps[i] - slab_gaps[i - 1];
#define INITIALIZE_SLAB_HEAP(NAME, COUNT, ...) \
case KSlabType_##NAME: \
address = InitializeSlabHeap<NAME>(system, memory_layout, address, COUNT); \
break;
// Initialize the slabheap.
switch (slab_types[i]) {
// For each of the slab types, we want to initialize that heap.
FOREACH_SLAB_TYPE(INITIALIZE_SLAB_HEAP)
// If we somehow get an invalid type, abort.
default:
UNREACHABLE();
}
}
}
} // namespace Kernel::Init
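The gap placement used above (draw one random value per slab in [0, total_gap_size], sort them, then use consecutive differences as the gaps) can be demonstrated standalone; the RNG and sizes below are placeholders for KSystemControl::GenerateRandomRange:

#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdio>
#include <random>

int main() {
    constexpr std::size_t num_slabs = 4;
    constexpr std::size_t total_gap_size = 0x1000;

    // Stand-in for KSystemControl::GenerateRandomRange(0, total_gap_size).
    std::mt19937_64 rng{12345};
    std::uniform_int_distribution<std::size_t> dist(0, total_gap_size);

    std::array<std::size_t, num_slabs> gaps{};
    for (auto& g : gaps) {
        g = dist(rng);
    }
    std::sort(gaps.begin(), gaps.end());

    // Differences between sorted draws become the per-slab offsets; their sum
    // equals the largest draw, so it never exceeds total_gap_size.
    std::size_t address = 0;
    for (std::size_t i = 0; i < gaps.size(); ++i) {
        address += (i == 0) ? gaps[0] : gaps[i] - gaps[i - 1];
        std::printf("slab %zu starts at offset 0x%zx\n", i, address);
    }
    return 0;
}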

View File

@@ -0,0 +1,43 @@
// Copyright 2021 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
namespace Core {
class System;
} // namespace Core
namespace Kernel {
class KernelCore;
class KMemoryLayout;
} // namespace Kernel
namespace Kernel::Init {
struct KSlabResourceCounts {
static KSlabResourceCounts CreateDefault();
size_t num_KProcess;
size_t num_KThread;
size_t num_KEvent;
size_t num_KInterruptEvent;
size_t num_KPort;
size_t num_KSharedMemory;
size_t num_KTransferMemory;
size_t num_KCodeMemory;
size_t num_KDeviceAddressSpace;
size_t num_KSession;
size_t num_KLightSession;
size_t num_KObjectName;
size_t num_KResourceLimit;
size_t num_KDebug;
size_t num_KAlpha;
size_t num_KBeta;
};
void InitializeSlabResourceCounts(KernelCore& kernel);
size_t CalculateTotalSlabHeapSize(const KernelCore& kernel);
void InitializeSlabHeaps(Core::System& system, KMemoryLayout& memory_layout);
} // namespace Kernel::Init

View File

@@ -0,0 +1,14 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/hle/kernel/k_auto_object.h"
namespace Kernel {
KAutoObject* KAutoObject::Create(KAutoObject* obj) {
obj->m_ref_count = 1;
return obj;
}
} // namespace Kernel

View File

@@ -0,0 +1,302 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <atomic>
#include <string>
#include "common/assert.h"
#include "common/common_funcs.h"
#include "common/common_types.h"
#include "common/intrusive_red_black_tree.h"
#include "core/hle/kernel/k_class_token.h"
namespace Kernel {
class KernelCore;
class KProcess;
#define KERNEL_AUTOOBJECT_TRAITS(CLASS, BASE_CLASS) \
YUZU_NON_COPYABLE(CLASS); \
YUZU_NON_MOVEABLE(CLASS); \
\
private: \
friend class ::Kernel::KClassTokenGenerator; \
static constexpr inline auto ObjectType = ::Kernel::KClassTokenGenerator::ObjectType::CLASS; \
static constexpr inline const char* const TypeName = #CLASS; \
static constexpr inline ClassTokenType ClassToken() { \
return ::Kernel::ClassToken<CLASS>; \
} \
\
public: \
using BaseClass = BASE_CLASS; \
static constexpr TypeObj GetStaticTypeObj() { \
constexpr ClassTokenType Token = ClassToken(); \
return TypeObj(TypeName, Token); \
} \
static constexpr const char* GetStaticTypeName() { \
return TypeName; \
} \
virtual TypeObj GetTypeObj() const { \
return GetStaticTypeObj(); \
} \
virtual const char* GetTypeName() const { \
return GetStaticTypeName(); \
} \
\
private: \
constexpr bool operator!=(const TypeObj& rhs)
class KAutoObject {
protected:
class TypeObj {
public:
constexpr explicit TypeObj(const char* n, ClassTokenType tok)
: m_name(n), m_class_token(tok) {}
constexpr const char* GetName() const {
return m_name;
}
constexpr ClassTokenType GetClassToken() const {
return m_class_token;
}
constexpr bool operator==(const TypeObj& rhs) const {
return this->GetClassToken() == rhs.GetClassToken();
}
constexpr bool operator!=(const TypeObj& rhs) const {
return this->GetClassToken() != rhs.GetClassToken();
}
constexpr bool IsDerivedFrom(const TypeObj& rhs) const {
return (this->GetClassToken() | rhs.GetClassToken()) == this->GetClassToken();
}
private:
const char* m_name;
ClassTokenType m_class_token;
};
private:
KERNEL_AUTOOBJECT_TRAITS(KAutoObject, KAutoObject);
public:
explicit KAutoObject(KernelCore& kernel_) : kernel(kernel_) {}
virtual ~KAutoObject() = default;
static KAutoObject* Create(KAutoObject* ptr);
// Destroy is responsible for destroying the auto object's resources when ref_count hits zero.
virtual void Destroy() {
UNIMPLEMENTED();
}
// Finalize is responsible for cleaning up resources, but does not destroy the object.
virtual void Finalize() {}
virtual KProcess* GetOwner() const {
return nullptr;
}
u32 GetReferenceCount() const {
return m_ref_count.load();
}
bool IsDerivedFrom(const TypeObj& rhs) const {
return this->GetTypeObj().IsDerivedFrom(rhs);
}
bool IsDerivedFrom(const KAutoObject& rhs) const {
return this->IsDerivedFrom(rhs.GetTypeObj());
}
template <typename Derived>
Derived DynamicCast() {
static_assert(std::is_pointer_v<Derived>);
using DerivedType = std::remove_pointer_t<Derived>;
if (this->IsDerivedFrom(DerivedType::GetStaticTypeObj())) {
return static_cast<Derived>(this);
} else {
return nullptr;
}
}
template <typename Derived>
const Derived DynamicCast() const {
static_assert(std::is_pointer_v<Derived>);
using DerivedType = std::remove_pointer_t<Derived>;
if (this->IsDerivedFrom(DerivedType::GetStaticTypeObj())) {
return static_cast<Derived>(this);
} else {
return nullptr;
}
}
bool Open() {
// Atomically increment the reference count, only if it's positive.
u32 cur_ref_count = m_ref_count.load(std::memory_order_acquire);
do {
if (cur_ref_count == 0) {
return false;
}
ASSERT(cur_ref_count < cur_ref_count + 1);
} while (!m_ref_count.compare_exchange_weak(cur_ref_count, cur_ref_count + 1,
std::memory_order_relaxed));
return true;
}
void Close() {
// Atomically decrement the reference count, not allowing it to become negative.
u32 cur_ref_count = m_ref_count.load(std::memory_order_acquire);
do {
ASSERT(cur_ref_count > 0);
} while (!m_ref_count.compare_exchange_weak(cur_ref_count, cur_ref_count - 1,
std::memory_order_relaxed));
// If ref count hits zero, destroy the object.
if (cur_ref_count - 1 == 0) {
this->Destroy();
}
}
protected:
KernelCore& kernel;
std::string name;
private:
std::atomic<u32> m_ref_count{};
};
class KAutoObjectWithListContainer;
class KAutoObjectWithList : public KAutoObject {
public:
explicit KAutoObjectWithList(KernelCore& kernel_) : KAutoObject(kernel_) {}
static int Compare(const KAutoObjectWithList& lhs, const KAutoObjectWithList& rhs) {
const u64 lid = lhs.GetId();
const u64 rid = rhs.GetId();
if (lid < rid) {
return -1;
} else if (lid > rid) {
return 1;
} else {
return 0;
}
}
public:
virtual u64 GetId() const {
return reinterpret_cast<u64>(this);
}
virtual const std::string& GetName() const {
return name;
}
private:
friend class KAutoObjectWithListContainer;
Common::IntrusiveRedBlackTreeNode list_node;
};
template <typename T>
class KScopedAutoObject {
YUZU_NON_COPYABLE(KScopedAutoObject);
public:
constexpr KScopedAutoObject() = default;
constexpr KScopedAutoObject(T* o) : m_obj(o) {
if (m_obj != nullptr) {
m_obj->Open();
}
}
~KScopedAutoObject() {
if (m_obj != nullptr) {
m_obj->Close();
}
m_obj = nullptr;
}
template <typename U>
requires(std::derived_from<T, U> ||
std::derived_from<U, T>) constexpr KScopedAutoObject(KScopedAutoObject<U>&& rhs) {
if constexpr (std::derived_from<U, T>) {
// Upcast.
m_obj = rhs.m_obj;
rhs.m_obj = nullptr;
} else {
// Downcast.
T* derived = nullptr;
if (rhs.m_obj != nullptr) {
derived = rhs.m_obj->template DynamicCast<T*>();
if (derived == nullptr) {
rhs.m_obj->Close();
}
}
m_obj = derived;
rhs.m_obj = nullptr;
}
}
constexpr KScopedAutoObject<T>& operator=(KScopedAutoObject<T>&& rhs) {
rhs.Swap(*this);
return *this;
}
constexpr T* operator->() {
return m_obj;
}
constexpr T& operator*() {
return *m_obj;
}
constexpr void Reset(T* o) {
KScopedAutoObject(o).Swap(*this);
}
constexpr T* GetPointerUnsafe() {
return m_obj;
}
constexpr T* GetPointerUnsafe() const {
return m_obj;
}
constexpr T* ReleasePointerUnsafe() {
T* ret = m_obj;
m_obj = nullptr;
return ret;
}
constexpr bool IsNull() const {
return m_obj == nullptr;
}
constexpr bool IsNotNull() const {
return m_obj != nullptr;
}
private:
template <typename U>
friend class KScopedAutoObject;
private:
T* m_obj{};
private:
constexpr void Swap(KScopedAutoObject& rhs) noexcept {
std::swap(m_obj, rhs.m_obj);
}
};
} // namespace Kernel
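A minimal sketch of the intended ownership pattern (the event cast and the origin of the raw pointer are assumptions for illustration):

#include "core/hle/kernel/k_auto_object.h"
#include "core/hle/kernel/k_readable_event.h"

// Hypothetical helper: hold a reference for the duration of the work.
void UseIfEvent(Kernel::KAutoObject* raw) {
    // The constructor calls Open(); the destructor calls Close(), so the
    // reference is held exactly for this scope.
    Kernel::KScopedAutoObject<Kernel::KAutoObject> obj(raw);
    if (obj.IsNull()) {
        return;
    }
    // DynamicCast<T*>() uses the class token check and returns nullptr on mismatch.
    if (auto* event = obj->DynamicCast<Kernel::KReadableEvent*>()) {
        (void)event; // ... operate on the event while the reference is held ...
    }
}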

View File

@@ -0,0 +1,28 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/hle/kernel/k_auto_object_container.h"
namespace Kernel {
void KAutoObjectWithListContainer::Register(KAutoObjectWithList* obj) {
KScopedLightLock lk(m_lock);
m_object_list.insert(*obj);
}
void KAutoObjectWithListContainer::Unregister(KAutoObjectWithList* obj) {
KScopedLightLock lk(m_lock);
m_object_list.erase(m_object_list.iterator_to(*obj));
}
size_t KAutoObjectWithListContainer::GetOwnedCount(KProcess* owner) {
KScopedLightLock lk(m_lock);
return std::count_if(m_object_list.begin(), m_object_list.end(),
[&](const auto& obj) { return obj.GetOwner() == owner; });
}
} // namespace Kernel

View File

@@ -0,0 +1,70 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <atomic>
#include "common/assert.h"
#include "common/common_funcs.h"
#include "common/common_types.h"
#include "common/intrusive_red_black_tree.h"
#include "core/hle/kernel/k_auto_object.h"
#include "core/hle/kernel/k_light_lock.h"
namespace Kernel {
class KernelCore;
class KProcess;
class KAutoObjectWithListContainer {
YUZU_NON_COPYABLE(KAutoObjectWithListContainer);
YUZU_NON_MOVEABLE(KAutoObjectWithListContainer);
public:
using ListType = Common::IntrusiveRedBlackTreeMemberTraits<
&KAutoObjectWithList::list_node>::TreeType<KAutoObjectWithList>;
public:
class ListAccessor : public KScopedLightLock {
public:
explicit ListAccessor(KAutoObjectWithListContainer* container)
: KScopedLightLock(container->m_lock), m_list(container->m_object_list) {}
explicit ListAccessor(KAutoObjectWithListContainer& container)
: KScopedLightLock(container.m_lock), m_list(container.m_object_list) {}
typename ListType::iterator begin() const {
return m_list.begin();
}
typename ListType::iterator end() const {
return m_list.end();
}
typename ListType::iterator find(typename ListType::const_reference ref) const {
return m_list.find(ref);
}
private:
ListType& m_list;
};
friend class ListAccessor;
public:
KAutoObjectWithListContainer(KernelCore& kernel) : m_lock(kernel), m_object_list() {}
void Initialize() {}
void Finalize() {}
void Register(KAutoObjectWithList* obj);
void Unregister(KAutoObjectWithList* obj);
size_t GetOwnedCount(KProcess* owner);
private:
KLightLock m_lock;
ListType m_object_list;
};
} // namespace Kernel
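A short sketch of walking the container under its lock (the container reference and the loop body are illustrative):

#include <cstddef>
#include "core/hle/kernel/k_auto_object_container.h"

// Hypothetical walk: the ListAccessor's KScopedLightLock base holds m_lock
// for as long as the accessor is alive.
void CountObjects(Kernel::KAutoObjectWithListContainer& container) {
    Kernel::KAutoObjectWithListContainer::ListAccessor accessor(container);
    std::size_t count = 0;
    for (auto it = accessor.begin(); it != accessor.end(); ++it) {
        ++count; // each element is a KAutoObjectWithList
    }
    (void)count;
}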

View File

@@ -0,0 +1,133 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/hle/kernel/k_auto_object.h"
#include "core/hle/kernel/k_class_token.h"
#include "core/hle/kernel/k_client_port.h"
#include "core/hle/kernel/k_client_session.h"
#include "core/hle/kernel/k_event.h"
#include "core/hle/kernel/k_port.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/k_readable_event.h"
#include "core/hle/kernel/k_resource_limit.h"
#include "core/hle/kernel/k_server_port.h"
#include "core/hle/kernel/k_server_session.h"
#include "core/hle/kernel/k_session.h"
#include "core/hle/kernel/k_shared_memory.h"
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/k_transfer_memory.h"
#include "core/hle/kernel/k_writable_event.h"
namespace Kernel {
// Ensure that we generate correct class tokens for all types.
// Ensure that the absolute token values are correct.
static_assert(ClassToken<KAutoObject> == 0b00000000'00000000);
static_assert(ClassToken<KSynchronizationObject> == 0b00000000'00000001);
static_assert(ClassToken<KReadableEvent> == 0b00000000'00000011);
// static_assert(ClassToken<KInterruptEvent> == 0b00000111'00000011);
// static_assert(ClassToken<KDebug> == 0b00001011'00000001);
static_assert(ClassToken<KThread> == 0b00010011'00000001);
static_assert(ClassToken<KServerPort> == 0b00100011'00000001);
static_assert(ClassToken<KServerSession> == 0b01000011'00000001);
static_assert(ClassToken<KClientPort> == 0b10000011'00000001);
static_assert(ClassToken<KClientSession> == 0b00001101'00000000);
static_assert(ClassToken<KProcess> == 0b00010101'00000001);
static_assert(ClassToken<KResourceLimit> == 0b00100101'00000000);
// static_assert(ClassToken<KLightSession> == 0b01000101'00000000);
static_assert(ClassToken<KPort> == 0b10000101'00000000);
static_assert(ClassToken<KSession> == 0b00011001'00000000);
static_assert(ClassToken<KSharedMemory> == 0b00101001'00000000);
static_assert(ClassToken<KEvent> == 0b01001001'00000000);
static_assert(ClassToken<KWritableEvent> == 0b10001001'00000000);
// static_assert(ClassToken<KLightClientSession> == 0b00110001'00000000);
// static_assert(ClassToken<KLightServerSession> == 0b01010001'00000000);
static_assert(ClassToken<KTransferMemory> == 0b10010001'00000000);
// static_assert(ClassToken<KDeviceAddressSpace> == 0b01100001'00000000);
// static_assert(ClassToken<KSessionRequest> == 0b10100001'00000000);
// static_assert(ClassToken<KCodeMemory> == 0b11000001'00000000);
// Ensure that the token hierarchy is correct.
// Base classes
static_assert(ClassToken<KAutoObject> == (0b00000000));
static_assert(ClassToken<KSynchronizationObject> == (0b00000001 | ClassToken<KAutoObject>));
static_assert(ClassToken<KReadableEvent> == (0b00000010 | ClassToken<KSynchronizationObject>));
// Final classes
// static_assert(ClassToken<KInterruptEvent> == ((0b00000111 << 8) | ClassToken<KReadableEvent>));
// static_assert(ClassToken<KDebug> == ((0b00001011 << 8) | ClassToken<KSynchronizationObject>));
static_assert(ClassToken<KThread> == ((0b00010011 << 8) | ClassToken<KSynchronizationObject>));
static_assert(ClassToken<KServerPort> == ((0b00100011 << 8) | ClassToken<KSynchronizationObject>));
static_assert(ClassToken<KServerSession> ==
((0b01000011 << 8) | ClassToken<KSynchronizationObject>));
static_assert(ClassToken<KClientPort> == ((0b10000011 << 8) | ClassToken<KSynchronizationObject>));
static_assert(ClassToken<KClientSession> == ((0b00001101 << 8) | ClassToken<KAutoObject>));
static_assert(ClassToken<KProcess> == ((0b00010101 << 8) | ClassToken<KSynchronizationObject>));
static_assert(ClassToken<KResourceLimit> == ((0b00100101 << 8) | ClassToken<KAutoObject>));
// static_assert(ClassToken<KLightSession> == ((0b01000101 << 8) | ClassToken<KAutoObject>));
static_assert(ClassToken<KPort> == ((0b10000101 << 8) | ClassToken<KAutoObject>));
static_assert(ClassToken<KSession> == ((0b00011001 << 8) | ClassToken<KAutoObject>));
static_assert(ClassToken<KSharedMemory> == ((0b00101001 << 8) | ClassToken<KAutoObject>));
static_assert(ClassToken<KEvent> == ((0b01001001 << 8) | ClassToken<KAutoObject>));
static_assert(ClassToken<KWritableEvent> == ((0b10001001 << 8) | ClassToken<KAutoObject>));
// static_assert(ClassToken<KLightClientSession> == ((0b00110001 << 8) | ClassToken<KAutoObject>));
// static_assert(ClassToken<KLightServerSession> == ((0b01010001 << 8) | ClassToken<KAutoObject>));
static_assert(ClassToken<KTransferMemory> == ((0b10010001 << 8) | ClassToken<KAutoObject>));
// static_assert(ClassToken<KDeviceAddressSpace> == ((0b01100001 << 8) | ClassToken<KAutoObject>));
// static_assert(ClassToken<KSessionRequest> == ((0b10100001 << 8) | ClassToken<KAutoObject>));
// static_assert(ClassToken<KCodeMemory> == ((0b11000001 << 8) | ClassToken<KAutoObject>));
// Ensure that the token hierarchy reflects the class hierarchy.
// Base classes.
static_assert(!std::is_final<KSynchronizationObject>::value &&
std::is_base_of<KAutoObject, KSynchronizationObject>::value);
static_assert(!std::is_final<KReadableEvent>::value &&
std::is_base_of<KSynchronizationObject, KReadableEvent>::value);
// Final classes
// static_assert(std::is_final<KInterruptEvent>::value &&
// std::is_base_of<KReadableEvent, KInterruptEvent>::value);
// static_assert(std::is_final<KDebug>::value &&
// std::is_base_of<KSynchronizationObject, KDebug>::value);
static_assert(std::is_final<KThread>::value &&
std::is_base_of<KSynchronizationObject, KThread>::value);
static_assert(std::is_final<KServerPort>::value &&
std::is_base_of<KSynchronizationObject, KServerPort>::value);
static_assert(std::is_final<KServerSession>::value &&
std::is_base_of<KSynchronizationObject, KServerSession>::value);
static_assert(std::is_final<KClientPort>::value &&
std::is_base_of<KSynchronizationObject, KClientPort>::value);
static_assert(std::is_final<KClientSession>::value &&
std::is_base_of<KAutoObject, KClientSession>::value);
static_assert(std::is_final<KProcess>::value &&
std::is_base_of<KSynchronizationObject, KProcess>::value);
static_assert(std::is_final<KResourceLimit>::value &&
std::is_base_of<KAutoObject, KResourceLimit>::value);
// static_assert(std::is_final<KLightSession>::value &&
// std::is_base_of<KAutoObject, KLightSession>::value);
static_assert(std::is_final<KPort>::value && std::is_base_of<KAutoObject, KPort>::value);
static_assert(std::is_final<KSession>::value && std::is_base_of<KAutoObject, KSession>::value);
static_assert(std::is_final<KSharedMemory>::value &&
std::is_base_of<KAutoObject, KSharedMemory>::value);
static_assert(std::is_final<KEvent>::value && std::is_base_of<KAutoObject, KEvent>::value);
static_assert(std::is_final<KWritableEvent>::value &&
std::is_base_of<KAutoObject, KWritableEvent>::value);
// static_assert(std::is_final<KLightClientSession>::value &&
// std::is_base_of<KAutoObject, KLightClientSession>::value);
// static_assert(std::is_final<KLightServerSession>::value &&
// std::is_base_of<KAutoObject, KLightServerSession>::value);
static_assert(std::is_final<KTransferMemory>::value &&
std::is_base_of<KAutoObject, KTransferMemory>::value);
// static_assert(std::is_final<KDeviceAddressSpace>::value &&
// std::is_base_of<KAutoObject, KDeviceAddressSpace>::value);
// static_assert(std::is_final<KSessionRequest>::value &&
// std::is_base_of<KAutoObject, KSessionRequest>::value);
// static_assert(std::is_final<KCodeMemory>::value &&
// std::is_base_of<KAutoObject, KCodeMemory>::value);
} // namespace Kernel
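The asserts above also illustrate why TypeObj::IsDerivedFrom can be a single bit test; a self-contained restatement using token values copied from those asserts:

#include <cstdint>

constexpr uint16_t TokenKAutoObject = 0b00000000'00000000;
constexpr uint16_t TokenKSynchronizationObject = 0b00000000'00000001;
constexpr uint16_t TokenKThread = 0b00010011'00000001;
constexpr uint16_t TokenKSession = 0b00011001'00000000;

// Mirrors TypeObj::IsDerivedFrom: a derived token contains every bit of its base.
constexpr bool IsDerivedFrom(uint16_t cls, uint16_t base) {
    return (cls | base) == cls;
}

static_assert(IsDerivedFrom(TokenKThread, TokenKSynchronizationObject));
static_assert(IsDerivedFrom(TokenKThread, TokenKAutoObject));
static_assert(!IsDerivedFrom(TokenKSession, TokenKSynchronizationObject));

int main() {
    return 0;
}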

View File

@@ -0,0 +1,131 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <atomic>
#include "common/assert.h"
#include "common/bit_util.h"
#include "common/common_types.h"
namespace Kernel {
class KAutoObject;
class KClassTokenGenerator {
public:
using TokenBaseType = u16;
public:
static constexpr size_t BaseClassBits = 8;
static constexpr size_t FinalClassBits = (sizeof(TokenBaseType) * CHAR_BIT) - BaseClassBits;
// One bit per base class.
static constexpr size_t NumBaseClasses = BaseClassBits;
// Final classes are permutations of three bits.
static constexpr size_t NumFinalClasses = [] {
TokenBaseType index = 0;
for (size_t i = 0; i < FinalClassBits; i++) {
for (size_t j = i + 1; j < FinalClassBits; j++) {
for (size_t k = j + 1; k < FinalClassBits; k++) {
index++;
}
}
}
return index;
}();
private:
template <TokenBaseType Index>
static constexpr inline TokenBaseType BaseClassToken = 1U << Index;
template <TokenBaseType Index>
static constexpr inline TokenBaseType FinalClassToken = [] {
TokenBaseType index = 0;
for (size_t i = 0; i < FinalClassBits; i++) {
for (size_t j = i + 1; j < FinalClassBits; j++) {
for (size_t k = j + 1; k < FinalClassBits; k++) {
if ((index++) == Index) {
return static_cast<TokenBaseType>(((1ULL << i) | (1ULL << j) | (1ULL << k))
<< BaseClassBits);
}
}
}
}
}();
template <typename T>
static constexpr inline TokenBaseType GetClassToken() {
static_assert(std::is_base_of<KAutoObject, T>::value);
if constexpr (std::is_same<T, KAutoObject>::value) {
static_assert(T::ObjectType == ObjectType::KAutoObject);
return 0;
} else if constexpr (!std::is_final<T>::value) {
static_assert(ObjectType::BaseClassesStart <= T::ObjectType &&
T::ObjectType < ObjectType::BaseClassesEnd);
constexpr auto ClassIndex = static_cast<TokenBaseType>(T::ObjectType) -
static_cast<TokenBaseType>(ObjectType::BaseClassesStart);
return BaseClassToken<ClassIndex> | GetClassToken<typename T::BaseClass>();
} else if constexpr (ObjectType::FinalClassesStart <= T::ObjectType &&
T::ObjectType < ObjectType::FinalClassesEnd) {
constexpr auto ClassIndex = static_cast<TokenBaseType>(T::ObjectType) -
static_cast<TokenBaseType>(ObjectType::FinalClassesStart);
return FinalClassToken<ClassIndex> | GetClassToken<typename T::BaseClass>();
} else {
static_assert(!std::is_same<T, T>::value, "GetClassToken: Invalid Type");
}
};
public:
enum class ObjectType {
KAutoObject,
BaseClassesStart,
KSynchronizationObject = BaseClassesStart,
KReadableEvent,
BaseClassesEnd,
FinalClassesStart = BaseClassesEnd,
KInterruptEvent = FinalClassesStart,
KDebug,
KThread,
KServerPort,
KServerSession,
KClientPort,
KClientSession,
KProcess,
KResourceLimit,
KLightSession,
KPort,
KSession,
KSharedMemory,
KEvent,
KWritableEvent,
KLightClientSession,
KLightServerSession,
KTransferMemory,
KDeviceAddressSpace,
KSessionRequest,
KCodeMemory,
// NOTE: True order for these has not been determined yet.
KAlpha,
KBeta,
FinalClassesEnd = FinalClassesStart + NumFinalClasses,
};
template <typename T>
static constexpr inline TokenBaseType ClassToken = GetClassToken<T>();
};
using ClassTokenType = KClassTokenGenerator::TokenBaseType;
template <typename T>
static constexpr inline ClassTokenType ClassToken = KClassTokenGenerator::ClassToken<T>;
} // namespace Kernel
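
The scheme above gives each non-final (base) class one dedicated bit in the low 8 bits of the token and each final class a unique combination of three bits in the high 8 bits; a type's full token is the OR of its own bits with those of every ancestor. With FinalClassBits = 8, the triple loop yields C(8,3) = 56 possible combinations, comfortably more than the 23 final classes listed in ObjectType. The payoff is that a dynamic-cast style check reduces to a single mask-and-compare. Below is a minimal, self-contained sketch of that idea using plain integers; the concrete bit assignments are made up for illustration and are not the ones the generator would actually produce.

#include <cassert>
#include <cstdint>

using Token = std::uint16_t;

// Base classes contribute one bit each in the low byte (bit values are illustrative).
constexpr Token KAutoObjectToken = 0;                   // the root contributes no bits
constexpr Token KSynchronizationObjectToken = 1U << 0;  // first base-class bit
constexpr Token KReadableEventToken =
    KSynchronizationObjectToken | (1U << 1);            // inherits its base's bit

// A final class ORs a three-bit combination (shifted into the high byte) onto its bases.
constexpr Token KEventToken =
    static_cast<Token>(((1U << 0) | (1U << 1) | (1U << 2)) << 8); // derives from KAutoObject only

// "Is this object derived from that class?" == "does its token contain all of that class's bits?"
constexpr bool IsDerivedFrom(Token object_token, Token class_token) {
    return (object_token & class_token) == class_token;
}

int main() {
    static_assert(IsDerivedFrom(KReadableEventToken, KSynchronizationObjectToken));
    static_assert(!IsDerivedFrom(KEventToken, KSynchronizationObjectToken));
    assert(IsDerivedFrom(KEventToken, KAutoObjectToken)); // everything derives from the root
}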

View File

@@ -0,0 +1,125 @@
// Copyright 2021 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/scope_exit.h"
#include "core/hle/kernel/hle_ipc.h"
#include "core/hle/kernel/k_client_port.h"
#include "core/hle/kernel/k_port.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/k_scoped_resource_reservation.h"
#include "core/hle/kernel/k_session.h"
#include "core/hle/kernel/svc_results.h"
namespace Kernel {
KClientPort::KClientPort(KernelCore& kernel_) : KSynchronizationObject{kernel_} {}
KClientPort::~KClientPort() = default;
void KClientPort::Initialize(KPort* parent_, s32 max_sessions_, std::string&& name_) {
// Set member variables.
num_sessions = 0;
peak_sessions = 0;
parent = parent_;
max_sessions = max_sessions_;
name = std::move(name_);
}
void KClientPort::OnSessionFinalized() {
KScopedSchedulerLock sl{kernel};
const auto prev = num_sessions--;
if (prev == max_sessions) {
this->NotifyAvailable();
}
}
void KClientPort::OnServerClosed() {}
bool KClientPort::IsLight() const {
return this->GetParent()->IsLight();
}
bool KClientPort::IsServerClosed() const {
return this->GetParent()->IsServerClosed();
}
void KClientPort::Destroy() {
// Note with our parent that we're closed.
parent->OnClientClosed();
// Close our reference to our parent.
parent->Close();
}
bool KClientPort::IsSignaled() const {
return num_sessions < max_sessions;
}
ResultCode KClientPort::CreateSession(KClientSession** out) {
// Reserve a new session from the resource limit.
// KScopedResourceReservation session_reservation(kernel.CurrentProcess()->GetResourceLimit(),
// LimitableResource::Sessions);
// R_UNLESS(session_reservation.Succeeded(), ResultLimitReached);
// Update the session counts.
{
// Atomically increment the number of sessions.
s32 new_sessions;
{
const auto max = max_sessions;
auto cur_sessions = num_sessions.load(std::memory_order_acquire);
do {
R_UNLESS(cur_sessions < max, ResultOutOfSessions);
new_sessions = cur_sessions + 1;
} while (!num_sessions.compare_exchange_weak(cur_sessions, new_sessions,
std::memory_order_relaxed));
}
// Atomically update the peak session tracking.
{
auto peak = peak_sessions.load(std::memory_order_acquire);
do {
if (peak >= new_sessions) {
break;
}
} while (!peak_sessions.compare_exchange_weak(peak, new_sessions,
std::memory_order_relaxed));
}
}
// Create a new session.
KSession* session = KSession::Create(kernel);
if (session == nullptr) {
// Decrement the session count.
const auto prev = num_sessions--;
if (prev == max_sessions) {
this->NotifyAvailable();
}
return ResultOutOfResource;
}
// Initialize the session.
session->Initialize(this, parent->GetName());
// Commit the session reservation.
// session_reservation.Commit();
// Register the session.
KSession::Register(kernel, session);
auto session_guard = SCOPE_GUARD({
session->GetClientSession().Close();
session->GetServerSession().Close();
});
// Enqueue the session with our parent.
R_TRY(parent->EnqueueSession(std::addressof(session->GetServerSession())));
// We succeeded, so set the output.
session_guard.Cancel();
*out = std::addressof(session->GetClientSession());
return RESULT_SUCCESS;
}
} // namespace Kernel
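
CreateSession above never takes a lock to bump num_sessions; it uses a compare_exchange_weak loop so that the "is there room?" check and the increment happen atomically, and peak_sessions is raised with the same retry pattern. A minimal sketch of both patterns follows, with hypothetical names and none of the kernel's result-code plumbing.

#include <atomic>
#include <cstdint>
#include <optional>

// "Increment with a ceiling" as in the num_sessions update (names are illustrative).
std::optional<std::int32_t> TryAcquireSlot(std::atomic<std::int32_t>& count, std::int32_t max) {
    std::int32_t cur = count.load(std::memory_order_acquire);
    std::int32_t next{};
    do {
        if (cur >= max) {
            return std::nullopt; // mirrors R_UNLESS(cur_sessions < max, ResultOutOfSessions)
        }
        next = cur + 1;
        // On failure, compare_exchange_weak reloads cur and the loop retries.
    } while (!count.compare_exchange_weak(cur, next, std::memory_order_relaxed));
    return next;
}

// "Raise the recorded peak if we exceeded it" as in the peak_sessions update.
void UpdatePeak(std::atomic<std::int32_t>& peak, std::int32_t observed) {
    std::int32_t cur = peak.load(std::memory_order_acquire);
    while (cur < observed &&
           !peak.compare_exchange_weak(cur, observed, std::memory_order_relaxed)) {
        // cur now holds the latest value; keep retrying until it is >= observed or the CAS wins.
    }
}

int main() {
    std::atomic<std::int32_t> sessions{0};
    std::atomic<std::int32_t> peak{0};
    if (const auto now = TryAcquireSlot(sessions, 2)) {
        UpdatePeak(peak, *now);
    }
    return peak.load() == 1 ? 0 : 1;
}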

View File

@@ -0,0 +1,61 @@
// Copyright 2016 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <memory>
#include <string>
#include "common/common_types.h"
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/result.h"
namespace Kernel {
class KClientSession;
class KernelCore;
class KPort;
class KClientPort final : public KSynchronizationObject {
KERNEL_AUTOOBJECT_TRAITS(KClientPort, KSynchronizationObject);
public:
explicit KClientPort(KernelCore& kernel_);
virtual ~KClientPort() override;
void Initialize(KPort* parent_, s32 max_sessions_, std::string&& name_);
void OnSessionFinalized();
void OnServerClosed();
const KPort* GetParent() const {
return parent;
}
s32 GetNumSessions() const {
return num_sessions;
}
s32 GetPeakSessions() const {
return peak_sessions;
}
s32 GetMaxSessions() const {
return max_sessions;
}
bool IsLight() const;
bool IsServerClosed() const;
// Overridden virtual functions.
virtual void Destroy() override;
virtual bool IsSignaled() const override;
ResultCode CreateSession(KClientSession** out);
private:
std::atomic<s32> num_sessions{};
std::atomic<s32> peak_sessions{};
s32 max_sessions{};
KPort* parent{};
};
} // namespace Kernel

View File

@@ -0,0 +1,32 @@
// Copyright 2021 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/hle/kernel/hle_ipc.h"
#include "core/hle/kernel/k_client_session.h"
#include "core/hle/kernel/k_server_session.h"
#include "core/hle/kernel/k_session.h"
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/svc_results.h"
#include "core/hle/result.h"
namespace Kernel {
KClientSession::KClientSession(KernelCore& kernel_)
: KAutoObjectWithSlabHeapAndContainer{kernel_} {}
KClientSession::~KClientSession() = default;
void KClientSession::Destroy() {
parent->OnClientClosed();
parent->Close();
}
void KClientSession::OnServerClosed() {}
ResultCode KClientSession::SendSyncRequest(KThread* thread, Core::Memory::Memory& memory,
Core::Timing::CoreTiming& core_timing) {
// Signal the server session that new data is available
return parent->GetServerSession().HandleSyncRequest(thread, memory, core_timing);
}
} // namespace Kernel

View File

@@ -0,0 +1,61 @@
// Copyright 2021 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <memory>
#include <string>
#include "core/hle/kernel/k_auto_object.h"
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/kernel/slab_helpers.h"
#include "core/hle/result.h"
union ResultCode;
namespace Core::Memory {
class Memory;
}
namespace Core::Timing {
class CoreTiming;
}
namespace Kernel {
class KernelCore;
class KSession;
class KThread;
class KClientSession final
: public KAutoObjectWithSlabHeapAndContainer<KClientSession, KAutoObjectWithList> {
KERNEL_AUTOOBJECT_TRAITS(KClientSession, KAutoObject);
public:
explicit KClientSession(KernelCore& kernel_);
virtual ~KClientSession();
void Initialize(KSession* parent_, std::string&& name_) {
// Set member variables.
parent = parent_;
name = std::move(name_);
}
virtual void Destroy() override;
static void PostDestroy([[maybe_unused]] uintptr_t arg) {}
KSession* GetParent() const {
return parent;
}
ResultCode SendSyncRequest(KThread* thread, Core::Memory::Memory& memory,
Core::Timing::CoreTiming& core_timing);
void OnServerClosed();
private:
KSession* parent{};
};
} // namespace Kernel

View File

@@ -7,12 +7,13 @@
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/hle/kernel/k_condition_variable.h"
#include "core/hle/kernel/k_linked_list.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/k_scheduler.h"
#include "core/hle/kernel/k_scoped_scheduler_lock_and_sleep.h"
#include "core/hle/kernel/k_synchronization_object.h"
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/svc_common.h"
#include "core/hle/kernel/svc_results.h"
#include "core/memory.h"
@@ -107,8 +108,8 @@ ResultCode KConditionVariable::WaitForAddress(Handle handle, VAddr addr, u32 val
// Wait for the address.
{
std::shared_ptr<KThread> owner_thread;
ASSERT(!owner_thread);
KScopedAutoObject<KThread> owner_thread;
ASSERT(owner_thread.IsNull());
{
KScopedSchedulerLock sl(kernel);
cur_thread->SetSyncedObject(nullptr, RESULT_SUCCESS);
@@ -126,8 +127,10 @@ ResultCode KConditionVariable::WaitForAddress(Handle handle, VAddr addr, u32 val
R_UNLESS(test_tag == (handle | Svc::HandleWaitMask), RESULT_SUCCESS);
// Get the lock owner thread.
owner_thread = kernel.CurrentProcess()->GetHandleTable().Get<KThread>(handle);
R_UNLESS(owner_thread, ResultInvalidHandle);
owner_thread =
kernel.CurrentProcess()->GetHandleTable().GetObjectWithoutPseudoHandle<KThread>(
handle);
R_UNLESS(owner_thread.IsNotNull(), ResultInvalidHandle);
// Update the lock.
cur_thread->SetAddressKey(addr, value);
@@ -137,7 +140,7 @@ ResultCode KConditionVariable::WaitForAddress(Handle handle, VAddr addr, u32 val
cur_thread->SetMutexWaitAddressForDebugging(addr);
}
}
ASSERT(owner_thread);
ASSERT(owner_thread.IsNotNull());
}
// Remove the thread as a waiter from the lock owner.
@@ -176,19 +179,22 @@ KThread* KConditionVariable::SignalImpl(KThread* thread) {
KThread* thread_to_close = nullptr;
if (can_access) {
if (prev_tag == InvalidHandle) {
if (prev_tag == Svc::InvalidHandle) {
// If nobody held the lock previously, we're all good.
thread->SetSyncedObject(nullptr, RESULT_SUCCESS);
thread->Wakeup();
} else {
// Get the previous owner.
auto owner_thread = kernel.CurrentProcess()->GetHandleTable().Get<KThread>(
prev_tag & ~Svc::HandleWaitMask);
KThread* owner_thread = kernel.CurrentProcess()
->GetHandleTable()
.GetObjectWithoutPseudoHandle<KThread>(
static_cast<Handle>(prev_tag & ~Svc::HandleWaitMask))
.ReleasePointerUnsafe();
if (owner_thread) {
// Add the thread as a waiter on the owner.
owner_thread->AddWaiter(thread);
thread_to_close = owner_thread.get();
thread_to_close = owner_thread;
} else {
// The lock was tagged with a thread that doesn't exist.
thread->SetSyncedObject(nullptr, ResultInvalidState);
@@ -208,9 +214,7 @@ void KConditionVariable::Signal(u64 cv_key, s32 count) {
// Prepare for signaling.
constexpr int MaxThreads = 16;
// TODO(bunnei): This should just be Thread once we implement KAutoObject instead of using
// std::shared_ptr.
std::vector<std::shared_ptr<KThread>> thread_list;
KLinkedList<KThread> thread_list{kernel};
std::array<KThread*, MaxThreads> thread_array;
s32 num_to_close{};
@@ -228,7 +232,7 @@ void KConditionVariable::Signal(u64 cv_key, s32 count) {
if (num_to_close < MaxThreads) {
thread_array[num_to_close++] = thread;
} else {
thread_list.push_back(SharedFrom(thread));
thread_list.push_back(*thread);
}
}
@@ -251,7 +255,7 @@ void KConditionVariable::Signal(u64 cv_key, s32 count) {
// Close threads in the list.
for (auto it = thread_list.begin(); it != thread_list.end(); it = thread_list.erase(it)) {
(*it)->Close();
(*it).Close();
}
}
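
The change above swaps std::shared_ptr<KThread> for KScopedAutoObject<KThread>: the handle-table lookup hands back an already-opened reference, and the scoped wrapper closes it when it goes out of scope. A toy sketch of that RAII shape follows; for simplicity the wrapper itself opens the reference on construction, whereas in the diff the open happens inside the handle-table lookup.

#include <cassert>

// Toy stand-in for a reference-counted kernel object (hypothetical, for illustration only).
struct RefCounted {
    int refs = 1;
    void Open() { ++refs; }
    void Close() { --refs; }
};

// Minimal sketch of the KScopedAutoObject idea: hold an opened reference for one scope.
template <typename T>
class ScopedRef {
public:
    explicit ScopedRef(T* obj = nullptr) : ptr{obj} {
        if (ptr) {
            ptr->Open();
        }
    }
    ~ScopedRef() {
        if (ptr) {
            ptr->Close();
        }
    }
    bool IsNull() const { return ptr == nullptr; }
    bool IsNotNull() const { return ptr != nullptr; }
    T* operator->() const { return ptr; }
private:
    T* ptr;
};

int main() {
    RefCounted thread;
    {
        ScopedRef<RefCounted> owner{&thread}; // like GetObjectWithoutPseudoHandle<KThread>(handle)
        assert(owner.IsNotNull());
        assert(thread.refs == 2);
    }
    assert(thread.refs == 1); // the reference is released when the scope ends
}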

View File

@@ -3,30 +3,54 @@
// Refer to the license.txt file included.
#include "core/hle/kernel/k_event.h"
#include "core/hle/kernel/k_readable_event.h"
#include "core/hle/kernel/k_writable_event.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/k_resource_limit.h"
namespace Kernel {
KEvent::KEvent(KernelCore& kernel, std::string&& name) : Object{kernel, std::move(name)} {}
KEvent::KEvent(KernelCore& kernel_)
: KAutoObjectWithSlabHeapAndContainer{kernel_}, readable_event{kernel_}, writable_event{
kernel_} {}
KEvent::~KEvent() = default;
std::shared_ptr<KEvent> KEvent::Create(KernelCore& kernel, std::string&& name) {
return std::make_shared<KEvent>(kernel, std::move(name));
}
void KEvent::Initialize(std::string&& name_) {
// Increment reference count.
// Because reference count is one on creation, this will result
// in a reference count of two. Thus, when both readable and
// writable events are closed this object will be destroyed.
Open();
void KEvent::Initialize() {
// Create our sub events.
readable_event = std::make_shared<KReadableEvent>(kernel, GetName() + ":Readable");
writable_event = std::make_shared<KWritableEvent>(kernel, GetName() + ":Writable");
KAutoObject::Create(std::addressof(readable_event));
KAutoObject::Create(std::addressof(writable_event));
// Initialize our sub sessions.
readable_event->Initialize(this);
writable_event->Initialize(this);
readable_event.Initialize(this, name_ + ":Readable");
writable_event.Initialize(this, name_ + ":Writable");
// Set our owner process.
owner = kernel.CurrentProcess();
if (owner) {
owner->Open();
}
// Mark initialized.
name = std::move(name_);
initialized = true;
}
void KEvent::Finalize() {
KAutoObjectWithSlabHeapAndContainer<KEvent, KAutoObjectWithList>::Finalize();
}
void KEvent::PostDestroy(uintptr_t arg) {
// Release the event count resource the owner process holds.
KProcess* owner = reinterpret_cast<KProcess*>(arg);
if (owner) {
owner->GetResourceLimit()->Release(LimitableResource::Events, 1);
owner->Close();
}
}
} // namespace Kernel
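
The comment in Initialize above explains the lifetime trick: the event starts with one reference from creation and Open() bumps it to two, so the object is destroyed only after both the readable and writable halves have been closed. A toy model of that counting, purely for illustration:

#include <cassert>

// Toy model of the lifetime described in KEvent::Initialize (illustrative only).
struct Event {
    int refs = 1;          // creation reference
    bool destroyed = false;
    void Open() { ++refs; }
    void Close() {
        if (--refs == 0) {
            destroyed = true;
        }
    }
};

int main() {
    Event event;
    event.Open();              // Initialize(): count goes from 1 to 2
    event.Close();             // closing the readable half
    assert(!event.destroyed);  // still alive, the writable half holds the second reference
    event.Close();             // closing the writable half
    assert(event.destroyed);   // both halves are closed, so the event is destroyed
}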

View File

@@ -4,53 +4,54 @@
#pragma once
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/k_readable_event.h"
#include "core/hle/kernel/k_writable_event.h"
#include "core/hle/kernel/slab_helpers.h"
namespace Kernel {
class KernelCore;
class KReadableEvent;
class KWritableEvent;
class KProcess;
class KEvent final : public KAutoObjectWithSlabHeapAndContainer<KEvent, KAutoObjectWithList> {
KERNEL_AUTOOBJECT_TRAITS(KEvent, KAutoObject);
class KEvent final : public Object {
public:
explicit KEvent(KernelCore& kernel, std::string&& name);
~KEvent() override;
explicit KEvent(KernelCore& kernel_);
virtual ~KEvent();
static std::shared_ptr<KEvent> Create(KernelCore& kernel, std::string&& name);
void Initialize(std::string&& name);
void Initialize();
virtual void Finalize() override;
void Finalize() override {}
std::string GetTypeName() const override {
return "KEvent";
virtual bool IsInitialized() const override {
return initialized;
}
static constexpr HandleType HANDLE_TYPE = HandleType::Event;
HandleType GetHandleType() const override {
return HANDLE_TYPE;
virtual uintptr_t GetPostDestroyArgument() const override {
return reinterpret_cast<uintptr_t>(owner);
}
std::shared_ptr<KReadableEvent>& GetReadableEvent() {
static void PostDestroy(uintptr_t arg);
virtual KProcess* GetOwner() const override {
return owner;
}
KReadableEvent& GetReadableEvent() {
return readable_event;
}
std::shared_ptr<KWritableEvent>& GetWritableEvent() {
return writable_event;
}
const std::shared_ptr<KReadableEvent>& GetReadableEvent() const {
return readable_event;
}
const std::shared_ptr<KWritableEvent>& GetWritableEvent() const {
KWritableEvent& GetWritableEvent() {
return writable_event;
}
private:
std::shared_ptr<KReadableEvent> readable_event;
std::shared_ptr<KWritableEvent> writable_event;
KReadableEvent readable_event;
KWritableEvent writable_event;
KProcess* owner{};
bool initialized{};
};

View File

@@ -0,0 +1,135 @@
// Copyright 2021 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/hle/kernel/k_handle_table.h"
namespace Kernel {
KHandleTable::KHandleTable(KernelCore& kernel_) : kernel{kernel_} {}
KHandleTable::~KHandleTable() = default;
ResultCode KHandleTable::Finalize() {
// Get the table and clear our record of it.
u16 saved_table_size = 0;
{
KScopedSpinLock lk(m_lock);
std::swap(m_table_size, saved_table_size);
}
// Close and free all entries.
for (size_t i = 0; i < saved_table_size; i++) {
if (KAutoObject* obj = m_objects[i]; obj != nullptr) {
obj->Close();
}
}
return RESULT_SUCCESS;
}
bool KHandleTable::Remove(Handle handle) {
// Don't allow removal of a pseudo-handle.
if (Svc::IsPseudoHandle(handle)) {
return false;
}
// Handles must not have reserved bits set.
const auto handle_pack = HandlePack(handle);
if (handle_pack.reserved != 0) {
return false;
}
// Find the object and free the entry.
KAutoObject* obj = nullptr;
{
KScopedSpinLock lk(m_lock);
if (this->IsValidHandle(handle)) {
const auto index = handle_pack.index;
obj = m_objects[index];
this->FreeEntry(index);
} else {
return false;
}
}
// Close the object.
obj->Close();
return true;
}
ResultCode KHandleTable::Add(Handle* out_handle, KAutoObject* obj, u16 type) {
KScopedSpinLock lk(m_lock);
// Never exceed our capacity.
R_UNLESS(m_count < m_table_size, ResultOutOfHandles);
// Allocate entry, set output handle.
{
const auto linear_id = this->AllocateLinearId();
const auto index = this->AllocateEntry();
m_entry_infos[index].info = {.linear_id = linear_id, .type = type};
m_objects[index] = obj;
obj->Open();
*out_handle = EncodeHandle(static_cast<u16>(index), linear_id);
}
return RESULT_SUCCESS;
}
ResultCode KHandleTable::Reserve(Handle* out_handle) {
KScopedSpinLock lk(m_lock);
// Never exceed our capacity.
R_UNLESS(m_count < m_table_size, ResultOutOfHandles);
*out_handle = EncodeHandle(static_cast<u16>(this->AllocateEntry()), this->AllocateLinearId());
return RESULT_SUCCESS;
}
void KHandleTable::Unreserve(Handle handle) {
KScopedSpinLock lk(m_lock);
// Unpack the handle.
const auto handle_pack = HandlePack(handle);
const auto index = handle_pack.index;
const auto linear_id = handle_pack.linear_id;
const auto reserved = handle_pack.reserved;
ASSERT(reserved == 0);
ASSERT(linear_id != 0);
if (index < m_table_size) {
// NOTE: This code does not check the linear id.
ASSERT(m_objects[index] == nullptr);
this->FreeEntry(index);
}
}
void KHandleTable::Register(Handle handle, KAutoObject* obj, u16 type) {
KScopedSpinLock lk(m_lock);
// Unpack the handle.
const auto handle_pack = HandlePack(handle);
const auto index = handle_pack.index;
const auto linear_id = handle_pack.linear_id;
const auto reserved = handle_pack.reserved;
ASSERT(reserved == 0);
ASSERT(linear_id != 0);
if (index < m_table_size) {
// Set the entry.
ASSERT(m_objects[index] == nullptr);
m_entry_infos[index].info = {.linear_id = static_cast<u16>(linear_id), .type = type};
m_objects[index] = obj;
obj->Open();
}
}
} // namespace Kernel
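
A typical lifetime through this table pairs every Open() with a Close(): Add() opens a reference that the table itself owns, and Remove() (or Finalize()) gives it back. The toy table below mirrors just that pairing; it drops everything else (linear ids, locking, pseudo-handles), and every name in it is made up.

#include <array>
#include <cassert>
#include <cstddef>
#include <optional>

// Toy fixed-size handle table illustrating the Add/Remove reference pairing
// (hypothetical, much simpler than KHandleTable).
struct Object {
    int refs = 1;
    void Open() { ++refs; }
    void Close() { --refs; }
};

class TinyTable {
public:
    std::optional<std::size_t> Add(Object* obj) {
        for (std::size_t i = 0; i < slots.size(); ++i) {
            if (slots[i] == nullptr) {
                slots[i] = obj;
                obj->Open(); // the table holds its own reference, as in KHandleTable::Add
                return i;
            }
        }
        return std::nullopt; // corresponds to ResultOutOfHandles
    }
    bool Remove(std::size_t handle) {
        if (handle >= slots.size() || slots[handle] == nullptr) {
            return false;
        }
        Object* const obj = slots[handle];
        slots[handle] = nullptr;
        obj->Close(); // drop the table's reference, as in KHandleTable::Remove
        return true;
    }
private:
    std::array<Object*, 8> slots{};
};

int main() {
    Object obj;
    TinyTable table;
    const auto handle = table.Add(&obj);
    assert(handle && obj.refs == 2);
    assert(table.Remove(*handle));
    assert(obj.refs == 1);
}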

View File

@@ -0,0 +1,310 @@
// Copyright 2021 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <array>
#include "common/assert.h"
#include "common/bit_field.h"
#include "common/bit_util.h"
#include "common/common_types.h"
#include "core/hle/kernel/k_auto_object.h"
#include "core/hle/kernel/k_spin_lock.h"
#include "core/hle/kernel/k_thread.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/svc_common.h"
#include "core/hle/kernel/svc_results.h"
#include "core/hle/result.h"
namespace Kernel {
class KernelCore;
class KHandleTable {
YUZU_NON_COPYABLE(KHandleTable);
YUZU_NON_MOVEABLE(KHandleTable);
public:
static constexpr size_t MaxTableSize = 1024;
public:
explicit KHandleTable(KernelCore& kernel_);
~KHandleTable();
ResultCode Initialize(s32 size) {
R_UNLESS(size <= static_cast<s32>(MaxTableSize), ResultOutOfMemory);
// Initialize all fields.
m_max_count = 0;
m_table_size = static_cast<u16>((size <= 0) ? MaxTableSize : size);
m_next_linear_id = MinLinearId;
m_count = 0;
m_free_head_index = -1;
// Free all entries.
for (s32 i = 0; i < static_cast<s32>(m_table_size); ++i) {
m_objects[i] = nullptr;
m_entry_infos[i].next_free_index = i - 1;
m_free_head_index = i;
}
return RESULT_SUCCESS;
}
size_t GetTableSize() const {
return m_table_size;
}
size_t GetCount() const {
return m_count;
}
size_t GetMaxCount() const {
return m_max_count;
}
ResultCode Finalize();
bool Remove(Handle handle);
template <typename T = KAutoObject>
KScopedAutoObject<T> GetObjectWithoutPseudoHandle(Handle handle) const {
// Lock and look up in table.
KScopedSpinLock lk(m_lock);
if constexpr (std::is_same_v<T, KAutoObject>) {
return this->GetObjectImpl(handle);
} else {
if (auto* obj = this->GetObjectImpl(handle); obj != nullptr) {
return obj->DynamicCast<T*>();
} else {
return nullptr;
}
}
}
template <typename T = KAutoObject>
KScopedAutoObject<T> GetObject(Handle handle) const {
// Handle pseudo-handles.
if constexpr (std::derived_from<KProcess, T>) {
if (handle == Svc::PseudoHandle::CurrentProcess) {
auto* const cur_process = kernel.CurrentProcess();
ASSERT(cur_process != nullptr);
return cur_process;
}
} else if constexpr (std::derived_from<KThread, T>) {
if (handle == Svc::PseudoHandle::CurrentThread) {
auto* const cur_thread = GetCurrentThreadPointer(kernel);
ASSERT(cur_thread != nullptr);
return cur_thread;
}
}
return this->template GetObjectWithoutPseudoHandle<T>(handle);
}
ResultCode Reserve(Handle* out_handle);
void Unreserve(Handle handle);
template <typename T>
ResultCode Add(Handle* out_handle, T* obj) {
static_assert(std::is_base_of_v<KAutoObject, T>);
return this->Add(out_handle, obj, obj->GetTypeObj().GetClassToken());
}
template <typename T>
void Register(Handle handle, T* obj) {
static_assert(std::is_base_of_v<KAutoObject, T>);
return this->Register(handle, obj, obj->GetTypeObj().GetClassToken());
}
template <typename T>
bool GetMultipleObjects(T** out, const Handle* handles, size_t num_handles) const {
// Try to convert and open all the handles.
size_t num_opened;
{
// Lock the table.
KScopedSpinLock lk(m_lock);
for (num_opened = 0; num_opened < num_handles; num_opened++) {
// Get the current handle.
const auto cur_handle = handles[num_opened];
// Get the object for the current handle.
KAutoObject* cur_object = this->GetObjectImpl(cur_handle);
if (cur_object == nullptr) {
break;
}
// Cast the current object to the desired type.
T* cur_t = cur_object->DynamicCast<T*>();
if (cur_t == nullptr) {
break;
}
// Open a reference to the current object.
cur_t->Open();
out[num_opened] = cur_t;
}
}
// If we converted every object, succeed.
if (num_opened == num_handles) {
return true;
}
// If we didn't convert every object, close the ones we opened.
for (size_t i = 0; i < num_opened; i++) {
out[i]->Close();
}
return false;
}
private:
ResultCode Add(Handle* out_handle, KAutoObject* obj, u16 type);
void Register(Handle handle, KAutoObject* obj, u16 type);
s32 AllocateEntry() {
ASSERT(m_count < m_table_size);
const auto index = m_free_head_index;
m_free_head_index = m_entry_infos[index].GetNextFreeIndex();
m_max_count = std::max(m_max_count, ++m_count);
return index;
}
void FreeEntry(s32 index) {
ASSERT(m_count > 0);
m_objects[index] = nullptr;
m_entry_infos[index].next_free_index = m_free_head_index;
m_free_head_index = index;
--m_count;
}
u16 AllocateLinearId() {
const u16 id = m_next_linear_id++;
if (m_next_linear_id > MaxLinearId) {
m_next_linear_id = MinLinearId;
}
return id;
}
bool IsValidHandle(Handle handle) const {
// Unpack the handle.
const auto handle_pack = HandlePack(handle);
const auto raw_value = handle_pack.raw;
const auto index = handle_pack.index;
const auto linear_id = handle_pack.linear_id;
const auto reserved = handle_pack.reserved;
ASSERT(reserved == 0);
// Validate our indexing information.
if (raw_value == 0) {
return false;
}
if (linear_id == 0) {
return false;
}
if (index >= m_table_size) {
return false;
}
// Check that there's an object, and our serial id is correct.
if (m_objects[index] == nullptr) {
return false;
}
if (m_entry_infos[index].GetLinearId() != linear_id) {
return false;
}
return true;
}
KAutoObject* GetObjectImpl(Handle handle) const {
// Handles must not have reserved bits set.
const auto handle_pack = HandlePack(handle);
if (handle_pack.reserved != 0) {
return nullptr;
}
if (this->IsValidHandle(handle)) {
return m_objects[handle_pack.index];
} else {
return nullptr;
}
}
KAutoObject* GetObjectByIndexImpl(Handle* out_handle, size_t index) const {
// Index must be in bounds.
if (index >= m_table_size) {
return nullptr;
}
// Ensure entry has an object.
if (KAutoObject* obj = m_objects[index]; obj != nullptr) {
*out_handle = EncodeHandle(static_cast<u16>(index), m_entry_infos[index].GetLinearId());
return obj;
} else {
return nullptr;
}
}
private:
union HandlePack {
HandlePack() = default;
HandlePack(Handle handle) : raw{static_cast<u32>(handle)} {}
u32 raw;
BitField<0, 15, u32> index;
BitField<15, 15, u32> linear_id;
BitField<30, 2, u32> reserved;
};
static constexpr u16 MinLinearId = 1;
static constexpr u16 MaxLinearId = 0x7FFF;
static constexpr Handle EncodeHandle(u16 index, u16 linear_id) {
HandlePack handle{};
handle.index.Assign(index);
handle.linear_id.Assign(linear_id);
handle.reserved.Assign(0);
return handle.raw;
}
union EntryInfo {
struct {
u16 linear_id;
u16 type;
} info;
s32 next_free_index;
constexpr u16 GetLinearId() const {
return info.linear_id;
}
constexpr u16 GetType() const {
return info.type;
}
constexpr s32 GetNextFreeIndex() const {
return next_free_index;
}
};
private:
std::array<EntryInfo, MaxTableSize> m_entry_infos{};
std::array<KAutoObject*, MaxTableSize> m_objects{};
s32 m_free_head_index{-1};
u16 m_table_size{};
u16 m_max_count{};
u16 m_next_linear_id{MinLinearId};
u16 m_count{};
mutable KSpinLock m_lock;
KernelCore& kernel;
};
} // namespace Kernel
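
HandlePack above splits a 32-bit handle into a 15-bit slot index, a 15-bit linear id, and 2 reserved bits, and EncodeHandle packs the first two while leaving the reserved bits zero; IsValidHandle then rejects handles whose stored linear id no longer matches, which is how reusing a slot invalidates stale handles. The same packing written with plain shifts, as a worked example (the real code relies on Common::BitField):

#include <cstdint>

// Hand-rolled equivalent of EncodeHandle/HandlePack using shifts instead of BitField
// (illustrative only).
constexpr std::uint32_t Encode(std::uint16_t index, std::uint16_t linear_id) {
    return static_cast<std::uint32_t>(index) |            // bits 0..14:  table index
           (static_cast<std::uint32_t>(linear_id) << 15); // bits 15..29: linear id
                                                          // bits 30..31: reserved, left zero
}

constexpr std::uint32_t IndexOf(std::uint32_t handle)    { return handle & 0x7FFF; }
constexpr std::uint32_t LinearIdOf(std::uint32_t handle) { return (handle >> 15) & 0x7FFF; }
constexpr std::uint32_t ReservedOf(std::uint32_t handle) { return handle >> 30; }

int main() {
    constexpr std::uint32_t handle = Encode(3, 7); // index 3, linear id 7 -> 0x00038003
    static_assert(IndexOf(handle) == 3);
    static_assert(LinearIdOf(handle) == 7);
    static_assert(ReservedOf(handle) == 0);
    // A reused slot gets a fresh linear id, so the old handle no longer validates:
    constexpr std::uint32_t reissued = Encode(3, 8);
    static_assert(LinearIdOf(reissued) != LinearIdOf(handle));
}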

View File

@@ -18,7 +18,8 @@ class KernelCore;
class KLightConditionVariable {
public:
explicit KLightConditionVariable(KernelCore& kernel) : thread_queue(kernel), kernel(kernel) {}
explicit KLightConditionVariable(KernelCore& kernel_)
: thread_queue(kernel_), kernel(kernel_) {}
void Wait(KLightLock* lock, s64 timeout = -1) {
WaitImpl(lock, timeout);

View File

@@ -0,0 +1,238 @@
// Copyright 2021 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <boost/intrusive/list.hpp>
#include "common/assert.h"
#include "core/hle/kernel/slab_helpers.h"
namespace Kernel {
class KernelCore;
class KLinkedListNode : public boost::intrusive::list_base_hook<>,
public KSlabAllocated<KLinkedListNode> {
public:
KLinkedListNode() = default;
void Initialize(void* it) {
m_item = it;
}
void* GetItem() const {
return m_item;
}
private:
void* m_item = nullptr;
};
template <typename T>
class KLinkedList : private boost::intrusive::list<KLinkedListNode> {
private:
using BaseList = boost::intrusive::list<KLinkedListNode>;
public:
template <bool Const>
class Iterator;
using value_type = T;
using size_type = size_t;
using difference_type = ptrdiff_t;
using pointer = value_type*;
using const_pointer = const value_type*;
using reference = value_type&;
using const_reference = const value_type&;
using iterator = Iterator<false>;
using const_iterator = Iterator<true>;
using reverse_iterator = std::reverse_iterator<iterator>;
using const_reverse_iterator = std::reverse_iterator<const_iterator>;
template <bool Const>
class Iterator {
private:
using BaseIterator = BaseList::iterator;
friend class KLinkedList;
public:
using iterator_category = std::bidirectional_iterator_tag;
using value_type = typename KLinkedList::value_type;
using difference_type = typename KLinkedList::difference_type;
using pointer = std::conditional_t<Const, KLinkedList::const_pointer, KLinkedList::pointer>;
using reference =
std::conditional_t<Const, KLinkedList::const_reference, KLinkedList::reference>;
public:
explicit Iterator(BaseIterator it) : m_base_it(it) {}
pointer GetItem() const {
return static_cast<pointer>(m_base_it->GetItem());
}
bool operator==(const Iterator& rhs) const {
return m_base_it == rhs.m_base_it;
}
bool operator!=(const Iterator& rhs) const {
return !(*this == rhs);
}
pointer operator->() const {
return this->GetItem();
}
reference operator*() const {
return *this->GetItem();
}
Iterator& operator++() {
++m_base_it;
return *this;
}
Iterator& operator--() {
--m_base_it;
return *this;
}
Iterator operator++(int) {
const Iterator it{*this};
++(*this);
return it;
}
Iterator operator--(int) {
const Iterator it{*this};
--(*this);
return it;
}
operator Iterator<true>() const {
return Iterator<true>(m_base_it);
}
private:
BaseIterator m_base_it;
};
public:
constexpr KLinkedList(KernelCore& kernel_) : BaseList(), kernel{kernel_} {}
~KLinkedList() {
// Erase all elements.
for (auto it = begin(); it != end(); it = erase(it)) {
}
// Ensure we succeeded.
ASSERT(this->empty());
}
// Iterator accessors.
iterator begin() {
return iterator(BaseList::begin());
}
const_iterator begin() const {
return const_iterator(BaseList::begin());
}
iterator end() {
return iterator(BaseList::end());
}
const_iterator end() const {
return const_iterator(BaseList::end());
}
const_iterator cbegin() const {
return this->begin();
}
const_iterator cend() const {
return this->end();
}
reverse_iterator rbegin() {
return reverse_iterator(this->end());
}
const_reverse_iterator rbegin() const {
return const_reverse_iterator(this->end());
}
reverse_iterator rend() {
return reverse_iterator(this->begin());
}
const_reverse_iterator rend() const {
return const_reverse_iterator(this->begin());
}
const_reverse_iterator crbegin() const {
return this->rbegin();
}
const_reverse_iterator crend() const {
return this->rend();
}
// Content management.
using BaseList::empty;
using BaseList::size;
reference back() {
return *(--this->end());
}
const_reference back() const {
return *(--this->end());
}
reference front() {
return *this->begin();
}
const_reference front() const {
return *this->begin();
}
iterator insert(const_iterator pos, reference ref) {
KLinkedListNode* new_node = KLinkedListNode::Allocate(kernel);
ASSERT(new_node != nullptr);
new_node->Initialize(std::addressof(ref));
return iterator(BaseList::insert(pos.m_base_it, *new_node));
}
void push_back(reference ref) {
this->insert(this->end(), ref);
}
void push_front(reference ref) {
this->insert(this->begin(), ref);
}
void pop_back() {
this->erase(--this->end());
}
void pop_front() {
this->erase(this->begin());
}
iterator erase(const iterator pos) {
KLinkedListNode* freed_node = std::addressof(*pos.m_base_it);
iterator ret = iterator(BaseList::erase(pos.m_base_it));
KLinkedListNode::Free(kernel, freed_node);
return ret;
}
private:
KernelCore& kernel;
};
} // namespace Kernel
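
KLinkedList stores pointers to caller-owned objects in slab-allocated nodes, so thread_list.push_back(*thread) in KConditionVariable::Signal above records the thread without copying it, and iterators dereference straight to the stored object. The sketch below reproduces only that "list of borrowed pointers behind value-like iterators" shape, with std::list standing in for boost::intrusive::list and no slab allocator:

#include <cassert>
#include <cstddef>
#include <list>
#include <memory>
#include <string>

// Simplified stand-in for KLinkedList<T>: nodes hold pointers to caller-owned items,
// while iterators dereference to the items themselves (illustrative only).
template <typename T>
class PointerList {
public:
    struct Iterator {
        typename std::list<T*>::iterator it;
        T& operator*() const { return **it; }
        Iterator& operator++() { ++it; return *this; }
        bool operator!=(const Iterator& rhs) const { return it != rhs.it; }
    };

    void push_back(T& ref) { nodes.push_back(std::addressof(ref)); }
    Iterator begin() { return Iterator{nodes.begin()}; }
    Iterator end() { return Iterator{nodes.end()}; }

private:
    std::list<T*> nodes;
};

int main() {
    std::string a{"first"};
    std::string b{"second"};
    PointerList<std::string> list;
    list.push_back(a); // stores a pointer, mirroring thread_list.push_back(*thread) above
    list.push_back(b);
    std::size_t count = 0;
    for (auto it = list.begin(); it != list.end(); ++it) {
        (*it).append("!"); // mutates the caller-owned string, like (*it).Close() in Signal()
        ++count;
    }
    assert(count == 2 && a == "first!" && b == "second!");
}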

View File

@@ -134,6 +134,10 @@ enum class KMemoryPermission : u8 {
};
DECLARE_ENUM_FLAG_OPERATORS(KMemoryPermission);
constexpr KMemoryPermission ConvertToKMemoryPermission(Svc::MemoryPermission perm) {
return static_cast<KMemoryPermission>(perm);
}
enum class KMemoryAttribute : u8 {
None = 0x00,
Mask = 0x7F,

View File

@@ -7,8 +7,8 @@
namespace Kernel {
KMemoryBlockManager::KMemoryBlockManager(VAddr start_addr, VAddr end_addr)
: start_addr{start_addr}, end_addr{end_addr} {
KMemoryBlockManager::KMemoryBlockManager(VAddr start_addr_, VAddr end_addr_)
: start_addr{start_addr_}, end_addr{end_addr_} {
const u64 num_pages{(end_addr - start_addr) / PageSize};
memory_block_tree.emplace_back(start_addr, num_pages, KMemoryState::Free,
KMemoryPermission::None, KMemoryAttribute::None);
@@ -17,8 +17,8 @@ KMemoryBlockManager::KMemoryBlockManager(VAddr start_addr, VAddr end_addr)
KMemoryBlockManager::iterator KMemoryBlockManager::FindIterator(VAddr addr) {
auto node{memory_block_tree.begin()};
while (node != end()) {
const VAddr end_addr{node->GetNumPages() * PageSize + node->GetAddress()};
if (node->GetAddress() <= addr && end_addr - 1 >= addr) {
const VAddr node_end_addr{node->GetNumPages() * PageSize + node->GetAddress()};
if (node->GetAddress() <= addr && node_end_addr - 1 >= addr) {
return node;
}
node = std::next(node);
@@ -67,7 +67,7 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
KMemoryPermission prev_perm, KMemoryAttribute prev_attribute,
KMemoryState state, KMemoryPermission perm,
KMemoryAttribute attribute) {
const VAddr end_addr{addr + num_pages * PageSize};
const VAddr update_end_addr{addr + num_pages * PageSize};
iterator node{memory_block_tree.begin()};
prev_attribute |= KMemoryAttribute::IpcAndDeviceMapped;
@@ -78,7 +78,7 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
const VAddr cur_addr{block->GetAddress()};
const VAddr cur_end_addr{block->GetNumPages() * PageSize + cur_addr};
if (addr < cur_end_addr && cur_addr < end_addr) {
if (addr < cur_end_addr && cur_addr < update_end_addr) {
if (!block->HasProperties(prev_state, prev_perm, prev_attribute)) {
node = next_node;
continue;
@@ -89,8 +89,8 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
memory_block_tree.insert(node, block->Split(addr));
}
if (end_addr < cur_end_addr) {
new_node = memory_block_tree.insert(node, block->Split(end_addr));
if (update_end_addr < cur_end_addr) {
new_node = memory_block_tree.insert(node, block->Split(update_end_addr));
}
new_node->Update(state, perm, attribute);
@@ -98,7 +98,7 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
MergeAdjacent(new_node, next_node);
}
if (cur_end_addr - 1 >= end_addr - 1) {
if (cur_end_addr - 1 >= update_end_addr - 1) {
break;
}
@@ -108,7 +108,7 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState state,
KMemoryPermission perm, KMemoryAttribute attribute) {
const VAddr end_addr{addr + num_pages * PageSize};
const VAddr update_end_addr{addr + num_pages * PageSize};
iterator node{memory_block_tree.begin()};
while (node != memory_block_tree.end()) {
@@ -117,15 +117,15 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
const VAddr cur_addr{block->GetAddress()};
const VAddr cur_end_addr{block->GetNumPages() * PageSize + cur_addr};
if (addr < cur_end_addr && cur_addr < end_addr) {
if (addr < cur_end_addr && cur_addr < update_end_addr) {
iterator new_node{node};
if (addr > cur_addr) {
memory_block_tree.insert(node, block->Split(addr));
}
if (end_addr < cur_end_addr) {
new_node = memory_block_tree.insert(node, block->Split(end_addr));
if (update_end_addr < cur_end_addr) {
new_node = memory_block_tree.insert(node, block->Split(update_end_addr));
}
new_node->Update(state, perm, attribute);
@@ -133,7 +133,7 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
MergeAdjacent(new_node, next_node);
}
if (cur_end_addr - 1 >= end_addr - 1) {
if (cur_end_addr - 1 >= update_end_addr - 1) {
break;
}
@@ -143,7 +143,7 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
void KMemoryBlockManager::UpdateLock(VAddr addr, std::size_t num_pages, LockFunc&& lock_func,
KMemoryPermission perm) {
const VAddr end_addr{addr + num_pages * PageSize};
const VAddr update_end_addr{addr + num_pages * PageSize};
iterator node{memory_block_tree.begin()};
while (node != memory_block_tree.end()) {
@@ -152,15 +152,15 @@ void KMemoryBlockManager::UpdateLock(VAddr addr, std::size_t num_pages, LockFunc
const VAddr cur_addr{block->GetAddress()};
const VAddr cur_end_addr{block->GetNumPages() * PageSize + cur_addr};
if (addr < cur_end_addr && cur_addr < end_addr) {
if (addr < cur_end_addr && cur_addr < update_end_addr) {
iterator new_node{node};
if (addr > cur_addr) {
memory_block_tree.insert(node, block->Split(addr));
}
if (end_addr < cur_end_addr) {
new_node = memory_block_tree.insert(node, block->Split(end_addr));
if (update_end_addr < cur_end_addr) {
new_node = memory_block_tree.insert(node, block->Split(update_end_addr));
}
lock_func(new_node, perm);
@@ -168,7 +168,7 @@ void KMemoryBlockManager::UpdateLock(VAddr addr, std::size_t num_pages, LockFunc
MergeAdjacent(new_node, next_node);
}
if (cur_end_addr - 1 >= end_addr - 1) {
if (cur_end_addr - 1 >= update_end_addr - 1) {
break;
}
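
Each Update/UpdateLock loop above decides whether a block intersects the half-open range [addr, addr + num_pages * PageSize) with the test addr < cur_end_addr && cur_addr < update_end_addr, then splits the block at whichever boundary falls inside it. A worked example of that overlap test with made-up addresses:

#include <cstdint>

// Half-open interval overlap, as used when deciding whether a memory block is
// affected by an Update() range (values below are made up for illustration).
constexpr bool Overlaps(std::uint64_t a_begin, std::uint64_t a_end,
                        std::uint64_t b_begin, std::uint64_t b_end) {
    return a_begin < b_end && b_begin < a_end;
}

int main() {
    constexpr std::uint64_t PageSize = 0x1000;
    constexpr std::uint64_t block_begin = 0x10000;
    constexpr std::uint64_t block_end = block_begin + 8 * PageSize; // 8-page block
    // Update range covering pages 2..3 of the block: overlaps, so the block is split.
    static_assert(Overlaps(block_begin, block_end, 0x12000, 0x14000));
    // Update range starting exactly at block_end: no overlap with half-open intervals.
    static_assert(!Overlaps(block_begin, block_end, block_end, block_end + PageSize));
}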

View File

@@ -19,7 +19,7 @@ public:
using const_iterator = MemoryBlockTree::const_iterator;
public:
KMemoryBlockManager(VAddr start_addr, VAddr end_addr);
KMemoryBlockManager(VAddr start_addr_, VAddr end_addr_);
iterator end() {
return memory_block_tree.end();

View File

@@ -82,9 +82,9 @@ public:
type_id = type;
}
constexpr bool Contains(u64 address) const {
constexpr bool Contains(u64 addr) const {
ASSERT(this->GetEndAddress() != 0);
return this->GetAddress() <= address && address <= this->GetLastAddress();
return this->GetAddress() <= addr && addr <= this->GetLastAddress();
}
constexpr bool IsDerivedFrom(u32 type) const {

View File

@@ -17,7 +17,7 @@ class KPageLinkedList final {
public:
class Node final {
public:
constexpr Node(u64 addr, std::size_t num_pages) : addr{addr}, num_pages{num_pages} {}
constexpr Node(u64 addr_, std::size_t num_pages_) : addr{addr_}, num_pages{num_pages_} {}
constexpr u64 GetAddress() const {
return addr;

View File

@@ -11,11 +11,11 @@
#include "core/hle/kernel/k_memory_block_manager.h"
#include "core/hle/kernel/k_page_linked_list.h"
#include "core/hle/kernel/k_page_table.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/k_resource_limit.h"
#include "core/hle/kernel/k_scoped_resource_reservation.h"
#include "core/hle/kernel/k_system_control.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/svc_results.h"
#include "core/memory.h"
@@ -58,7 +58,7 @@ constexpr std::size_t GetSizeInRange(const KMemoryInfo& info, VAddr start, VAddr
} // namespace
KPageTable::KPageTable(Core::System& system) : system{system} {}
KPageTable::KPageTable(Core::System& system_) : system{system_} {}
ResultCode KPageTable::InitializeForProcess(FileSys::ProgramAddressSpaceType as_type,
bool enable_aslr, VAddr code_addr,
@@ -420,7 +420,7 @@ ResultCode KPageTable::MapPhysicalMemory(VAddr addr, std::size_t size) {
remaining_size);
if (!memory_reservation.Succeeded()) {
LOG_ERROR(Kernel, "Could not reserve remaining {:X} bytes", remaining_size);
return ResultResourceLimitedExceeded;
return ResultLimitReached;
}
KPageLinkedList page_linked_list;
@@ -578,7 +578,7 @@ ResultCode KPageTable::Unmap(VAddr dst_addr, VAddr src_addr, std::size_t size) {
AddRegionToPages(dst_addr, num_pages, dst_pages);
if (!dst_pages.IsEqual(src_pages)) {
return ResultInvalidMemoryRange;
return ResultInvalidMemoryRegion;
}
{
@@ -641,6 +641,45 @@ ResultCode KPageTable::MapPages(VAddr addr, KPageLinkedList& page_linked_list, K
return RESULT_SUCCESS;
}
ResultCode KPageTable::UnmapPages(VAddr addr, const KPageLinkedList& page_linked_list) {
VAddr cur_addr{addr};
for (const auto& node : page_linked_list.Nodes()) {
const std::size_t num_pages{(addr - cur_addr) / PageSize};
if (const auto result{
Operate(addr, num_pages, KMemoryPermission::None, OperationType::Unmap)};
result.IsError()) {
return result;
}
cur_addr += node.GetNumPages() * PageSize;
}
return RESULT_SUCCESS;
}
ResultCode KPageTable::UnmapPages(VAddr addr, KPageLinkedList& page_linked_list,
KMemoryState state) {
std::lock_guard lock{page_table_lock};
const std::size_t num_pages{page_linked_list.GetNumPages()};
const std::size_t size{num_pages * PageSize};
if (!CanContain(addr, size, state)) {
return ResultInvalidCurrentMemory;
}
if (IsRegionMapped(addr, num_pages * PageSize)) {
return ResultInvalidCurrentMemory;
}
CASCADE_CODE(UnmapPages(addr, page_linked_list));
block_manager->Update(addr, num_pages, state, KMemoryPermission::None);
return RESULT_SUCCESS;
}
ResultCode KPageTable::SetCodeMemoryPermission(VAddr addr, std::size_t size,
KMemoryPermission perm) {
@@ -790,7 +829,7 @@ ResultVal<VAddr> KPageTable::SetHeapSize(std::size_t size) {
if (!memory_reservation.Succeeded()) {
LOG_ERROR(Kernel, "Could not reserve heap extension of size {:X} bytes", delta);
return ResultResourceLimitedExceeded;
return ResultLimitReached;
}
KPageLinkedList page_linked_list;
@@ -867,8 +906,8 @@ ResultCode KPageTable::LockForDeviceAddressSpace(VAddr addr, std::size_t size) {
block_manager->UpdateLock(
addr, size / PageSize,
[](KMemoryBlockManager::iterator block, KMemoryPermission perm) {
block->ShareToDevice(perm);
[](KMemoryBlockManager::iterator block, KMemoryPermission permission) {
block->ShareToDevice(permission);
},
perm);
@@ -890,8 +929,8 @@ ResultCode KPageTable::UnlockForDeviceAddressSpace(VAddr addr, std::size_t size)
block_manager->UpdateLock(
addr, size / PageSize,
[](KMemoryBlockManager::iterator block, KMemoryPermission perm) {
block->UnshareToDevice(perm);
[](KMemoryBlockManager::iterator block, KMemoryPermission permission) {
block->UnshareToDevice(permission);
},
perm);
@@ -1067,7 +1106,7 @@ constexpr std::size_t KPageTable::GetRegionSize(KMemoryState state) const {
}
}
constexpr bool KPageTable::CanContain(VAddr addr, std::size_t size, KMemoryState state) const {
bool KPageTable::CanContain(VAddr addr, std::size_t size, KMemoryState state) const {
const VAddr end{addr + size};
const VAddr last{end - 1};
const VAddr region_start{GetRegionAddress(state)};

Some files were not shown because too many files have changed in this diff.