Compare commits

..

61 Commits

Author SHA1 Message Date
jacky400
87435e1088 Revert "buffer_cache: reset cached write bits after flushing invalidations" 2022-06-29 00:33:32 +08:00
bunnei
c78f6d4f20 Merge pull request #8504 from comex/mesosphere-current-process
Support `InfoType_MesosphereCurrentProcess`
2022-06-27 13:05:07 -07:00
bunnei
abfd690601 Merge pull request #8475 from liamwhite/x18
kernel: make current thread pointer thread local
2022-06-26 11:38:48 -07:00
comex
bf7e78795f Re-add missing case and braces, and trim whitespace 2022-06-25 18:01:56 -07:00
comex
a14438d013 Update src/core/hle/kernel/svc.cpp
Co-authored-by: liamwhite <liamwhite@users.noreply.github.com>
2022-06-25 18:00:29 -07:00
comex
48737a4bb2 Support InfoType_MesosphereCurrentProcess 2022-06-25 16:23:23 -07:00
bunnei
b321c39371 Merge pull request #8500 from liamwhite/poke
gdbstub: fix register pokes
2022-06-25 12:31:20 -07:00
Liam
19f475fd70 gdbstub: fix register pokes 2022-06-25 12:07:20 -04:00
Liam
2c56e94702 kernel: make current thread pointer thread local 2022-06-23 00:28:00 -04:00
bunnei
95b844dbae Merge pull request #8491 from Morph1984/extra-assert
KPageTable: Remove extraneous assert
2022-06-22 14:47:07 -07:00
bunnei
9da4e62573 Merge pull request #8483 from liamwhite/fire-emblem-three-semaphores
kernel: wait for threads to stop on pause
2022-06-22 14:46:33 -07:00
Morph
1c8f6ba18f KPageTable: Remove extraneous assert
Since start is always 0 and VAddr is unsigned, we can safely remove this assert.
2022-06-21 21:28:54 -04:00
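
As a hedged illustration of the reasoning above (the shape of the removed check is hypothetical, not the exact yuzu line): with `start` fixed at 0 and `VAddr` unsigned, a lower-bound comparison can never fail, so the assert carried no information.

#include <cstdint>

using VAddr = std::uint64_t; // assumption: VAddr is an unsigned 64-bit address alias

constexpr VAddr start = 0;

// An unsigned address can never fall below an unsigned zero bound.
constexpr bool LowerBoundHolds(VAddr addr) {
    return addr >= start;
}

static_assert(LowerBoundHolds(0) && LowerBoundHolds(~VAddr{0}));
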
Morph
ab0e71d7cb Merge pull request #8455 from lat9nq/mingw-clang
ci/windows: Use Clang for MinGW builds
2022-06-21 20:21:13 -04:00
bunnei
737c446fc1 Merge pull request #8432 from liamwhite/watchpoint
core/debugger: memory breakpoint support
2022-06-21 16:04:57 -07:00
bunnei
73e13aa090 Merge pull request #8468 from liamwhite/dispatch-tracking
kernel: fix some uses of disable_count
2022-06-21 15:30:27 -07:00
liamwhite
0d5792cc57 Merge pull request #8487 from german77/system-button
service: am: Stub PerformSystemButtonPressingIfInFocus
2022-06-20 16:59:26 -04:00
Narr the Reg
f37b2e6f10 service: am: Stub PerformSystemButtonPressingIfInFocus
Used by Ring Fit Adventure
2022-06-20 12:35:58 -05:00
Liam
24d7aaf43c kernel: wait for threads to stop on pause 2022-06-18 16:54:33 -04:00
Morph
5b2b15091f Merge pull request #8476 from liamwhite/gpu-wasnt-ready
core: fix initialization in single core, sync GPU mode
2022-06-17 03:08:15 -04:00
lat9nq
c42fde2a37 ci/windows: Build using Clang
Uses the MinGWClangCross toolchain script to build yuzu. Disables our
bundled SDL2 in favor of the system packages, which have been modified
to not use `-mwindows`. Also sets `-e` to stop the script on an error
(as opposed to packaging nothing).

Uses LLVM's linker for linking yuzu. Adds -femulated-tls due to a
libstdc++ incompatibility between GCC and Clang in vulkan_common.
2022-06-16 23:57:39 -04:00
lat9nq
fef3d8acb5 CMakeModules: Add MinGWClangCross
Specifies the programs we need for cross-compiling to Windows from
Linux using LLVM's compilers. Based on MinGWCross.
2022-06-16 23:57:39 -04:00
lat9nq
e56410b404 ci/windows: Split up cmake command
Improves readability.
2022-06-16 23:57:39 -04:00
Liam
a6371fb69d core: fix initialization in single core, sync GPU mode 2022-06-16 23:43:35 -04:00
Morph
a33e7c13fa Merge pull request #8472 from german77/tace
common: param_package: Demote DEBUG to TRACE for getters
2022-06-16 16:43:32 -04:00
Morph
945f3222ae Merge pull request #8474 from DCNick3/yuzu-cmd-respect-log-filter
Make yuzu-cmd respect log_filter setting
2022-06-16 16:43:18 -04:00
Nikita Strygin
9e384ed54b Make yuzu-cmd respect log_filter setting
Because the logging infrastructure is initialized before the config is
loaded, it reads the default log_filter setting and ignores the one set
in the config. Changing log_filter after logging has been initialized
requires a few additional calls.
2022-06-16 23:39:50 +03:00
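
A minimal sketch of the kind of fix described, assuming yuzu's `Common::Log::Filter`, `Common::Log::SetGlobalFilter`, and `Settings::values.log_filter` interfaces behave as in the common logging backend (the exact call site in yuzu-cmd may differ):

#include "common/logging/backend.h"
#include "common/logging/filter.h"
#include "common/settings.h"

// Re-apply the configured filter once the config file has actually been parsed,
// overriding the default filter picked up when logging was first initialized.
static void ReapplyLogFilter() {
    Common::Log::Filter filter;
    filter.ParseFilterString(Settings::values.log_filter.GetValue());
    Common::Log::SetGlobalFilter(filter);
}
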
liamwhite
561f5c9c14 Merge pull request #8473 from DCNick3/implement-exit-process
Implement ExitProcess svc
2022-06-16 15:45:02 -04:00
Nikita Strygin
cf7e4bda92 Implement ExitProcess svc
Currently this just stops all emulation.
This works under the assumption that only the application will use
ExitProcess, with services not touching it.
If the application exits, it makes sense to end the emulation.
2022-06-16 21:35:34 +03:00
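
A heavily hedged sketch of the behavior described above; the names and wiring are assumptions for illustration, not the PR's exact code:

#include "common/logging/log.h"
#include "core/core.h"

// Hypothetical wiring: the SVC handler simply requests that emulation end.
void ExitProcess(Core::System& system) {
    LOG_INFO(Kernel_SVC, "called");
    system.Exit(); // assumption: System exposes a shutdown/exit request; this stops all emulation
}
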
Liam
208ed712f4 core/debugger: memory breakpoint support 2022-06-16 13:18:07 -04:00
Narr the Reg
d1f2f5f146 common: param_package: Demote DEBUG to TRACE for getters 2022-06-16 10:27:59 -05:00
Liam
744a208763 kernel: fix some uses of disable_count 2022-06-15 20:53:49 -04:00
Fernando S
f86b770ff7 Merge pull request #8457 from liamwhite/kprocess-suspend
kernel: implement KProcess suspension
2022-06-16 02:41:12 +02:00
liamwhite
0ae4eae9a6 Merge pull request #8460 from Morph1984/bounded-q
bounded_threadsafe_queue: Use constexpr capacity and mask
2022-06-15 19:39:22 -04:00
Morph
25429998e3 bounded_threadsafe_queue: Use constexpr capacity and mask
While this is the primary change, we also:
- Remove the mpsc namespace and rename Queue to MPSCQueue
- Make Slot a private struct within MPSCQueue
- Remove the AlignedAllocator template argument, as we use std::allocator
- Replace instances of mask + 1 with capacity, and mask + 2 with capacity + 1
2022-06-15 16:59:13 -04:00
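
A brief usage sketch of the reworked queue, based on the interface visible in the bounded_threadsafe_queue diff further down (the capacity template parameter defaults to 0x400 and must be a power of two, since indexing uses `i & mask`); the producer/consumer functions are hypothetical:

#include <stop_token>

#include "common/bounded_threadsafe_queue.h"

Common::MPSCQueue<int> queue; // capacity is now a compile-time constant

void Producer() {
    queue.Push(42); // any number of producers may push concurrently
}

void Consumer(std::stop_token stop) {
    int value{};
    queue.Pop(value, stop); // single consumer; blocks until an item arrives or stop is requested
}
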
bunnei
5ace5c1b7a Merge pull request #8317 from german77/notifa
service: notifa: Implement most of this service
2022-06-15 09:53:50 -07:00
Mai
23514388ed Merge pull request #8464 from liamwhite/break-debug
kernel: notify debugger on break SVC
2022-06-15 11:55:54 -04:00
Mai
f117351783 Merge pull request #8465 from Morph1984/why-msvc
vk_compute_pass: Explicitly cast to VkAccessFlags
2022-06-15 11:55:40 -04:00
Morph
4572634a4e vk_compute_pass: Explicitly cast to VkAccessFlags
According to the standard, a narrowing conversion is an implicit conversion from an integer or unscoped enumeration type to an integer type that cannot represent all the values of the original type, except when the value is a literal or constant expression.
MSVC, unlike GCC or Clang, determines this to be a narrowing conversion despite the enumeration exclusively containing values that fit within the range of a 32-bit integer, emitting a warning since designated initializers prohibit narrowing conversions.
To solve this, explicitly cast to the type we are initializing.
2022-06-15 07:12:16 -04:00
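
A self-contained illustration of the warning class being worked around; the enum and struct mirror Vulkan's unscoped `VkAccessFlagBits`/`VkAccessFlags` pattern and are hypothetical, not the actual yuzu code:

#include <cstdint>

enum AccessFlagBits { ACCESS_SHADER_WRITE_BIT = 0x00000040 }; // underlying type is int, as with Vulkan's C enums
using AccessFlags = std::uint32_t;

struct BufferBarrier {
    AccessFlags src_access_mask;
};

// MSVC treats the enum-to-integer conversion inside a designated initializer as
// narrowing, unlike GCC/Clang, so the value is cast explicitly.
const BufferBarrier barrier{
    .src_access_mask = static_cast<AccessFlags>(ACCESS_SHADER_WRITE_BIT),
};
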
Mai
103997ee56 Merge pull request #8383 from Morph1984/shadow-of-the-past
yuzu: Make variable shadowing a compile-time error
2022-06-14 21:08:58 -04:00
Mai
c9de5474bf Merge pull request #8462 from liamwhite/dynarmic-profile
core: centralize profile scope for Dynarmic
2022-06-14 21:07:47 -04:00
Liam
20eab9fed9 core: centralize profile scope for Dynarmic 2022-06-14 18:19:04 -04:00
Morph
7620e1a631 externals: Update cpp-httplib to latest 2022-06-14 14:09:51 -04:00
Morph
0eeee431dc main: Eliminate variable shadowing 2022-06-14 14:09:51 -04:00
Liam
888f499188 kernel: implement KProcess suspension 2022-06-14 10:04:11 -04:00
Morph
742f021fdf wait_tree: Eliminate variable shadowing 2022-06-14 08:30:09 -04:00
Morph
95bcf6ac38 configure_ringcon: Eliminate variable shadowing 2022-06-14 08:30:09 -04:00
Morph
e371961219 configure_touch_from_button: Eliminate variable shadowing 2022-06-14 08:30:09 -04:00
Morph
5503338f21 configure_per_game: Eliminate variable shadowing 2022-06-14 08:30:08 -04:00
Morph
fe7184c2a8 configure_input_player: Eliminate variable shadowing 2022-06-14 08:30:08 -04:00
Morph
1c83014526 configure_dialog: Eliminate variable shadowing 2022-06-14 08:30:08 -04:00
Morph
2d903e3ce6 bootmanager: Eliminate variable shadowing 2022-06-14 08:30:08 -04:00
Morph
e29e8eec2f game_list: Eliminate variable shadowing 2022-06-14 08:30:07 -04:00
Morph
8b55f2c615 externals: microprofileui: Eliminate variable shadowing 2022-06-14 05:52:15 -04:00
Morph
69d92a19a5 yuzu_cmd: Eliminate variable shadowing 2022-06-13 18:19:23 -04:00
Morph
8671aa8dd0 audio_core: Remove -Werror=unused-parameter
Removing this, as we don't enforce unused-parameter warnings elsewhere in the project and explicitly specify -Wno-unused-parameter in the main CMakeLists.
2022-06-13 18:19:23 -04:00
Morph
efc89c032b CMakeLists: Make variable shadowing a compile-time error
Now that the entire project is free of variable shadowing, we can enforce this as a compile-time error to prevent any further introduction of this class of logic bug.
2022-06-13 18:19:23 -04:00
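
A short illustration of the bug class this now rejects (GCC/Clang `-Werror=shadow`, MSVC `/we4456`); the function is hypothetical:

#include <vector>

int Sum(const std::vector<int>& values) {
    int total = 0;
    for (const int value : values) {
        const int total = value; // shadows the outer 'total'; now a compile-time error
        (void)total;
    }
    return total; // without the error, this silently stays 0
}
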
Morph
d0328f49f1 externals: microprofile: Eliminate variable shadowing 2022-06-13 18:19:23 -04:00
Morph
c1bd602e4c common: Eliminate variable shadowing
GCC and Clang treat variables declared within lambdas as potentially shadowing those outside the lambda, even when they are not captured in the lambda's capture list.
2022-06-13 18:19:22 -04:00
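
A minimal example of the case described, where the warning fires even though nothing is captured (per the commit message, both GCC and Clang report it under `-Wshadow`); the function is hypothetical:

int Example() {
    int count = 0;
    // The parameter shadows the outer local despite the empty capture list.
    const auto square = [](int count) { return count * count; };
    count = square(3);
    return count;
}
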
Morph
b3d6f7bdd8 yuzu: Eliminate variable shadowing 2022-06-13 18:19:22 -04:00
Morph
12156b199a web_service: Eliminate variable shadowing 2022-06-13 18:19:22 -04:00
german77
cc6a4bedfc service: notifa: Implement most of this service
Partially implements RegisterAlarmSetting, UpdateAlarmSetting, LoadApplicationParameter, and DeleteAlarmSetting.
Needed for `Fitness Boxing 2: Rhythm & Exercise` and `Ring Fit Adventure`.
2022-05-09 10:28:04 -05:00
84 changed files with 1414 additions and 671 deletions

View File

@@ -1,12 +1,27 @@
#!/bin/bash -ex
set -e
cd /yuzu
ccache -s
mkdir build || true && cd build
cmake .. -G Ninja -DDISPLAY_VERSION=$1 -DCMAKE_TOOLCHAIN_FILE="$(pwd)/../CMakeModules/MinGWCross.cmake" -DUSE_CCACHE=ON -DENABLE_COMPATIBILITY_LIST_DOWNLOAD=ON -DCMAKE_BUILD_TYPE=Release -DENABLE_QT_TRANSLATION=ON
ninja
LDFLAGS="-fuse-ld=lld"
# -femulated-tls required due to an incompatibility between GCC and Clang
# TODO(lat9nq): If this is widespread, we probably need to add this to CMakeLists where appropriate
cmake .. \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_CXX_FLAGS="-femulated-tls" \
-DCMAKE_TOOLCHAIN_FILE="$(pwd)/../CMakeModules/MinGWClangCross.cmake" \
-DDISPLAY_VERSION=$1 \
-DENABLE_COMPATIBILITY_LIST_DOWNLOAD=ON \
-DENABLE_QT_TRANSLATION=ON \
-DUSE_CCACHE=ON \
-DYUZU_USE_BUNDLED_SDL2=OFF \
-DYUZU_USE_EXTERNAL_SDL2=OFF \
-GNinja
ninja yuzu yuzu-cmd
ccache -s

View File

@@ -0,0 +1,55 @@
set(MINGW_PREFIX /usr/x86_64-w64-mingw32/)
set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_PROCESSOR x86_64)
set(CMAKE_FIND_ROOT_PATH ${MINGW_PREFIX})
set(SDL2_PATH ${MINGW_PREFIX})
set(MINGW_TOOL_PREFIX ${CMAKE_SYSTEM_PROCESSOR}-w64-mingw32-)
# Specify the cross compiler
set(CMAKE_C_COMPILER ${MINGW_TOOL_PREFIX}clang)
set(CMAKE_CXX_COMPILER ${MINGW_TOOL_PREFIX}clang++)
set(CMAKE_RC_COMPILER ${MINGW_TOOL_PREFIX}windres)
set(CMAKE_C_COMPILER_AR ${MINGW_TOOL_PREFIX}ar)
set(CMAKE_CXX_COMPILER_AR ${MINGW_TOOL_PREFIX}ar)
set(CMAKE_C_COMPILER_RANLIB ${MINGW_TOOL_PREFIX}ranlib)
set(CMAKE_CXX_COMPILER_RANLIB ${MINGW_TOOL_PREFIX}ranlib)
# Mingw tools
set(STRIP ${MINGW_TOOL_PREFIX}strip)
set(WINDRES ${MINGW_TOOL_PREFIX}windres)
set(ENV{PKG_CONFIG} ${MINGW_TOOL_PREFIX}pkg-config)
# ccache wrapper
option(USE_CCACHE "Use ccache for compilation" OFF)
if(USE_CCACHE)
find_program(CCACHE ccache)
if(CCACHE)
message(STATUS "Using ccache found in PATH")
set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE ${CCACHE})
set_property(GLOBAL PROPERTY RULE_LAUNCH_LINK ${CCACHE})
else(CCACHE)
message(WARNING "USE_CCACHE enabled, but no ccache found")
endif(CCACHE)
endif(USE_CCACHE)
# Search for programs in the build host directories
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
# Echo modified cmake vars to screen for debugging purposes
if(NOT DEFINED ENV{MINGW_DEBUG_INFO})
message("")
message("Custom cmake vars: (blank = system default)")
message("-----------------------------------------")
message("* CMAKE_C_COMPILER : ${CMAKE_C_COMPILER}")
message("* CMAKE_CXX_COMPILER : ${CMAKE_CXX_COMPILER}")
message("* CMAKE_RC_COMPILER : ${CMAKE_RC_COMPILER}")
message("* WINDRES : ${WINDRES}")
message("* ENV{PKG_CONFIG} : $ENV{PKG_CONFIG}")
message("* STRIP : ${STRIP}")
message("* USE_CCACHE : ${USE_CCACHE}")
message("")
# So that the debug info only appears once
set(ENV{MINGW_DEBUG_INFO} SHOWN)
endif()

View File

@@ -1246,7 +1246,7 @@ struct MicroProfileScopeLock
{
bool bUseLock;
std::recursive_mutex& m;
MicroProfileScopeLock(std::recursive_mutex& m) : bUseLock(g_bUseLock), m(m)
MicroProfileScopeLock(std::recursive_mutex& m_) : bUseLock(g_bUseLock), m(m_)
{
if(bUseLock)
m.lock();

View File

@@ -213,8 +213,8 @@ struct MicroProfileCustom
struct SOptionDesc
{
SOptionDesc(){}
SOptionDesc(uint8_t nSubType, uint8_t nIndex, const char* fmt, ...):nSubType(nSubType), nIndex(nIndex)
SOptionDesc()=default;
SOptionDesc(uint8_t nSubType_, uint8_t nIndex_, const char* fmt, ...):nSubType(nSubType_), nIndex(nIndex_)
{
va_list args;
va_start (args, fmt);
@@ -573,10 +573,10 @@ inline void MicroProfileToolTipMeta(MicroProfileStringArray* pToolTip)
}
else
{
for(int i = 0; i < MICROPROFILE_META_MAX; ++i)
for(int k = 0; k < MICROPROFILE_META_MAX; ++k)
{
nMetaSumInclusive[i] += nMetaSum[i];
nMetaSum[i] = 0;
nMetaSumInclusive[k] += nMetaSum[k];
nMetaSum[k] = 0;
}
}
break;
@@ -708,10 +708,10 @@ inline void MicroProfileDrawFloatTooltip(uint32_t nX, uint32_t nY, uint32_t nTok
if(UI.nMouseLeftMod)
{
int nIndex = (g_MicroProfileUI.LockedToolTipFront + MICROPROFILE_TOOLTIP_MAX_LOCKED - 1) % MICROPROFILE_TOOLTIP_MAX_LOCKED;
g_MicroProfileUI.nLockedToolTipColor[nIndex] = S.TimerInfo[nTimerId].nColor;
MicroProfileStringArrayCopy(&g_MicroProfileUI.LockedToolTips[nIndex], &ToolTip);
g_MicroProfileUI.LockedToolTipFront = nIndex;
int nToolTipIndex = (g_MicroProfileUI.LockedToolTipFront + MICROPROFILE_TOOLTIP_MAX_LOCKED - 1) % MICROPROFILE_TOOLTIP_MAX_LOCKED;
g_MicroProfileUI.nLockedToolTipColor[nToolTipIndex] = S.TimerInfo[nTimerId].nColor;
MicroProfileStringArrayCopy(&g_MicroProfileUI.LockedToolTips[nToolTipIndex], &ToolTip);
g_MicroProfileUI.LockedToolTipFront = nToolTipIndex;
}
}
@@ -917,9 +917,8 @@ inline void MicroProfileDrawDetailedBars(uint32_t nWidth, uint32_t nHeight, int
float fStart = floor(fMsBase*fRcpStep) * fStep;
for(float f = fStart; f < fMsEnd; )
{
float fStart = f;
float fNext = f + fStep;
MicroProfileDrawBox(((fStart-fMsBase) * fMsToScreen), nBaseY, (fNext-fMsBase) * fMsToScreen+1, nBaseY + nHeight, UI.nOpacityBackground | g_nMicroProfileBackColors[nColorIndex++ & 1]);
MicroProfileDrawBox(((f-fMsBase) * fMsToScreen), nBaseY, (fNext-fMsBase) * fMsToScreen+1, nBaseY + nHeight, UI.nOpacityBackground | g_nMicroProfileBackColors[nColorIndex++ & 1]);
f = fNext;
}
}
@@ -1116,9 +1115,9 @@ inline void MicroProfileDrawDetailedBars(uint32_t nWidth, uint32_t nHeight, int
nMaxStackDepth = MicroProfileMax(nMaxStackDepth, nStackPos);
float fMsStart = fToMs * MicroProfileLogTickDifference(nBaseTicks, nTickStart);
float fMsEnd = fToMs * MicroProfileLogTickDifference(nBaseTicks, nTickEnd);
float fMsEnd2 = fToMs * MicroProfileLogTickDifference(nBaseTicks, nTickEnd);
float fXStart = fMsStart * fMsToScreen;
float fXEnd = fMsEnd * fMsToScreen;
float fXEnd = fMsEnd2 * fMsToScreen;
float fYStart = (float)(nY + nStackPos * nYDelta);
float fYEnd = fYStart + (MICROPROFILE_DETAILED_BAR_HEIGHT);
float fXDist = MicroProfileMax(fXStart - fMouseX, fMouseX - fXEnd);
@@ -1269,22 +1268,22 @@ inline void MicroProfileDrawDetailedBars(uint32_t nWidth, uint32_t nHeight, int
if(UI.nRangeBegin != UI.nRangeEnd)
{
float fMsStart = fToMsCpu * MicroProfileLogTickDifference(nBaseTicksCpu, UI.nRangeBegin);
float fMsEnd = fToMsCpu * MicroProfileLogTickDifference(nBaseTicksCpu, UI.nRangeEnd);
float fMsEnd3 = fToMsCpu * MicroProfileLogTickDifference(nBaseTicksCpu, UI.nRangeEnd);
float fXStart = fMsStart * fMsToScreen;
float fXEnd = fMsEnd * fMsToScreen;
float fXEnd = fMsEnd3 * fMsToScreen;
MicroProfileDrawBox(fXStart, nBaseY, fXEnd, nHeight, MICROPROFILE_FRAME_COLOR_HIGHTLIGHT, MicroProfileBoxTypeFlat);
MicroProfileDrawLineVertical(fXStart, nBaseY, nHeight, MICROPROFILE_FRAME_COLOR_HIGHTLIGHT | 0x44000000);
MicroProfileDrawLineVertical(fXEnd, nBaseY, nHeight, MICROPROFILE_FRAME_COLOR_HIGHTLIGHT | 0x44000000);
fMsStart += fDetailedOffset;
fMsEnd += fDetailedOffset;
fMsEnd3 += fDetailedOffset;
char sBuffer[32];
uint32_t nLenStart = snprintf(sBuffer, sizeof(sBuffer)-1, "%.2fms", fMsStart);
float fStartTextWidth = (float)((1+MICROPROFILE_TEXT_WIDTH) * nLenStart);
float fStartTextX = fXStart - fStartTextWidth - 2;
MicroProfileDrawBox(fStartTextX, nBaseY, fStartTextX + fStartTextWidth + 2, MICROPROFILE_TEXT_HEIGHT + 2 + nBaseY, 0x33000000, MicroProfileBoxTypeFlat);
MicroProfileDrawText(fStartTextX+1, nBaseY, UINT32_MAX, sBuffer, nLenStart);
uint32_t nLenEnd = snprintf(sBuffer, sizeof(sBuffer)-1, "%.2fms", fMsEnd);
uint32_t nLenEnd = snprintf(sBuffer, sizeof(sBuffer)-1, "%.2fms", fMsEnd3);
MicroProfileDrawBox(fXEnd+1, nBaseY, fXEnd+1+(1+MICROPROFILE_TEXT_WIDTH) * nLenEnd + 3, MICROPROFILE_TEXT_HEIGHT + 2 + nBaseY, 0x33000000, MicroProfileBoxTypeFlat);
MicroProfileDrawText(fXEnd+2, nBaseY+1, UINT32_MAX, sBuffer, nLenEnd);
@@ -1297,9 +1296,9 @@ inline void MicroProfileDrawDetailedBars(uint32_t nWidth, uint32_t nHeight, int
if(UI.nRangeBeginGpu != UI.nRangeEndGpu)
{
float fMsStart = fToMsGpu * MicroProfileLogTickDifference(nBaseTicksGpu, UI.nRangeBeginGpu);
float fMsEnd = fToMsGpu * MicroProfileLogTickDifference(nBaseTicksGpu, UI.nRangeEndGpu);
float fMsEnd4 = fToMsGpu * MicroProfileLogTickDifference(nBaseTicksGpu, UI.nRangeEndGpu);
float fXStart = fMsStart * fMsToScreen;
float fXEnd = fMsEnd * fMsToScreen;
float fXEnd = fMsEnd4 * fMsToScreen;
MicroProfileDrawBox(fXStart, nBaseY, fXEnd, nHeight, MICROPROFILE_FRAME_COLOR_HIGHTLIGHT_GPU, MicroProfileBoxTypeFlat);
MicroProfileDrawLineVertical(fXStart, nBaseY, nHeight, MICROPROFILE_FRAME_COLOR_HIGHTLIGHT_GPU | 0x44000000);
MicroProfileDrawLineVertical(fXEnd, nBaseY, nHeight, MICROPROFILE_FRAME_COLOR_HIGHTLIGHT_GPU | 0x44000000);
@@ -1307,14 +1306,14 @@ inline void MicroProfileDrawDetailedBars(uint32_t nWidth, uint32_t nHeight, int
nBaseY += MICROPROFILE_TEXT_HEIGHT+1;
fMsStart += fDetailedOffset;
fMsEnd += fDetailedOffset;
fMsEnd4 += fDetailedOffset;
char sBuffer[32];
uint32_t nLenStart = snprintf(sBuffer, sizeof(sBuffer)-1, "%.2fms", fMsStart);
float fStartTextWidth = (float)((1+MICROPROFILE_TEXT_WIDTH) * nLenStart);
float fStartTextX = fXStart - fStartTextWidth - 2;
MicroProfileDrawBox(fStartTextX, nBaseY, fStartTextX + fStartTextWidth + 2, MICROPROFILE_TEXT_HEIGHT + 2 + nBaseY, 0x33000000, MicroProfileBoxTypeFlat);
MicroProfileDrawText(fStartTextX+1, nBaseY, UINT32_MAX, sBuffer, nLenStart);
uint32_t nLenEnd = snprintf(sBuffer, sizeof(sBuffer)-1, "%.2fms", fMsEnd);
uint32_t nLenEnd = snprintf(sBuffer, sizeof(sBuffer)-1, "%.2fms", fMsEnd4);
MicroProfileDrawBox(fXEnd+1, nBaseY, fXEnd+1+(1+MICROPROFILE_TEXT_WIDTH) * nLenEnd + 3, MICROPROFILE_TEXT_HEIGHT + 2 + nBaseY, 0x33000000, MicroProfileBoxTypeFlat);
MicroProfileDrawText(fXEnd+2, nBaseY+1, UINT32_MAX, sBuffer, nLenEnd);
}
@@ -1716,8 +1715,8 @@ bool MicroProfileDrawGraph(uint32_t nScreenWidth, uint32_t nScreenHeight)
uint32_t nTextCount = 0;
uint32_t nGraphIndex = (S.nGraphPut + MICROPROFILE_GRAPH_HISTORY - int(MICROPROFILE_GRAPH_HISTORY*(1.f - fMouseXPrc))) % MICROPROFILE_GRAPH_HISTORY;
uint32_t nX = UI.nMouseX;
uint32_t nY = UI.nMouseY + 20;
uint32_t nMouseX = UI.nMouseX;
uint32_t nMouseY = UI.nMouseY + 20;
for(uint32_t i = 0; i < MICROPROFILE_MAX_GRAPHS; ++i)
{
@@ -1736,7 +1735,7 @@ bool MicroProfileDrawGraph(uint32_t nScreenWidth, uint32_t nScreenHeight)
}
if(nTextCount)
{
MicroProfileDrawFloatWindow(nX, nY, Strings.ppStrings, Strings.nNumStrings, 0, pColors);
MicroProfileDrawFloatWindow(nMouseX, nMouseY, Strings.ppStrings, Strings.nNumStrings, 0, pColors);
}
if(UI.nMouseRight)
@@ -2321,8 +2320,8 @@ inline void MicroProfileDrawMenu(uint32_t nWidth, uint32_t nHeight)
uint32_t nMenuX[MICROPROFILE_MENU_MAX] = {0};
uint32_t nNumMenuItems = 0;
int nLen = snprintf(buffer, 127, "MicroProfile");
MicroProfileDrawText(nX, nY, UINT32_MAX, buffer, nLen);
int nMPTextLen = snprintf(buffer, 127, "MicroProfile");
MicroProfileDrawText(nX, nY, UINT32_MAX, buffer, nMPTextLen);
nX += (sizeof("MicroProfile")+2) * (MICROPROFILE_TEXT_WIDTH+1);
pMenuText[nNumMenuItems++] = "Mode";
pMenuText[nNumMenuItems++] = "Groups";
@@ -2438,16 +2437,16 @@ inline void MicroProfileDrawMenu(uint32_t nWidth, uint32_t nHeight)
int nNumLines = 0;
bool bSelected = false;
const char* pString = CB(nNumLines, &bSelected);
uint32_t nWidth = 0, nHeight = 0;
uint32_t nTextWidth = 0, nTextHeight = 0;
while(pString)
{
nWidth = MicroProfileMax<int>(nWidth, (int)strlen(pString));
nTextWidth = MicroProfileMax<int>(nTextWidth, (int)strlen(pString));
nNumLines++;
pString = CB(nNumLines, &bSelected);
}
nWidth = (2+nWidth) * (MICROPROFILE_TEXT_WIDTH+1);
nHeight = nNumLines * (MICROPROFILE_TEXT_HEIGHT+1);
if(UI.nMouseY <= nY + nHeight+0 && UI.nMouseY >= nY-0 && UI.nMouseX <= nX + nWidth + 0 && UI.nMouseX >= nX - 0)
nTextWidth = (2+nTextWidth) * (MICROPROFILE_TEXT_WIDTH+1);
nTextHeight = nNumLines * (MICROPROFILE_TEXT_HEIGHT+1);
if(UI.nMouseY <= nY + nTextHeight+0 && UI.nMouseY >= nY-0 && UI.nMouseX <= nX + nTextWidth + 0 && UI.nMouseX >= nX - 0)
{
UI.nActiveMenu = nMenu;
}
@@ -2455,21 +2454,21 @@ inline void MicroProfileDrawMenu(uint32_t nWidth, uint32_t nHeight)
{
UI.nActiveMenu = UINT32_MAX;
}
MicroProfileDrawBox(nX, nY, nX + nWidth, nY + nHeight, 0xff000000|g_nMicroProfileBackColors[1]);
MicroProfileDrawBox(nX, nY, nX + nTextWidth, nY + nTextHeight, 0xff000000|g_nMicroProfileBackColors[1]);
for(int i = 0; i < nNumLines; ++i)
{
bool bSelected = false;
const char* pString = CB(i, &bSelected);
bool bSelected2 = false;
const char* pString2 = CB(i, &bSelected2);
if(UI.nMouseY >= nY && UI.nMouseY < nY + MICROPROFILE_TEXT_HEIGHT + 1)
{
if(UI.nMouseLeft || UI.nMouseRight)
{
CBClick[nMenu](i);
}
MicroProfileDrawBox(nX, nY, nX + nWidth, nY + MICROPROFILE_TEXT_HEIGHT + 1, 0xff888888);
MicroProfileDrawBox(nX, nY, nX + nTextWidth, nY + MICROPROFILE_TEXT_HEIGHT + 1, 0xff888888);
}
int nLen = snprintf(buffer, SBUF_SIZE-1, "%c %s", bSelected ? '*' : ' ' ,pString);
MicroProfileDrawText(nX, nY, UINT32_MAX, buffer, nLen);
int nTextLen = snprintf(buffer, SBUF_SIZE-1, "%c %s", bSelected2 ? '*' : ' ' ,pString2);
MicroProfileDrawText(nX, nY, UINT32_MAX, buffer, nTextLen);
nY += MICROPROFILE_TEXT_HEIGHT+1;
}
}
@@ -2605,7 +2604,7 @@ inline void MicroProfileDrawCustom(uint32_t nWidth, uint32_t nHeight)
for(uint32_t i = 0; i < nCount; ++i)
{
nOffsetY += (1+MICROPROFILE_TEXT_HEIGHT);
uint32_t nWidth = MicroProfileMin(nMaxWidth, (uint32_t)(nMaxWidth * pMs[i] * fRcpReference));
nWidth = MicroProfileMin(nMaxWidth, (uint32_t)(nMaxWidth * pMs[i] * fRcpReference));
MicroProfileDrawBox(nMaxOffsetX, nOffsetY, nMaxOffsetX+nWidth, nOffsetY+MICROPROFILE_TEXT_HEIGHT, pColors[i]|0xff000000);
}
}

View File

@@ -65,6 +65,10 @@ if (MSVC)
/we4305 # 'context': truncation from 'type1' to 'type2'
/we4388 # 'expression': signed/unsigned mismatch
/we4389 # 'operator': signed/unsigned mismatch
/we4456 # Declaration of 'identifier' hides previous local declaration
/we4457 # Declaration of 'identifier' hides function parameter
/we4458 # Declaration of 'identifier' hides class member
/we4459 # Declaration of 'identifier' hides global declaration
/we4505 # 'function': unreferenced local function has been removed
/we4547 # 'operator': operator before comma has no effect; expected operator with side-effect
/we4549 # 'operator1': operator before comma has no effect; did you intend 'operator2'?
@@ -92,6 +96,7 @@ else()
-Werror=missing-declarations
-Werror=missing-field-initializers
-Werror=reorder
-Werror=shadow
-Werror=sign-compare
-Werror=switch
-Werror=uninitialized

View File

@@ -49,9 +49,6 @@ if (NOT MSVC)
target_compile_options(audio_core PRIVATE
-Werror=conversion
-Werror=ignored-qualifiers
-Werror=shadow
-Werror=unused-parameter
-Werror=unused-variable
$<$<CXX_COMPILER_ID:GNU>:-Werror=unused-but-set-parameter>
$<$<CXX_COMPILER_ID:GNU>:-Werror=unused-but-set-variable>

View File

@@ -1,10 +1,7 @@
// SPDX-FileCopyrightText: Copyright (c) 2020 Erik Rigtorp <erik@rigtorp.se>
// SPDX-License-Identifier: MIT
#pragma once
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable : 4324)
#endif
#include <atomic>
#include <bit>
@@ -12,105 +9,63 @@
#include <memory>
#include <mutex>
#include <new>
#include <stdexcept>
#include <stop_token>
#include <type_traits>
#include <utility>
namespace Common {
namespace mpsc {
#if defined(__cpp_lib_hardware_interference_size)
constexpr size_t hardware_interference_size = std::hardware_destructive_interference_size;
#else
constexpr size_t hardware_interference_size = 64;
#endif
template <typename T>
using AlignedAllocator = std::allocator<T>;
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable : 4324)
#endif
template <typename T>
struct Slot {
~Slot() noexcept {
if (turn.test()) {
destroy();
}
}
template <typename... Args>
void construct(Args&&... args) noexcept {
static_assert(std::is_nothrow_constructible_v<T, Args&&...>,
"T must be nothrow constructible with Args&&...");
std::construct_at(reinterpret_cast<T*>(&storage), std::forward<Args>(args)...);
}
void destroy() noexcept {
static_assert(std::is_nothrow_destructible_v<T>, "T must be nothrow destructible");
std::destroy_at(reinterpret_cast<T*>(&storage));
}
T&& move() noexcept {
return reinterpret_cast<T&&>(storage);
}
// Align to avoid false sharing between adjacent slots
alignas(hardware_interference_size) std::atomic_flag turn{};
struct aligned_store {
struct type {
alignas(T) unsigned char data[sizeof(T)];
};
};
typename aligned_store::type storage;
};
template <typename T, typename Allocator = AlignedAllocator<Slot<T>>>
class Queue {
template <typename T, size_t capacity = 0x400>
class MPSCQueue {
public:
explicit Queue(const size_t capacity, const Allocator& allocator = Allocator())
: allocator_(allocator) {
if (capacity < 1) {
throw std::invalid_argument("capacity < 1");
}
// Ensure that the queue length is an integer power of 2
// This is so that idx(i) can be a simple i & mask_ insted of i % capacity
// https://github.com/rigtorp/MPMCQueue/pull/36
if (!std::has_single_bit(capacity)) {
throw std::invalid_argument("capacity must be an integer power of 2");
}
mask_ = capacity - 1;
explicit MPSCQueue() : allocator{std::allocator<Slot<T>>()} {
// Allocate one extra slot to prevent false sharing on the last slot
slots_ = allocator_.allocate(mask_ + 2);
slots = allocator.allocate(capacity + 1);
// Allocators are not required to honor alignment for over-aligned types
// (see http://eel.is/c++draft/allocator.requirements#10) so we verify
// alignment here
if (reinterpret_cast<uintptr_t>(slots_) % alignof(Slot<T>) != 0) {
allocator_.deallocate(slots_, mask_ + 2);
if (reinterpret_cast<uintptr_t>(slots) % alignof(Slot<T>) != 0) {
allocator.deallocate(slots, capacity + 1);
throw std::bad_alloc();
}
for (size_t i = 0; i < mask_ + 1; ++i) {
std::construct_at(&slots_[i]);
for (size_t i = 0; i < capacity; ++i) {
std::construct_at(&slots[i]);
}
static_assert(std::has_single_bit(capacity), "capacity must be an integer power of 2");
static_assert(alignof(Slot<T>) == hardware_interference_size,
"Slot must be aligned to cache line boundary to prevent false sharing");
static_assert(sizeof(Slot<T>) % hardware_interference_size == 0,
"Slot size must be a multiple of cache line size to prevent "
"false sharing between adjacent slots");
static_assert(sizeof(Queue) % hardware_interference_size == 0,
static_assert(sizeof(MPSCQueue) % hardware_interference_size == 0,
"Queue size must be a multiple of cache line size to "
"prevent false sharing between adjacent queues");
}
~Queue() noexcept {
for (size_t i = 0; i < mask_ + 1; ++i) {
slots_[i].~Slot();
~MPSCQueue() noexcept {
for (size_t i = 0; i < capacity; ++i) {
std::destroy_at(&slots[i]);
}
allocator_.deallocate(slots_, mask_ + 2);
allocator.deallocate(slots, capacity + 1);
}
// non-copyable and non-movable
Queue(const Queue&) = delete;
Queue& operator=(const Queue&) = delete;
// The queue must be both non-copyable and non-movable
MPSCQueue(const MPSCQueue&) = delete;
MPSCQueue& operator=(const MPSCQueue&) = delete;
MPSCQueue(MPSCQueue&&) = delete;
MPSCQueue& operator=(MPSCQueue&&) = delete;
void Push(const T& v) noexcept {
static_assert(std::is_nothrow_copy_constructible_v<T>,
@@ -125,8 +80,8 @@ public:
void Pop(T& v, std::stop_token stop) noexcept {
auto const tail = tail_.fetch_add(1);
auto& slot = slots_[idx(tail)];
if (false == slot.turn.test()) {
auto& slot = slots[idx(tail)];
if (!slot.turn.test()) {
std::unique_lock lock{cv_mutex};
cv.wait(lock, stop, [&slot] { return slot.turn.test(); });
}
@@ -137,12 +92,46 @@ public:
}
private:
template <typename U = T>
struct Slot {
~Slot() noexcept {
if (turn.test()) {
destroy();
}
}
template <typename... Args>
void construct(Args&&... args) noexcept {
static_assert(std::is_nothrow_constructible_v<U, Args&&...>,
"T must be nothrow constructible with Args&&...");
std::construct_at(reinterpret_cast<U*>(&storage), std::forward<Args>(args)...);
}
void destroy() noexcept {
static_assert(std::is_nothrow_destructible_v<U>, "T must be nothrow destructible");
std::destroy_at(reinterpret_cast<U*>(&storage));
}
U&& move() noexcept {
return reinterpret_cast<U&&>(storage);
}
// Align to avoid false sharing between adjacent slots
alignas(hardware_interference_size) std::atomic_flag turn{};
struct aligned_store {
struct type {
alignas(U) unsigned char data[sizeof(U)];
};
};
typename aligned_store::type storage;
};
template <typename... Args>
void emplace(Args&&... args) noexcept {
static_assert(std::is_nothrow_constructible_v<T, Args&&...>,
"T must be nothrow constructible with Args&&...");
auto const head = head_.fetch_add(1);
auto& slot = slots_[idx(head)];
auto& slot = slots[idx(head)];
slot.turn.wait(true);
slot.construct(std::forward<Args>(args)...);
slot.turn.test_and_set();
@@ -150,31 +139,29 @@ private:
}
constexpr size_t idx(size_t i) const noexcept {
return i & mask_;
return i & mask;
}
std::conditional_t<true, std::condition_variable_any, std::condition_variable> cv;
std::mutex cv_mutex;
size_t mask_;
Slot<T>* slots_;
[[no_unique_address]] Allocator allocator_;
static constexpr size_t mask = capacity - 1;
// Align to avoid false sharing between head_ and tail_
alignas(hardware_interference_size) std::atomic<size_t> head_{0};
alignas(hardware_interference_size) std::atomic<size_t> tail_{0};
std::mutex cv_mutex;
std::condition_variable_any cv;
Slot<T>* slots;
[[no_unique_address]] std::allocator<Slot<T>> allocator;
static_assert(std::is_nothrow_copy_assignable_v<T> || std::is_nothrow_move_assignable_v<T>,
"T must be nothrow copy or move assignable");
static_assert(std::is_nothrow_destructible_v<T>, "T must be nothrow destructible");
};
} // namespace mpsc
template <typename T, typename Allocator = mpsc::AlignedAllocator<mpsc::Slot<T>>>
using MPSCQueue = mpsc::Queue<T, Allocator>;
} // namespace Common
#ifdef _MSC_VER
#pragma warning(pop)
#endif
} // namespace Common

View File

@@ -33,9 +33,9 @@ void DetachedTasks::AddTask(std::function<void()> task) {
++instance->count;
std::thread([task{std::move(task)}]() {
task();
std::unique_lock lock{instance->mutex};
std::unique_lock thread_lock{instance->mutex};
--instance->count;
std::notify_all_at_thread_exit(instance->cv, std::move(lock));
std::notify_all_at_thread_exit(instance->cv, std::move(thread_lock));
}).detach();
}

View File

@@ -15,6 +15,9 @@ enum class PageType : u8 {
Unmapped,
/// Page is mapped to regular memory. This is the only type you can get pointers to.
Memory,
/// Page is mapped to regular memory, but inaccessible from CPU fastmem and must use
/// the callbacks.
DebugMemory,
/// Page is mapped to regular memory, but also needs to check for rasterizer cache flushing and
/// invalidation
RasterizerCachedMemory,

View File

@@ -76,7 +76,7 @@ std::string ParamPackage::Serialize() const {
std::string ParamPackage::Get(const std::string& key, const std::string& default_value) const {
auto pair = data.find(key);
if (pair == data.end()) {
LOG_DEBUG(Common, "key '{}' not found", key);
LOG_TRACE(Common, "key '{}' not found", key);
return default_value;
}
@@ -86,7 +86,7 @@ std::string ParamPackage::Get(const std::string& key, const std::string& default
int ParamPackage::Get(const std::string& key, int default_value) const {
auto pair = data.find(key);
if (pair == data.end()) {
LOG_DEBUG(Common, "key '{}' not found", key);
LOG_TRACE(Common, "key '{}' not found", key);
return default_value;
}
@@ -101,7 +101,7 @@ int ParamPackage::Get(const std::string& key, int default_value) const {
float ParamPackage::Get(const std::string& key, float default_value) const {
auto pair = data.find(key);
if (pair == data.end()) {
LOG_DEBUG(Common, "key {} not found", key);
LOG_TRACE(Common, "key {} not found", key);
return default_value;
}

View File

@@ -743,16 +743,11 @@ if (MSVC)
/we4244 # 'conversion': conversion from 'type1' to 'type2', possible loss of data
/we4245 # 'conversion': conversion from 'type1' to 'type2', signed/unsigned mismatch
/we4254 # 'operator': conversion from 'type1:field_bits' to 'type2:field_bits', possible loss of data
/we4456 # Declaration of 'identifier' hides previous local declaration
/we4457 # Declaration of 'identifier' hides function parameter
/we4458 # Declaration of 'identifier' hides class member
/we4459 # Declaration of 'identifier' hides global declaration
)
else()
target_compile_options(core PRIVATE
-Werror=conversion
-Werror=ignored-qualifiers
-Werror=shadow
$<$<CXX_COMPILER_ID:GNU>:-Werror=class-memaccess>
$<$<CXX_COMPILER_ID:GNU>:-Werror=unused-but-set-parameter>

View File

@@ -95,7 +95,7 @@ void ARM_Interface::Run() {
using Kernel::SuspendType;
while (true) {
Kernel::KThread* current_thread{system.Kernel().CurrentScheduler()->GetCurrentThread()};
Kernel::KThread* current_thread{Kernel::GetCurrentThreadPointer(system.Kernel())};
Dynarmic::HaltReason hr{};
// Notify the debugger and go to sleep if a step was performed
@@ -107,6 +107,7 @@ void ARM_Interface::Run() {
}
// Otherwise, run the thread.
system.EnterDynarmicProfile();
if (current_thread->GetStepState() == StepState::StepPending) {
hr = StepJit();
@@ -116,11 +117,19 @@ void ARM_Interface::Run() {
} else {
hr = RunJit();
}
system.ExitDynarmicProfile();
// Notify the debugger and go to sleep if a breakpoint was hit.
if (Has(hr, breakpoint)) {
RewindBreakpointInstruction();
system.GetDebugger().NotifyThreadStopped(current_thread);
current_thread->RequestSuspend(Kernel::SuspendType::Debug);
current_thread->RequestSuspend(SuspendType::Debug);
break;
}
if (Has(hr, watchpoint)) {
RewindBreakpointInstruction();
system.GetDebugger().NotifyThreadWatchpoint(current_thread, *HaltedWatchpoint());
current_thread->RequestSuspend(SuspendType::Debug);
break;
}
@@ -134,4 +143,36 @@ void ARM_Interface::Run() {
}
}
void ARM_Interface::LoadWatchpointArray(const WatchpointArray& wp) {
watchpoints = &wp;
}
const Kernel::DebugWatchpoint* ARM_Interface::MatchingWatchpoint(
VAddr addr, u64 size, Kernel::DebugWatchpointType access_type) const {
if (!watchpoints) {
return nullptr;
}
const VAddr start_address{addr};
const VAddr end_address{addr + size};
for (size_t i = 0; i < Core::Hardware::NUM_WATCHPOINTS; i++) {
const auto& watch{(*watchpoints)[i]};
if (end_address <= watch.start_address) {
continue;
}
if (start_address >= watch.end_address) {
continue;
}
if ((access_type & watch.type) == Kernel::DebugWatchpointType::None) {
continue;
}
return &watch;
}
return nullptr;
}
} // namespace Core

View File

@@ -5,6 +5,7 @@
#pragma once
#include <array>
#include <span>
#include <vector>
#include <dynarmic/interface/halt_reason.h>
@@ -19,13 +20,16 @@ struct PageTable;
namespace Kernel {
enum class VMAPermission : u8;
}
enum class DebugWatchpointType : u8;
struct DebugWatchpoint;
} // namespace Kernel
namespace Core {
class System;
class CPUInterruptHandler;
using CPUInterrupts = std::array<CPUInterruptHandler, Core::Hardware::NUM_CPU_CORES>;
using WatchpointArray = std::array<Kernel::DebugWatchpoint, Core::Hardware::NUM_WATCHPOINTS>;
/// Generic ARMv8 CPU interface
class ARM_Interface {
@@ -170,6 +174,7 @@ public:
virtual void SaveContext(ThreadContext64& ctx) = 0;
virtual void LoadContext(const ThreadContext32& ctx) = 0;
virtual void LoadContext(const ThreadContext64& ctx) = 0;
void LoadWatchpointArray(const WatchpointArray& wp);
/// Clears the exclusive monitor's state.
virtual void ClearExclusiveState() = 0;
@@ -198,18 +203,24 @@ public:
static constexpr Dynarmic::HaltReason break_loop = Dynarmic::HaltReason::UserDefined2;
static constexpr Dynarmic::HaltReason svc_call = Dynarmic::HaltReason::UserDefined3;
static constexpr Dynarmic::HaltReason breakpoint = Dynarmic::HaltReason::UserDefined4;
static constexpr Dynarmic::HaltReason watchpoint = Dynarmic::HaltReason::UserDefined5;
protected:
/// System context that this ARM interface is running under.
System& system;
CPUInterrupts& interrupt_handlers;
const WatchpointArray* watchpoints;
bool uses_wall_clock;
static void SymbolicateBacktrace(Core::System& system, std::vector<BacktraceEntry>& out);
const Kernel::DebugWatchpoint* MatchingWatchpoint(
VAddr addr, u64 size, Kernel::DebugWatchpointType access_type) const;
virtual Dynarmic::HaltReason RunJit() = 0;
virtual Dynarmic::HaltReason StepJit() = 0;
virtual u32 GetSvcNumber() const = 0;
virtual const Kernel::DebugWatchpoint* HaltedWatchpoint() const = 0;
virtual void RewindBreakpointInstruction() = 0;
};
} // namespace Core

View File

@@ -29,45 +29,62 @@ using namespace Common::Literals;
class DynarmicCallbacks32 : public Dynarmic::A32::UserCallbacks {
public:
explicit DynarmicCallbacks32(ARM_Dynarmic_32& parent_)
: parent{parent_}, memory(parent.system.Memory()) {}
: parent{parent_},
memory(parent.system.Memory()), debugger_enabled{parent.system.DebuggerEnabled()} {}
u8 MemoryRead8(u32 vaddr) override {
CheckMemoryAccess(vaddr, 1, Kernel::DebugWatchpointType::Read);
return memory.Read8(vaddr);
}
u16 MemoryRead16(u32 vaddr) override {
CheckMemoryAccess(vaddr, 2, Kernel::DebugWatchpointType::Read);
return memory.Read16(vaddr);
}
u32 MemoryRead32(u32 vaddr) override {
CheckMemoryAccess(vaddr, 4, Kernel::DebugWatchpointType::Read);
return memory.Read32(vaddr);
}
u64 MemoryRead64(u32 vaddr) override {
CheckMemoryAccess(vaddr, 8, Kernel::DebugWatchpointType::Read);
return memory.Read64(vaddr);
}
void MemoryWrite8(u32 vaddr, u8 value) override {
memory.Write8(vaddr, value);
if (CheckMemoryAccess(vaddr, 1, Kernel::DebugWatchpointType::Write)) {
memory.Write8(vaddr, value);
}
}
void MemoryWrite16(u32 vaddr, u16 value) override {
memory.Write16(vaddr, value);
if (CheckMemoryAccess(vaddr, 2, Kernel::DebugWatchpointType::Write)) {
memory.Write16(vaddr, value);
}
}
void MemoryWrite32(u32 vaddr, u32 value) override {
memory.Write32(vaddr, value);
if (CheckMemoryAccess(vaddr, 4, Kernel::DebugWatchpointType::Write)) {
memory.Write32(vaddr, value);
}
}
void MemoryWrite64(u32 vaddr, u64 value) override {
memory.Write64(vaddr, value);
if (CheckMemoryAccess(vaddr, 8, Kernel::DebugWatchpointType::Write)) {
memory.Write64(vaddr, value);
}
}
bool MemoryWriteExclusive8(u32 vaddr, u8 value, u8 expected) override {
return memory.WriteExclusive8(vaddr, value, expected);
return CheckMemoryAccess(vaddr, 1, Kernel::DebugWatchpointType::Write) &&
memory.WriteExclusive8(vaddr, value, expected);
}
bool MemoryWriteExclusive16(u32 vaddr, u16 value, u16 expected) override {
return memory.WriteExclusive16(vaddr, value, expected);
return CheckMemoryAccess(vaddr, 2, Kernel::DebugWatchpointType::Write) &&
memory.WriteExclusive16(vaddr, value, expected);
}
bool MemoryWriteExclusive32(u32 vaddr, u32 value, u32 expected) override {
return memory.WriteExclusive32(vaddr, value, expected);
return CheckMemoryAccess(vaddr, 4, Kernel::DebugWatchpointType::Write) &&
memory.WriteExclusive32(vaddr, value, expected);
}
bool MemoryWriteExclusive64(u32 vaddr, u64 value, u64 expected) override {
return memory.WriteExclusive64(vaddr, value, expected);
return CheckMemoryAccess(vaddr, 8, Kernel::DebugWatchpointType::Write) &&
memory.WriteExclusive64(vaddr, value, expected);
}
void InterpreterFallback(u32 pc, std::size_t num_instructions) override {
@@ -77,8 +94,8 @@ public:
}
void ExceptionRaised(u32 pc, Dynarmic::A32::Exception exception) override {
if (parent.system.DebuggerEnabled()) {
parent.jit.load()->Regs()[15] = pc;
if (debugger_enabled) {
parent.SaveContext(parent.breakpoint_context);
parent.jit.load()->HaltExecution(ARM_Interface::breakpoint);
return;
}
@@ -117,9 +134,26 @@ public:
return std::max<s64>(parent.system.CoreTiming().GetDowncount(), 0);
}
bool CheckMemoryAccess(VAddr addr, u64 size, Kernel::DebugWatchpointType type) {
if (!debugger_enabled) {
return true;
}
const auto match{parent.MatchingWatchpoint(addr, size, type)};
if (match) {
parent.SaveContext(parent.breakpoint_context);
parent.jit.load()->HaltExecution(ARM_Interface::watchpoint);
parent.halted_watchpoint = match;
return false;
}
return true;
}
ARM_Dynarmic_32& parent;
Core::Memory::Memory& memory;
std::size_t num_interpreted_instructions{};
bool debugger_enabled{};
static constexpr u64 minimum_run_cycles = 1000U;
};
@@ -154,6 +188,11 @@ std::shared_ptr<Dynarmic::A32::Jit> ARM_Dynarmic_32::MakeJit(Common::PageTable*
config.code_cache_size = 512_MiB;
config.far_code_offset = 400_MiB;
// Allow memory fault handling to work
if (system.DebuggerEnabled()) {
config.check_halt_on_memory_access = true;
}
// null_jit
if (!page_table) {
// Don't waste too much memory on null_jit
@@ -248,6 +287,14 @@ u32 ARM_Dynarmic_32::GetSvcNumber() const {
return svc_swi;
}
const Kernel::DebugWatchpoint* ARM_Dynarmic_32::HaltedWatchpoint() const {
return halted_watchpoint;
}
void ARM_Dynarmic_32::RewindBreakpointInstruction() {
LoadContext(breakpoint_context);
}
ARM_Dynarmic_32::ARM_Dynarmic_32(System& system_, CPUInterrupts& interrupt_handlers_,
bool uses_wall_clock_, ExclusiveMonitor& exclusive_monitor_,
std::size_t core_index_)

View File

@@ -72,6 +72,8 @@ protected:
Dynarmic::HaltReason RunJit() override;
Dynarmic::HaltReason StepJit() override;
u32 GetSvcNumber() const override;
const Kernel::DebugWatchpoint* HaltedWatchpoint() const override;
void RewindBreakpointInstruction() override;
private:
std::shared_ptr<Dynarmic::A32::Jit> MakeJit(Common::PageTable* page_table) const;
@@ -98,6 +100,10 @@ private:
// SVC callback
u32 svc_swi{};
// Watchpoint info
const Kernel::DebugWatchpoint* halted_watchpoint;
ThreadContext32 breakpoint_context;
};
} // namespace Core

View File

@@ -29,55 +29,76 @@ using namespace Common::Literals;
class DynarmicCallbacks64 : public Dynarmic::A64::UserCallbacks {
public:
explicit DynarmicCallbacks64(ARM_Dynarmic_64& parent_)
: parent{parent_}, memory(parent.system.Memory()) {}
: parent{parent_},
memory(parent.system.Memory()), debugger_enabled{parent.system.DebuggerEnabled()} {}
u8 MemoryRead8(u64 vaddr) override {
CheckMemoryAccess(vaddr, 1, Kernel::DebugWatchpointType::Read);
return memory.Read8(vaddr);
}
u16 MemoryRead16(u64 vaddr) override {
CheckMemoryAccess(vaddr, 2, Kernel::DebugWatchpointType::Read);
return memory.Read16(vaddr);
}
u32 MemoryRead32(u64 vaddr) override {
CheckMemoryAccess(vaddr, 4, Kernel::DebugWatchpointType::Read);
return memory.Read32(vaddr);
}
u64 MemoryRead64(u64 vaddr) override {
CheckMemoryAccess(vaddr, 8, Kernel::DebugWatchpointType::Read);
return memory.Read64(vaddr);
}
Vector MemoryRead128(u64 vaddr) override {
CheckMemoryAccess(vaddr, 16, Kernel::DebugWatchpointType::Read);
return {memory.Read64(vaddr), memory.Read64(vaddr + 8)};
}
void MemoryWrite8(u64 vaddr, u8 value) override {
memory.Write8(vaddr, value);
if (CheckMemoryAccess(vaddr, 1, Kernel::DebugWatchpointType::Write)) {
memory.Write8(vaddr, value);
}
}
void MemoryWrite16(u64 vaddr, u16 value) override {
memory.Write16(vaddr, value);
if (CheckMemoryAccess(vaddr, 2, Kernel::DebugWatchpointType::Write)) {
memory.Write16(vaddr, value);
}
}
void MemoryWrite32(u64 vaddr, u32 value) override {
memory.Write32(vaddr, value);
if (CheckMemoryAccess(vaddr, 4, Kernel::DebugWatchpointType::Write)) {
memory.Write32(vaddr, value);
}
}
void MemoryWrite64(u64 vaddr, u64 value) override {
memory.Write64(vaddr, value);
if (CheckMemoryAccess(vaddr, 8, Kernel::DebugWatchpointType::Write)) {
memory.Write64(vaddr, value);
}
}
void MemoryWrite128(u64 vaddr, Vector value) override {
memory.Write64(vaddr, value[0]);
memory.Write64(vaddr + 8, value[1]);
if (CheckMemoryAccess(vaddr, 16, Kernel::DebugWatchpointType::Write)) {
memory.Write64(vaddr, value[0]);
memory.Write64(vaddr + 8, value[1]);
}
}
bool MemoryWriteExclusive8(u64 vaddr, std::uint8_t value, std::uint8_t expected) override {
return memory.WriteExclusive8(vaddr, value, expected);
return CheckMemoryAccess(vaddr, 1, Kernel::DebugWatchpointType::Write) &&
memory.WriteExclusive8(vaddr, value, expected);
}
bool MemoryWriteExclusive16(u64 vaddr, std::uint16_t value, std::uint16_t expected) override {
return memory.WriteExclusive16(vaddr, value, expected);
return CheckMemoryAccess(vaddr, 2, Kernel::DebugWatchpointType::Write) &&
memory.WriteExclusive16(vaddr, value, expected);
}
bool MemoryWriteExclusive32(u64 vaddr, std::uint32_t value, std::uint32_t expected) override {
return memory.WriteExclusive32(vaddr, value, expected);
return CheckMemoryAccess(vaddr, 4, Kernel::DebugWatchpointType::Write) &&
memory.WriteExclusive32(vaddr, value, expected);
}
bool MemoryWriteExclusive64(u64 vaddr, std::uint64_t value, std::uint64_t expected) override {
return memory.WriteExclusive64(vaddr, value, expected);
return CheckMemoryAccess(vaddr, 8, Kernel::DebugWatchpointType::Write) &&
memory.WriteExclusive64(vaddr, value, expected);
}
bool MemoryWriteExclusive128(u64 vaddr, Vector value, Vector expected) override {
return memory.WriteExclusive128(vaddr, value, expected);
return CheckMemoryAccess(vaddr, 16, Kernel::DebugWatchpointType::Write) &&
memory.WriteExclusive128(vaddr, value, expected);
}
void InterpreterFallback(u64 pc, std::size_t num_instructions) override {
@@ -118,8 +139,8 @@ public:
case Dynarmic::A64::Exception::Yield:
return;
default:
if (parent.system.DebuggerEnabled()) {
parent.jit.load()->SetPC(pc);
if (debugger_enabled) {
parent.SaveContext(parent.breakpoint_context);
parent.jit.load()->HaltExecution(ARM_Interface::breakpoint);
return;
}
@@ -160,10 +181,27 @@ public:
return parent.system.CoreTiming().GetClockTicks();
}
bool CheckMemoryAccess(VAddr addr, u64 size, Kernel::DebugWatchpointType type) {
if (!debugger_enabled) {
return true;
}
const auto match{parent.MatchingWatchpoint(addr, size, type)};
if (match) {
parent.SaveContext(parent.breakpoint_context);
parent.jit.load()->HaltExecution(ARM_Interface::watchpoint);
parent.halted_watchpoint = match;
return false;
}
return true;
}
ARM_Dynarmic_64& parent;
Core::Memory::Memory& memory;
u64 tpidrro_el0 = 0;
u64 tpidr_el0 = 0;
bool debugger_enabled{};
static constexpr u64 minimum_run_cycles = 1000U;
};
@@ -214,6 +252,11 @@ std::shared_ptr<Dynarmic::A64::Jit> ARM_Dynarmic_64::MakeJit(Common::PageTable*
config.code_cache_size = 512_MiB;
config.far_code_offset = 400_MiB;
// Allow memory fault handling to work
if (system.DebuggerEnabled()) {
config.check_halt_on_memory_access = true;
}
// null_jit
if (!page_table) {
// Don't waste too much memory on null_jit
@@ -308,6 +351,14 @@ u32 ARM_Dynarmic_64::GetSvcNumber() const {
return svc_swi;
}
const Kernel::DebugWatchpoint* ARM_Dynarmic_64::HaltedWatchpoint() const {
return halted_watchpoint;
}
void ARM_Dynarmic_64::RewindBreakpointInstruction() {
LoadContext(breakpoint_context);
}
ARM_Dynarmic_64::ARM_Dynarmic_64(System& system_, CPUInterrupts& interrupt_handlers_,
bool uses_wall_clock_, ExclusiveMonitor& exclusive_monitor_,
std::size_t core_index_)

View File

@@ -66,6 +66,8 @@ protected:
Dynarmic::HaltReason RunJit() override;
Dynarmic::HaltReason StepJit() override;
u32 GetSvcNumber() const override;
const Kernel::DebugWatchpoint* HaltedWatchpoint() const override;
void RewindBreakpointInstruction() override;
private:
std::shared_ptr<Dynarmic::A64::Jit> MakeJit(Common::PageTable* page_table,
@@ -91,6 +93,10 @@ private:
// SVC callback
u32 svc_swi{};
// Breakpoint info
const Kernel::DebugWatchpoint* halted_watchpoint;
ThreadContext64 breakpoint_context;
};
} // namespace Core

View File

@@ -138,7 +138,6 @@ struct System::Impl {
kernel.Suspend(false);
core_timing.SyncPause(false);
cpu_manager.Pause(false);
is_paused = false;
return status;
@@ -150,25 +149,22 @@ struct System::Impl {
core_timing.SyncPause(true);
kernel.Suspend(true);
cpu_manager.Pause(true);
is_paused = true;
return status;
}
std::unique_lock<std::mutex> StallCPU() {
std::unique_lock<std::mutex> StallProcesses() {
std::unique_lock<std::mutex> lk(suspend_guard);
kernel.Suspend(true);
core_timing.SyncPause(true);
cpu_manager.Pause(true);
return lk;
}
void UnstallCPU() {
void UnstallProcesses() {
if (!is_paused) {
core_timing.SyncPause(false);
kernel.Suspend(false);
cpu_manager.Pause(false);
}
}
@@ -334,6 +330,8 @@ struct System::Impl {
gpu_core->NotifyShutdown();
}
kernel.ShutdownCores();
cpu_manager.Shutdown();
debugger.reset();
services.reset();
service_manager.reset();
@@ -499,12 +497,12 @@ void System::DetachDebugger() {
}
}
std::unique_lock<std::mutex> System::StallCPU() {
return impl->StallCPU();
std::unique_lock<std::mutex> System::StallProcesses() {
return impl->StallProcesses();
}
void System::UnstallCPU() {
impl->UnstallCPU();
void System::UnstallProcesses() {
impl->UnstallProcesses();
}
void System::InitializeDebugger() {

View File

@@ -163,8 +163,8 @@ public:
/// Forcibly detach the debugger if it is running.
void DetachDebugger();
std::unique_lock<std::mutex> StallCPU();
void UnstallCPU();
std::unique_lock<std::mutex> StallProcesses();
void UnstallProcesses();
/**
* Initialize the debugger.

View File

@@ -16,31 +16,29 @@
namespace Core {
CpuManager::CpuManager(System& system_)
: pause_barrier{std::make_unique<Common::Barrier>(1)}, system{system_} {}
CpuManager::CpuManager(System& system_) : system{system_} {}
CpuManager::~CpuManager() = default;
void CpuManager::ThreadStart(std::stop_token stop_token, CpuManager& cpu_manager,
std::size_t core) {
cpu_manager.RunThread(stop_token, core);
cpu_manager.RunThread(core);
}
void CpuManager::Initialize() {
running_mode = true;
if (is_multicore) {
for (std::size_t core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
core_data[core].host_thread = std::jthread(ThreadStart, std::ref(*this), core);
}
pause_barrier = std::make_unique<Common::Barrier>(Core::Hardware::NUM_CPU_CORES + 1);
} else {
core_data[0].host_thread = std::jthread(ThreadStart, std::ref(*this), 0);
pause_barrier = std::make_unique<Common::Barrier>(2);
num_cores = is_multicore ? Core::Hardware::NUM_CPU_CORES : 1;
gpu_barrier = std::make_unique<Common::Barrier>(num_cores + 1);
for (std::size_t core = 0; core < num_cores; core++) {
core_data[core].host_thread = std::jthread(ThreadStart, std::ref(*this), core);
}
}
void CpuManager::Shutdown() {
running_mode = false;
Pause(false);
for (std::size_t core = 0; core < num_cores; core++) {
if (core_data[core].host_thread.joinable()) {
core_data[core].host_thread.join();
}
}
}
std::function<void(void*)> CpuManager::GetGuestThreadStartFunc() {
@@ -51,8 +49,8 @@ std::function<void(void*)> CpuManager::GetIdleThreadStartFunc() {
return IdleThreadFunction;
}
std::function<void(void*)> CpuManager::GetSuspendThreadStartFunc() {
return SuspendThreadFunction;
std::function<void(void*)> CpuManager::GetShutdownThreadStartFunc() {
return ShutdownThreadFunction;
}
void CpuManager::GuestThreadFunction(void* cpu_manager_) {
@@ -82,17 +80,12 @@ void CpuManager::IdleThreadFunction(void* cpu_manager_) {
}
}
void CpuManager::SuspendThreadFunction(void* cpu_manager_) {
CpuManager* cpu_manager = static_cast<CpuManager*>(cpu_manager_);
if (cpu_manager->is_multicore) {
cpu_manager->MultiCoreRunSuspendThread();
} else {
cpu_manager->SingleCoreRunSuspendThread();
}
void CpuManager::ShutdownThreadFunction(void* cpu_manager) {
static_cast<CpuManager*>(cpu_manager)->ShutdownThread();
}
void* CpuManager::GetStartFuncParamater() {
return static_cast<void*>(this);
void* CpuManager::GetStartFuncParameter() {
return this;
}
///////////////////////////////////////////////////////////////////////////////
@@ -102,7 +95,7 @@ void* CpuManager::GetStartFuncParamater() {
void CpuManager::MultiCoreRunGuestThread() {
auto& kernel = system.Kernel();
kernel.CurrentScheduler()->OnThreadStart();
auto* thread = kernel.CurrentScheduler()->GetCurrentThread();
auto* thread = kernel.CurrentScheduler()->GetSchedulerCurrentThread();
auto& host_context = thread->GetHostContext();
host_context->SetRewindPoint(GuestRewindFunction, this);
MultiCoreRunGuestLoop();
@@ -113,12 +106,10 @@ void CpuManager::MultiCoreRunGuestLoop() {
while (true) {
auto* physical_core = &kernel.CurrentPhysicalCore();
system.EnterDynarmicProfile();
while (!physical_core->IsInterrupted()) {
physical_core->Run();
physical_core = &kernel.CurrentPhysicalCore();
}
system.ExitDynarmicProfile();
{
Kernel::KScopedDisableDispatch dd(kernel);
physical_core->ArmInterface().ClearExclusiveState();
@@ -134,21 +125,6 @@ void CpuManager::MultiCoreRunIdleThread() {
}
}
void CpuManager::MultiCoreRunSuspendThread() {
auto& kernel = system.Kernel();
kernel.CurrentScheduler()->OnThreadStart();
while (true) {
auto core = kernel.CurrentPhysicalCoreIndex();
auto& scheduler = *kernel.CurrentScheduler();
Kernel::KThread* current_thread = scheduler.GetCurrentThread();
current_thread->DisableDispatch();
Common::Fiber::YieldTo(current_thread->GetHostContext(), *core_data[core].host_context);
ASSERT(core == kernel.CurrentPhysicalCoreIndex());
scheduler.RescheduleCurrentCore();
}
}
///////////////////////////////////////////////////////////////////////////////
/// SingleCore ///
///////////////////////////////////////////////////////////////////////////////
@@ -156,7 +132,7 @@ void CpuManager::MultiCoreRunSuspendThread() {
void CpuManager::SingleCoreRunGuestThread() {
auto& kernel = system.Kernel();
kernel.CurrentScheduler()->OnThreadStart();
auto* thread = kernel.CurrentScheduler()->GetCurrentThread();
auto* thread = kernel.CurrentScheduler()->GetSchedulerCurrentThread();
auto& host_context = thread->GetHostContext();
host_context->SetRewindPoint(GuestRewindFunction, this);
SingleCoreRunGuestLoop();
@@ -166,12 +142,10 @@ void CpuManager::SingleCoreRunGuestLoop() {
auto& kernel = system.Kernel();
while (true) {
auto* physical_core = &kernel.CurrentPhysicalCore();
system.EnterDynarmicProfile();
if (!physical_core->IsInterrupted()) {
physical_core->Run();
physical_core = &kernel.CurrentPhysicalCore();
}
system.ExitDynarmicProfile();
kernel.SetIsPhantomModeForSingleCore(true);
system.CoreTiming().Advance();
kernel.SetIsPhantomModeForSingleCore(false);
@@ -194,26 +168,11 @@ void CpuManager::SingleCoreRunIdleThread() {
}
}
void CpuManager::SingleCoreRunSuspendThread() {
auto& kernel = system.Kernel();
kernel.CurrentScheduler()->OnThreadStart();
while (true) {
auto core = kernel.GetCurrentHostThreadID();
auto& scheduler = *kernel.CurrentScheduler();
Kernel::KThread* current_thread = scheduler.GetCurrentThread();
current_thread->DisableDispatch();
Common::Fiber::YieldTo(current_thread->GetHostContext(), *core_data[0].host_context);
ASSERT(core == kernel.GetCurrentHostThreadID());
scheduler.RescheduleCurrentCore();
}
}
void CpuManager::PreemptSingleCore(bool from_running_enviroment) {
{
auto& kernel = system.Kernel();
auto& scheduler = kernel.Scheduler(current_core);
Kernel::KThread* current_thread = scheduler.GetCurrentThread();
Kernel::KThread* current_thread = scheduler.GetSchedulerCurrentThread();
if (idle_count >= 4 || from_running_enviroment) {
if (!from_running_enviroment) {
system.CoreTiming().Idle();
@@ -225,7 +184,7 @@ void CpuManager::PreemptSingleCore(bool from_running_enviroment) {
}
current_core.store((current_core + 1) % Core::Hardware::NUM_CPU_CORES);
system.CoreTiming().ResetTicks();
scheduler.Unload(scheduler.GetCurrentThread());
scheduler.Unload(scheduler.GetSchedulerCurrentThread());
auto& next_scheduler = kernel.Scheduler(current_core);
Common::Fiber::YieldTo(current_thread->GetHostContext(), *next_scheduler.ControlContext());
@@ -234,31 +193,21 @@ void CpuManager::PreemptSingleCore(bool from_running_enviroment) {
// May have changed scheduler
{
auto& scheduler = system.Kernel().Scheduler(current_core);
scheduler.Reload(scheduler.GetCurrentThread());
if (!scheduler.IsIdle()) {
idle_count = 0;
}
scheduler.Reload(scheduler.GetSchedulerCurrentThread());
idle_count = 0;
}
}
void CpuManager::Pause(bool paused) {
std::scoped_lock lk{pause_lock};
void CpuManager::ShutdownThread() {
auto& kernel = system.Kernel();
auto core = is_multicore ? kernel.CurrentPhysicalCoreIndex() : 0;
auto* current_thread = kernel.GetCurrentEmuThread();
if (pause_state == paused) {
return;
}
// Set the new state
pause_state.store(paused);
// Wake up any waiting threads
pause_state.notify_all();
// Wait for all threads to successfully change state before returning
pause_barrier->Sync();
Common::Fiber::YieldTo(current_thread->GetHostContext(), *core_data[core].host_context);
UNREACHABLE();
}
void CpuManager::RunThread(std::stop_token stop_token, std::size_t core) {
void CpuManager::RunThread(std::size_t core) {
/// Initialization
system.RegisterCoreThread(core);
std::string name;
@@ -272,8 +221,6 @@ void CpuManager::RunThread(std::stop_token stop_token, std::size_t core) {
Common::SetCurrentThreadPriority(Common::ThreadPriority::High);
auto& data = core_data[core];
data.host_context = Common::Fiber::ThreadToFiber();
const bool sc_sync = !is_async_gpu && !is_multicore;
bool sc_sync_first_use = sc_sync;
// Cleanup
SCOPE_EXIT({
@@ -281,32 +228,16 @@ void CpuManager::RunThread(std::stop_token stop_token, std::size_t core) {
MicroProfileOnThreadExit();
});
/// Running
while (running_mode) {
if (pause_state.load(std::memory_order_relaxed)) {
// Wait for caller to acknowledge pausing
pause_barrier->Sync();
// Running
gpu_barrier->Sync();
// Wait until unpaused
pause_state.wait(true, std::memory_order_relaxed);
// Wait for caller to acknowledge unpausing
pause_barrier->Sync();
}
if (sc_sync_first_use) {
system.GPU().ObtainContext();
sc_sync_first_use = false;
}
// Emulation was stopped
if (stop_token.stop_requested()) {
return;
}
auto current_thread = system.Kernel().CurrentScheduler()->GetCurrentThread();
Common::Fiber::YieldTo(data.host_context, *current_thread->GetHostContext());
if (!is_async_gpu && !is_multicore) {
system.GPU().ObtainContext();
}
auto* current_thread = system.Kernel().CurrentScheduler()->GetIdleThread();
Kernel::SetCurrentThread(system.Kernel(), current_thread);
Common::Fiber::YieldTo(data.host_context, *current_thread->GetHostContext());
}
} // namespace Core
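The removed pause_state/pause_barrier machinery above is replaced by a single gpu_barrier->Sync() rendezvous in RunThread, released from the GPU side by OnGpuReady() in the header below. A minimal standalone sketch of that startup handshake, using std::barrier instead of yuzu's Common::Barrier; the participant count and names are illustrative, not yuzu code:

// Each core thread waits once at startup until the GPU thread has arrived.
#include <barrier>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    constexpr int num_cores = 4;
    // num_cores emulated-core threads plus one GPU thread take part.
    std::barrier<> gpu_barrier{num_cores + 1};

    std::vector<std::jthread> cores;
    for (int core = 0; core < num_cores; ++core) {
        cores.emplace_back([&gpu_barrier, core] {
            gpu_barrier.arrive_and_wait(); // RunThread: block until the GPU is ready
            std::printf("core %d running guest code\n", core);
        });
    }

    // GPU thread side, once its context is initialized.
    gpu_barrier.arrive_and_wait(); // OnGpuReady
}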

View File

@@ -43,15 +43,17 @@ public:
is_async_gpu = is_async;
}
void OnGpuReady() {
gpu_barrier->Sync();
}
void Initialize();
void Shutdown();
void Pause(bool paused);
static std::function<void(void*)> GetGuestThreadStartFunc();
static std::function<void(void*)> GetIdleThreadStartFunc();
static std::function<void(void*)> GetSuspendThreadStartFunc();
void* GetStartFuncParamater();
static std::function<void(void*)> GetShutdownThreadStartFunc();
void* GetStartFuncParameter();
void PreemptSingleCore(bool from_running_enviroment = true);
@@ -63,38 +65,34 @@ private:
static void GuestThreadFunction(void* cpu_manager);
static void GuestRewindFunction(void* cpu_manager);
static void IdleThreadFunction(void* cpu_manager);
static void SuspendThreadFunction(void* cpu_manager);
static void ShutdownThreadFunction(void* cpu_manager);
void MultiCoreRunGuestThread();
void MultiCoreRunGuestLoop();
void MultiCoreRunIdleThread();
void MultiCoreRunSuspendThread();
void SingleCoreRunGuestThread();
void SingleCoreRunGuestLoop();
void SingleCoreRunIdleThread();
void SingleCoreRunSuspendThread();
static void ThreadStart(std::stop_token stop_token, CpuManager& cpu_manager, std::size_t core);
void RunThread(std::stop_token stop_token, std::size_t core);
void ShutdownThread();
void RunThread(std::size_t core);
struct CoreData {
std::shared_ptr<Common::Fiber> host_context;
std::jthread host_thread;
};
std::atomic<bool> running_mode{};
std::atomic<bool> pause_state{};
std::unique_ptr<Common::Barrier> pause_barrier{};
std::mutex pause_lock{};
std::unique_ptr<Common::Barrier> gpu_barrier{};
std::array<CoreData, Core::Hardware::NUM_CPU_CORES> core_data{};
bool is_async_gpu{};
bool is_multicore{};
std::atomic<std::size_t> current_core{};
std::size_t idle_count{};
std::size_t num_cores{};
static constexpr std::size_t max_cycle_runs = 5;
System& system;

View File

@@ -44,12 +44,14 @@ static std::span<const u8> ReceiveInto(Readable& r, Buffer& buffer) {
enum class SignalType {
Stopped,
Watchpoint,
ShuttingDown,
};
struct SignalInfo {
SignalType type;
Kernel::KThread* thread;
const Kernel::DebugWatchpoint* watchpoint;
};
namespace Core {
@@ -67,18 +69,20 @@ public:
}
bool SignalDebugger(SignalInfo signal_info) {
std::scoped_lock lk{connection_lock};
{
std::scoped_lock lk{connection_lock};
if (stopped) {
// Do not notify the debugger about another event.
// It should be ignored.
return false;
if (stopped) {
// Do not notify the debugger about another event.
// It should be ignored.
return false;
}
// Set up the state.
stopped = true;
info = signal_info;
}
// Set up the state.
stopped = true;
info = signal_info;
// Write a single byte into the pipe to wake up the debug interface.
boost::asio::write(signal_pipe, boost::asio::buffer(&stopped, sizeof(stopped)));
return true;
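The single byte written to signal_pipe is a self-pipe wakeup: the debugger's I/O loop sleeps on the pipe and the emulator pokes it whenever a stop event arrives. A minimal POSIX-only sketch of the same idea, using pipe() and poll() rather than the boost::asio pipe used here; illustrative, not yuzu code:

#include <poll.h>
#include <unistd.h>
#include <cstdio>
#include <thread>

int main() {
    int fds[2];
    if (pipe(fds) != 0) {
        return 1;
    }

    std::thread signaller([&fds] {
        const char byte = 1;
        (void)write(fds[1], &byte, sizeof(byte)); // SignalDebugger side: wake the loop
    });

    pollfd pfd{fds[0], POLLIN, 0};
    poll(&pfd, 1, -1); // debugger loop: sleeps until the byte arrives
    char byte{};
    (void)read(fds[0], &byte, sizeof(byte));
    std::printf("debugger woken, handling stop event\n");

    signaller.join();
    close(fds[0]);
    close(fds[1]);
}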
@@ -141,9 +145,6 @@ private:
AsyncReceiveInto(signal_pipe, pipe_data, [&](auto d) { PipeData(d); });
AsyncReceiveInto(client_socket, client_data, [&](auto d) { ClientData(d); });
// Stop the emulated CPU.
AllCoreStop();
// Set the active thread.
UpdateActiveThread();
@@ -158,20 +159,25 @@ private:
void PipeData(std::span<const u8> data) {
switch (info.type) {
case SignalType::Stopped:
case SignalType::Watchpoint:
// Stop emulation.
AllCoreStop();
PauseEmulation();
// Notify the client.
active_thread = info.thread;
UpdateActiveThread();
frontend->Stopped(active_thread);
if (info.type == SignalType::Watchpoint) {
frontend->Watchpoint(active_thread, *info.watchpoint);
} else {
frontend->Stopped(active_thread);
}
break;
case SignalType::ShuttingDown:
frontend->ShuttingDown();
// Wait for emulation to shut down gracefully now.
suspend.reset();
signal_pipe.close();
client_socket.shutdown(boost::asio::socket_base::shutdown_both);
LOG_INFO(Debug_GDBStub, "Shut down server");
@@ -189,32 +195,29 @@ private:
std::scoped_lock lk{connection_lock};
stopped = true;
}
AllCoreStop();
PauseEmulation();
UpdateActiveThread();
frontend->Stopped(active_thread);
break;
}
case DebuggerAction::Continue:
active_thread->SetStepState(Kernel::StepState::NotStepping);
ResumeInactiveThreads();
AllCoreResume();
MarkResumed([&] { ResumeEmulation(); });
break;
case DebuggerAction::StepThreadUnlocked:
active_thread->SetStepState(Kernel::StepState::StepPending);
ResumeInactiveThreads();
AllCoreResume();
MarkResumed([&] {
active_thread->SetStepState(Kernel::StepState::StepPending);
active_thread->Resume(Kernel::SuspendType::Debug);
ResumeEmulation(active_thread);
});
break;
case DebuggerAction::StepThreadLocked:
active_thread->SetStepState(Kernel::StepState::StepPending);
SuspendInactiveThreads();
AllCoreResume();
case DebuggerAction::StepThreadLocked: {
MarkResumed([&] {
active_thread->SetStepState(Kernel::StepState::StepPending);
active_thread->Resume(Kernel::SuspendType::Debug);
});
break;
}
case DebuggerAction::ShutdownEmulation: {
// Suspend all threads and release any locks held
active_thread->RequestSuspend(Kernel::SuspendType::Debug);
SuspendInactiveThreads();
AllCoreResume();
// Spawn another thread that will exit after shutdown,
// to avoid a deadlock
Core::System* system_ref{&system};
@@ -226,33 +229,33 @@ private:
}
}
void AllCoreStop() {
if (!suspend) {
suspend = system.StallCPU();
void PauseEmulation() {
// Put all threads to sleep on next scheduler round.
for (auto* thread : ThreadList()) {
thread->RequestSuspend(Kernel::SuspendType::Debug);
}
// Signal an interrupt so that scheduler will fire.
system.Kernel().InterruptAllPhysicalCores();
}
void ResumeEmulation(Kernel::KThread* except = nullptr) {
// Wake up all threads.
for (auto* thread : ThreadList()) {
if (thread == except) {
continue;
}
thread->SetStepState(Kernel::StepState::NotStepping);
thread->Resume(Kernel::SuspendType::Debug);
}
}
void AllCoreResume() {
template <typename Callback>
void MarkResumed(Callback&& cb) {
std::scoped_lock lk{connection_lock};
stopped = false;
system.UnstallCPU();
suspend.reset();
}
void SuspendInactiveThreads() {
for (auto* thread : ThreadList()) {
if (thread != active_thread) {
thread->RequestSuspend(Kernel::SuspendType::Debug);
}
}
}
void ResumeInactiveThreads() {
for (auto* thread : ThreadList()) {
if (thread != active_thread) {
thread->Resume(Kernel::SuspendType::Debug);
thread->SetStepState(Kernel::StepState::NotStepping);
}
}
cb();
}
void UpdateActiveThread() {
@@ -260,8 +263,6 @@ private:
if (std::find(threads.begin(), threads.end(), active_thread) == threads.end()) {
active_thread = threads[0];
}
active_thread->Resume(Kernel::SuspendType::Debug);
active_thread->SetStepState(Kernel::StepState::NotStepping);
}
const std::vector<Kernel::KThread*>& ThreadList() {
@@ -277,7 +278,6 @@ private:
boost::asio::io_context io_context;
boost::process::async_pipe signal_pipe;
boost::asio::ip::tcp::socket client_socket;
std::optional<std::unique_lock<std::mutex>> suspend;
SignalInfo info;
Kernel::KThread* active_thread;
@@ -298,12 +298,17 @@ Debugger::Debugger(Core::System& system, u16 port) {
Debugger::~Debugger() = default;
bool Debugger::NotifyThreadStopped(Kernel::KThread* thread) {
return impl && impl->SignalDebugger(SignalInfo{SignalType::Stopped, thread});
return impl && impl->SignalDebugger(SignalInfo{SignalType::Stopped, thread, nullptr});
}
bool Debugger::NotifyThreadWatchpoint(Kernel::KThread* thread,
const Kernel::DebugWatchpoint& watch) {
return impl && impl->SignalDebugger(SignalInfo{SignalType::Watchpoint, thread, &watch});
}
void Debugger::NotifyShutdown() {
if (impl) {
impl->SignalDebugger(SignalInfo{SignalType::ShuttingDown, nullptr});
impl->SignalDebugger(SignalInfo{SignalType::ShuttingDown, nullptr, nullptr});
}
}

View File

@@ -9,7 +9,8 @@
namespace Kernel {
class KThread;
}
struct DebugWatchpoint;
} // namespace Kernel
namespace Core {
class System;
@@ -40,6 +41,11 @@ public:
*/
void NotifyShutdown();
/*
* Notify the debugger that the given thread has stopped due to hitting a watchpoint.
*/
bool NotifyThreadWatchpoint(Kernel::KThread* thread, const Kernel::DebugWatchpoint& watch);
private:
std::unique_ptr<DebuggerImpl> impl;
};

View File

@@ -11,7 +11,8 @@
namespace Kernel {
class KThread;
}
struct DebugWatchpoint;
} // namespace Kernel
namespace Core {
@@ -71,6 +72,11 @@ public:
*/
virtual void ShuttingDown() = 0;
/*
* Called when emulation has stopped on a watchpoint.
*/
virtual void Watchpoint(Kernel::KThread* thread, const Kernel::DebugWatchpoint& watch) = 0;
/**
* Called when new data is asynchronously received on the client socket.
* A list of actions to perform is returned.

View File

@@ -112,6 +112,23 @@ void GDBStub::Stopped(Kernel::KThread* thread) {
SendReply(arch->ThreadStatus(thread, GDB_STUB_SIGTRAP));
}
void GDBStub::Watchpoint(Kernel::KThread* thread, const Kernel::DebugWatchpoint& watch) {
const auto status{arch->ThreadStatus(thread, GDB_STUB_SIGTRAP)};
switch (watch.type) {
case Kernel::DebugWatchpointType::Read:
SendReply(fmt::format("{}rwatch:{:x};", status, watch.start_address));
break;
case Kernel::DebugWatchpointType::Write:
SendReply(fmt::format("{}watch:{:x};", status, watch.start_address));
break;
case Kernel::DebugWatchpointType::ReadOrWrite:
default:
SendReply(fmt::format("{}awatch:{:x};", status, watch.start_address));
break;
}
}
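These replies follow the GDB remote serial protocol's stop-reply syntax: T05 is SIGTRAP, and rwatch/watch/awatch name the watchpoint kind, so a write watchpoint hit at 0x80001000 is reported roughly as T05watch:80001000; (the real code prepends the thread status string from arch->ThreadStatus). A tiny standalone illustration using the same fmt::format style; not yuzu code:

#include <fmt/format.h>
#include <cstdint>
#include <cstdio>
#include <string>

int main() {
    const std::uint64_t fault_addr = 0x8000'1000;
    const std::string reply = fmt::format("T05watch:{:x};", fault_addr);
    std::puts(reply.c_str()); // prints: T05watch:80001000;
}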
std::vector<DebuggerAction> GDBStub::ClientData(std::span<const u8> data) {
std::vector<DebuggerAction> actions;
current_command.insert(current_command.end(), data.begin(), data.end());
@@ -235,6 +252,7 @@ void GDBStub::ExecuteCommand(std::string_view packet, std::vector<DebuggerAction
const auto sep{std::find(command.begin(), command.end(), '=') - command.begin() + 1};
const size_t reg{static_cast<size_t>(strtoll(command.data(), nullptr, 16))};
arch->RegWrite(backend.GetActiveThread(), reg, std::string_view(command).substr(sep));
SendReply(GDB_STUB_REPLY_OK);
break;
}
case 'm': {
@@ -278,44 +296,124 @@ void GDBStub::ExecuteCommand(std::string_view packet, std::vector<DebuggerAction
case 'c':
actions.push_back(DebuggerAction::Continue);
break;
case 'Z': {
const auto addr_sep{std::find(command.begin(), command.end(), ',') - command.begin() + 1};
const size_t addr{static_cast<size_t>(strtoll(command.data() + addr_sep, nullptr, 16))};
if (system.Memory().IsValidVirtualAddress(addr)) {
replaced_instructions[addr] = system.Memory().Read32(addr);
system.Memory().Write32(addr, arch->BreakpointInstruction());
system.InvalidateCpuInstructionCacheRange(addr, sizeof(u32));
SendReply(GDB_STUB_REPLY_OK);
} else {
SendReply(GDB_STUB_REPLY_ERR);
}
case 'Z':
HandleBreakpointInsert(command);
break;
}
case 'z': {
const auto addr_sep{std::find(command.begin(), command.end(), ',') - command.begin() + 1};
const size_t addr{static_cast<size_t>(strtoll(command.data() + addr_sep, nullptr, 16))};
const auto orig_insn{replaced_instructions.find(addr)};
if (system.Memory().IsValidVirtualAddress(addr) &&
orig_insn != replaced_instructions.end()) {
system.Memory().Write32(addr, orig_insn->second);
system.InvalidateCpuInstructionCacheRange(addr, sizeof(u32));
replaced_instructions.erase(addr);
SendReply(GDB_STUB_REPLY_OK);
} else {
SendReply(GDB_STUB_REPLY_ERR);
}
case 'z':
HandleBreakpointRemove(command);
break;
}
default:
SendReply(GDB_STUB_REPLY_EMPTY);
break;
}
}
enum class BreakpointType {
Software = 0,
Hardware = 1,
WriteWatch = 2,
ReadWatch = 3,
AccessWatch = 4,
};
void GDBStub::HandleBreakpointInsert(std::string_view command) {
const auto type{static_cast<BreakpointType>(strtoll(command.data(), nullptr, 16))};
const auto addr_sep{std::find(command.begin(), command.end(), ',') - command.begin() + 1};
const auto size_sep{std::find(command.begin() + addr_sep, command.end(), ',') -
command.begin() + 1};
const size_t addr{static_cast<size_t>(strtoll(command.data() + addr_sep, nullptr, 16))};
const size_t size{static_cast<size_t>(strtoll(command.data() + size_sep, nullptr, 16))};
if (!system.Memory().IsValidVirtualAddressRange(addr, size)) {
SendReply(GDB_STUB_REPLY_ERR);
return;
}
bool success{};
switch (type) {
case BreakpointType::Software:
replaced_instructions[addr] = system.Memory().Read32(addr);
system.Memory().Write32(addr, arch->BreakpointInstruction());
system.InvalidateCpuInstructionCacheRange(addr, sizeof(u32));
success = true;
break;
case BreakpointType::WriteWatch:
success = system.CurrentProcess()->InsertWatchpoint(system, addr, size,
Kernel::DebugWatchpointType::Write);
break;
case BreakpointType::ReadWatch:
success = system.CurrentProcess()->InsertWatchpoint(system, addr, size,
Kernel::DebugWatchpointType::Read);
break;
case BreakpointType::AccessWatch:
success = system.CurrentProcess()->InsertWatchpoint(
system, addr, size, Kernel::DebugWatchpointType::ReadOrWrite);
break;
case BreakpointType::Hardware:
default:
SendReply(GDB_STUB_REPLY_EMPTY);
return;
}
if (success) {
SendReply(GDB_STUB_REPLY_OK);
} else {
SendReply(GDB_STUB_REPLY_ERR);
}
}
void GDBStub::HandleBreakpointRemove(std::string_view command) {
const auto type{static_cast<BreakpointType>(strtoll(command.data(), nullptr, 16))};
const auto addr_sep{std::find(command.begin(), command.end(), ',') - command.begin() + 1};
const auto size_sep{std::find(command.begin() + addr_sep, command.end(), ',') -
command.begin() + 1};
const size_t addr{static_cast<size_t>(strtoll(command.data() + addr_sep, nullptr, 16))};
const size_t size{static_cast<size_t>(strtoll(command.data() + size_sep, nullptr, 16))};
if (!system.Memory().IsValidVirtualAddressRange(addr, size)) {
SendReply(GDB_STUB_REPLY_ERR);
return;
}
bool success{};
switch (type) {
case BreakpointType::Software: {
const auto orig_insn{replaced_instructions.find(addr)};
if (orig_insn != replaced_instructions.end()) {
system.Memory().Write32(addr, orig_insn->second);
system.InvalidateCpuInstructionCacheRange(addr, sizeof(u32));
replaced_instructions.erase(addr);
success = true;
}
break;
}
case BreakpointType::WriteWatch:
success = system.CurrentProcess()->RemoveWatchpoint(system, addr, size,
Kernel::DebugWatchpointType::Write);
break;
case BreakpointType::ReadWatch:
success = system.CurrentProcess()->RemoveWatchpoint(system, addr, size,
Kernel::DebugWatchpointType::Read);
break;
case BreakpointType::AccessWatch:
success = system.CurrentProcess()->RemoveWatchpoint(
system, addr, size, Kernel::DebugWatchpointType::ReadOrWrite);
break;
case BreakpointType::Hardware:
default:
SendReply(GDB_STUB_REPLY_EMPTY);
return;
}
if (success) {
SendReply(GDB_STUB_REPLY_OK);
} else {
SendReply(GDB_STUB_REPLY_ERR);
}
}
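For reference, the packet body handled here has the shape <type>,<addr>,<kind>: for example Z2,80004000,4 asks for a 4-byte write watchpoint at 0x80004000, matching BreakpointType::WriteWatch above, while Z0,... is a software breakpoint. A standalone sketch of parsing that shape with the same strtol-style calls; not the yuzu implementation:

#include <cstdio>
#include <cstdlib>
#include <string>

int main() {
    const std::string command = "2,80004000,4"; // body after the leading 'Z'
    char* end{};
    const long type = std::strtol(command.c_str(), &end, 16);
    const unsigned long long addr = std::strtoull(end + 1, &end, 16);
    const unsigned long long kind = std::strtoull(end + 1, nullptr, 16);
    std::printf("type=%ld addr=%llx kind=%llu\n", type, addr, kind);
    // prints: type=2 addr=80004000 kind=4
}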
// Structure offsets are from Atmosphere
// See osdbg_thread_local_region.os.horizon.hpp and osdbg_thread_type.os.horizon.hpp

View File

@@ -24,6 +24,7 @@ public:
void Connected() override;
void Stopped(Kernel::KThread* thread) override;
void ShuttingDown() override;
void Watchpoint(Kernel::KThread* thread, const Kernel::DebugWatchpoint& watch) override;
std::vector<DebuggerAction> ClientData(std::span<const u8> data) override;
private:
@@ -31,6 +32,8 @@ private:
void ExecuteCommand(std::string_view packet, std::vector<DebuggerAction>& actions);
void HandleVCont(std::string_view command, std::vector<DebuggerAction>& actions);
void HandleQuery(std::string_view command);
void HandleBreakpointInsert(std::string_view command);
void HandleBreakpointRemove(std::string_view command);
std::vector<char>::const_iterator CommandEnd() const;
std::optional<std::string> DetachCommand();
Kernel::KThread* GetThreadByID(u64 thread_id);

View File

@@ -25,6 +25,9 @@ constexpr std::array<s32, Common::BitSize<u64>()> VirtualToPhysicalCoreMap{
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3,
};
// Cortex-A57 supports 4 memory watchpoints
constexpr u64 NUM_WATCHPOINTS = 4;
} // namespace Hardware
} // namespace Core

View File

@@ -234,7 +234,7 @@ ResultCode KAddressArbiter::SignalAndModifyByWaitingCountIfEqual(VAddr addr, s32
ResultCode KAddressArbiter::WaitIfLessThan(VAddr addr, s32 value, bool decrement, s64 timeout) {
// Prepare to wait.
KThread* cur_thread = kernel.CurrentScheduler()->GetCurrentThread();
KThread* cur_thread = GetCurrentThreadPointer(kernel);
ThreadQueueImplForKAddressArbiter wait_queue(kernel, std::addressof(thread_tree));
{
@@ -287,7 +287,7 @@ ResultCode KAddressArbiter::WaitIfLessThan(VAddr addr, s32 value, bool decrement
ResultCode KAddressArbiter::WaitIfEqual(VAddr addr, s32 value, s64 timeout) {
// Prepare to wait.
KThread* cur_thread = kernel.CurrentScheduler()->GetCurrentThread();
KThread* cur_thread = GetCurrentThreadPointer(kernel);
ThreadQueueImplForKAddressArbiter wait_queue(kernel, std::addressof(thread_tree));
{

View File

@@ -106,7 +106,7 @@ KConditionVariable::KConditionVariable(Core::System& system_)
KConditionVariable::~KConditionVariable() = default;
ResultCode KConditionVariable::SignalToAddress(VAddr addr) {
KThread* owner_thread = kernel.CurrentScheduler()->GetCurrentThread();
KThread* owner_thread = GetCurrentThreadPointer(kernel);
// Signal the address.
{
@@ -147,7 +147,7 @@ ResultCode KConditionVariable::SignalToAddress(VAddr addr) {
}
ResultCode KConditionVariable::WaitForAddress(Handle handle, VAddr addr, u32 value) {
KThread* cur_thread = kernel.CurrentScheduler()->GetCurrentThread();
KThread* cur_thread = GetCurrentThreadPointer(kernel);
ThreadQueueImplForKConditionVariableWaitForAddress wait_queue(kernel);
// Wait for the address.

View File

@@ -15,8 +15,7 @@ void HandleInterrupt(KernelCore& kernel, s32 core_id) {
return;
}
auto& scheduler = kernel.Scheduler(core_id);
auto& current_thread = *scheduler.GetCurrentThread();
auto& current_thread = GetCurrentThread(kernel);
// If the user disable count is set, we may need to pin the current thread.
if (current_thread.GetUserDisableCount() && !process->GetPinnedThread(core_id)) {
@@ -26,7 +25,7 @@ void HandleInterrupt(KernelCore& kernel, s32 core_id) {
process->PinCurrentThread(core_id);
// Set the interrupt flag for the thread.
scheduler.GetCurrentThread()->SetInterruptFlag();
GetCurrentThread(kernel).SetInterruptFlag();
}
}

View File

@@ -65,7 +65,6 @@ ResultCode KPageTable::InitializeForProcess(FileSys::ProgramAddressSpaceType as_
std::size_t alias_region_size{GetSpaceSize(KAddressSpaceInfo::Type::Alias)};
std::size_t heap_region_size{GetSpaceSize(KAddressSpaceInfo::Type::Heap)};
ASSERT(start <= code_addr);
ASSERT(code_addr < code_addr + code_size);
ASSERT(code_addr + code_size - 1 <= end - 1);

View File

@@ -57,18 +57,13 @@ void SetupMainThread(Core::System& system, KProcess& owner_process, u32 priority
thread->GetContext64().cpu_registers[0] = 0;
thread->GetContext32().cpu_registers[1] = thread_handle;
thread->GetContext64().cpu_registers[1] = thread_handle;
thread->DisableDispatch();
auto& kernel = system.Kernel();
// Threads by default are dormant, wake up the main thread so it runs when the scheduler fires
{
KScopedSchedulerLock lock{kernel};
thread->SetState(ThreadState::Runnable);
if (system.DebuggerEnabled()) {
thread->RequestSuspend(SuspendType::Debug);
}
if (system.DebuggerEnabled()) {
thread->RequestSuspend(SuspendType::Debug);
}
// Run our thread.
void(thread->Run());
}
} // Anonymous namespace
@@ -181,7 +176,8 @@ void KProcess::PinCurrentThread(s32 core_id) {
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
// Get the current thread.
KThread* cur_thread = kernel.Scheduler(static_cast<std::size_t>(core_id)).GetCurrentThread();
KThread* cur_thread =
kernel.Scheduler(static_cast<std::size_t>(core_id)).GetSchedulerCurrentThread();
// If the thread isn't terminated, pin it.
if (!cur_thread->IsTerminationRequested()) {
@@ -198,7 +194,8 @@ void KProcess::UnpinCurrentThread(s32 core_id) {
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
// Get the current thread.
KThread* cur_thread = kernel.Scheduler(static_cast<std::size_t>(core_id)).GetCurrentThread();
KThread* cur_thread =
kernel.Scheduler(static_cast<std::size_t>(core_id)).GetSchedulerCurrentThread();
// Unpin it.
cur_thread->Unpin();
@@ -275,11 +272,15 @@ void KProcess::RemoveSharedMemory(KSharedMemory* shmem, [[maybe_unused]] VAddr a
shmem->Close();
}
void KProcess::RegisterThread(const KThread* thread) {
void KProcess::RegisterThread(KThread* thread) {
KScopedLightLock lk{list_lock};
thread_list.push_back(thread);
}
void KProcess::UnregisterThread(const KThread* thread) {
void KProcess::UnregisterThread(KThread* thread) {
KScopedLightLock lk{list_lock};
thread_list.remove(thread);
}
@@ -297,6 +298,50 @@ ResultCode KProcess::Reset() {
return ResultSuccess;
}
ResultCode KProcess::SetActivity(ProcessActivity activity) {
// Lock ourselves and the scheduler.
KScopedLightLock lk{state_lock};
KScopedLightLock list_lk{list_lock};
KScopedSchedulerLock sl{kernel};
// Validate our state.
R_UNLESS(status != ProcessStatus::Exiting, ResultInvalidState);
R_UNLESS(status != ProcessStatus::Exited, ResultInvalidState);
// Either pause or resume.
if (activity == ProcessActivity::Paused) {
// Verify that we're not suspended.
if (is_suspended) {
return ResultInvalidState;
}
// Suspend all threads.
for (auto* thread : GetThreadList()) {
thread->RequestSuspend(SuspendType::Process);
}
// Set ourselves as suspended.
SetSuspended(true);
} else {
ASSERT(activity == ProcessActivity::Runnable);
// Verify that we're suspended.
if (!is_suspended) {
return ResultInvalidState;
}
// Resume all threads.
for (auto* thread : GetThreadList()) {
thread->Resume(SuspendType::Process);
}
// Set ourselves as resumed.
SetSuspended(false);
}
return ResultSuccess;
}
ResultCode KProcess::LoadFromMetadata(const FileSys::ProgramMetadata& metadata,
std::size_t code_size) {
program_id = metadata.GetTitleID();
@@ -377,11 +422,11 @@ void KProcess::PrepareForTermination() {
ChangeStatus(ProcessStatus::Exiting);
const auto stop_threads = [this](const std::vector<KThread*>& in_thread_list) {
for (auto& thread : in_thread_list) {
for (auto* thread : in_thread_list) {
if (thread->GetOwnerProcess() != this)
continue;
if (thread == kernel.CurrentScheduler()->GetCurrentThread())
if (thread == GetCurrentThreadPointer(kernel))
continue;
// TODO(Subv): When are the other running/ready threads terminated?
@@ -536,6 +581,52 @@ ResultCode KProcess::DeleteThreadLocalRegion(VAddr addr) {
return ResultSuccess;
}
bool KProcess::InsertWatchpoint(Core::System& system, VAddr addr, u64 size,
DebugWatchpointType type) {
const auto watch{std::find_if(watchpoints.begin(), watchpoints.end(), [&](const auto& wp) {
return wp.type == DebugWatchpointType::None;
})};
if (watch == watchpoints.end()) {
return false;
}
watch->start_address = addr;
watch->end_address = addr + size;
watch->type = type;
for (VAddr page = Common::AlignDown(addr, PageSize); page < addr + size; page += PageSize) {
debug_page_refcounts[page]++;
system.Memory().MarkRegionDebug(page, PageSize, true);
}
return true;
}
bool KProcess::RemoveWatchpoint(Core::System& system, VAddr addr, u64 size,
DebugWatchpointType type) {
const auto watch{std::find_if(watchpoints.begin(), watchpoints.end(), [&](const auto& wp) {
return wp.start_address == addr && wp.end_address == addr + size && wp.type == type;
})};
if (watch == watchpoints.end()) {
return false;
}
watch->start_address = 0;
watch->end_address = 0;
watch->type = DebugWatchpointType::None;
for (VAddr page = Common::AlignDown(addr, PageSize); page < addr + size; page += PageSize) {
debug_page_refcounts[page]--;
if (!debug_page_refcounts[page]) {
system.Memory().MarkRegionDebug(page, PageSize, false);
}
}
return true;
}
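The per-page refcount lets overlapping watchpoints share debug-marked pages: a page stays marked until the last watchpoint covering it is removed. A simplified standalone sketch of that bookkeeping; illustrative only, not yuzu code:

#include <cstdint>
#include <cstdio>
#include <map>

constexpr std::uint64_t PageSize = 0x1000;
std::map<std::uint64_t, std::uint64_t> debug_page_refcounts;

void MarkRegionDebug(std::uint64_t page, bool debug) {
    std::printf("page %llx -> debug=%d\n", static_cast<unsigned long long>(page), debug ? 1 : 0);
}

void AddWatchpointPages(std::uint64_t addr, std::uint64_t size) {
    for (std::uint64_t page = addr & ~(PageSize - 1); page < addr + size; page += PageSize) {
        if (debug_page_refcounts[page]++ == 0) {
            MarkRegionDebug(page, true);
        }
    }
}

void RemoveWatchpointPages(std::uint64_t addr, std::uint64_t size) {
    for (std::uint64_t page = addr & ~(PageSize - 1); page < addr + size; page += PageSize) {
        if (--debug_page_refcounts[page] == 0) {
            MarkRegionDebug(page, false);
        }
    }
}

int main() {
    AddWatchpointPages(0x1000, 8);    // marks page 0x1000
    AddWatchpointPages(0x1800, 8);    // same page, refcount only
    RemoveWatchpointPages(0x1000, 8); // one user left, stays marked
    RemoveWatchpointPages(0x1800, 8); // last user gone, page is unmarked
}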
void KProcess::LoadModule(CodeSet code_set, VAddr base_addr) {
const auto ReprotectSegment = [&](const CodeSet::Segment& segment,
Svc::MemoryPermission permission) {
@@ -556,9 +647,10 @@ bool KProcess::IsSignaled() const {
}
KProcess::KProcess(KernelCore& kernel_)
: KAutoObjectWithSlabHeapAndContainer{kernel_},
page_table{std::make_unique<KPageTable>(kernel_.System())}, handle_table{kernel_},
address_arbiter{kernel_.System()}, condition_var{kernel_.System()}, state_lock{kernel_} {}
: KAutoObjectWithSlabHeapAndContainer{kernel_}, page_table{std::make_unique<KPageTable>(
kernel_.System())},
handle_table{kernel_}, address_arbiter{kernel_.System()}, condition_var{kernel_.System()},
state_lock{kernel_}, list_lock{kernel_} {}
KProcess::~KProcess() = default;

View File

@@ -7,6 +7,7 @@
#include <array>
#include <cstddef>
#include <list>
#include <map>
#include <string>
#include "common/common_types.h"
#include "core/hle/kernel/k_address_arbiter.h"
@@ -63,6 +64,25 @@ enum class ProcessStatus {
DebugBreak,
};
enum class ProcessActivity : u32 {
Runnable,
Paused,
};
enum class DebugWatchpointType : u8 {
None = 0,
Read = 1 << 0,
Write = 1 << 1,
ReadOrWrite = Read | Write,
};
DECLARE_ENUM_FLAG_OPERATORS(DebugWatchpointType);
struct DebugWatchpoint {
VAddr start_address;
VAddr end_address;
DebugWatchpointType type;
};
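Because DebugWatchpointType is a bit-flag enum (ReadOrWrite = Read | Write), a configured watchpoint can be matched against an access kind with a bitwise AND; DECLARE_ENUM_FLAG_OPERATORS generates comparable operators for it. A small self-contained illustration of the pattern; not yuzu code:

#include <cstdint>
#include <cstdio>

enum class WatchType : std::uint8_t {
    None = 0,
    Read = 1 << 0,
    Write = 1 << 1,
    ReadOrWrite = Read | Write,
};

constexpr bool Matches(WatchType configured, WatchType access) {
    return (static_cast<std::uint8_t>(configured) & static_cast<std::uint8_t>(access)) != 0;
}

int main() {
    std::printf("%d\n", Matches(WatchType::ReadOrWrite, WatchType::Write) ? 1 : 0); // 1
    std::printf("%d\n", Matches(WatchType::Read, WatchType::Write) ? 1 : 0);        // 0
}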
class KProcess final : public KAutoObjectWithSlabHeapAndContainer<KProcess, KWorkerTask> {
KERNEL_AUTOOBJECT_TRAITS(KProcess, KSynchronizationObject);
@@ -282,17 +302,17 @@ public:
u64 GetTotalPhysicalMemoryUsedWithoutSystemResource() const;
/// Gets the list of all threads created with this process as their owner.
const std::list<const KThread*>& GetThreadList() const {
std::list<KThread*>& GetThreadList() {
return thread_list;
}
/// Registers a thread as being created under this process,
/// adding it to this process' thread list.
void RegisterThread(const KThread* thread);
void RegisterThread(KThread* thread);
/// Unregisters a thread from this process, removing it
/// from this process' thread list.
void UnregisterThread(const KThread* thread);
void UnregisterThread(KThread* thread);
/// Clears the signaled state of the process if and only if it's signaled.
///
@@ -347,6 +367,8 @@ public:
void DoWorkerTaskImpl();
ResultCode SetActivity(ProcessActivity activity);
void PinCurrentThread(s32 core_id);
void UnpinCurrentThread(s32 core_id);
void UnpinThread(KThread* thread);
@@ -367,6 +389,19 @@ public:
// Frees a used TLS slot identified by the given address
ResultCode DeleteThreadLocalRegion(VAddr addr);
///////////////////////////////////////////////////////////////////////////////////////////////
// Debug watchpoint management
// Attempts to insert a watchpoint into a free slot. Returns false if none are available.
bool InsertWatchpoint(Core::System& system, VAddr addr, u64 size, DebugWatchpointType type);
// Attempts to remove the watchpoint specified by the given parameters.
bool RemoveWatchpoint(Core::System& system, VAddr addr, u64 size, DebugWatchpointType type);
const std::array<DebugWatchpoint, Core::Hardware::NUM_WATCHPOINTS>& GetWatchpoints() const {
return watchpoints;
}
private:
void PinThread(s32 core_id, KThread* thread) {
ASSERT(0 <= core_id && core_id < static_cast<s32>(Core::Hardware::NUM_CPU_CORES));
@@ -442,7 +477,7 @@ private:
std::array<u64, RANDOM_ENTROPY_SIZE> random_entropy{};
/// List of threads that are running with this process as their owner.
std::list<const KThread*> thread_list;
std::list<KThread*> thread_list;
/// List of shared memory that are running with this process as their owner.
std::list<KSharedMemoryInfo*> shared_memory_list;
@@ -471,10 +506,13 @@ private:
std::array<KThread*, Core::Hardware::NUM_CPU_CORES> running_threads{};
std::array<u64, Core::Hardware::NUM_CPU_CORES> running_thread_idle_counts{};
std::array<KThread*, Core::Hardware::NUM_CPU_CORES> pinned_threads{};
std::array<DebugWatchpoint, Core::Hardware::NUM_WATCHPOINTS> watchpoints{};
std::map<VAddr, u64> debug_page_refcounts;
KThread* exception_thread{};
KLightLock state_lock;
KLightLock list_lock;
using TLPTree =
Common::IntrusiveRedBlackTreeBaseTraits<KThreadLocalPage>::TreeType<KThreadLocalPage>;

View File

@@ -317,7 +317,7 @@ void KScheduler::RotateScheduledQueue(s32 cpu_core_id, s32 priority) {
{
KThread* best_thread = priority_queue.GetScheduledFront(cpu_core_id);
if (best_thread == GetCurrentThread()) {
if (best_thread == GetCurrentThreadPointer(kernel)) {
best_thread = priority_queue.GetScheduledNext(cpu_core_id, best_thread);
}
@@ -424,7 +424,7 @@ void KScheduler::YieldWithoutCoreMigration(KernelCore& kernel) {
ASSERT(kernel.CurrentProcess() != nullptr);
// Get the current thread and process.
KThread& cur_thread = Kernel::GetCurrentThread(kernel);
KThread& cur_thread = GetCurrentThread(kernel);
KProcess& cur_process = *kernel.CurrentProcess();
// If the thread's yield count matches, there's nothing for us to do.
@@ -463,7 +463,7 @@ void KScheduler::YieldWithCoreMigration(KernelCore& kernel) {
ASSERT(kernel.CurrentProcess() != nullptr);
// Get the current thread and process.
KThread& cur_thread = Kernel::GetCurrentThread(kernel);
KThread& cur_thread = GetCurrentThread(kernel);
KProcess& cur_process = *kernel.CurrentProcess();
// If the thread's yield count matches, there's nothing for us to do.
@@ -551,7 +551,7 @@ void KScheduler::YieldToAnyThread(KernelCore& kernel) {
ASSERT(kernel.CurrentProcess() != nullptr);
// Get the current thread and process.
KThread& cur_thread = Kernel::GetCurrentThread(kernel);
KThread& cur_thread = GetCurrentThread(kernel);
KProcess& cur_process = *kernel.CurrentProcess();
// If the thread's yield count matches, there's nothing for us to do.
@@ -642,7 +642,7 @@ KScheduler::~KScheduler() {
ASSERT(!idle_thread);
}
KThread* KScheduler::GetCurrentThread() const {
KThread* KScheduler::GetSchedulerCurrentThread() const {
if (auto result = current_thread.load(); result) {
return result;
}
@@ -654,7 +654,7 @@ u64 KScheduler::GetLastContextSwitchTicks() const {
}
void KScheduler::RescheduleCurrentCore() {
ASSERT(GetCurrentThread()->GetDisableDispatchCount() == 1);
ASSERT(GetCurrentThread(system.Kernel()).GetDisableDispatchCount() == 1);
auto& phys_core = system.Kernel().PhysicalCore(core_id);
if (phys_core.IsInterrupted()) {
@@ -665,7 +665,7 @@ void KScheduler::RescheduleCurrentCore() {
if (state.needs_scheduling.load()) {
Schedule();
} else {
GetCurrentThread()->EnableDispatch();
GetCurrentThread(system.Kernel()).EnableDispatch();
guard.Unlock();
}
}
@@ -710,6 +710,7 @@ void KScheduler::Reload(KThread* thread) {
Core::ARM_Interface& cpu_core = system.ArmInterface(core_id);
cpu_core.LoadContext(thread->GetContext32());
cpu_core.LoadContext(thread->GetContext64());
cpu_core.LoadWatchpointArray(thread->GetOwnerProcess()->GetWatchpoints());
cpu_core.SetTlsAddress(thread->GetTLSAddress());
cpu_core.SetTPIDR_EL0(thread->GetTPIDR_EL0());
cpu_core.ClearExclusiveState();
@@ -717,13 +718,18 @@ void KScheduler::Reload(KThread* thread) {
void KScheduler::SwitchContextStep2() {
// Load context of new thread
Reload(GetCurrentThread());
Reload(GetCurrentThreadPointer(system.Kernel()));
RescheduleCurrentCore();
}
void KScheduler::Schedule() {
ASSERT(GetCurrentThread(system.Kernel()).GetDisableDispatchCount() == 1);
this->ScheduleImpl();
}
void KScheduler::ScheduleImpl() {
KThread* previous_thread = GetCurrentThread();
KThread* previous_thread = GetCurrentThreadPointer(system.Kernel());
KThread* next_thread = state.highest_priority_thread;
state.needs_scheduling.store(false);
@@ -761,6 +767,7 @@ void KScheduler::ScheduleImpl() {
old_context = &previous_thread->GetHostContext();
// Set the new thread.
SetCurrentThread(system.Kernel(), next_thread);
current_thread.store(next_thread);
guard.Unlock();
@@ -804,6 +811,7 @@ void KScheduler::SwitchToCurrent() {
}
}
auto thread = next_thread ? next_thread : idle_thread;
SetCurrentThread(system.Kernel(), thread);
Common::Fiber::YieldTo(switch_fiber, *thread->GetHostContext());
} while (!is_switch_pending());
}
@@ -829,6 +837,7 @@ void KScheduler::Initialize() {
idle_thread = KThread::Create(system.Kernel());
ASSERT(KThread::InitializeIdleThread(system, idle_thread, core_id).IsSuccess());
idle_thread->SetName(fmt::format("IdleThread:{}", core_id));
idle_thread->EnableDispatch();
}
KScopedSchedulerLock::KScopedSchedulerLock(KernelCore& kernel)

View File

@@ -48,18 +48,13 @@ public:
void Reload(KThread* thread);
/// Gets the current running thread
[[nodiscard]] KThread* GetCurrentThread() const;
[[nodiscard]] KThread* GetSchedulerCurrentThread() const;
/// Gets the idle thread
[[nodiscard]] KThread* GetIdleThread() const {
return idle_thread;
}
/// Returns true if the scheduler is idle
[[nodiscard]] bool IsIdle() const {
return GetCurrentThread() == idle_thread;
}
/// Gets the timestamp for the last context switch in ticks.
[[nodiscard]] u64 GetLastContextSwitchTicks() const;
@@ -149,10 +144,7 @@ private:
void RotateScheduledQueue(s32 cpu_core_id, s32 priority);
void Schedule() {
ASSERT(GetCurrentThread()->GetDisableDispatchCount() == 1);
this->ScheduleImpl();
}
void Schedule();
/// Switches the CPU's active thread context to that of the specified thread
void ScheduleImpl();

View File

@@ -225,7 +225,7 @@ ResultCode KThread::Initialize(KThreadFunction func, uintptr_t arg, VAddr user_s
// Setup the stack parameters.
StackParameters& sp = GetStackParameters();
sp.cur_thread = this;
sp.disable_count = 0;
sp.disable_count = 1;
SetInExceptionHandler();
// Set thread ID.
@@ -267,15 +267,15 @@ ResultCode KThread::InitializeDummyThread(KThread* thread) {
ResultCode KThread::InitializeIdleThread(Core::System& system, KThread* thread, s32 virt_core) {
return InitializeThread(thread, {}, {}, {}, IdleThreadPriority, virt_core, {}, ThreadType::Main,
Core::CpuManager::GetIdleThreadStartFunc(),
system.GetCpuManager().GetStartFuncParamater());
system.GetCpuManager().GetStartFuncParameter());
}
ResultCode KThread::InitializeHighPriorityThread(Core::System& system, KThread* thread,
KThreadFunction func, uintptr_t arg,
s32 virt_core) {
return InitializeThread(thread, func, arg, {}, {}, virt_core, nullptr, ThreadType::HighPriority,
Core::CpuManager::GetSuspendThreadStartFunc(),
system.GetCpuManager().GetStartFuncParamater());
Core::CpuManager::GetShutdownThreadStartFunc(),
system.GetCpuManager().GetStartFuncParameter());
}
ResultCode KThread::InitializeUserThread(Core::System& system, KThread* thread,
@@ -284,7 +284,7 @@ ResultCode KThread::InitializeUserThread(Core::System& system, KThread* thread,
system.Kernel().GlobalSchedulerContext().AddThread(thread);
return InitializeThread(thread, func, arg, user_stack_top, prio, virt_core, owner,
ThreadType::User, Core::CpuManager::GetGuestThreadStartFunc(),
system.GetCpuManager().GetStartFuncParamater());
system.GetCpuManager().GetStartFuncParameter());
}
void KThread::PostDestroy(uintptr_t arg) {
@@ -382,7 +382,7 @@ void KThread::FinishTermination() {
for (std::size_t i = 0; i < static_cast<std::size_t>(Core::Hardware::NUM_CPU_CORES); ++i) {
KThread* core_thread{};
do {
core_thread = kernel.Scheduler(i).GetCurrentThread();
core_thread = kernel.Scheduler(i).GetSchedulerCurrentThread();
} while (core_thread == this);
}
}
@@ -631,7 +631,7 @@ ResultCode KThread::SetCoreMask(s32 core_id_, u64 v_affinity_mask) {
s32 thread_core;
for (thread_core = 0; thread_core < static_cast<s32>(Core::Hardware::NUM_CPU_CORES);
++thread_core) {
if (kernel.Scheduler(thread_core).GetCurrentThread() == this) {
if (kernel.Scheduler(thread_core).GetSchedulerCurrentThread() == this) {
thread_is_current = true;
break;
}
@@ -748,6 +748,19 @@ void KThread::Continue() {
KScheduler::OnThreadStateChanged(kernel, this, old_state);
}
void KThread::WaitUntilSuspended() {
// Make sure we have a suspend requested.
ASSERT(IsSuspendRequested());
// Loop until the thread is not executing on any core.
for (std::size_t i = 0; i < static_cast<std::size_t>(Core::Hardware::NUM_CPU_CORES); ++i) {
KThread* core_thread{};
do {
core_thread = kernel.Scheduler(i).GetSchedulerCurrentThread();
} while (core_thread == this);
}
}
ResultCode KThread::SetActivity(Svc::ThreadActivity activity) {
// Lock ourselves.
KScopedLightLock lk(activity_pause_lock);
@@ -809,7 +822,7 @@ ResultCode KThread::SetActivity(Svc::ThreadActivity activity) {
// Check if the thread is currently running.
// If it is, we'll need to retry.
for (auto i = 0; i < static_cast<s32>(Core::Hardware::NUM_CPU_CORES); ++i) {
if (kernel.Scheduler(i).GetCurrentThread() == this) {
if (kernel.Scheduler(i).GetSchedulerCurrentThread() == this) {
thread_is_current = true;
break;
}
@@ -1014,8 +1027,6 @@ ResultCode KThread::Run() {
// Set our state and finish.
SetState(ThreadState::Runnable);
DisableDispatch();
return ResultSuccess;
}
}
@@ -1164,6 +1175,10 @@ std::shared_ptr<Common::Fiber>& KThread::GetHostContext() {
return host_context;
}
void SetCurrentThread(KernelCore& kernel, KThread* thread) {
kernel.SetCurrentEmuThread(thread);
}
KThread* GetCurrentThreadPointer(KernelCore& kernel) {
return kernel.GetCurrentEmuThread();
}

View File

@@ -106,6 +106,7 @@ enum class StepState : u32 {
StepPerformed, ///< Thread has stepped, waiting to be scheduled again
};
void SetCurrentThread(KernelCore& kernel, KThread* thread);
[[nodiscard]] KThread* GetCurrentThreadPointer(KernelCore& kernel);
[[nodiscard]] KThread& GetCurrentThread(KernelCore& kernel);
[[nodiscard]] s32 GetCurrentCoreId(KernelCore& kernel);
@@ -207,6 +208,8 @@ public:
void Continue();
void WaitUntilSuspended();
constexpr void SetSyncedIndex(s32 index) {
synced_index = index;
}

View File

@@ -76,7 +76,7 @@ struct KernelCore::Impl {
InitializeMemoryLayout();
Init::InitializeKPageBufferSlabHeap(system);
InitializeSchedulers();
InitializeSuspendThreads();
InitializeShutdownThreads();
InitializePreemption(kernel);
RegisterHostThread();
@@ -143,9 +143,9 @@ struct KernelCore::Impl {
CleanupObject(system_resource_limit);
for (u32 core_id = 0; core_id < Core::Hardware::NUM_CPU_CORES; core_id++) {
if (suspend_threads[core_id]) {
suspend_threads[core_id]->Close();
suspend_threads[core_id] = nullptr;
if (shutdown_threads[core_id]) {
shutdown_threads[core_id]->Close();
shutdown_threads[core_id] = nullptr;
}
schedulers[core_id]->Finalize();
@@ -247,14 +247,13 @@ struct KernelCore::Impl {
system.CoreTiming().ScheduleEvent(time_interval, preemption_event);
}
void InitializeSuspendThreads() {
void InitializeShutdownThreads() {
for (u32 core_id = 0; core_id < Core::Hardware::NUM_CPU_CORES; core_id++) {
suspend_threads[core_id] = KThread::Create(system.Kernel());
ASSERT(KThread::InitializeHighPriorityThread(system, suspend_threads[core_id], {}, {},
shutdown_threads[core_id] = KThread::Create(system.Kernel());
ASSERT(KThread::InitializeHighPriorityThread(system, shutdown_threads[core_id], {}, {},
core_id)
.IsSuccess());
suspend_threads[core_id]->SetName(fmt::format("SuspendThread:{}", core_id));
suspend_threads[core_id]->DisableDispatch();
shutdown_threads[core_id]->SetName(fmt::format("SuspendThread:{}", core_id));
}
}
@@ -332,6 +331,8 @@ struct KernelCore::Impl {
return is_shutting_down.load(std::memory_order_relaxed);
}
static inline thread_local KThread* current_thread{nullptr};
KThread* GetCurrentEmuThread() {
// If we are shutting down the kernel, none of this is relevant anymore.
if (IsShuttingDown()) {
@@ -342,7 +343,12 @@ struct KernelCore::Impl {
if (thread_id >= Core::Hardware::NUM_CPU_CORES) {
return GetHostDummyThread();
}
return schedulers[thread_id]->GetCurrentThread();
return current_thread;
}
void SetCurrentEmuThread(KThread* thread) {
current_thread = thread;
}
void DeriveInitialMemoryLayout() {
@@ -769,7 +775,7 @@ struct KernelCore::Impl {
std::weak_ptr<ServiceThread> default_service_thread;
Common::ThreadWorker service_threads_manager;
std::array<KThread*, Core::Hardware::NUM_CPU_CORES> suspend_threads;
std::array<KThread*, Core::Hardware::NUM_CPU_CORES> shutdown_threads;
std::array<Core::CPUInterruptHandler, Core::Hardware::NUM_CPU_CORES> interrupts{};
std::array<std::unique_ptr<Kernel::KScheduler>, Core::Hardware::NUM_CPU_CORES> schedulers{};
@@ -920,6 +926,12 @@ const KAutoObjectWithListContainer& KernelCore::ObjectListContainer() const {
return *impl->global_object_list_container;
}
void KernelCore::InterruptAllPhysicalCores() {
for (auto& physical_core : impl->cores) {
physical_core.Interrupt();
}
}
void KernelCore::InvalidateAllInstructionCaches() {
for (auto& physical_core : impl->cores) {
physical_core.ArmInterface().ClearInstructionCache();
@@ -1019,6 +1031,10 @@ KThread* KernelCore::GetCurrentEmuThread() const {
return impl->GetCurrentEmuThread();
}
void KernelCore::SetCurrentEmuThread(KThread* thread) {
impl->SetCurrentEmuThread(thread);
}
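The core of this change is that the current emulated thread is now tracked in a thread_local pointer per host thread instead of being looked up through the per-core scheduler. A standalone sketch of that pattern with illustrative names; not yuzu code:

#include <cstdio>
#include <thread>

struct EmuThread {
    int id;
};

thread_local EmuThread* current_thread = nullptr;

void SetCurrentThread(EmuThread* thread) {
    current_thread = thread;
}

EmuThread* GetCurrentThreadPointer() {
    return current_thread; // each host thread sees only its own value
}

int main() {
    EmuThread a{1};
    EmuThread b{2};

    std::thread t1([&a] {
        SetCurrentThread(&a);
        std::printf("host thread 1 runs guest thread %d\n", GetCurrentThreadPointer()->id);
    });
    std::thread t2([&b] {
        SetCurrentThread(&b);
        std::printf("host thread 2 runs guest thread %d\n", GetCurrentThreadPointer()->id);
    });

    t1.join();
    t2.join();
}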
KMemoryManager& KernelCore::MemoryManager() {
return *impl->memory_manager;
}
@@ -1067,19 +1083,29 @@ const Kernel::KSharedMemory& KernelCore::GetHidBusSharedMem() const {
return *impl->hidbus_shared_mem;
}
void KernelCore::Suspend(bool in_suspention) {
const bool should_suspend = exception_exited || in_suspention;
{
KScopedSchedulerLock lock(*this);
const auto state = should_suspend ? ThreadState::Runnable : ThreadState::Waiting;
for (u32 core_id = 0; core_id < Core::Hardware::NUM_CPU_CORES; core_id++) {
impl->suspend_threads[core_id]->SetState(state);
impl->suspend_threads[core_id]->SetWaitReasonForDebugging(
ThreadWaitReasonForDebugging::Suspended);
void KernelCore::Suspend(bool suspended) {
const bool should_suspend{exception_exited || suspended};
const auto activity = should_suspend ? ProcessActivity::Paused : ProcessActivity::Runnable;
for (auto* process : GetProcessList()) {
process->SetActivity(activity);
if (should_suspend) {
// Wait for execution to stop
for (auto* thread : process->GetThreadList()) {
thread->WaitUntilSuspended();
}
}
}
}
void KernelCore::ShutdownCores() {
for (auto* thread : impl->shutdown_threads) {
void(thread->Run());
}
InterruptAllPhysicalCores();
}
bool KernelCore::IsMulticore() const {
return impl->is_multicore;
}

View File

@@ -184,6 +184,8 @@ public:
const std::array<Core::CPUInterruptHandler, Core::Hardware::NUM_CPU_CORES>& Interrupts() const;
void InterruptAllPhysicalCores();
void InvalidateAllInstructionCaches();
void InvalidateCpuInstructionCacheRange(VAddr addr, std::size_t size);
@@ -224,6 +226,9 @@ public:
/// Gets the current host_thread/guest_thread pointer.
KThread* GetCurrentEmuThread() const;
/// Sets the current guest_thread pointer.
void SetCurrentEmuThread(KThread* thread);
/// Gets the current host_thread handle.
u32 GetCurrentHostThreadID() const;
@@ -269,12 +274,15 @@ public:
/// Gets the shared memory object for HIDBus services.
const Kernel::KSharedMemory& GetHidBusSharedMem() const;
/// Suspend/unsuspend the OS.
void Suspend(bool in_suspention);
/// Suspend/unsuspend all processes.
void Suspend(bool suspend);
/// Exceptional exit the OS.
/// Exceptional exit all processes.
void ExceptionalExit();
/// Notify emulated CPU cores to shut down.
void ShutdownCores();
bool IsMulticore() const;
bool IsShuttingDown() const;

View File

@@ -327,7 +327,6 @@ static ResultCode SendSyncRequest(Core::System& system, Handle handle) {
LOG_TRACE(Kernel_SVC, "called handle=0x{:08X}({})", handle, session->GetName());
auto thread = kernel.CurrentScheduler()->GetCurrentThread();
{
KScopedSchedulerLock lock(kernel);
@@ -337,7 +336,7 @@ static ResultCode SendSyncRequest(Core::System& system, Handle handle) {
session->SendSyncRequest(&GetCurrentThread(kernel), system.Memory(), system.CoreTiming());
}
return thread->GetWaitResult();
return GetCurrentThread(kernel).GetWaitResult();
}
static ResultCode SendSyncRequest32(Core::System& system, Handle handle) {
@@ -624,7 +623,7 @@ static void Break(Core::System& system, u32 reason, u64 info1, u64 info2) {
handle_debug_buffer(info1, info2);
auto* const current_thread = system.Kernel().CurrentScheduler()->GetCurrentThread();
auto* const current_thread = GetCurrentThreadPointer(system.Kernel());
const auto thread_processor_id = current_thread->GetActiveCore();
system.ArmInterface(static_cast<std::size_t>(thread_processor_id)).LogBacktrace();
}
@@ -692,6 +691,9 @@ static ResultCode GetInfo(Core::System& system, u64* result, u64 info_id, Handle
// 6.0.0+
TotalPhysicalMemoryAvailableWithoutSystemResource = 21,
TotalPhysicalMemoryUsedWithoutSystemResource = 22,
// Homebrew only
MesosphereCurrentProcess = 65001,
};
const auto info_id_type = static_cast<GetInfoType>(info_id);
@@ -884,7 +886,7 @@ static ResultCode GetInfo(Core::System& system, u64* result, u64 info_id, Handle
const auto& core_timing = system.CoreTiming();
const auto& scheduler = *system.Kernel().CurrentScheduler();
const auto* const current_thread = scheduler.GetCurrentThread();
const auto* const current_thread = GetCurrentThreadPointer(system.Kernel());
const bool same_thread = current_thread == thread.GetPointerUnsafe();
const u64 prev_ctx_ticks = scheduler.GetLastContextSwitchTicks();
@@ -914,6 +916,27 @@ static ResultCode GetInfo(Core::System& system, u64* result, u64 info_id, Handle
*result = system.Kernel().CurrentScheduler()->GetIdleThread()->GetCpuTime();
return ResultSuccess;
}
case GetInfoType::MesosphereCurrentProcess: {
// Verify the input handle is invalid.
R_UNLESS(handle == InvalidHandle, ResultInvalidHandle);
// Verify the sub-type is valid.
R_UNLESS(info_sub_id == 0, ResultInvalidCombination);
// Get the handle table.
KProcess* current_process = system.Kernel().CurrentProcess();
KHandleTable& handle_table = current_process->GetHandleTable();
// Get a new handle for the current process.
Handle tmp;
R_TRY(handle_table.Add(&tmp, current_process));
// Set the output.
*result = tmp;
// We succeeded.
return ResultSuccess;
}
default:
LOG_ERROR(Kernel_SVC, "Unimplemented svcGetInfo id=0x{:016X}", info_id);
return ResultInvalidEnumValue;
@@ -1103,7 +1126,7 @@ static ResultCode GetThreadContext(Core::System& system, VAddr out_context, Hand
if (thread->GetRawState() != ThreadState::Runnable) {
bool current = false;
for (auto i = 0; i < static_cast<s32>(Core::Hardware::NUM_CPU_CORES); ++i) {
if (thread.GetPointerUnsafe() == kernel.Scheduler(i).GetCurrentThread()) {
if (thread.GetPointerUnsafe() == kernel.Scheduler(i).GetSchedulerCurrentThread()) {
current = true;
break;
}
@@ -1726,11 +1749,12 @@ static ResultCode UnmapProcessCodeMemory(Core::System& system, Handle process_ha
/// Exits the current process
static void ExitProcess(Core::System& system) {
auto* current_process = system.Kernel().CurrentProcess();
UNIMPLEMENTED();
LOG_INFO(Kernel_SVC, "Process {} exiting", current_process->GetProcessID());
ASSERT_MSG(current_process->GetStatus() == ProcessStatus::Running,
"Process has already exited");
system.Exit();
}
static void ExitProcess32(Core::System& system) {
@@ -1850,7 +1874,7 @@ static ResultCode StartThread32(Core::System& system, Handle thread_handle) {
static void ExitThread(Core::System& system) {
LOG_DEBUG(Kernel_SVC, "called, pc=0x{:08X}", system.CurrentArmInterface().GetPC());
auto* const current_thread = system.Kernel().CurrentScheduler()->GetCurrentThread();
auto* const current_thread = GetCurrentThreadPointer(system.Kernel());
system.GlobalSchedulerContext().RemoveThread(current_thread);
current_thread->Exit();
system.Kernel().UnregisterInUseObject(current_thread);
@@ -2537,7 +2561,7 @@ static ResultCode GetThreadList(Core::System& system, u32* out_num_threads, VAdd
return ResultOutOfRange;
}
const auto* const current_process = system.Kernel().CurrentProcess();
auto* const current_process = system.Kernel().CurrentProcess();
const auto total_copy_size = out_thread_ids_size * sizeof(u64);
if (out_thread_ids_size > 0 &&
@@ -2989,11 +3013,10 @@ static const FunctionDef* GetSVCInfo64(u32 func_num) {
}
void Call(Core::System& system, u32 immediate) {
system.ExitDynarmicProfile();
auto& kernel = system.Kernel();
kernel.EnterSVCProfile();
auto* thread = kernel.CurrentScheduler()->GetCurrentThread();
auto* thread = GetCurrentThreadPointer(kernel);
thread->SetIsCallingSvc();
const FunctionDef* info = system.CurrentProcess()->Is64BitProcess() ? GetSVCInfo64(immediate)
@@ -3014,8 +3037,6 @@ void Call(Core::System& system, u32 immediate) {
auto* host_context = thread->GetHostContext().get();
host_context->Rewind();
}
system.EnterDynarmicProfile();
}
} // namespace Kernel::Svc
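The MesosphereCurrentProcess case added to GetInfo above uses the R_UNLESS/R_TRY early-return style borrowed from Atmosphere. A self-contained sketch of how such macros behave; the macro bodies and result values below are illustrative, not the actual yuzu/Atmosphere definitions:

#include <cstdio>

using ResultCode = int;
constexpr ResultCode ResultSuccess = 0;
constexpr ResultCode ResultInvalidHandle = 1; // illustrative value

#define R_UNLESS(cond, rc) do { if (!(cond)) { return rc; } } while (0)
#define R_TRY(expr) do { const ResultCode rc_ = (expr); if (rc_ != ResultSuccess) { return rc_; } } while (0)

ResultCode AddHandle(int* out) {
    *out = 42; // pretend a new handle was allocated
    return ResultSuccess;
}

ResultCode GetCurrentProcessHandle(int handle_in, int* out) {
    R_UNLESS(handle_in == 0, ResultInvalidHandle); // the input handle must be InvalidHandle
    int tmp{};
    R_TRY(AddHandle(&tmp)); // propagate failure, otherwise keep going
    *out = tmp;
    return ResultSuccess;
}

int main() {
    int handle{};
    const ResultCode rc = GetCurrentProcessHandle(0, &handle);
    std::printf("rc=%d handle=%d\n", rc, handle); // rc=0 handle=42
}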

View File

@@ -686,7 +686,7 @@ ICommonStateGetter::ICommonStateGetter(Core::System& system_,
{66, &ICommonStateGetter::SetCpuBoostMode, "SetCpuBoostMode"},
{67, nullptr, "CancelCpuBoostMode"},
{68, nullptr, "GetBuiltInDisplayType"},
{80, nullptr, "PerformSystemButtonPressingIfInFocus"},
{80, &ICommonStateGetter::PerformSystemButtonPressingIfInFocus, "PerformSystemButtonPressingIfInFocus"},
{90, nullptr, "SetPerformanceConfigurationChangedNotification"},
{91, nullptr, "GetCurrentPerformanceConfiguration"},
{100, nullptr, "SetHandlingHomeButtonShortPressedEnabled"},
@@ -826,6 +826,16 @@ void ICommonStateGetter::SetCpuBoostMode(Kernel::HLERequestContext& ctx) {
apm_sys->SetCpuBoostMode(ctx);
}
void ICommonStateGetter::PerformSystemButtonPressingIfInFocus(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
const auto system_button{rp.PopEnum<SystemButtonType>()};
LOG_WARNING(Service_AM, "(STUBBED) called, system_button={}", system_button);
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(ResultSuccess);
}
void ICommonStateGetter::SetRequestExitToLibraryAppletAtExecuteNextProgramEnabled(
Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service_AM, "(STUBBED) called");

View File

@@ -220,6 +220,18 @@ private:
Docked = 1,
};
// This is nn::am::service::SystemButtonType
enum class SystemButtonType {
None,
HomeButtonShortPressing,
HomeButtonLongPressing,
PowerButtonShortPressing,
PowerButtonLongPressing,
ShutdownSystem,
CaptureButtonShortPressing,
CaptureButtonLongPressing,
};
void GetEventHandle(Kernel::HLERequestContext& ctx);
void ReceiveMessage(Kernel::HLERequestContext& ctx);
void GetCurrentFocusState(Kernel::HLERequestContext& ctx);
@@ -234,6 +246,7 @@ private:
void EndVrModeEx(Kernel::HLERequestContext& ctx);
void GetDefaultDisplayResolution(Kernel::HLERequestContext& ctx);
void SetCpuBoostMode(Kernel::HLERequestContext& ctx);
void PerformSystemButtonPressingIfInFocus(Kernel::HLERequestContext& ctx);
void SetRequestExitToLibraryAppletAtExecuteNextProgramEnabled(Kernel::HLERequestContext& ctx);
std::shared_ptr<AppletMessageQueue> msg_queue;

View File

@@ -1,6 +1,11 @@
// SPDX-FileCopyrightText: Copyright 2021 yuzu Emulator Project
// SPDX-License-Identifier: GPL-2.0-or-later
#include <algorithm>
#include <cstring>
#include "common/assert.h"
#include "common/logging/log.h"
#include "core/hle/ipc_helpers.h"
#include "core/hle/service/glue/notif.h"
@@ -9,11 +14,11 @@ namespace Service::Glue {
NOTIF_A::NOTIF_A(Core::System& system_) : ServiceFramework{system_, "notif:a"} {
// clang-format off
static const FunctionInfo functions[] = {
{500, nullptr, "RegisterAlarmSetting"},
{510, nullptr, "UpdateAlarmSetting"},
{500, &NOTIF_A::RegisterAlarmSetting, "RegisterAlarmSetting"},
{510, &NOTIF_A::UpdateAlarmSetting, "UpdateAlarmSetting"},
{520, &NOTIF_A::ListAlarmSettings, "ListAlarmSettings"},
{530, nullptr, "LoadApplicationParameter"},
{540, nullptr, "DeleteAlarmSetting"},
{530, &NOTIF_A::LoadApplicationParameter, "LoadApplicationParameter"},
{540, &NOTIF_A::DeleteAlarmSetting, "DeleteAlarmSetting"},
{1000, &NOTIF_A::Initialize, "Initialize"},
};
// clang-format on
@@ -23,21 +28,132 @@ NOTIF_A::NOTIF_A(Core::System& system_) : ServiceFramework{system_, "notif:a"} {
NOTIF_A::~NOTIF_A() = default;
void NOTIF_A::ListAlarmSettings(Kernel::HLERequestContext& ctx) {
// Returns an array of AlarmSetting
constexpr s32 alarm_count = 0;
void NOTIF_A::RegisterAlarmSetting(Kernel::HLERequestContext& ctx) {
const auto alarm_setting_buffer_size = ctx.GetReadBufferSize(0);
const auto application_parameter_size = ctx.GetReadBufferSize(1);
LOG_WARNING(Service_NOTIF, "(STUBBED) called");
ASSERT_MSG(alarm_setting_buffer_size == sizeof(AlarmSetting),
"alarm_setting_buffer_size is not 0x40 bytes");
ASSERT_MSG(application_parameter_size <= sizeof(ApplicationParameter),
"application_parameter_size is bigger than 0x400 bytes");
AlarmSetting new_alarm{};
memcpy(&new_alarm, ctx.ReadBuffer(0).data(), sizeof(AlarmSetting));
// TODO: Count alarms per game id
if (alarms.size() >= max_alarms) {
LOG_ERROR(Service_NOTIF, "Alarm limit reached");
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(ResultUnknown);
return;
}
new_alarm.alarm_setting_id = last_alarm_setting_id++;
alarms.push_back(new_alarm);
// TODO: Save application parameter data
LOG_WARNING(Service_NOTIF,
"(STUBBED) called, application_parameter_size={}, setting_id={}, kind={}, muted={}",
application_parameter_size, new_alarm.alarm_setting_id, new_alarm.kind,
new_alarm.muted);
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(ResultSuccess);
rb.Push(new_alarm.alarm_setting_id);
}
void NOTIF_A::UpdateAlarmSetting(Kernel::HLERequestContext& ctx) {
const auto alarm_setting_buffer_size = ctx.GetReadBufferSize(0);
const auto application_parameter_size = ctx.GetReadBufferSize(1);
ASSERT_MSG(alarm_setting_buffer_size == sizeof(AlarmSetting),
"alarm_setting_buffer_size is not 0x40 bytes");
ASSERT_MSG(application_parameter_size <= sizeof(ApplicationParameter),
"application_parameter_size is bigger than 0x400 bytes");
AlarmSetting alarm_setting{};
memcpy(&alarm_setting, ctx.ReadBuffer(0).data(), sizeof(AlarmSetting));
const auto alarm_it = GetAlarmFromId(alarm_setting.alarm_setting_id);
if (alarm_it != alarms.end()) {
LOG_DEBUG(Service_NOTIF, "Alarm updated");
*alarm_it = alarm_setting;
// TODO: Save application parameter data
}
LOG_WARNING(Service_NOTIF,
"(STUBBED) called, application_parameter_size={}, setting_id={}, kind={}, muted={}",
application_parameter_size, alarm_setting.alarm_setting_id, alarm_setting.kind,
alarm_setting.muted);
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(ResultSuccess);
}
void NOTIF_A::ListAlarmSettings(Kernel::HLERequestContext& ctx) {
LOG_INFO(Service_NOTIF, "called, alarm_count={}", alarms.size());
// TODO: Only return alarms of this game id
ctx.WriteBuffer(alarms);
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(ResultSuccess);
rb.Push(alarm_count);
rb.Push(static_cast<u32>(alarms.size()));
}
void NOTIF_A::LoadApplicationParameter(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
const auto alarm_setting_id{rp.Pop<AlarmSettingId>()};
const auto alarm_it = GetAlarmFromId(alarm_setting_id);
if (alarm_it == alarms.end()) {
LOG_ERROR(Service_NOTIF, "Invalid alarm setting id={}", alarm_setting_id);
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(ResultUnknown);
return;
}
// TODO: Read application parameter related to this setting id
ApplicationParameter application_parameter{};
LOG_WARNING(Service_NOTIF, "(STUBBED) called, alarm_setting_id={}", alarm_setting_id);
ctx.WriteBuffer(application_parameter);
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(ResultSuccess);
rb.Push(static_cast<u32>(application_parameter.size()));
}
void NOTIF_A::DeleteAlarmSetting(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
const auto alarm_setting_id{rp.Pop<AlarmSettingId>()};
std::erase_if(alarms, [alarm_setting_id](const AlarmSetting& alarm) {
return alarm.alarm_setting_id == alarm_setting_id;
});
LOG_INFO(Service_NOTIF, "called, alarm_setting_id={}", alarm_setting_id);
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(ResultSuccess);
}
void NOTIF_A::Initialize(Kernel::HLERequestContext& ctx) {
// TODO: Load previous alarms from config
LOG_WARNING(Service_NOTIF, "(STUBBED) called");
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(ResultSuccess);
}
std::vector<NOTIF_A::AlarmSetting>::iterator NOTIF_A::GetAlarmFromId(
AlarmSettingId alarm_setting_id) {
return std::find_if(alarms.begin(), alarms.end(),
[alarm_setting_id](const AlarmSetting& alarm) {
return alarm.alarm_setting_id == alarm_setting_id;
});
}
} // namespace Service::Glue
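The alarm bookkeeping above reduces to a bounded container keyed by an incrementing setting id and searched with std::find_if. A self-contained sketch of that pattern with hypothetical names, and with none of the IPC plumbing (illustrative only, not part of the diff):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical stand-in for the service-side bookkeeping shown above: a
// bounded list keyed by an incrementing id, looked up with std::find_if.
struct Alarm {
    std::uint16_t id{};
    std::uint8_t kind{};
    std::uint8_t muted{};
};

class AlarmStore {
public:
    static constexpr std::size_t max_alarms = 8;

    // Returns the assigned id, or std::nullopt once the limit is reached.
    std::optional<std::uint16_t> Register(Alarm alarm) {
        if (alarms.size() >= max_alarms) {
            return std::nullopt;
        }
        alarm.id = next_id++;
        alarms.push_back(alarm);
        return alarm.id;
    }

    Alarm* Find(std::uint16_t id) {
        const auto it = std::find_if(alarms.begin(), alarms.end(),
                                     [id](const Alarm& a) { return a.id == id; });
        return it != alarms.end() ? &*it : nullptr;
    }

private:
    std::vector<Alarm> alarms;
    std::uint16_t next_id{};
};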

View File

@@ -3,6 +3,10 @@
#pragma once
#include <array>
#include <vector>
#include "common/uuid.h"
#include "core/hle/service/service.h"
namespace Core {
@@ -17,8 +21,52 @@ public:
~NOTIF_A() override;
private:
static constexpr std::size_t max_alarms = 8;
// This is nn::notification::AlarmSettingId
using AlarmSettingId = u16;
static_assert(sizeof(AlarmSettingId) == 0x2, "AlarmSettingId is an invalid size");
using ApplicationParameter = std::array<u8, 0x400>;
static_assert(sizeof(ApplicationParameter) == 0x400, "ApplicationParameter is an invalid size");
struct DailyAlarmSetting {
s8 hour;
s8 minute;
};
static_assert(sizeof(DailyAlarmSetting) == 0x2, "DailyAlarmSetting is an invalid size");
struct WeeklyScheduleAlarmSetting {
INSERT_PADDING_BYTES(0xA);
std::array<DailyAlarmSetting, 0x7> day_of_week;
};
static_assert(sizeof(WeeklyScheduleAlarmSetting) == 0x18,
"WeeklyScheduleAlarmSetting is an invalid size");
// This is nn::notification::AlarmSetting
struct AlarmSetting {
AlarmSettingId alarm_setting_id;
u8 kind;
u8 muted;
INSERT_PADDING_BYTES(0x4);
Common::UUID account_id;
u64 application_id;
INSERT_PADDING_BYTES(0x8);
WeeklyScheduleAlarmSetting schedule;
};
static_assert(sizeof(AlarmSetting) == 0x40, "AlarmSetting is an invalid size");
void RegisterAlarmSetting(Kernel::HLERequestContext& ctx);
void UpdateAlarmSetting(Kernel::HLERequestContext& ctx);
void ListAlarmSettings(Kernel::HLERequestContext& ctx);
void LoadApplicationParameter(Kernel::HLERequestContext& ctx);
void DeleteAlarmSetting(Kernel::HLERequestContext& ctx);
void Initialize(Kernel::HLERequestContext& ctx);
std::vector<AlarmSetting>::iterator GetAlarmFromId(AlarmSettingId alarm_setting_id);
std::vector<AlarmSetting> alarms{};
AlarmSettingId last_alarm_setting_id{};
};
} // namespace Service::Glue
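The static_asserts above only pin the total sizes. For illustration, a hypothetical mirror struct makes the field offsets implied by the declared padding explicit (the UUID is modeled as 16 raw bytes; the offsets are inferred from the declaration order above, not taken from any official definition):

#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical mirror of the AlarmSetting layout above, used only to show
// the offsets the padding implies: id 0x0, kind 0x2, muted 0x3,
// account_id 0x8, application_id 0x18, schedule 0x28, total size 0x40.
struct AlarmSettingMirror {
    std::uint16_t alarm_setting_id;
    std::uint8_t kind;
    std::uint8_t muted;
    std::array<std::uint8_t, 0x4> pad0;
    std::array<std::uint8_t, 0x10> account_id;
    std::uint64_t application_id;
    std::array<std::uint8_t, 0x8> pad1;
    std::array<std::uint8_t, 0x18> schedule;
};
static_assert(sizeof(AlarmSettingMirror) == 0x40);
static_assert(offsetof(AlarmSettingMirror, account_id) == 0x8);
static_assert(offsetof(AlarmSettingMirror, application_id) == 0x18);
static_assert(offsetof(AlarmSettingMirror, schedule) == 0x28);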

View File

@@ -150,9 +150,9 @@ NvResult nvhost_ctrl::IocCtrlEventWait(const std::vector<u8>& input, std::vector
event.event->GetWritableEvent().Clear();
if (events_interface.failed[event_id]) {
{
auto lk = system.StallCPU();
auto lk = system.StallProcesses();
gpu.WaitFence(params.syncpt_id, target_value);
system.UnstallCPU();
system.UnstallProcesses();
}
std::memcpy(output.data(), &params, sizeof(params));
events_interface.failed[event_id] = false;

View File

@@ -67,6 +67,16 @@ struct Memory::Impl {
return system.DeviceMemory().GetPointer(paddr) + vaddr;
}
[[nodiscard]] u8* GetPointerFromDebugMemory(VAddr vaddr) const {
const PAddr paddr{current_page_table->backing_addr[vaddr >> PAGE_BITS]};
if (paddr == 0) {
return {};
}
return system.DeviceMemory().GetPointer(paddr) + vaddr;
}
u8 Read8(const VAddr addr) {
return Read<u8>(addr);
}
@@ -187,6 +197,12 @@ struct Memory::Impl {
on_memory(copy_amount, mem_ptr);
break;
}
case Common::PageType::DebugMemory: {
DEBUG_ASSERT(pointer);
u8* const mem_ptr{GetPointerFromDebugMemory(current_vaddr)};
on_memory(copy_amount, mem_ptr);
break;
}
case Common::PageType::RasterizerCachedMemory: {
u8* const host_ptr{GetPointerFromRasterizerCachedMemory(current_vaddr)};
on_rasterizer(current_vaddr, copy_amount, host_ptr);
@@ -316,6 +332,58 @@ struct Memory::Impl {
});
}
void MarkRegionDebug(VAddr vaddr, u64 size, bool debug) {
if (vaddr == 0) {
return;
}
// Iterate over a contiguous CPU address space, marking/unmarking the region.
// The region is at a granularity of CPU pages.
const u64 num_pages = ((vaddr + size - 1) >> PAGE_BITS) - (vaddr >> PAGE_BITS) + 1;
for (u64 i = 0; i < num_pages; ++i, vaddr += PAGE_SIZE) {
const Common::PageType page_type{
current_page_table->pointers[vaddr >> PAGE_BITS].Type()};
if (debug) {
// Switch the page type to debug if it is currently plain memory
switch (page_type) {
case Common::PageType::Unmapped:
ASSERT_MSG(false, "Attempted to mark unmapped pages as debug");
break;
case Common::PageType::RasterizerCachedMemory:
case Common::PageType::DebugMemory:
// Page is already marked.
break;
case Common::PageType::Memory:
current_page_table->pointers[vaddr >> PAGE_BITS].Store(
nullptr, Common::PageType::DebugMemory);
break;
default:
UNREACHABLE();
}
} else {
// Switch the page type back to plain memory if it is currently debug
switch (page_type) {
case Common::PageType::Unmapped:
ASSERT_MSG(false, "Attempted to mark unmapped pages as non-debug");
break;
case Common::PageType::RasterizerCachedMemory:
case Common::PageType::Memory:
// Don't mess with already non-debug or rasterizer memory.
break;
case Common::PageType::DebugMemory: {
u8* const pointer{GetPointerFromDebugMemory(vaddr & ~PAGE_MASK)};
current_page_table->pointers[vaddr >> PAGE_BITS].Store(
pointer - (vaddr & ~PAGE_MASK), Common::PageType::Memory);
break;
}
default:
UNREACHABLE();
}
}
}
}
void RasterizerMarkRegionCached(VAddr vaddr, u64 size, bool cached) {
if (vaddr == 0) {
return;
@@ -342,6 +410,7 @@ struct Memory::Impl {
// It is not necessary for a process to have this region mapped into its address
// space, for example, a system module need not have a VRAM mapping.
break;
case Common::PageType::DebugMemory:
case Common::PageType::Memory:
current_page_table->pointers[vaddr >> PAGE_BITS].Store(
nullptr, Common::PageType::RasterizerCachedMemory);
@@ -360,6 +429,7 @@ struct Memory::Impl {
// It is not necessary for a process to have this region mapped into its address
// space, for example, a system module need not have a VRAM mapping.
break;
case Common::PageType::DebugMemory:
case Common::PageType::Memory:
// There can be more than one GPU region mapped per CPU region, so it's common
// that this area is already unmarked as cached.
@@ -460,6 +530,8 @@ struct Memory::Impl {
case Common::PageType::Memory:
ASSERT_MSG(false, "Mapped memory page without a pointer @ 0x{:016X}", vaddr);
return nullptr;
case Common::PageType::DebugMemory:
return GetPointerFromDebugMemory(vaddr);
case Common::PageType::RasterizerCachedMemory: {
u8* const host_ptr{GetPointerFromRasterizerCachedMemory(vaddr)};
on_rasterizer();
@@ -591,7 +663,8 @@ bool Memory::IsValidVirtualAddress(const VAddr vaddr) const {
return false;
}
const auto [pointer, type] = page_table.pointers[page].PointerType();
return pointer != nullptr || type == Common::PageType::RasterizerCachedMemory;
return pointer != nullptr || type == Common::PageType::RasterizerCachedMemory ||
type == Common::PageType::DebugMemory;
}
bool Memory::IsValidVirtualAddressRange(VAddr base, u64 size) const {
@@ -707,4 +780,8 @@ void Memory::RasterizerMarkRegionCached(VAddr vaddr, u64 size, bool cached) {
impl->RasterizerMarkRegionCached(vaddr, size, cached);
}
void Memory::MarkRegionDebug(VAddr vaddr, u64 size, bool debug) {
impl->MarkRegionDebug(vaddr, size, debug);
}
} // namespace Core::Memory
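MarkRegionDebug above is a per-page state machine: plain Memory pages flip to DebugMemory (losing their fastmem pointer), DebugMemory pages flip back, rasterizer-cached pages are left alone, and unmapped pages are an error. A toy model of just those transitions, with the pointer bookkeeping omitted (illustrative only, assumes a 64-page toy address space):

#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>

enum class PageType { Unmapped, Memory, DebugMemory, RasterizerCachedMemory };

constexpr std::uint64_t page_bits = 12;
constexpr std::uint64_t page_size = 1ULL << page_bits;

// Marks (debug == true) or unmarks (debug == false) every page touched by
// [vaddr, vaddr + size). Only Memory <-> DebugMemory transitions happen.
void MarkRegionDebug(std::array<PageType, 64>& pages, std::uint64_t vaddr,
                     std::uint64_t size, bool debug) {
    const std::uint64_t num_pages =
        ((vaddr + size - 1) >> page_bits) - (vaddr >> page_bits) + 1;
    for (std::uint64_t i = 0; i < num_pages; ++i, vaddr += page_size) {
        PageType& type = pages[vaddr >> page_bits];
        assert(type != PageType::Unmapped && "cannot mark unmapped pages");
        if (debug && type == PageType::Memory) {
            type = PageType::DebugMemory;
        } else if (!debug && type == PageType::DebugMemory) {
            type = PageType::Memory;
        }
        // RasterizerCachedMemory and already-matching pages are left as-is.
    }
}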

View File

@@ -446,6 +446,17 @@ public:
*/
void RasterizerMarkRegionCached(VAddr vaddr, u64 size, bool cached);
/**
* Marks each page within the specified address range as debug or non-debug.
* Debug addresses are not accessible from fastmem pointers.
*
* @param vaddr The virtual address indicating the start of the address range.
* @param size The size of the address range in bytes.
* @param debug Whether the pages within the address range should be marked
*              as debug (true) or non-debug (false).
*/
void MarkRegionDebug(VAddr vaddr, u64 size, bool debug);
private:
Core::System& system;
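A hedged usage sketch (hypothetical call site, not part of this compare): a debugger that wants a software watchpoint on a single page could route it through this API roughly as follows, assuming the 4 KiB page granularity used elsewhere in this file:

// Hypothetical helper; `memory` is an instance of the class declared above.
void SetWatchpointPage(Core::Memory::Memory& memory, VAddr watch_addr, bool enable) {
    constexpr VAddr page_mask = 0xFFF; // assumed 4 KiB pages
    memory.MarkRegionDebug(watch_addr & ~page_mask, page_mask + 1, enable);
}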

View File

@@ -44,7 +44,6 @@ else()
-Werror
-Werror=conversion
-Werror=ignored-qualifiers
-Werror=shadow
$<$<CXX_COMPILER_ID:GNU>:-Werror=unused-but-set-parameter>
$<$<CXX_COMPILER_ID:GNU>:-Werror=unused-but-set-variable>
-Werror=unused-variable

View File

@@ -253,9 +253,6 @@ else()
-Werror
-Werror=conversion
-Werror=ignored-qualifiers
-Werror=implicit-fallthrough
-Werror=shadow
-Werror=sign-compare
$<$<CXX_COMPILER_ID:GNU>:-Werror=unused-but-set-parameter>
$<$<CXX_COMPILER_ID:GNU>:-Werror=unused-but-set-variable>
-Werror=unused-variable

View File

@@ -258,10 +258,6 @@ if (MSVC)
target_compile_options(video_core PRIVATE
/we4242 # 'identifier': conversion from 'type1' to 'type2', possible loss of data
/we4244 # 'conversion': conversion from 'type1' to 'type2', possible loss of data
/we4456 # Declaration of 'identifier' hides previous local declaration
/we4457 # Declaration of 'identifier' hides function parameter
/we4458 # Declaration of 'identifier' hides class member
/we4459 # Declaration of 'identifier' hides global declaration
)
else()
target_compile_options(video_core PRIVATE
@@ -269,7 +265,6 @@ else()
-Wno-error=sign-conversion
-Werror=pessimizing-move
-Werror=redundant-move
-Werror=shadow
-Werror=type-limits
$<$<CXX_COMPILER_ID:GNU>:-Werror=class-memaccess>
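These hunks touch the shadowing diagnostics (-Werror=shadow on the GCC/Clang side, /we4456 through /we4459 on MSVC), and the many trailing-underscore parameter renames throughout the rest of this compare address exactly the pattern those warnings flag. A minimal illustration with a hypothetical type (not from the codebase):

// The constructor parameter shadows the data member of the same name, which
// MSVC reports as C4458 and GCC's -Wshadow reports as a shadowed member.
struct Widget {
    explicit Widget(int value) : value{value} {}
    int value;
};

// The convention adopted throughout this compare is a trailing underscore:
//     explicit Widget(int value_) : value{value_} {}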

View File

@@ -211,7 +211,7 @@ public:
void FlushCachedWrites() noexcept {
flags &= ~BufferFlagBits::CachedWrites;
const u64 num_words = NumWords();
u64* const cached_words = Array<Type::CachedCPU>();
const u64* const cached_words = Array<Type::CachedCPU>();
u64* const untracked_words = Array<Type::Untracked>();
u64* const cpu_words = Array<Type::CPU>();
for (u64 word_index = 0; word_index < num_words; ++word_index) {
@@ -219,7 +219,6 @@ public:
NotifyRasterizer<false>(word_index, untracked_words[word_index], cached_bits);
untracked_words[word_index] |= cached_bits;
cpu_words[word_index] |= cached_bits;
cached_words[word_index] = 0;
}
}
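FlushCachedWrites above walks several parallel u64 bitmaps one word at a time, or-ing the cached-write bits into the untracked and CPU sets; whether the cached bitmap itself is zeroed in the same pass is exactly the detail these hunks touch. A stripped-down model of the bitmap walk, with the rasterizer notification left out (illustrative only):

#include <cstddef>
#include <cstdint>
#include <vector>

// Each element is one 64-bit chunk of a larger tracking bitmap. Every bit set
// in `cached` is propagated into `untracked` and `cpu`; the sketch does not
// clear `cached` afterwards.
void FlushCachedBits(const std::vector<std::uint64_t>& cached,
                     std::vector<std::uint64_t>& untracked,
                     std::vector<std::uint64_t>& cpu) {
    for (std::size_t i = 0; i < cached.size(); ++i) {
        const std::uint64_t bits = cached[i];
        untracked[i] |= bits;
        cpu[i] |= bits;
    }
}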

View File

@@ -98,7 +98,7 @@ struct CommandDataContainer {
struct SynchState final {
using CommandQueue = Common::MPSCQueue<CommandDataContainer>;
std::mutex write_lock;
CommandQueue queue{512}; // size must be 2^n
CommandQueue queue;
u64 last_fence{};
std::atomic<u64> signaled_fence{};
std::condition_variable_any cv;

View File

@@ -328,31 +328,32 @@ void ASTCDecoderPass::Assemble(Image& image, const StagingBufferRef& map,
const VkImageAspectFlags aspect_mask = image.AspectMask();
const VkImage vk_image = image.Handle();
const bool is_initialized = image.ExchangeInitialization();
scheduler.Record(
[vk_pipeline, vk_image, aspect_mask, is_initialized](vk::CommandBuffer cmdbuf) {
const VkImageMemoryBarrier image_barrier{
.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
.pNext = nullptr,
.srcAccessMask = is_initialized ? VK_ACCESS_SHADER_WRITE_BIT : VK_ACCESS_NONE,
.dstAccessMask = VK_ACCESS_SHADER_READ_BIT | VK_ACCESS_SHADER_WRITE_BIT,
.oldLayout = is_initialized ? VK_IMAGE_LAYOUT_GENERAL : VK_IMAGE_LAYOUT_UNDEFINED,
.newLayout = VK_IMAGE_LAYOUT_GENERAL,
.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.image = vk_image,
.subresourceRange{
.aspectMask = aspect_mask,
.baseMipLevel = 0,
.levelCount = VK_REMAINING_MIP_LEVELS,
.baseArrayLayer = 0,
.layerCount = VK_REMAINING_ARRAY_LAYERS,
},
};
cmdbuf.PipelineBarrier(is_initialized ? VK_PIPELINE_STAGE_ALL_COMMANDS_BIT
: VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT, 0, image_barrier);
cmdbuf.BindPipeline(VK_PIPELINE_BIND_POINT_COMPUTE, vk_pipeline);
});
scheduler.Record([vk_pipeline, vk_image, aspect_mask,
is_initialized](vk::CommandBuffer cmdbuf) {
const VkImageMemoryBarrier image_barrier{
.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
.pNext = nullptr,
.srcAccessMask = static_cast<VkAccessFlags>(is_initialized ? VK_ACCESS_SHADER_WRITE_BIT
: VK_ACCESS_NONE),
.dstAccessMask = VK_ACCESS_SHADER_READ_BIT | VK_ACCESS_SHADER_WRITE_BIT,
.oldLayout = is_initialized ? VK_IMAGE_LAYOUT_GENERAL : VK_IMAGE_LAYOUT_UNDEFINED,
.newLayout = VK_IMAGE_LAYOUT_GENERAL,
.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.image = vk_image,
.subresourceRange{
.aspectMask = aspect_mask,
.baseMipLevel = 0,
.levelCount = VK_REMAINING_MIP_LEVELS,
.baseArrayLayer = 0,
.layerCount = VK_REMAINING_ARRAY_LAYERS,
},
};
cmdbuf.PipelineBarrier(is_initialized ? VK_PIPELINE_STAGE_ALL_COMMANDS_BIT
: VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT, 0, image_barrier);
cmdbuf.BindPipeline(VK_PIPELINE_BIND_POINT_COMPUTE, vk_pipeline);
});
for (const VideoCommon::SwizzleParameters& swizzle : swizzles) {
const size_t input_offset = swizzle.buffer_offset + map.offset;
const u32 num_dispatches_x = Common::DivCeil(swizzle.num_tiles.width, 8U);

View File

@@ -13,8 +13,8 @@ namespace WebService {
namespace Telemetry = Common::Telemetry;
struct TelemetryJson::Impl {
Impl(std::string host, std::string username, std::string token)
: host{std::move(host)}, username{std::move(username)}, token{std::move(token)} {}
Impl(std::string host_, std::string username_, std::string token_)
: host{std::move(host_)}, username{std::move(username_)}, token{std::move(token_)} {}
nlohmann::json& TopSection() {
return sections[static_cast<u8>(Telemetry::FieldType::None)];

View File

@@ -30,10 +30,10 @@ constexpr std::array<const char, 1> API_VERSION{'1'};
constexpr std::size_t TIMEOUT_SECONDS = 30;
struct Client::Impl {
Impl(std::string host, std::string username, std::string token)
: host{std::move(host)}, username{std::move(username)}, token{std::move(token)} {
Impl(std::string host_, std::string username_, std::string token_)
: host{std::move(host_)}, username{std::move(username_)}, token{std::move(token_)} {
std::scoped_lock lock{jwt_cache.mutex};
if (this->username == jwt_cache.username && this->token == jwt_cache.token) {
if (username == jwt_cache.username && token == jwt_cache.token) {
jwt = jwt_cache.jwt;
}
}
@@ -69,8 +69,8 @@ struct Client::Impl {
*/
WebResult GenericRequest(const std::string& method, const std::string& path,
const std::string& data, const std::string& accept,
const std::string& jwt = "", const std::string& username = "",
const std::string& token = "") {
const std::string& jwt_ = "", const std::string& username_ = "",
const std::string& token_ = "") {
if (cli == nullptr) {
cli = std::make_unique<httplib::Client>(host.c_str());
}
@@ -85,14 +85,14 @@ struct Client::Impl {
cli->set_write_timeout(TIMEOUT_SECONDS);
httplib::Headers params;
if (!jwt.empty()) {
if (!jwt_.empty()) {
params = {
{std::string("Authorization"), fmt::format("Bearer {}", jwt)},
{std::string("Authorization"), fmt::format("Bearer {}", jwt_)},
};
} else if (!username.empty()) {
} else if (!username_.empty()) {
params = {
{std::string("x-username"), username},
{std::string("x-token"), token},
{std::string("x-username"), username_},
{std::string("x-token"), token_},
};
}

View File

@@ -29,6 +29,7 @@
#include "common/scm_rev.h"
#include "common/settings.h"
#include "core/core.h"
#include "core/cpu_manager.h"
#include "core/frontend/framebuffer_layout.h"
#include "input_common/drivers/keyboard.h"
#include "input_common/drivers/mouse.h"
@@ -73,6 +74,8 @@ void EmuThread::run() {
gpu.ReleaseContext();
system.GetCpuManager().OnGpuReady();
// Holds whether the cpu was running during the last iteration,
// so that the DebugModeLeft signal can be emitted before the
// next execution step
@@ -127,7 +130,7 @@ void EmuThread::run() {
class OpenGLSharedContext : public Core::Frontend::GraphicsContext {
public:
/// Create the original context that should be shared from
explicit OpenGLSharedContext(QSurface* surface) : surface(surface) {
explicit OpenGLSharedContext(QSurface* surface_) : surface{surface_} {
QSurfaceFormat format;
format.setVersion(4, 6);
format.setProfile(QSurfaceFormat::CompatibilityProfile);
@@ -364,9 +367,9 @@ void GRenderWindow::RestoreGeometry() {
QWidget::restoreGeometry(geometry);
}
void GRenderWindow::restoreGeometry(const QByteArray& geometry) {
void GRenderWindow::restoreGeometry(const QByteArray& geometry_) {
// Make sure users of this class don't need to deal with backing up the geometry themselves
QWidget::restoreGeometry(geometry);
QWidget::restoreGeometry(geometry_);
BackupGeometry();
}
@@ -1014,8 +1017,8 @@ QStringList GRenderWindow::GetUnsupportedGLExtensions() const {
return unsupported_ext;
}
void GRenderWindow::OnEmulationStarting(EmuThread* emu_thread) {
this->emu_thread = emu_thread;
void GRenderWindow::OnEmulationStarting(EmuThread* emu_thread_) {
emu_thread = emu_thread_;
}
void GRenderWindow::OnEmulationStopping() {

View File

@@ -56,12 +56,12 @@ public:
/**
* Sets whether the emulation thread is running or not
* @param running Boolean value, set the emulation thread to running if true
* @param running_ Boolean value, set the emulation thread to running if true
* @note This function is thread-safe
*/
void SetRunning(bool running) {
void SetRunning(bool running_) {
std::unique_lock lock{running_mutex};
this->running = running;
running = running_;
lock.unlock();
running_cv.notify_all();
if (!running) {
@@ -138,8 +138,8 @@ public:
void BackupGeometry();
void RestoreGeometry();
void restoreGeometry(const QByteArray& geometry); // overridden
QByteArray saveGeometry(); // overridden
void restoreGeometry(const QByteArray& geometry_); // overridden
QByteArray saveGeometry(); // overridden
qreal windowPixelRatio() const;
@@ -189,7 +189,7 @@ public:
void Exit();
public slots:
void OnEmulationStarting(EmuThread* emu_thread);
void OnEmulationStarting(EmuThread* emu_thread_);
void OnEmulationStopping();
void OnFramebufferSizeChanged();

View File

@@ -27,12 +27,11 @@
#include "yuzu/hotkeys.h"
#include "yuzu/uisettings.h"
ConfigureDialog::ConfigureDialog(QWidget* parent, HotkeyRegistry& registry,
ConfigureDialog::ConfigureDialog(QWidget* parent, HotkeyRegistry& registry_,
InputCommon::InputSubsystem* input_subsystem,
Core::System& system_)
: QDialog(parent), ui{std::make_unique<Ui::ConfigureDialog>()},
registry(registry), system{system_}, audio_tab{std::make_unique<ConfigureAudio>(system_,
this)},
: QDialog(parent), ui{std::make_unique<Ui::ConfigureDialog>()}, registry{registry_},
system{system_}, audio_tab{std::make_unique<ConfigureAudio>(system_, this)},
cpu_tab{std::make_unique<ConfigureCpu>(system_, this)},
debug_tab_tab{std::make_unique<ConfigureDebugTab>(system_, this)},
filesystem_tab{std::make_unique<ConfigureFilesystem>(this)},

View File

@@ -40,7 +40,7 @@ class ConfigureDialog : public QDialog {
Q_OBJECT
public:
explicit ConfigureDialog(QWidget* parent, HotkeyRegistry& registry,
explicit ConfigureDialog(QWidget* parent, HotkeyRegistry& registry_,
InputCommon::InputSubsystem* input_subsystem, Core::System& system_);
~ConfigureDialog() override;

View File

@@ -264,15 +264,16 @@ QString ConfigureInputPlayer::AnalogToText(const Common::ParamPackage& param,
return QObject::tr("[unknown]");
}
ConfigureInputPlayer::ConfigureInputPlayer(QWidget* parent, std::size_t player_index,
QWidget* bottom_row,
ConfigureInputPlayer::ConfigureInputPlayer(QWidget* parent, std::size_t player_index_,
QWidget* bottom_row_,
InputCommon::InputSubsystem* input_subsystem_,
InputProfiles* profiles_, Core::HID::HIDCore& hid_core_,
bool is_powered_on_, bool debug)
: QWidget(parent), ui(std::make_unique<Ui::ConfigureInputPlayer>()), player_index(player_index),
debug(debug), is_powered_on{is_powered_on_}, input_subsystem{input_subsystem_},
profiles(profiles_), timeout_timer(std::make_unique<QTimer>()),
poll_timer(std::make_unique<QTimer>()), bottom_row(bottom_row), hid_core{hid_core_} {
bool is_powered_on_, bool debug_)
: QWidget(parent),
ui(std::make_unique<Ui::ConfigureInputPlayer>()), player_index{player_index_}, debug{debug_},
is_powered_on{is_powered_on_}, input_subsystem{input_subsystem_}, profiles(profiles_),
timeout_timer(std::make_unique<QTimer>()),
poll_timer(std::make_unique<QTimer>()), bottom_row{bottom_row_}, hid_core{hid_core_} {
if (player_index == 0) {
auto* emulated_controller_p1 =
hid_core.GetEmulatedController(Core::HID::NpadIdType::Player1);
@@ -696,39 +697,38 @@ ConfigureInputPlayer::ConfigureInputPlayer(QWidget* parent, std::size_t player_i
UpdateControllerEnabledButtons();
UpdateControllerButtonNames();
UpdateMotionButtons();
connect(ui->comboControllerType, qOverload<int>(&QComboBox::currentIndexChanged),
[this, player_index](int) {
UpdateControllerAvailableButtons();
UpdateControllerEnabledButtons();
UpdateControllerButtonNames();
UpdateMotionButtons();
const Core::HID::NpadStyleIndex type =
GetControllerTypeFromIndex(ui->comboControllerType->currentIndex());
connect(ui->comboControllerType, qOverload<int>(&QComboBox::currentIndexChanged), [this](int) {
UpdateControllerAvailableButtons();
UpdateControllerEnabledButtons();
UpdateControllerButtonNames();
UpdateMotionButtons();
const Core::HID::NpadStyleIndex type =
GetControllerTypeFromIndex(ui->comboControllerType->currentIndex());
if (player_index == 0) {
auto* emulated_controller_p1 =
hid_core.GetEmulatedController(Core::HID::NpadIdType::Player1);
auto* emulated_controller_handheld =
hid_core.GetEmulatedController(Core::HID::NpadIdType::Handheld);
bool is_connected = emulated_controller->IsConnected(true);
if (player_index == 0) {
auto* emulated_controller_p1 =
hid_core.GetEmulatedController(Core::HID::NpadIdType::Player1);
auto* emulated_controller_handheld =
hid_core.GetEmulatedController(Core::HID::NpadIdType::Handheld);
bool is_connected = emulated_controller->IsConnected(true);
emulated_controller_p1->SetNpadStyleIndex(type);
emulated_controller_handheld->SetNpadStyleIndex(type);
if (is_connected) {
if (type == Core::HID::NpadStyleIndex::Handheld) {
emulated_controller_p1->Disconnect();
emulated_controller_handheld->Connect(true);
emulated_controller = emulated_controller_handheld;
} else {
emulated_controller_handheld->Disconnect();
emulated_controller_p1->Connect(true);
emulated_controller = emulated_controller_p1;
}
}
ui->controllerFrame->SetController(emulated_controller);
emulated_controller_p1->SetNpadStyleIndex(type);
emulated_controller_handheld->SetNpadStyleIndex(type);
if (is_connected) {
if (type == Core::HID::NpadStyleIndex::Handheld) {
emulated_controller_p1->Disconnect();
emulated_controller_handheld->Connect(true);
emulated_controller = emulated_controller_handheld;
} else {
emulated_controller_handheld->Disconnect();
emulated_controller_p1->Connect(true);
emulated_controller = emulated_controller_p1;
}
emulated_controller->SetNpadStyleIndex(type);
});
}
ui->controllerFrame->SetController(emulated_controller);
}
emulated_controller->SetNpadStyleIndex(type);
});
connect(ui->comboDevices, qOverload<int>(&QComboBox::activated), this,
&ConfigureInputPlayer::UpdateMappingWithDefaults);

View File

@@ -35,10 +35,10 @@
#include "yuzu/uisettings.h"
#include "yuzu/util/util.h"
ConfigurePerGame::ConfigurePerGame(QWidget* parent, u64 title_id, const std::string& file_name,
ConfigurePerGame::ConfigurePerGame(QWidget* parent, u64 title_id_, const std::string& file_name,
Core::System& system_)
: QDialog(parent), ui(std::make_unique<Ui::ConfigurePerGame>()),
title_id(title_id), system{system_} {
: QDialog(parent),
ui(std::make_unique<Ui::ConfigurePerGame>()), title_id{title_id_}, system{system_} {
const auto file_path = std::filesystem::path(Common::FS::ToU8String(file_name));
const auto config_file_name = title_id == 0 ? Common::FS::PathToUTF8String(file_path.filename())
: fmt::format("{:016X}", title_id);
@@ -116,8 +116,8 @@ void ConfigurePerGame::HandleApplyButtonClicked() {
ApplyConfiguration();
}
void ConfigurePerGame::LoadFromFile(FileSys::VirtualFile file) {
this->file = std::move(file);
void ConfigurePerGame::LoadFromFile(FileSys::VirtualFile file_) {
file = std::move(file_);
LoadConfiguration();
}

View File

@@ -39,14 +39,14 @@ class ConfigurePerGame : public QDialog {
public:
// Cannot use std::filesystem::path due to https://bugreports.qt.io/browse/QTBUG-73263
explicit ConfigurePerGame(QWidget* parent, u64 title_id, const std::string& file_name,
explicit ConfigurePerGame(QWidget* parent, u64 title_id_, const std::string& file_name,
Core::System& system_);
~ConfigurePerGame() override;
/// Save all button configurations to settings file
void ApplyConfiguration();
void LoadFromFile(FileSys::VirtualFile file);
void LoadFromFile(FileSys::VirtualFile file_);
private:
void changeEvent(QEvent* event) override;

View File

@@ -89,8 +89,8 @@ void ConfigurePerGameAddons::ApplyConfiguration() {
Settings::values.disabled_addons[title_id] = disabled_addons;
}
void ConfigurePerGameAddons::LoadFromFile(FileSys::VirtualFile file) {
this->file = std::move(file);
void ConfigurePerGameAddons::LoadFromFile(FileSys::VirtualFile file_) {
file = std::move(file_);
LoadConfiguration();
}

View File

@@ -35,7 +35,7 @@ public:
/// Save all button configurations to settings file
void ApplyConfiguration();
void LoadFromFile(FileSys::VirtualFile file);
void LoadFromFile(FileSys::VirtualFile file_);
void SetTitleId(u64 id);

View File

@@ -165,10 +165,10 @@ ConfigureRingController::ConfigureRingController(QWidget* parent,
const std::string invert_str = invert_value ? "+" : "-";
param.Set("invert_x", invert_str);
emulated_device->SetRingParam(param);
for (int sub_button_id = 0; sub_button_id < ANALOG_SUB_BUTTONS_NUM;
++sub_button_id) {
analog_map_buttons[sub_button_id]->setText(
AnalogToText(param, analog_sub_buttons[sub_button_id]));
for (int sub_button_id2 = 0; sub_button_id2 < ANALOG_SUB_BUTTONS_NUM;
++sub_button_id2) {
analog_map_buttons[sub_button_id2]->setText(
AnalogToText(param, analog_sub_buttons[sub_button_id2]));
}
});
context_menu.exec(

View File

@@ -68,10 +68,10 @@ static QString ButtonToText(const Common::ParamPackage& param) {
}
ConfigureTouchFromButton::ConfigureTouchFromButton(
QWidget* parent, const std::vector<Settings::TouchFromButtonMap>& touch_maps,
QWidget* parent, const std::vector<Settings::TouchFromButtonMap>& touch_maps_,
InputCommon::InputSubsystem* input_subsystem_, const int default_index)
: QDialog(parent), ui(std::make_unique<Ui::ConfigureTouchFromButton>()),
touch_maps(touch_maps), input_subsystem{input_subsystem_}, selected_index(default_index),
touch_maps{touch_maps_}, input_subsystem{input_subsystem_}, selected_index{default_index},
timeout_timer(std::make_unique<QTimer>()), poll_timer(std::make_unique<QTimer>()) {
ui->setupUi(this);
binding_list_model = new QStandardItemModel(0, 3, this);

View File

@@ -37,7 +37,7 @@ class ConfigureTouchFromButton : public QDialog {
public:
explicit ConfigureTouchFromButton(QWidget* parent,
const std::vector<Settings::TouchFromButtonMap>& touch_maps,
const std::vector<Settings::TouchFromButtonMap>& touch_maps_,
InputCommon::InputSubsystem* input_subsystem_,
int default_index = 0);
~ConfigureTouchFromButton() override;

View File

@@ -113,9 +113,9 @@ QString WaitTreeText::GetText() const {
return text;
}
WaitTreeMutexInfo::WaitTreeMutexInfo(VAddr mutex_address, const Kernel::KHandleTable& handle_table,
WaitTreeMutexInfo::WaitTreeMutexInfo(VAddr mutex_address_, const Kernel::KHandleTable& handle_table,
Core::System& system_)
: mutex_address(mutex_address), system{system_} {
: mutex_address{mutex_address_}, system{system_} {
mutex_value = system.Memory().Read32(mutex_address);
owner_handle = static_cast<Kernel::Handle>(mutex_value & Kernel::Svc::HandleWaitMask);
owner = handle_table.GetObject<Kernel::KThread>(owner_handle).GetPointerUnsafe();
@@ -140,8 +140,8 @@ std::vector<std::unique_ptr<WaitTreeItem>> WaitTreeMutexInfo::GetChildren() cons
return list;
}
WaitTreeCallstack::WaitTreeCallstack(const Kernel::KThread& thread, Core::System& system_)
: thread(thread), system{system_} {}
WaitTreeCallstack::WaitTreeCallstack(const Kernel::KThread& thread_, Core::System& system_)
: thread{thread_}, system{system_} {}
WaitTreeCallstack::~WaitTreeCallstack() = default;
QString WaitTreeCallstack::GetText() const {
@@ -171,8 +171,8 @@ std::vector<std::unique_ptr<WaitTreeItem>> WaitTreeCallstack::GetChildren() cons
}
WaitTreeSynchronizationObject::WaitTreeSynchronizationObject(
const Kernel::KSynchronizationObject& o, Core::System& system_)
: object(o), system{system_} {}
const Kernel::KSynchronizationObject& object_, Core::System& system_)
: object{object_}, system{system_} {}
WaitTreeSynchronizationObject::~WaitTreeSynchronizationObject() = default;
WaitTreeExpandableItem::WaitTreeExpandableItem() = default;
@@ -380,8 +380,8 @@ std::vector<std::unique_ptr<WaitTreeItem>> WaitTreeThread::GetChildren() const {
return list;
}
WaitTreeEvent::WaitTreeEvent(const Kernel::KReadableEvent& object, Core::System& system_)
: WaitTreeSynchronizationObject(object, system_) {}
WaitTreeEvent::WaitTreeEvent(const Kernel::KReadableEvent& object_, Core::System& system_)
: WaitTreeSynchronizationObject(object_, system_) {}
WaitTreeEvent::~WaitTreeEvent() = default;
WaitTreeThreadList::WaitTreeThreadList(std::vector<Kernel::KThread*>&& list, Core::System& system_)

View File

@@ -78,7 +78,7 @@ public:
class WaitTreeMutexInfo : public WaitTreeExpandableItem {
Q_OBJECT
public:
explicit WaitTreeMutexInfo(VAddr mutex_address, const Kernel::KHandleTable& handle_table,
explicit WaitTreeMutexInfo(VAddr mutex_address_, const Kernel::KHandleTable& handle_table,
Core::System& system_);
~WaitTreeMutexInfo() override;
@@ -97,7 +97,7 @@ private:
class WaitTreeCallstack : public WaitTreeExpandableItem {
Q_OBJECT
public:
explicit WaitTreeCallstack(const Kernel::KThread& thread, Core::System& system_);
explicit WaitTreeCallstack(const Kernel::KThread& thread_, Core::System& system_);
~WaitTreeCallstack() override;
QString GetText() const override;
@@ -112,7 +112,7 @@ private:
class WaitTreeSynchronizationObject : public WaitTreeExpandableItem {
Q_OBJECT
public:
explicit WaitTreeSynchronizationObject(const Kernel::KSynchronizationObject& object,
explicit WaitTreeSynchronizationObject(const Kernel::KSynchronizationObject& object_,
Core::System& system_);
~WaitTreeSynchronizationObject() override;
@@ -162,7 +162,7 @@ private:
class WaitTreeEvent : public WaitTreeSynchronizationObject {
Q_OBJECT
public:
explicit WaitTreeEvent(const Kernel::KReadableEvent& object, Core::System& system_);
explicit WaitTreeEvent(const Kernel::KReadableEvent& object_, Core::System& system_);
~WaitTreeEvent() override;
};

View File

@@ -28,8 +28,8 @@
#include "yuzu/uisettings.h"
#include "yuzu/util/controller_navigation.h"
GameListSearchField::KeyReleaseEater::KeyReleaseEater(GameList* gamelist, QObject* parent)
: QObject(parent), gamelist{gamelist} {}
GameListSearchField::KeyReleaseEater::KeyReleaseEater(GameList* gamelist_, QObject* parent)
: QObject(parent), gamelist{gamelist_} {}
// EventFilter in order to process systemkeys while editing the searchfield
bool GameListSearchField::KeyReleaseEater::eventFilter(QObject* obj, QEvent* event) {
@@ -80,9 +80,9 @@ bool GameListSearchField::KeyReleaseEater::eventFilter(QObject* obj, QEvent* eve
return QObject::eventFilter(obj, event);
}
void GameListSearchField::setFilterResult(int visible, int total) {
this->visible = visible;
this->total = total;
void GameListSearchField::setFilterResult(int visible_, int total_) {
visible = visible_;
total = total_;
label_filter_result->setText(tr("%1 of %n result(s)", "", total).arg(visible));
}
@@ -309,9 +309,9 @@ void GameList::OnFilterCloseClicked() {
main_window->filterBarSetChecked(false);
}
GameList::GameList(FileSys::VirtualFilesystem vfs, FileSys::ManualContentProvider* provider,
GameList::GameList(FileSys::VirtualFilesystem vfs_, FileSys::ManualContentProvider* provider_,
Core::System& system_, GMainWindow* parent)
: QWidget{parent}, vfs(std::move(vfs)), provider(provider), system{system_} {
: QWidget{parent}, vfs{std::move(vfs_)}, provider{provider_}, system{system_} {
watcher = new QFileSystemWatcher(this);
connect(watcher, &QFileSystemWatcher::directoryChanged, this, &GameList::RefreshGameDirectory);

View File

@@ -67,8 +67,8 @@ public:
COLUMN_COUNT, // Number of columns
};
explicit GameList(std::shared_ptr<FileSys::VfsFilesystem> vfs,
FileSys::ManualContentProvider* provider, Core::System& system_,
explicit GameList(std::shared_ptr<FileSys::VfsFilesystem> vfs_,
FileSys::ManualContentProvider* provider_, Core::System& system_,
GMainWindow* parent = nullptr);
~GameList() override;

View File

@@ -225,8 +225,8 @@ public:
static constexpr int GameDirRole = Qt::UserRole + 2;
explicit GameListDir(UISettings::GameDir& directory,
GameListItemType dir_type = GameListItemType::CustomDir)
: dir_type{dir_type} {
GameListItemType dir_type_ = GameListItemType::CustomDir)
: dir_type{dir_type_} {
setData(type(), TypeRole);
UISettings::GameDir* game_dir = &directory;
@@ -348,7 +348,7 @@ public:
explicit GameListSearchField(GameList* parent = nullptr);
QString filterText() const;
void setFilterResult(int visible, int total);
void setFilterResult(int visible_, int total_);
void clear();
void setFocus();
@@ -356,7 +356,7 @@ public:
private:
class KeyReleaseEater : public QObject {
public:
explicit KeyReleaseEater(GameList* gamelist, QObject* parent = nullptr);
explicit KeyReleaseEater(GameList* gamelist_, QObject* parent = nullptr);
private:
GameList* gamelist = nullptr;

View File

@@ -223,12 +223,12 @@ QList<QStandardItem*> MakeGameListEntry(const std::string& path, const std::stri
}
} // Anonymous namespace
GameListWorker::GameListWorker(FileSys::VirtualFilesystem vfs,
FileSys::ManualContentProvider* provider,
QVector<UISettings::GameDir>& game_dirs,
const CompatibilityList& compatibility_list, Core::System& system_)
: vfs(std::move(vfs)), provider(provider), game_dirs(game_dirs),
compatibility_list(compatibility_list), system{system_} {}
GameListWorker::GameListWorker(FileSys::VirtualFilesystem vfs_,
FileSys::ManualContentProvider* provider_,
QVector<UISettings::GameDir>& game_dirs_,
const CompatibilityList& compatibility_list_, Core::System& system_)
: vfs{std::move(vfs_)}, provider{provider_}, game_dirs{game_dirs_},
compatibility_list{compatibility_list_}, system{system_} {}
GameListWorker::~GameListWorker() = default;

View File

@@ -33,10 +33,10 @@ class GameListWorker : public QObject, public QRunnable {
Q_OBJECT
public:
explicit GameListWorker(std::shared_ptr<FileSys::VfsFilesystem> vfs,
FileSys::ManualContentProvider* provider,
QVector<UISettings::GameDir>& game_dirs,
const CompatibilityList& compatibility_list, Core::System& system_);
explicit GameListWorker(std::shared_ptr<FileSys::VfsFilesystem> vfs_,
FileSys::ManualContentProvider* provider_,
QVector<UISettings::GameDir>& game_dirs_,
const CompatibilityList& compatibility_list_, Core::System& system_);
~GameListWorker() override;
/// Starts the processing of directory tree information.

View File

@@ -934,8 +934,7 @@ void GMainWindow::InitializeWidgets() {
Settings::values.renderer_backend.SetValue(Settings::RendererBackend::Vulkan);
} else {
Settings::values.renderer_backend.SetValue(Settings::RendererBackend::OpenGL);
const auto filter = Settings::values.scaling_filter.GetValue();
if (filter == Settings::ScalingFilter::Fsr) {
if (Settings::values.scaling_filter.GetValue() == Settings::ScalingFilter::Fsr) {
Settings::values.scaling_filter.SetValue(Settings::ScalingFilter::NearestNeighbor);
UpdateFilterText();
}
@@ -1442,7 +1441,7 @@ bool GMainWindow::LoadROM(const QString& filename, u64 program_id, std::size_t p
}
return false;
}
game_path = filename;
current_game_path = filename;
system->TelemetrySession().AddField(Common::Telemetry::FieldType::App, "Frontend", "Qt");
return true;
@@ -1508,7 +1507,7 @@ void GMainWindow::BootGame(const QString& filename, u64 program_id, std::size_t
// Register an ExecuteProgram callback such that Core can execute a sub-program
system->RegisterExecuteProgramCallback(
[this](std::size_t program_index) { render_window->ExecuteProgram(program_index); });
[this](std::size_t program_index_) { render_window->ExecuteProgram(program_index_); });
// Register an Exit callback such that Core can exit the currently running application.
system->RegisterExitCallback([this]() { render_window->Exit(); });
@@ -1641,7 +1640,7 @@ void GMainWindow::ShutdownGame() {
emu_frametime_label->setVisible(false);
renderer_status_button->setEnabled(!UISettings::values.has_broken_vulkan);
game_path.clear();
current_game_path.clear();
// When closing the game, destroy the GLWindow to clear the context after the game is closed
render_window->ReleaseRenderTarget();
@@ -2560,7 +2559,7 @@ void GMainWindow::OnRestartGame() {
return;
}
// Make a copy since BootGame edits current_game_path
BootGame(QString(game_path));
BootGame(QString(current_game_path));
}
void GMainWindow::OnPauseGame() {
@@ -2989,7 +2988,7 @@ void GMainWindow::OnToggleAdaptingFilter() {
void GMainWindow::OnConfigurePerGame() {
const u64 title_id = system->GetCurrentProcessProgramID();
OpenPerGameConfiguration(title_id, game_path.toStdString());
OpenPerGameConfiguration(title_id, current_game_path.toStdString());
}
void GMainWindow::OpenPerGameConfiguration(u64 title_id, const std::string& file_name) {

View File

@@ -369,7 +369,7 @@ private:
bool emulation_running = false;
std::unique_ptr<EmuThread> emu_thread;
// The path to the game currently running
QString game_path;
QString current_game_path;
bool auto_paused = false;
bool auto_muted = false;

View File

@@ -20,7 +20,7 @@ enum class MouseButton;
class EmuWindow_SDL2 : public Core::Frontend::EmuWindow {
public:
explicit EmuWindow_SDL2(InputCommon::InputSubsystem* input_subsystem, Core::System& system_);
explicit EmuWindow_SDL2(InputCommon::InputSubsystem* input_subsystem_, Core::System& system_);
~EmuWindow_SDL2();
/// Whether the window is still open, and a close request hasn't yet been sent

View File

@@ -73,9 +73,9 @@ bool EmuWindow_SDL2_GL::SupportsRequiredGLExtensions() {
return unsupported_ext.empty();
}
EmuWindow_SDL2_GL::EmuWindow_SDL2_GL(InputCommon::InputSubsystem* input_subsystem,
EmuWindow_SDL2_GL::EmuWindow_SDL2_GL(InputCommon::InputSubsystem* input_subsystem_,
Core::System& system_, bool fullscreen)
: EmuWindow_SDL2{input_subsystem, system_} {
: EmuWindow_SDL2{input_subsystem_, system_} {
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 6);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);

View File

@@ -17,7 +17,7 @@ class InputSubsystem;
class EmuWindow_SDL2_GL final : public EmuWindow_SDL2 {
public:
explicit EmuWindow_SDL2_GL(InputCommon::InputSubsystem* input_subsystem, Core::System& system_,
explicit EmuWindow_SDL2_GL(InputCommon::InputSubsystem* input_subsystem_, Core::System& system_,
bool fullscreen);
~EmuWindow_SDL2_GL();

View File

@@ -21,9 +21,9 @@
#include <SDL.h>
#include <SDL_syswm.h>
EmuWindow_SDL2_VK::EmuWindow_SDL2_VK(InputCommon::InputSubsystem* input_subsystem,
EmuWindow_SDL2_VK::EmuWindow_SDL2_VK(InputCommon::InputSubsystem* input_subsystem_,
Core::System& system_, bool fullscreen)
: EmuWindow_SDL2{input_subsystem, system_} {
: EmuWindow_SDL2{input_subsystem_, system_} {
const std::string window_title = fmt::format("yuzu {} | {}-{} (Vulkan)", Common::g_build_name,
Common::g_scm_branch, Common::g_scm_desc);
render_window =

View File

@@ -18,7 +18,7 @@ class InputSubsystem;
class EmuWindow_SDL2_VK final : public EmuWindow_SDL2 {
public:
explicit EmuWindow_SDL2_VK(InputCommon::InputSubsystem* input_subsystem, Core::System& system,
explicit EmuWindow_SDL2_VK(InputCommon::InputSubsystem* input_subsystem_, Core::System& system,
bool fullscreen);
~EmuWindow_SDL2_VK() override;

View File

@@ -21,6 +21,7 @@
#include "common/string_util.h"
#include "common/telemetry.h"
#include "core/core.h"
#include "core/cpu_manager.h"
#include "core/crypto/key_manager.h"
#include "core/file_sys/registered_cache.h"
#include "core/file_sys/vfs_real.h"
@@ -138,6 +139,12 @@ int main(int argc, char** argv) {
Config config{config_path};
// apply the log_filter setting
// the logger was initialized before and doesn't pick up the filter on its own
Common::Log::Filter filter;
filter.ParseFilterString(Settings::values.log_filter.GetValue());
Common::Log::SetGlobalFilter(filter);
if (!program_args.empty()) {
Settings::values.program_args = program_args;
}
@@ -210,6 +217,7 @@ int main(int argc, char** argv) {
// Core is loaded, start the GPU (makes the GPU contexts current to this thread)
system.GPU().Start();
system.GetCpuManager().OnGpuReady();
if (Settings::values.use_disk_shader_cache.GetValue()) {
system.Renderer().ReadRasterizer()->LoadDiskResources(